QUASILINEARIZATION FOR THE PERIODIC BOUNDARY VALUE PROBLEM FOR SYSTEMS OF IMPULSIVE DIFFERENTIAL EQUATIONS
The method of quasilinearization of Bellman and Kalaba [2] has been extended, refined, and generalized to the case where the forcing function is the sum of a convex and a concave function, using coupled lower and upper solutions. This method is now known as the method of generalized quasilinearization. It retains all the advantages of the quasilinearization method: the iterates are solutions of linear systems, and the sequences converge simultaneously to the unique solution of the nonlinear problem. See [1–4, 6–8] for details. In this paper, we extend the method of generalized quasilinearization to systems of nonlinear impulsive differential equations with periodic boundary conditions. For this purpose, we develop a linear comparison theorem for systems of impulsive differential equations with periodic boundary conditions. We construct two iterates, each the solution of a linear impulsive system with periodic boundary conditions, which converge monotonically and quadratically to the unique solution of the nonlinear problem. Results related to different types of coupled lower and upper solutions are developed. We note that the results of [1] are a special case of our results in which the forcing function is required to be convex and, in addition, only semiquadratic convergence is obtained there. The results of [3] can be obtained as the scalar case of our result.
Introduction
The method of quasilinearization of Bellman and Kalaba [2] has been extended, refined, and generalized to the case where the forcing function is the sum of a convex and a concave function, using coupled lower and upper solutions. This method is now known as the method of generalized quasilinearization. It retains all the advantages of the quasilinearization method: the iterates are solutions of linear systems, and the sequences converge simultaneously to the unique solution of the nonlinear problem. See [1–4, 6–8] for details.
In this paper, we extend the method of generalized quasilinearization to systems of nonlinear impulsive differential equations with periodic boundary conditions. For this purpose, we develop a linear comparison theorem for systems of impulsive differential equations with periodic boundary conditions. We construct two iterates, each the solution of a linear impulsive system with periodic boundary conditions, which converge monotonically and quadratically to the unique solution of the nonlinear problem. Results related to different types of coupled lower and upper solutions are developed. We note that the results of [1] are a special case of our results in which the forcing function is required to be convex and, in addition, only semiquadratic convergence is obtained there. The results of [3] can be obtained as the scalar case of our result.
2. Method for impulsive systems with PB conditions
Consider the periodic boundary value problem (PBVP) for the system of nonlinear impulsive differential equations
x′(t) = f(t, x(t)) + g(t, x(t)) for t ∈ [0, T], t ≠ τ_k, (2.1)
with impulsive jump conditions at the points τ_k and the periodic boundary condition
x(0) = x(T), (2.2)
where 0 < τ_1 < τ_2 < ... < τ_p < T (k = 1, 2, ..., p).
We consider the set PC(X,Y) of all functions u : X → Y (X, Y ⊂ R^n) which are piecewise continuous in X with points of discontinuity of the first kind at the points τ_k ∈ X; that is, the limits lim_{t↓τ_k} u(t) = u(τ_k + 0) < ∞ and lim_{t↑τ_k} u(t) = u(τ_k − 0) = u(τ_k) exist.
We consider the set PC^1(X,Y) of all functions u ∈ PC(X,Y) that are continuously differentiable for t ∈ X, t ≠ τ_k.
A function α_0(t) satisfying the corresponding differential and jump inequalities together with α_0(0) ≤ α_0(T) (2.4) is called a lower solution. If the inequalities are reversed, then α_0(t) is called an upper solution. These are referred to as natural lower and upper solutions.
One can define other kinds of coupled lower and upper solutions of (2.1)-(2.2) along the same lines.
We will prove some preliminary results for linear systems of impulsive differential equations.
Let A = {a_ij}, i, j = 1, 2, ..., N, be a matrix, where N is a natural number. We will say that A > 0 if a_ij > 0 for i, j = 1, 2, ..., N.
We will use the following notation: e = (1, 1, ..., 1). Note that the vector e is the unit vector with respect to the operation @.
For our main results, we need the following lemma for linear systems of impulsive differential inequalities.
Lemma 2.6. Assume that (1) the matrix condition holds and that the differential inequality (2.7), the jump condition (2.8), and the boundary condition (2.9) are satisfied.
Proof. Let τ_0 = 0, τ_{p+1} = T. Consider the numbers introduced in (2.11). The obtained contradiction proves that this case is not possible. Consider the case when for every k, 0 ≤ k ≤ p, there exists a natural number j_k such that lim_{t→τ_k+0} m_{j_k}(t) attains the corresponding number from (2.11) (see (2.12)) and m_i(t) remains below it for t ∈ (τ_k, τ_{k+1}], i = 1, 2, ..., N. Then the jump condition (2.8) yields an inequality that contradicts condition (2.9). Therefore, this case is impossible.
Case 2. Suppose instead that there exists such a natural number l. According to the jump condition (2.8), we obtain m(τ_k + 0) ≤ B_k m(τ_k) ≤ 0. Therefore, there exist a natural number j_k and a point ξ_k ∈ (τ_k, τ_{k+1}] such that m_{j_k}(ξ_k) attains the number from (2.11) and m′_{j_k}(ξ_k) ≥ 0. From the inequality (2.7), we obtain (2.13). The obtained contradiction proves that k = p + 1. Therefore, m(T) ≤ 0, and from the boundary condition (2.9) it follows that m(0) ≤ 0. As in the argument above, we obtain the assertion of the lemma.
As an application of Lemma 2.6, the following corollary is obtained; it will be useful in proving the existence and uniqueness of solutions of the linear nonhomogeneous impulsive system with periodic boundary conditions.
Then the PBVP for the homogeneous linear system has only the trivial solution.
We note that the solution of the linear system of impulsive equations (2.14), (2.15) with the initial condition m(0) = m_0 is m(t) = W(t,0)m_0, where W(t,s) is expressed in terms of the matrices U_k(t,s), and U_k(t,s) is the fundamental matrix of the linear system m′ = A(t)m(t), t ∈ (τ_k, τ_{k+1}] (for more details, see [5, 9]).
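To make the role of W(t,0) concrete, here is a small numerical sketch (purely illustrative; the matrices, impulse points, and period below are invented, not taken from the paper). It composes W(T,0) from the fundamental matrices of the subintervals and the jump matrices B_k, and checks the standard nonsingularity condition under which the homogeneous periodic problem admits only the trivial solution.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical data: piecewise-constant A(t) on each (tau_k, tau_{k+1}],
# jump matrices B_k acting at the impulse points tau_k, and period T.
T = 1.0
taus = [0.3, 0.7]                                     # impulse points tau_1 < tau_2 in (0, T)
A_pieces = [np.array([[-1.0, 0.2], [0.1, -0.5]]),     # A on (0, tau_1]
            np.array([[-0.8, 0.0], [0.3, -1.2]]),     # A on (tau_1, tau_2]
            np.array([[-0.6, 0.1], [0.0, -0.9]])]     # A on (tau_2, T]
B = [np.array([[0.9, 0.0], [0.0, 0.8]]),              # jump at tau_1
     np.array([[0.7, 0.1], [0.0, 0.9]])]              # jump at tau_2

def transition_matrix(T, taus, A_pieces, B):
    """W(T, 0): alternate the flow on each subinterval (fundamental matrix
    U_k(t, s) = expm(A_k (t - s)) for constant A_k) with the impulsive jumps
    m(tau_k + 0) = B_k m(tau_k)."""
    pts = [0.0] + list(taus) + [T]
    M = np.eye(A_pieces[0].shape[0])
    for k, Ak in enumerate(A_pieces):
        M = expm(Ak * (pts[k + 1] - pts[k])) @ M      # flow on (tau_k, tau_{k+1}]
        if k < len(B):
            M = B[k] @ M                              # jump at tau_{k+1}
    return M

W_T0 = transition_matrix(T, taus, A_pieces, B)
# The homogeneous PBVP m(0) = m(T) has only the trivial solution exactly
# when I - W(T, 0) is invertible.
print("det(I - W(T,0)) =", np.linalg.det(np.eye(2) - W_T0))
```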
Consider the periodic boundary value problem for the nonhomogeneous linear system of impulsive differential equations (2.20). Lemma 2.9 provides the existence and the explicit form of its unique solution. In our main result, we will use the following integral mean-value theorem.
Main results
In this section, we develop the method of quasilinearization for the periodic boundary value problem for the system of nonlinear impulsive differential equations (2.1)-(2.2). We obtain two monotone sequences which are solutions of appropriately chosen linear impulsive differential systems with periodic boundary conditions. These monotone sequences converge quadratically to the unique solution of (2.1)-(2.2).
Theorem 3.1. Let the following conditions hold.
(2) The functions f_x, g_x exist and are continuous on Ω(α_0, β_0). The function f_x(t, α_0(t))x is quasimonotone nondecreasing in x, and the function (f_x(t, β_0) − g_x(t, β_0))e@x is strictly decreasing in x on [0, T].
Proof. Consider the periodic boundary value problem for the system of impulsive linear differential equations (3.3). The PBVP (3.3) can be written in the form (3.4). Consider the corresponding coefficient matrices. From conditions (2), (3), (4), and (5) of Theorem 3.1, it follows that C_0(t) ∈ Ξ. Therefore, according to Lemma 2.9, the boundary value problem (3.4) has a unique solution, which can be written in the form (2.21)-(2.22). We denote the solution of (3.4) by α_1(t), β_1(t).
The corresponding assertion for β_1(t) can be proved in the same way as for the function α_1(t). We will now prove the convergence of the sequences {α_m(t)}_{m=0}^∞ and {β_m(t)}_{m=0}^∞.
The next theorem is for the case when the lower and upper solutions are completely opposite to those in Theorem 3.1.
Theorem 3.2. Let the following conditions hold.
Proof. Consider the periodic boundary value problem for the system of impulsive linear differential equations (3.46). The PBVP (3.46) can be written in the form (3.48). Consider the corresponding coefficient matrices. From the conditions of Theorem 3.2, it follows that C_0(t) ∈ Ξ. Therefore, according to Lemma 2.9, the boundary value problem (3.46) has a unique solution α_1(t), β_1(t).
We will prove that α_0(t) ≤ α_1(t) and β_0(t) ≥ β_1(t) on [0, T]. Set p(t) = α_0(t) − α_1(t), q(t) = β_1(t) − β_0(t). Then we obtain the system (3.50). The PBVP (3.50) can be written in the corresponding matrix form, where the functions α_{m+1}(t) and β_{m+1}(t) are defined as the unique solution of the boundary value problem for the linear system of impulsive differential equations (3.53). The inequality α_m(t) ≤ β_m(t), t ∈ [0, T], holds, and both sequences are uniformly convergent. Denote the limit functions by u(t) and v(t). From the uniform convergence and the definition of the functions α_m(t) and β_m(t), the validity of the corresponding inequalities follows.
Since the functions α_m(t) and β_m(t) are solutions of the PBVP (3.53), we obtain that the functions u(t) and v(t) are solutions of the limiting PBVP. As in the proof of Theorem 3.1, we can obtain that u(t) = v(t) on [0, T]. We will prove that the convergence is quadratic.
The next theorem is about the case when the PBVP (2.1)-(2.2) has a lower solution as well as an upper solution.
Theorem 3.3. Let the following conditions hold.
(2) The functions f_x, g_x exist and are continuous on Ω(α_0, β_0); f_x(t,x) is nondecreasing in x and g_x(t,x) is nonincreasing in x for t ∈ [0, T]; and, for x ≥ y, the corresponding Lipschitz-type estimates on f_x and g_x hold, where S_1 > 0, S_2 > 0 are constant matrices.
As particular cases of the proved theorems, we can obtain some results for the PBVP for systems of nonlinear ordinary differential equations.
Humor as a Reward Mechanism: Event-Related Potentials in the Healthy and Diseased Brain
Humor processing involves distinct processing stages, including incongruity detection, emotional response, and engagement of mesolimbic reward regions. Dysfunctional reward processing and clinical symptoms in response to humor have previously been described both in hypocretin-deficient narcolepsy-cataplexy (NC) and in idiopathic Parkinson's disease (PD). For NC patients, humor is the strongest trigger for cataplexy, a transient loss of muscle tone, whereas dopamine-deficient PD patients show blunted emotional responses to humor. To better understand the role of the reward system and the respective contributions of hypocretinergic and dopaminergic mechanisms to different stages of humor processing, we examined the electrophysiological response to humorous and neutral pictures given as reward feedback in PD patients, NC patients, and healthy controls. Humor feedback, compared to neutral feedback, modulated early ERP amplitudes likely corresponding to visual processing stages, with no group differences. At 270 ms post-feedback, the conditions showed topographical and amplitude differences over frontal and left posterior electrodes: the humor-versus-neutral difference was absent in PD patients but increased in NC patients. We suggest that this effect relates to a relatively early affective response, reminiscent of the increased amygdala response reported in NC patients. Later ERP differences, corresponding to the late positive potential, revealed a lack of sustained activation in PD, likely due to altered dopamine regulation in reward structures in these patients. This research provides new insights into the temporal dynamics and underlying mechanisms of humor detection and appreciation in health and disease.
Introduction
Research in the field of humor processing has taken several key steps over the past two decades, both in terms of its underlying neurobiology and its psychological functions [1,2]. However, two major dimensions of humor processing have been left relatively unexplored to date. Firstly, neuroimaging studies have largely focused on the spatial characteristics of humor processing using functional magnetic resonance imaging (fMRI; [3–5]), while few studies have examined the temporal dynamics of these processes using magneto-/electroencephalography (MEG/EEG). EEG and MEG studies to date have focused almost exclusively on the dynamics of verbal humor comprehension, with a particular focus on the so-called N400 component [6–8]. Only recently has visual humor been assessed using EEG, in a study on emotional suppression [9]. That study focused on participants' active manipulation of the late positive potential (LPP), which has been linked to underlying activity in reward-related structures [10,11]. Secondly, many fMRI studies found that the regions implicated in humor appreciation and in experiencing positive rewards largely overlap and include dopaminergic regions of the midbrain and ventral striatum, as well as the amygdala [3–5], yet no study has used humorous stimuli as a specific reward signal.
To examine more closely the underlying mechanisms at each processing stage, in particular the engagement of the dopaminergic and hypocretinergic reward system, we tested humor as feedback stimuli in patients with narcolepsy-cataplexy (NC) and idiopathic Parkinson's disease (PD). These patient populations are of interest because of their striking clinical symptoms in response to humorous stimuli and their well-characterized deficits in reward processing. Specifically, humor and laughing are the strongest triggers for cataplexy, a sudden loss of muscle tone triggered by emotions and the clinical hallmark of NC [12]. NC is caused by a deficit in the hypothalamic hypocretin system, which also interacts strongly with the reward system [13–15]. While the motor components of cataplexy have been extensively investigated and attributed to inhibition of spinal alpha motoneurons mediated by ponto-medullary activity, emotional processing itself and the mechanisms by which emotions compromise control of the motor system remain essentially unknown [16]. In previous fMRI studies of NC, we identified dysfunctional activation patterns in midbrain and ventral striatal reward-related circuits and increased amygdala activation in response to humorous pictures [17,18].
Using EEG, we aimed to compare the temporal dynamics and the various stages of humor processing in NC and healthy controls with those of PD, because of the latter group's well-known impairments of the dopaminergic system and blunted response to humor. PD is characterized by a progressive loss of dopamine-producing neurons innervating the dorsal striatum, predominantly leading to a disturbance of motor functioning. However, as the disease progresses to ventral portions of the striatum, or as a result of treatment with dopaminergic agents, PD patients also show deficits in mesolimbic reward functions [19,20], as well as blunted emotional responses and deficits in joke comprehension [21]. Despite these previously documented impairments of humor and reward processing at both the behavioral and neuroimaging levels, this study is the first to examine these issues using EEG in these populations. Based on our previous fMRI finding of enhanced activity in the amygdala and right inferior parietal cortex in NC during humor processing, we hypothesized that narcolepsy-cataplexy patients would show ERP differences at distinct stages of humor processing. In particular, we expected early increases in ERP amplitude during humor feedback, related to a rapid emotional and attentional orienting response. Furthermore, we expected a reduction of the rewarding value of humor due to impaired dopaminergic activity in the PD group, which would result in reduced amplitudes of later ERP components.
Participants
Nineteen NC patients, 15 PD patients, and 19 healthy controls were recruited from the University Hospital Zurich and Clinic Barmelweid. The final statistical analysis of the ERP datasets was performed on 12 participants from each group after the EEG and behavioral inclusion criteria were met (see the analysis section). Table 1 provides the demographic information for the participants. HLA typing was positive for HLA-DQB1*0602 in all 11 NC patients tested (no data for one patient). Hypocretin levels in the CSF could be obtained in 8 of the 12 patients and were undetectable in all of them. International criteria were used for the diagnosis of Parkinson's disease [22]. Each participant signed an informed consent form prior to the start of the experiment. The study was independently approved by the cantonal ethics commissions of both Zurich and Aarau, Switzerland.
As shown in Table 1 and expected from the distinct pathologies, the NC and PD patient groups differed in their levels of sleepiness, with NC patients rating significantly higher on the Epworth Sleepiness Scale [23] (ESS; a scale from 0 to 24 points indicative of long-term daytime sleepiness), in depressive symptoms (measured using the Beck Depression Inventory), and in age, with PD patients being older than the other two groups. Given the inherent differences between patient groups, and our previous finding for NC, healthy controls were selected and matched for age and gender with respect to the NC group. Crucially, due to both ethical and clinical restrictions, 7 of the 12 NC patients maintained their regular level of medication during the experiment. Furthermore, 11 of the 12 PD patients kept to their normal dosages of medication, with one patient being drug-naive. These 11 patients were all taking medication which, in different forms, increases the amount of available dopamine (L-Dopa (Madopar®), rotigotine (Neupro®), or rasagiline (Azilect®)). Although the particular effects of continuing dopaminergic treatment in our PD patients are fairly complex and difficult to predict [24], maintaining patients' medication reduced the likelihood of complete apathy in this patient group [25] and provides a more realistic everyday perspective, as the vast majority of PD patients do indeed receive treatment. The inherent demographic and treatment differences between groups are further considered in the analysis and discussion sections.
Task
Participants completed a time estimation task and were then given feedback based on their performance. The task was presented in a total of 6 blocks of 30 trials each. Prior to each block, the participant was informed that they would be required to estimate durations of either 1, 2, or 5 seconds. Each trial consisted of a neutral picture presented as a cue indicating when they should start their estimation. Participants were instructed to press a button with the index finger of their dominant hand once they believed the indicated duration had passed. After approximately one second (randomly jittered), participants were presented with either a horizontally flipped version of the same neutral image or a slightly altered version of the image which made the picture a humorous one (see Figure 1). Each trial ended with a fixation cross lasting 1-4 seconds (normally distributed jitter around 2.5 seconds), leading directly to the next cue picture. Participants were made aware that estimations within a certain window around the target time would result in changes to the picture that made it potentially humorous, whereas the image would simply be flipped if their response was outside this window. Importantly, the criterion for successful completion of a trial was constantly adjusted so that learning in this task was minimal. Trial success or failure, and hence whether the humorous or the neutral version of the picture was presented, was determined by whether the participant's estimate fell within a certain ± time window around the target. The window for success was initially set to 500 ms around the target and adjusted on a trial-to-trial basis, with correct responses shortening this window by 33% and incorrect responses lengthening it by 33% (a sketch of this rule is given below). This adjustment ensured that participants received approximately 50% successful feedback over the course of the experiment and that feedback remained linked to their actual performance. Note that the time estimation task itself was unrelated to the humor experiment, and it was not our intent to implement a learning algorithm. Thirty-six distinct images were selected as the funniest images (mean humor intensity of 2.2/3) from the database of 100 images used in our previous study [17]. The order of images was pseudo-randomized such that the same image was never presented consecutively, and all images were presented a total of 5 times.
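As an aside, the 33% up/down adjustment can be illustrated with a few lines of code. The sketch below is not the authors' script; the Gaussian estimation-error model and all parameter values are assumptions made only to show that the rule converges on roughly 50% successful feedback.

```python
import random

def simulate_staircase(n_trials=180, start_window=0.5, error_sd=0.4, seed=0):
    """Simulate the adaptive success window: a response is a 'hit' if the
    estimation error falls inside +/- window; hits shrink the window by 33%,
    misses widen it by 33%, pushing the hit rate toward ~50%."""
    rng = random.Random(seed)
    window, hits = start_window, 0
    for _ in range(n_trials):
        error = rng.gauss(0.0, error_sd)           # assumed estimation-error model (seconds)
        hit = abs(error) <= window
        hits += hit
        window *= (1 - 0.33) if hit else (1 + 0.33)
    return hits / n_trials

print("simulated hit rate:", simulate_staircase())
```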
EEG Recording and Processing
EEG was recorded from 125 scalp sites using a HydroCel Geodesic Sensor Net by Electrical Geodesics, Inc. (EGI) [26], sampled at 1000 Hz. Impedances were kept below 30 kΩ on all channels. All EEG pre-processing was performed using BrainVision Analyzer (version 2; Brain Products, Munich, Germany) and Matlab (MathWorks, Natick, MA). For all participants, band-pass filters were applied between 1 and 30 Hz using a modest 12/24 dB slope, including a notch filter at 50 Hz to remove mains-power noise. Data were then down-sampled to 250 Hz, individual bad channels were removed after visual inspection (never more than 6 channels per participant), and classic independent component analysis was performed over the entire length of the continuous data in order to remove components of the EEG associated with artifacts (i.e., electrocardiographic, oculographic, and myographic artifacts, as well as rhythmic tremor-related artifacts in the PD group). The activity in the missing channels was then estimated through topographical interpolation using 3D splines [27]. Channels were then re-referenced to the average electrical activity over all channels [28]. Semi-automatic criteria were used to detect any remaining spurious artifacts, which were then marked and excluded from segmentation on an individual-channel basis (maximal allowed voltage step of 25 µV/ms; maximal absolute difference of 75 µV over a 200 ms window; EEG amplitude within −150 µV and 150 µV). A mean of 5.0 (SD = 2.4) trials of the 180 total were removed for each participant using these criteria. ERPs were created from epochs extending from 200 ms before picture presentation to 1000 ms after the event, with the pre-stimulus interval serving as baseline. For ERPs locked to the cue-picture presentation, only the segments from the 2 s and 5 s estimation trials were used, so as to minimize any overlap with brain activity associated with the decision making and motor preparation required for the actual task response.
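The preprocessing was done in BrainVision Analyzer and Matlab; purely for illustration, an equivalent chain could be sketched with the open-source MNE-Python package as below. The file name, bad-channel labels, ICA component indices, and event extraction are placeholders rather than details taken from the study.

```python
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_egi("subject01.raw", preload=True)    # hypothetical 125-channel EGI recording
raw.filter(l_freq=1.0, h_freq=30.0)                          # band-pass 1-30 Hz
raw.notch_filter(freqs=50.0)                                 # remove mains-power noise
raw.resample(250)                                            # down-sample to 250 Hz
raw.info["bads"] = ["E23"]                                   # bad channels from visual inspection (placeholder)

ica = ICA(n_components=20, method="infomax")                  # ICA over the continuous data
ica.fit(raw)
ica.exclude = [0, 3]                                          # EOG/ECG/tremor components (placeholder indices)
ica.apply(raw)

raw.interpolate_bads()                                        # spline interpolation of removed channels
raw.set_eeg_reference("average")                              # average reference

events = mne.find_events(raw)                                 # feedback triggers (assumed present in the file)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0,
                    baseline=(-0.2, 0.0),
                    reject=dict(eeg=150e-6), preload=True)    # approximate +/-150 microvolt criterion
evoked = epochs.average()                                     # ERP locked to feedback presentation
```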
Current limitations in the analysis procedure, as well as general guidelines in statistical analysis, meant that the ANOVA required equally sized groups. Data from two PD participants could not be used due to technical artifacts in the recording leading to several missing blocks of data. A further PD patient was removed due to motor artifacts that could not be corrected by filtering or ICA, leaving fewer than 30 trials in the final ERP waveform. Thus an upper limit of 12 participants for the two other groups was set by the PD population. Apart from technical problems in the EEG recording itself, a further 3 participants' final ERP waveforms from the control and NC groups were excluded from the final analysis; these were selected based on the highest standard deviation in the 200 ms baseline period, an indicator of the overall quality of the ERP waveform. This resulted in 12 participant datasets for each group.
EEG Analysis
Statistical analysis of the ERP dataset comparisons was performed using a threshold-free cluster-enhancement technique (TFCE), followed by maximum non-parametric permutation statistics for significance testing [29]. This robust statistical approach allows us to analyze all channel-sample pairs across participant groups and conditions while both controlling for multiple comparisons and maintaining optimal sensitivity to potential signal differences. In order to examine group differences and the effect of condition together, an analysis-of-variance approach (TFCE ANOVA) was used as the initial statistic for further permutation analysis [30,31]. Here, F-ratios for the main effects of group (NC, PD, or healthy controls) and condition (neutral or humorous pictures), as well as their interaction, are calculated for the original group datasets. The datasets are then randomly permuted, across conditions first and groups second (to ensure that a single participant's condition files are never separated), and the F-ratios for this new dataset are calculated as well. The neighborhood of each channel-sample pair, both in terms of nearby channels and of nearby time points, is then examined in order to calculate the amount of statistical support provided by its neighbors. Depending on the amount of support (or lack thereof) each data point has, its value is either enhanced or suppressed, giving rise to new TFCE values which represent not only the statistical strength found specifically for that channel in the mass-univariate approach, but also whether neighboring channels and time points show a similar pattern of activity (see [29] for complete details). This process was repeated with 10,000 randomly permuted datasets to obtain an empirical distribution of TFCE values from which to determine statistical significance. Although the method principally relies on detecting local differences in amplitude, by enhancing these statistics using information from both neighboring channels and time points we are still able to detect smaller changes in amplitude reflective of larger shifts of peak location (topography differences) or time (latency shifts).

Figure 1. Experimental task. At the start of each block (of 30 trials), the participant was instructed to estimate 1, 2, or 5 seconds as soon as the first image was presented; there were 6 blocks in total. The first picture presented was always a neutral image; then, depending on the accuracy of the participant's estimation, either a positive, humorous picture was presented or the initial neutral picture was horizontally flipped. A fixation cross was presented for a random duration between 1 and 4 seconds (mean 2.5 ± 1 s) before the onset of the next trial. doi:10.1371/journal.pone.0085978.g001
Two separate TFCE ANOVA analyses were carried out: a one-way TFCE ANOVA examined the presentation of the cue picture across the three experimental groups; the second, a three-by-two mixed-factor TFCE ANOVA on the feedback picture presentation, examined the group effect for both the neutral and humorous pictures. As with standard ANOVA analyses, separate post-hoc analyses using independent t-tests as the initial statistics, followed by TFCE and permutation statistics (TFCE T), were used when appropriate to determine which of the three groups differed from one another. Finally, a single independent TFCE T test was performed to compare the medicated NC patients against the non-medicated NC patients for potential effects of treatment differences on humor processing. Since PD patients maintained their levels of medication, clinically adjusted to their specific motor symptoms, such an analysis was not possible for this group of patients.
Statistical alpha thresholds of interest were set at 0.05 for main effects and 0.20 for interaction effects. This low threshold for interactions was chosen because permutation of raw data has been shown to be particularly weak in the detection of interaction effects [30]. Any interaction of interest was then subjected to a more classical analysis of covariance (ANCOVA), in order to confirm or reject the initial finding and to allow for the inclusion of covariates in the model. This allowed us to assess whether participants' age or sleepiness might have explained any ERP differences. Since both ESS and age are inherently linked to the group differences, new constructs of each were created by subtracting away the group means [32,33]. This leaves the individual variation of each measure within a group intact but makes the constructs mathematically orthogonal to the group differences, so as not to violate the basic requirement of independent predictor variables in the ANCOVA. Depression scores correlated significantly with participants' ESS (r 35 = 0.33, p = 0.047) and were therefore left out of this analysis. In addition, main effects were subjected to additional post-hoc testing for the significant regions of interest, again including ESS and age as covariates. For the later differences, the amplitude values of early components were also included in the model to investigate whether early differences were predictive of late ERPs.
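To make the TFCE-plus-permutation logic tangible, the following is a minimal sketch for a single channel's time course and a simple two-group comparison. The enhancement parameters (E = 0.5, H = 2), the squared t-like statistic, and the toy data are assumptions; the actual analysis operated jointly over channels and time with F-ratios as the initial statistic.

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Threshold-free cluster enhancement of a 1-D statistic: for each threshold h,
    every supra-threshold sample receives extent**E * h**H * dh, where 'extent'
    is the length of the contiguous run it belongs to."""
    out = np.zeros_like(stat, dtype=float)
    for h in np.arange(dh, stat.max() + dh, dh):
        idx = np.flatnonzero(stat >= h)
        if idx.size == 0:
            break
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        for run in runs:
            out[run] += (len(run) ** E) * (h ** H) * dh
    return out

def perm_test(group_a, group_b, n_perm=1000, seed=0):
    """Max-TFCE permutation test on a pointwise squared t-like statistic."""
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    labels = np.array([0] * len(group_a) + [1] * len(group_b))

    def stat(lbl):
        d = data[lbl == 0].mean(0) - data[lbl == 1].mean(0)
        s = np.sqrt(data[lbl == 0].var(0) / (lbl == 0).sum()
                    + data[lbl == 1].var(0) / (lbl == 1).sum()) + 1e-12
        return (d / s) ** 2

    obs = tfce_1d(stat(labels))
    null_max = np.array([tfce_1d(stat(rng.permutation(labels))).max()
                         for _ in range(n_perm)])
    pvals = (null_max[None, :] >= obs[:, None]).mean(1)   # per-sample corrected p-values
    return obs, pvals

# Toy data: 12 "subjects" per group, 250 time samples, an effect at samples 100-130.
a = np.random.default_rng(1).normal(size=(12, 250)); a[:, 100:130] += 1.0
b = np.random.default_rng(2).normal(size=(12, 250))
obs, p = perm_test(a, b)
print("minimum corrected p-value:", p.min())
```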
Cue-Picture Presentation
The one-way TFCE ANOVA with group as the main factor found no significant channel-time pairs at the 0.05 level. Non-significant differences showed two late peaks, at 480 ms and 600 ms, over channels E75 (central-posterior; F 2,33 = 5.174, p = 0.341) and E37 (left-central; F 2,33 = 13.822, p = 0.219), respectively. Both peaks reflected higher ERP amplitudes for the patient groups with respect to the healthy control participants. Figure 2 summarizes these non-significant findings.
Feedback Presentation: Group Differences
The three-by-two TFCE ANOVA showed several time points of significance at the 0.05 level. The main effect of group revealed a single cluster of significant channel-sample pairs (Figure 3). This cluster involved 31 distinct electrodes and ranged from 460 ms to 550 ms after feedback presentation. Group differences peaked at E86 (right-parietal) at 504 ms (F 2,33 = 11.450, p = 0.026).
Post-hoc analysis was performed using a single ANCOVA in which the orthogonalized constructs of age and ESS, as well as two regions of interest for earlier components around 110 and 170 ms, were used as covariates to examine the effect of group on a region of interest effectively describing the significant cluster indicated by the TFCE ANOVA test (channels E78, E79, E85, E86, E87, E92, and E93; from 488 to 512 ms). This test revealed that, although a stronger earlier component at 170 ms significantly predicted stronger later amplitudes (F 1,29 = 7.152, p = 0.012), the late potential was still primarily dependent on the participant's group (F 2,29 = 7.826, p = 0.002). No other covariate reached significance. Planned contrasts within the ANCOVA revealed that the PD group was significantly different from both healthy controls (p = 0.008) and NC patients (p = 0.001), with no difference between controls and NC here (p = 0.499).

Feedback Presentation: Condition Differences

Figure 4 summarizes the main differences found between conditions, that is, whether a neutral or a humorous picture was given as reward feedback. Although the two ERPs began differentiating from baseline levels as early as 20 ms post-presentation, the first significant cluster of differences peaked over channel E76 (central-posterior) at 112 ms (F 1,33 = 80.900, p < 0.001), reflecting a higher positive ERP amplitude for neutral pictures. Shortly thereafter, around 170 ms, humorous pictures showed a higher overall negative amplitude, peaking over channel E66 (left-posterior; F 1,33 = 54.660, p < 0.001). From 190 ms to 210 ms, both conditions showed a generally similar pattern of activity, except for a small cluster of significant channel-sample pairs over the left posterior-temporal region, where humorous pictures had a significantly stronger negative amplitude (F 1,33 = 10.339, p = 0.018).

After 240 ms, the main topographies of the two conditions began to differentiate, in that neutral pictures generated a more left-posterior localized ERP, while humorous pictures generated a centralized ERP. Statistically, this resulted in two main clusters of differences at the peaks of each ERP: E58 at 270 ms (F 1,33 = 31.835, p = 0.002) and E105 at 280 ms (F 1,33 = 19.140, p = 0.003), respectively. Around the same time, a slower right-central positive ERP component also showed significant differences between 280 ms and 400 ms on its increasing initial slope. The differences between the two conditions peaked over channel E93 at 360 ms (F 1,33 = 31.592, p = 0.002). Here, humorous pictures showed both a larger positive amplitude and a shorter latency within the same topography. The negative reflection of this late positive potential was also highly significant around the left fronto-temporal electrodes from 320 ms to 420 ms post-feedback (peak at E48; F 1,33 = 43.943, p = 0.001). Finally, the two ERPs differentiated once again, with the humorous condition showing sustained higher amplitudes whereas the neutral condition had returned to baseline levels. This effect lasted from around 630 ms until 770 ms, with a positive peak over channel E96 (right-posterior) and a negative peak over E11 (fronto-central) around 670 ms.
Feedback Presentation: Group-by-Condition Interaction
The group-by-condition interaction peaked over channel E4 (right fronto-central) around 270 ms post-feedback (F 2,33 = 11.631, p = 0.158; Figure 5). The fact that permutation of raw data is known to be insensitive in the detection of interaction effects [30], together with the consistent topography of this interaction between 240 ms and 280 ms, calls for its further investigation. We therefore submitted the difference ERP between neutral and humorous feedback, for a region of interest around the peak interaction channel-time point (channels E3, E4, E5, E10, E11, E118, and E124, from 264 ms to 276 ms), to a separate ANCOVA including age, ESS, and the two earlier components, to examine whether the later differences were related to the earlier condition differences. We found that only the main effect of group significantly accounted for the difference between the two conditions (F 2,29 = 5.146, p = 0.012). The earlier ERP component at 170 ms showed a trend (F 1,29 = 3.194, p = 0.084), while all other covariates were not significant. Planned contrasts indicated that the interaction was primarily driven by the divergence of values for the NC group, in that maximal differences were found between the NC and PD groups (p = 0.004), then between NC and the healthy controls (p = 0.073), while the healthy controls and PD patients showed overall similar condition differences (p = 0.267). Moreover, when examining the effect of condition in each group separately, only the healthy controls (p = 0.036) and the NC patients (p = 0.004) showed significant differences.
Medicated vs Unmedicated Narcolepsy-Cataplexy
An independent TFCE T test reported no significant differences between medicated and unmedicated patients. However, with only five patients tested against seven, the power would clearly have been too low for such a conservative approach to yield significant results. We therefore examined the maximal difference, which peaked at E23 around 155 ms (T 10 = 4.391, p = 0.492). The topography around this peak corresponded well to the main effect of condition around 170 ms described earlier. However, while the condition effect found between neutral and humorous pictures relied on amplitude differences, the main difference between medicated and unmedicated patients was clearly a latency shift. Perhaps somewhat counterintuitively, the medicated patients showed a delayed onset and peak of this ERP by a mean of 32 ms (negative peak: NC med = 152 ms, NC unmed = 184 ms). Importantly, this split group did not show ERP differences over E4 at 270 ms (the point of the significant group-by-condition interaction), with a mean difference of only 0.16 µV (uncorrected t-test; T 10 = 0.258, p = 0.802).
Discussion
Here we used high-density EEG to assess the temporal and topographical dynamics of humor processing as a reward signal. We included healthy participants and two clinical groups of interest, since NC patients are known to have abnormal emotional responses to humor and PD patients show impaired humor appreciation and reward processing. We were especially interested in humor processing in narcolepsy patients, since humor is the main trigger of cataplexy, indicating a strong interaction of emotions and the motor system in NC. While our previous studies identified recruitment of amygdalo-hypothalamic and frontal areas during humor processing, our present findings indicate that distinct stages of humor processing itself may contribute to the mechanisms underlying cataplexy. The ERP results suggest that the processing of humorous pictures may involve rapid differences in early processes, followed by an emotional response to stimulus incongruity and then by a humor appreciation phase, during which the positive reinforcement value of the stimulus is processed. The later ERP component differences found in our patient groups provide important clues to the origins and functions of these components. The ERP at 270 ms showed an increased response to humorous pictures in NC patients compared to PD patients and healthy controls, while PD patients showed a late overall reduction in response amplitude to both neutral and humorous feedback after 500 ms. Given that the earlier components are not predictive of the later ERP differences, and thus are likely to be caused by distinct underlying mechanisms, each is discussed separately.
Early evoked responses
The earliest ERPs generated by feedback presentation were found at 110 ms and 170 ms, with larger positive amplitudes for neutral pictures and, later, larger negative amplitudes for humorous pictures, respectively. Given that for both ERPs the differences were maximal over central posterior channels, we hypothesized that both of these peaks correspond to visual processes. In terms of latency, magnitude, and topography, these ERPs correspond well to the well-researched visual evoked potentials P100 and N170. Previous research has found that processing of low-spatial-frequency (global) characteristics occurs primarily around the P100, while processing of high-spatial-frequency, fine-feature characteristics occurs around the N170 [34–36]. These theories may also best explain the differences found here, since neutral pictures were created through a global transformation of the cue picture (a horizontal flip), hence the higher P100 amplitude, while humorous pictures generally entailed a smaller, local addition to the cue picture, hence the higher N170 amplitudes. This may also explain the rapid shift of topography at the 200 ms mark, with its central positivity probably representing the less well understood P2 ERP. This component is thought to handle more advanced processing of stimuli, such as feature detection of salient stimuli [37] and a further attentional lock onto a relevant stimulus [38]. Although speculative, the fact that for the P2 we only found significantly stronger amplitudes for humorous stimuli in electrodes over the left temporal-occipital junction may reflect the typically higher BOLD activity commonly found in this area [3,4,39].
Later Differences in Response to Humor Feedback
The significant differences found around 270 ms have two important aspects. The first is that topographic changes between the two conditions emerged here, as opposed to the amplitude differences involved in earlier components, thus suggesting a divergence in the brain areas involved in processing. Moreover, the first group differences also appeared at this stage, with overall reduced amplitudes in PD patients, while NC patients tended to show specifically increased ERP amplitudes to humorous feedback. We interpret the increased response in NC patients as an increased sensitivity to humor, indicating that dysfunctions in emotional processing at a relatively early stage may contribute to the pathophysiology of cataplexy. The brain's increased sensitivity to humor may represent the initial step in triggering downstream processes that lead to an affective loss of muscle tone control. While the downstream pathways of cataplexy have been extensively investigated and attributed to descending ponto-medullo-spinal activity similar to that underlying REM-sleep atonia, the mechanism by which humor induces motor weakness remains essentially unknown. Given that early ERPs are similar between the groups and differences emerge later, at 270 ms, we conclude that attentional or cognitive processes such as ambiguity resolution or appreciation of humor, but not initial visual processing, are critically implicated. The observed trend toward increased ERP amplitudes in NC patients argues against attentional resources as an explanation of the ERP differences, given that other studies have found reduced amplitudes in these patients for a variety of tasks [40–42]. Since we used humor to activate the reward system, including the ventral striatum and amygdalae, it is likely that the increased sensitivity to humor is related to reward processing itself. The increase in NC may be the electrophysiological counterpart of our previous fMRI finding of clear hyperactivity of the amygdala in these patients in response to humorous stimuli [17]. This raises two important possibilities in relation to patients' cataplexy: is this increased activity a reflection of an oversensitive amygdala which in turn acts on the motor system [43,44], or might it be an active, voluntary, and possibly learned suppression of the emotional response in NC in order to avoid a cataplectic attack [45]? The reduced amplitude of PD patients here is also in line with this ERP reflecting amygdala activity, in that these patients have shown structural [46,47] and functional [48,49] brain abnormalities, as well as behavioral changes whereby PD patients are impaired on tasks known to involve the amygdala, such as the Iowa Gambling Task and the Game of Dice Task [50,51].

Figure 5. Individual event-related potential (ERP) amplitudes for each group and condition at the peak interaction. Average amplitudes are shown for a right centro-frontal region of interest at 270 ms post-feedback presentation. For patients with narcolepsy-cataplexy and for healthy controls, humorous feedback produced significantly higher ERP amplitudes compared to neutral feedback. Parkinson's patients do not show this ERP pattern. Moreover, narcolepsy-cataplexy patients tended to show an even larger difference between the two feedback conditions than healthy controls. The embedded topography indicates the region of interest as well as the statistical differences. Red values indicate more reliable statistical differences for the interaction between group and condition. doi:10.1371/journal.pone.0085978.g005
The right central-positive ERP (280-650 ms) initially differed by the main effect of condition, with humorous stimuli showing an earlier initial slope and higher amplitudes, and then again later, when PD patients ultimately showed reduced amplitude in line with an inability to sustain activation of the ERP. The properties of this ERP fit well with the LPP, primarily found in research on affective picture and reward processing [52]. This ERP generally consists of a large positive deflection over central electrodes between 300 and 600 ms and has been reported to be more lateralized to the right hemisphere, as was also found here [53,54]. This potential has been shown to be reduced for neutral pictures in comparison to emotionally salient stimuli, and LPP amplitude has been shown to correlate positively with the fMRI signal in mesolimbic reward structures for pleasant pictures [10,11]. Hence, the earlier effect of condition likely reflects a faster and stronger association of the humorous pictures as a more emotionally salient reward, whereas the delayed response for neutral pictures may reflect the fact that, although in and of itself emotionally neutral, such a picture nonetheless represents negative feedback to the reward system. In this framework, the finding that the ERP amplitudes of PD patients return to baseline levels faster than those of either NC patients or controls may reflect a general dysregulation of dopamine in structures of the reward system [19,20,55], especially for those patients on dopaminergic medication, as most are [24,56].
Limitations
NC and PD patients and healthy controls differed on several clinical aspects beyond alterations in the dopamine and hypocretin systems. As expected, NC and PD patients scored higher for chronic sleepiness than healthy controls, with NC patients' sleepiness ratings still higher than those of PD patients. However, it is unlikely that sleepiness explains the group differences in these data, because maximum differences were observed between the two patient groups (whereas the sleepiness pattern would predict the strongest difference when comparing the patients to the control group). Furthermore, PD patients were significantly older than both NC patients and controls, but two lines of reasoning argue against age as a direct cause of the significant late ERP differences shown by PD patients. Firstly, we found no such significant differences in the ERPs for the presentation of the cue picture, with NC and PD patients actually showing higher, albeit non-significant, overall amplitudes compared to controls for the late ERP component. Secondly, when within-group age variation was included as a covariate in the post-hoc analyses, it was shown to have no significant independent effect on the ERP amplitudes.
A second limitation of the present study relates to the potential influence of DA-modifying medication in both patient groups. Although 5 of the NC patients were drug-free, 4 patients regularly took modafinil and 6 were under sodium oxybate (3 patients were taking both medications). Modafinil is thought to increase the availability of extracellular DA by inhibiting DA transporters [57–59], while sodium oxybate primarily acts on the GABAergic system but may also lead to an increase of DA levels in mesolimbic reward structures through downstream disinhibition of DA neurons [60]. However, when comparing medicated and non-medicated NC patients, we found similar ERP amplitudes in both subgroups at the peak channel and time point of the interaction. It therefore seems unlikely that medication in the NC group played a major role in the results or their interpretation. In the PD group, patients maintained their prescribed levels of medication. Although it is clear that, under the effects of medication, the amount of available extracellular DA is bound to increase compared to baseline, it is nonetheless difficult to predict whether medication was sufficient for normal functioning of the mesolimbic reward areas or in fact created a detrimental excess of DA [61,62]. DA levels and effects post-medication have been shown to depend on baseline levels of DA in different portions of the ventral and dorsal striatum [63], disease progression [24], genetic variations [64], and the cognitive process under evaluation [65]. Further studies should examine drug-naïve PD patients, as well as patients on and off dopaminergic treatment, in order to determine whether the lack of a sustained ERP was indeed due to dysregulation of their DA system or perhaps to a desensitization of the reward system induced by prolonged DA treatment [24].
"Psychology",
"Biology"
] |
Electrical transport in lead-free Na0.5Bi0.5TiO3 ceramics
Lead-free Na0.5Bi0.5TiO3 (NBT) ceramics were prepared via a conventional oxide-mixed sintering route and their electrical transport properties were investigated. Direct current (DC, σDC) and alternating current (AC, σAC) electrical conductivity values, polarization current (first measurements) and depolarization current, current-voltage (I-U) characteristics (first measurements), and the Seebeck coefficient (α) were determined under various conditions. The mechanism of depolarization and the electrical conductivity phenomena observed for the investigated samples were found to be typical. For low voltages, the I-U characteristics were in good agreement with Ohm's law; for higher voltages, the observed dependences were I-U^2, I-U^4, and then I-U^6. The low-frequency σAC followed the power law σAC ∝ ω^s (ω is the angular frequency and s is the frequency exponent). The exponent s was equal to 0.18-0.77 and 0.73-0.99 in the low- and high-frequency regions, respectively, and decreased with increasing temperature. It was shown that the conduction mechanisms involve the hopping of charge carriers at low temperatures, small polarons at intermediate temperatures, and oxygen vacancies at high temperatures. Based on the AC conductivity data, the density of states at the Fermi level and the minimum hopping length were estimated. Electrical conduction was found to undergo p-n-p transitions with increasing temperature. These transitions occurred at the depolarization temperature Td, at 280 ℃, and at the temperature of the maximum of electric permittivity Tm, as is typical of NBT materials.
Introduction
Lead zirconate titanate (PZT) and PZT-based ceramics have been widely utilized in various electromechanical devices [1,2]. Nevertheless, ceramics with lead content present a considerable environmental hazard during the manufacturing process, usage, and subsequent recycling. Lead-free materials have therefore attracted attention as potential replacements for lead-based ceramics. In NBT, the tetragonal and rhombohedral phases appear to coexist over a wide temperature range of 200-350 ℃ [15–19]. The tetragonal phase is non-polar (or weakly polar) and the rhombohedral phase is ferroelectric. The temperature dependence of the electric permittivity of NBT has a maximum at Tm ≈ 320 ℃, related to relaxation processes which should originate from both electrical and mechanical interactions between the polar and non-polar phases [20], and a local anomaly at the so-called depolarization temperature Td ≈ 190 ℃, which shows weak relaxation behavior [19]. Transmission electron microscopy (TEM) investigations suggest that the rhombohedral-to-tetragonal phase transformation may involve an intermediate modulated orthorhombic (Pnma) phase [21]. Neutron scattering measurements indicate that at high temperatures (far above Tm), unstable polar regions arise [22]. The correlation radius of these regions increases with decreasing temperature, and below 280 ℃ they are stable. They act as centers for the nucleation of the ferroelectric phase, which occurs below a temperature of about 200 ℃.
In ferroelectrics, the study of electrical transport characteristics (particularly the order and nature of the electrical conductivity) is extremely important, since they affect other associated properties such as piezoelectricity, pyroelectricity, and the poling conditions for these materials. However, electrical transport in NBT materials has rarely been investigated [23–25]. There are no data on the polarization current or I-U characteristics, and only a few data on the Seebeck coefficient (α) [26], for this material.
In this paper, investigations of the charging currents, depolarization currents, I-U characteristics, electrical conductivity, and α of NBT ceramics are presented. The study provides deep insight into the charge transport mechanism in NBT.
Experimental
NBT ceramic samples were prepared via the conventional solid-state reaction method [3,5]. High-purity Na 2 CO 3 (99.99%), Bi 2 O 3 (99.99%), and TiO 2 (99.9%) reagents were used. Hygroscopic Na 2 CO 3 was first dried at 200 ℃ for 1 h in order to remove the absorbed water. A mixture of the powders in stoichiometric ratios was homogenized in an agate mortar for 4 h, uniaxially pressed into pellets at 120 MPa, and calcined at 700 ℃/1.5 h, 750 ℃/1 h, and 800 ℃/2 h in air. After the third calcination, the product was crushed into fine powders, pressed, and subsequently sintered at 1100 ℃/1 h and 1160 ℃/1.5 h. These conditions are very similar to those applied by other groups [27][28][29].
The formation and quality of the compounds were verified using X-ray diffraction (XRD) analysis. The relative density of the specimens was measured using the immersion method based on Archimedes' principle and was found to be greater than 95% of the theoretical density (which was assumed to be 5.998 g·cm -3 [30]). This implies that the conditions selected for the preparation of the specimens were appropriate.
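For illustration, the immersion measurement reduces to a one-line application of Archimedes' principle. The balance readings below are invented; only the theoretical density of 5.998 g·cm-3 [30] is taken from the text.

```python
# Hypothetical balance readings (grams) for one pellet, immersed in water at room temperature.
m_dry, m_immersed, rho_water = 2.412, 2.009, 0.997  # g, g, g/cm^3

rho_sample = m_dry / (m_dry - m_immersed) * rho_water   # Archimedes' principle
relative_density = rho_sample / 5.998                   # theoretical density of NBT [30]
print(f"density = {rho_sample:.3f} g/cm^3, relative = {relative_density:.1%}")
```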
Direct current (DC, σDC) electrical conductivity was measured over the temperature range from room temperature (RT) to 600 ℃ using a Keithley 6517A electrometer, whereas alternating current (AC, σAC) electrical conductivity and impedance spectroscopy measurements were performed over the frequency range from 100 Hz to 2 MHz and the temperature range from RT to 600 ℃, on samples with silver electrodes, using a GW 821LCR meter.
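As a hedged sketch of how σAC and the frequency exponent s mentioned in the abstract can be extracted from the LCR readings, one can use σAC = ω·ε0·ε′·tanδ for a parallel-plate sample and take s as the slope of a log-log fit. All numerical values and the sample geometry below are invented.

```python
import numpy as np

eps0 = 8.854e-12                        # vacuum permittivity, F/m
d, A = 1.0e-3, 78.5e-6                  # assumed sample thickness (m) and electrode area (m^2)

# Invented LCR readings: frequency (Hz), parallel capacitance (F), loss tangent
f    = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
Cp   = np.array([5.0e-10, 4.6e-10, 4.3e-10, 4.1e-10, 4.0e-10])
tand = np.array([0.30, 0.12, 0.06, 0.04, 0.03])

omega = 2 * np.pi * f
eps_r = Cp * d / (eps0 * A)             # relative permittivity (parallel-plate geometry)
sigma_ac = omega * eps0 * eps_r * tand  # = omega * eps0 * eps'' (S/m)

# Jonscher-type power law sigma_AC ~ omega**s: s is the log-log slope
s = np.polyfit(np.log10(omega), np.log10(sigma_ac), 1)[0]
print("frequency exponent s =", round(s, 2))
```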
The polarization/depolarization currents and I-U characteristics were measured using the Keithley 6517A electrometer. These measurements require precise temperature stabilization to avoid the influence of pyrocurrents. Stabilization to within 0.001 ℃ was achieved. In the case of polarization currents, the current flowing from the sample under an applied electric field (5,15, and 20 kV/cm) was measured. When measuring depolarization current, the sample was polarized at 5, 15, and 25 kV/cm for 1 or 2 h at the chosen temperatures. It was then short-circuited and the depolarization current flowing from the sample was measured.
The Seebeck coefficient α was measured in the temperature range of 50-370 ℃. The sample was placed in a thermostat between heated silver blocks, which made it possible to create a temperature gradient inside the sample (for more details, see Ref. [26]). After the sample reached the selected temperature, the temperatures of the electrodes were controlled in such a way as to achieve a 10 ℃ difference (∆T) between them. The values of α were determined from the slope of the linear dependence between the electromotive force (E) and ∆T.
Results and discussion
To characterize the obtained samples, X-ray, microstructural, and electric permittivity investigations were performed.
The obtained samples have a pure perovskite structure (Fig. 1(a)). The splitting of the (110)c and (111)c peaks indicates rhombohedral symmetry. However, a small shoulder on the left side of the (200)c peak (indicated by the arrow) suggests a more likely mixed-phase nature of NBT (e.g., regions of the tetragonal phase are present in the rhombohedral matrix [3]). The weak reflection below the (111)c peak (indicated by the arrow) is evidence of the presence of the space group R3c. Figure 1(b) shows the microstructure of the polished and chemically etched surfaces and the energy-dispersive spectroscopy (EDS) analysis of the NBT ceramics. As can be seen, grains of different sizes are densely packed and homogeneously distributed throughout the sample's surface. No secondary phase or unreacted starting reagents were observed in the sample. A small number of scattered pores are observed, which indicates that the samples exhibit a certain degree of porosity. The average grain size, determined by counting the number of grains along the diagonal, is about 1.7 μm. EDS analysis indicated that the distribution of all elements throughout the grains was homogeneous and that the composition, within experimental error, was very close to the stoichiometric one.
Two anomalies are visible in the temperature dependence of the electric permittivity of the investigated NBT ceramics: one at a lower temperature (at Td ≈ 190 ℃) and the other at a higher one, Tm ≈ 318 ℃ (Fig. 2). The first anomaly, at the so-called depolarization temperature (Td), is related to the disappearance of the long-range ferroelectric phase, and the second one, at Tm, is associated with the maximum of the electric permittivity. The large thermal hysteresis can be evidence of a first-order phase transition and can reflect internal stress caused mainly by the coexistence of the rhombohedral and tetragonal phases (inset (a) of Fig. 2). Note that this hysteresis appears below Tm, which indicates that the material is metastable in this temperature range. The anomaly at Td shows relaxor-like features (inset (b) in Fig. 2). A broad anomaly in the temperature dependence of the dielectric loss tanδ(T) was observed near Td (Fig. 2).
The I-U characteristics, which were plotted for current values measured 10, 100, and 1000 s after the electric field had been applied, and under steady-state time (ts) current conditions, are shown in Fig. 3. As can be seen, Ohm's law was obeyed at low electric field strengths, with the ohmic range depending on temperature. However, for the strongest fields, the dependence I-U^2 is followed. Finally, four slopes of the I(U) plots are clearly visible: I-U, I-U^2, I-U^4, and I-U^6 (see the steady-state current conditions presented in Fig. 3(d)). The slopes were obtained from linear fits of the I(U) plots. According to the space-charge-limited current (SCLC) theory [31], strongly defected grain surfaces provide a source of deep trap states, in which the trapped charge carriers are excited by the applied electric field and thermal energy. Below the onset electric field, ohmic current flows. For higher electric fields, the current characteristics start to follow the tendency I-U^α, where α = 1.2-2 [31]. The concentration of excess injected charge carriers at high electric fields is greater than that at thermal equilibrium. When the temperature increases, the concentration of equilibrium charges increases, which can shift the conditions for the appearance of SCLC to higher electric fields. The results shown in Fig. 3(d) clearly confirm this tendency. When the electric field reaches the trap-filled-limit value, the current increases more rapidly and the I-U characteristics depend on the trap distribution in the form I-U^n (n = 2, 4, 6, ...), as in the case of the presented results. Figure 4 presents the time dependence of the depolarization current of the NBT ceramics. As can be seen, each curve features two straight lines with different slopes, separated by an intermediate interval [32]. As the temperature increases, the time of this crossover shifts towards lower values. The depolarization current (Id) follows an inverse power law,
Id = A·t^(−p), (1)
where A is a constant dependent on temperature and t is the time. The power index p depends on field strength, polarization time, and temperature [32].
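A minimal sketch of extracting the power index p of Eq. (1) from a measured decay is given below; the synthetic data stand in for the curves of Fig. 4, and the fitted values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def i_depol(t, A, p):
    """Eq. (1): depolarization current I_d = A * t**(-p)."""
    return A * t ** (-p)

# Synthetic decay standing in for a measured I_d(t) curve at one temperature
t = np.logspace(0, 4, 50)                              # 1 s ... 10^4 s
noise = 1 + 0.05 * np.random.default_rng(0).normal(size=t.size)
i_meas = i_depol(t, A=2.0e-9, p=0.8) * noise

(A_fit, p_fit), _ = curve_fit(i_depol, t, i_meas, p0=(1e-9, 1.0))
print(f"A = {A_fit:.2e} A, p = {p_fit:.2f}")
```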
The activation energy determined from depolarization current values increases from 0.29 ± 0.02 eV, for the temperature range from RT to 190 ℃ (the range for which the ferroelectric phase is observed), to 0.48 ± 0.02 eV, for the temperature range of 190-220 ℃ (the range for which the rhombohedral/tetragonal phases are observed) [32]. This is the result of the appearance of new energy levels with higher activation energy, mainly due to changes in the structure of the material. The processes that take place during depolarization are related to the processes that are characteristic of the electrical conduction.
In general, two components of the depolarization current may be distinguished: (i) the current related to polarization, which disappears due to the reorientation of domains (ferroelectric component), and (ii) the current resulting from the decay of other kinds of polarization, e.g., dipole and ionic (non-ferroelectric component). Since there are correlations between certain types of polarization, there is a connection between ferroelectric and non-ferroelectric currents. Localized states (potential centers) of different origin may occur in NBT. Free charges and charges injected from the electrodes (in high electric fields) may be trapped by these centers, which may include domain walls, point defects, or dislocations. In NBT, point defects associated with, among others, oxygen deficiencies may occur, which leads to the subsequent appearance of long-range potential centers. After the electric field is switched off, the domains randomize, and charges trapped by domain walls, point defects, and dislocations are released. This means that the sample undergoes depolarization, and the depolarization current flows through it [33]. The migration of charge carriers between potential centers can be an important transport mechanism for the polarization/depolarization/electrical conduction processes. Figure 5 shows the polarization current at several temperatures and for applied electric fields of 5, 15, and 20 kV/cm. As can be seen, the time required for polarization saturation depends on the strength of the polarizing electric field and on temperature. In a low polarizing field, polarization reaches saturation after a long time (65,000 s). In general, the electric field applied to a ferroelectric material causes a change in the domain configuration and the displacement of free/localized charge carriers. The changes in domain configuration result from the movement of domains/domain walls and the nucleation of new domains, with the rates of these two processes depending on electric field strength and temperature. In weak electric fields, the polarization process takes a long time [34]. As free charges undergo localization due to the trapping and screening of polarization, their number decreases with time and the sample relaxes towards equilibrium. In this state, only the DC conductivity current flows through the sample (Fig. 6). Before the state of equilibrium is reached, and immediately after the application of the electric field, several transient currents are observed [35]. Examples include the space-charge-limited transient current, which decays rather rapidly. As the transient current associated with the movement of the domains/domain walls decays rather slowly (particularly for low fields), the total transient current at longer times is determined by the rate of this movement.
In perovskite ferroelectric materials, oxygen vacancies are considered mobile carriers, and potential centers associated with them may be observed. The ionization of these vacancies creates conduction electrons, a process which is represented in Kröger-Vink notation [36] by the first and second ionization steps V_O ↔ V_O^• + e′ and V_O^• ↔ V_O^•• + e′. The released electrons may bond to Ti^4+ in the form Ti^4+ + e′ ↔ Ti^3+ (e.g., hopping of electrons between localization sites) (6). Electrons trapped by Ti^4+ ions or oxygen vacancies may be thermally activated, enhancing conduction. Doubly charged oxygen vacancies are considered to be the most mobile charges in perovskites and to play an important role in the conduction process [37]. In NBT, the volatilization of A-site elements during the sintering process results in the generation of oxygen vacancies, which compensate for the negatively charged A-site vacancies.
Whilst DC conductivity requires the migration of carriers from one electrode to another, AC conductivity is connected with their short-range motion. The latter type of conductivity depends on temperature and the frequency of the electric field.
The plot of lnσ as a function of 1000/T for various frequencies, including σ_DC, is shown in Fig. 6. As can be seen, the electrical conductivity increases with increasing temperature, indicating the negative temperature coefficient of resistance (NTCR) behavior of a semiconductor, with anomalies at T_d and T_m. σ_DC increases with increasing temperature, and the linear correlation between lnσ_DC and 1000/T in some temperature regions suggests the validity of the Arrhenius relation σ_DC = σ_0·exp(-E_c/(k_B·T)), where σ_0 is the pre-exponential factor and E_c, k_B, and T are the activation energy of conduction, Boltzmann's constant, and the absolute temperature, respectively. A careful inspection of Fig. 6 reveals two linear parts of the lnσ_DC(1000/T) curve in the low-temperature range: from RT to about T_d and from T_d to about T_m. These two linear parts, with two different activation energy values, may be predominantly related to different scattering mechanisms in different temperature ranges (in the low-temperature region, a long-range rhombohedral ferroelectric state is observed; at higher temperatures, the rhombohedral and tetragonal phases coexist). This may be related to a certain change in the conduction mechanism at T_d. Finally, five linear parts of the DC conductivity curve can be distinguished, with five different activation energies: (i) ca. 0.06 eV for the range of the lowest temperatures, (ii) ca. 0.04 eV in the temperature range from T_d to T_m, (iii) ca. 0.29 eV for the temperature range from T_m to 450 ℃, (iv) ca. 0.65 eV for the temperature range of 450-540 ℃, and (v) about 1.96 eV for the high-temperature range, i.e., 540-600 ℃ (Table 1). Thus, the electrical conductivity measurements also showed the intermediate temperature of 450 ℃ to be significant aside from the well-known temperatures (540 ℃, T_m, and T_d). These activation energies are markedly lower than the optical energy gap of 3.2 eV [38], which is why electrical conduction in NBT may be said to be mediated by impurities. Since the activation energies calculated from electrical conductivity correspond to those obtained from depolarization current values, it can be concluded that both phenomena share the same underlying mechanism. The ion distribution in NBT is expected to be inhomogeneous, which causes a certain degree of disorder. In this case, charge carriers (electrons, polarons, holes, ions, etc.) can move between localized states. This movement is mainly caused by hopping, trapping/de-trapping, or excitation. The value of the activation energy for the ferroelectric phase (0.06 eV) may be related to carrier hopping between localized sites (e.g., Ti^4+ + e′ ↔ Ti^3+). The value of 0.04 eV may be associated with small polarons created by electron- and/or hole-phonon interactions, reinforced by lattice mismatch in the range in which the rhombohedral and tetragonal phases coexist. The values of 0.29, 0.65, and 1.96 eV suggest that conduction in the range of higher temperatures may proceed via ionic charge carriers mediated by oxygen vacancies (motion of first- and/or second-ionization oxygen vacancies). The increase in conductivity in this temperature range can be attributed to the increase in the concentration of ionized vacancies.
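The activation energies quoted above follow from linear fits of lnσ_DC against 1000/T (equivalently 1/T) over each segment; a minimal Python sketch of such a fit, using illustrative rather than measured values, is given below.

```python
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(T_kelvin, sigma):
    """Fit ln(sigma) = ln(sigma_0) - E_c/(k_B*T) over one linear segment
    and return (E_c in eV, sigma_0)."""
    slope, intercept = np.polyfit(1.0 / T_kelvin, np.log(sigma), 1)
    return -slope * k_B, np.exp(intercept)

# Illustrative data generated with E_c = 0.29 eV (not the measured values).
T = np.linspace(600.0, 720.0, 8)            # K
sigma = 1e2 * np.exp(-0.29 / (k_B * T))     # S/m
E_c, sigma_0 = activation_energy(T, sigma)
print(f"E_c = {E_c:.2f} eV, sigma_0 = {sigma_0:.1f} S/m")
```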
The increase of σ_AC with increasing temperature can be caused by the increased density and/or mobility of free carriers. In the low-frequency range, the electric field acts on the charge carriers over a long period, which favors their increased localization. Since localized charge carriers are eliminated from further transport, the electrical conductivity decreases in this frequency interval (Fig. 6). AC conductivity varies with frequency (Fig. 7), which may be associated with the existence of free as well as bound carriers. Conductivity increases with frequency, which leads to the conclusion that bound carriers play a predominant role in the conduction process in NBT. The increase in conductivity with frequency may be explained by the hopping of carriers between trap levels situated in the energy gap, as expressed by the power law σ_AC = A·ω^s [39], where s is the frequency exponent, which is a function of both frequency and temperature and generally ranges from 0 to 1. The exponent s was calculated from the slopes of the lnσ_AC vs. lnf dependence (Fig. 7). The exponent s decreases with increasing temperature, as predicted by the hopping model. AC conduction in NBT can therefore be concluded to be of the hopping type (e.g., the short-range translational-type hopping of charge carriers). This suggests that the conduction process is thermally activated. For the low-frequency range (up to about 10 kHz), s is in the range of 0.18-0.77. For the higher-frequency region, s ranges from 0.73 to 0.99. In both the low- and high-frequency regions, the value of s tends to unity at low temperatures, which is another indication that the conduction mechanism in NBT is of the hopping type. If s = 1, the interaction between neighboring dipoles is almost negligible (Debye behavior). The temperature dependence of the Seebeck coefficient (α) is shown in Fig. 8. The fact that α is almost completely independent of temperature in the low-temperature range, up to about T_d, indicates that carrier mobility rather than carrier concentration is thermally activated in this temperature interval. These are the results expected for the hopping model. The positive value of α in this temperature range (from RT to about 200 ℃) indicates that p-type conductivity is predominant. The exposure of oxygen vacancies to an oxygen-containing environment during the cooling of the samples after the sintering process, or during the annealing step, creates holes (h^•), leading to p-type conduction behavior (½O_2 + V_O^•• ↔ O_O + 2h^•). These holes have higher mobility than oxygen vacancies. α decreases with a further increase in temperature; at about 220 ℃ it changes sign from positive to negative and then increases up to 235 ℃. This means that electron-mediated conduction (e.g., the excitation of minority carriers at high temperature) increases and that the conduction mechanism changes to n-type. In the temperature range from ~220 to 235 ℃, the carrier concentration decreases and/or the carrier mobility increases. At about 310 ℃, α changes sign from negative to positive and slightly increases up to about 320 ℃ (near T_m), after which it starts to decrease. This dynamic temperature behavior of α, which indicates a change in the sign of the carriers and in their concentration/mobility, takes place in the wide temperature range in which the rhombohedral and tetragonal phases coexist. This behavior also allows the temperatures T_d and T_m that are characteristic of NBT to be distinguished.
For perovskites (non-degenerate semiconductors), α can be expressed using the following equation [40]: α = (k_B/e)·ln(N_c/n) (9), where N_c is the effective density of states in the conduction band, n is the carrier concentration, and e is the electronic charge. It was assumed that N_c for the transport level is temperature independent and equal to the number of ionic sites per unit volume (1.56 × 10^28 m^-3 for NBT).
The electrical conductivity is given by the equation σ = n·e·μ (10) [41], where μ is the mobility of the charge carriers and n is the carrier concentration. The values of n and μ were calculated using Eqs. (9) and (10), and their temperature dependences are shown in Fig. 9. As can be seen, both n and μ increase with increasing temperature. The low value of the charge carrier mobility also suggests that the hopping mechanism contributes significantly to electrical conduction [39,42].
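A minimal sketch of how Eqs. (9) and (10) can be inverted to obtain n and μ from a measured Seebeck coefficient and conductivity is given below; the numerical inputs are illustrative only, and N_c is taken from the value quoted in the text.

```python
import numpy as np

k_B = 1.380649e-23      # J/K
e   = 1.602176634e-19   # C
N_c = 1.56e28           # m^-3, number of ionic sites per unit volume assumed for NBT

def carrier_concentration(alpha_V_per_K):
    """Invert Eq. (9), alpha = (k_B/e)*ln(N_c/n), for the carrier concentration n."""
    return N_c * np.exp(-alpha_V_per_K * e / k_B)

def mobility(sigma, n):
    """Invert Eq. (10), sigma = n*e*mu, for the mobility mu."""
    return sigma / (n * e)

# Illustrative numbers only (not the measured NBT values).
alpha = 600e-6          # V/K
sigma = 1e-6            # S/m
n = carrier_concentration(alpha)
print(f"n = {n:.2e} m^-3, mu = {mobility(sigma, n):.2e} m^2/(V*s)")
```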
Based on the obtained σ_AC values, the density of states at the Fermi level (N(E_f)) was calculated from the relation σ_AC(ω) = (π/3)·e²·ω·k_B·T·[N(E_f)]²·α^(-5)·[ln(f_0/ω)]^4 [43], where f_0 represents the photon frequency (10^13 Hz) and α is the decay constant of the localized wave function (10^10 m^-1). Figure 10 shows the temperature dependence of N(E_f) at different frequencies. In general, N(E_f) increases with increasing temperature, while anomalies at temperatures roughly equal to T_d and T_m are clearly visible. As can be seen from the inset in Fig. 10, the frequency-related changes in N(E_f) have a different character in the temperature range from RT to approximately T_d and in the temperature range above T_d. The high values of N(E_f) indicate that hopping between pairs of sites is the predominant constituent of charge transport in the investigated samples. The minimum hopping length (R_min) was estimated using the equation R_min = 2e²/(π·ε·ε_0·W_m) [44], where W_m represents the binding energy, which was estimated from the relation s = 1 - 6k_B·T/W_m. W_m is the energy required to remove a charge carrier from one site and relocate it to another. It first increases up to a temperature roughly equal to T_m, and then decreases as temperature rises (Fig. 11(a)). R_min changes in the opposite manner. In addition, R_min is almost completely independent of frequency at RT, which suggests that charge carrier transport proceeds along an infinite percolation path [45]. However, R_min increases with frequency at higher temperatures (above T_m), which suggests that in this range transport is mediated predominantly by hopping within finite clusters [45]. SCLC is observed when the electrodes in contact are capable of injecting electrons into the conduction band (E_c) or holes into the valence band of the material. At low external electric fields (voltages), the injection of excess charge carriers into the sample is weak, and Ohm's law is obeyed. However, in the case of a high external electric field, injection is far more pronounced. With increasing temperature, the range of electric field strength for which Ohm's law is valid shifts towards higher values (Fig. 3(d) and Table 2). (Fig. 9: Temperature dependence of (a) n and (b) μ in NBT ceramics.) The voltage at which the transition from ohmic behavior to SCLC occurs, V_SCLC, can be calculated from the crossover of the ohmic and SCLC current regimes as V_SCLC = 8·e·n·d²/(9·ε·θ),
where ε represents the electric permittivity of the material, n stands for the density of free carriers at thermal equilibrium, d is the sample thickness, and θ is the coefficient which determines to what degree the charge carriers undergo trapping. The coefficient θ is equal to θ = n/n_t = (N_c/(g·N_t))·exp(-E_t/(k_B·T)) (14), where n_t is the density of trapped carriers, N_c is the density of states in the conduction band, N_t is the density of traps, g represents the degeneracy factor (g = 2), and E_t is the trap energy level below the edge of E_c. The electrical conductivity (σ) for materials that obey Ohm's law is given by Eq. (10). Finally, combining these relations yields an expression for θ in terms of measured quantities (Eq. (15)). The value of θ estimated from Eq. (15) is 4.84 × 10^-4 and 1.74 × 10^-4 at temperatures of 150 ℃ (below T_d) and 210 ℃ (above T_d), respectively. These values are consistent with those obtained for other perovskites [48-50].
When the strength of the electric field (voltage) applied to the sample is very high, the Fermi level passes through the trap level and all traps become filled. In this case, the current flowing through the sample increases rapidly. The voltage at which this occurs is known as the trap-filled-limit voltage V_TFL, and it is expressed as V_TFL = e·N_t·d²/(2·ε) (16) [46,47]. The value of N_t was calculated from Eq. (16) and was then used as input for Eq. (14), from which the energy of the trapping states (E_t) was obtained (Table 3). The time of movement (diffusion) of charge carriers from one electrode to another can be estimated by taking the process of their trapping into account. Perovskite materials can contain anywhere from 10^23 to 10^25 m^-3 oxygen vacancies, which are one type of trapping center. The active cross-section (S) for charge carrier (electron) trapping by these vacancies is equal to 10^-18 m^2 [51]. The mean free path for trapping is λ = 1/(n·S) = 10^-5-10^-7 m. Thus, a charge carrier is trapped many times before crossing the path determined by the specimen's thickness. The time of movement (diffusion) of the charge carriers (electrons), t, can be estimated from an equation involving τ, the relaxation time for which charges are held in a trap of depth E_t, which in turn depends on the thermal velocity of the charges, υ. For values of d = 0.00017 m, υ = 10^5 m/s, N_c = 1.56 × 10^28 m^-3, and E_t = 0.86, 0.92, and 1.01 eV for 150, 170, and 210 ℃, respectively, t values ranging from 10^-7 to 10^-3 s were obtained, depending on the assumed number of defects. However, aside from oxygen vacancies, there are other trapping centers, which can increase the value of t.
Impedance spectroscopy is widely used as a standard characterization technique for many polycrystalline ferroelectric materials. The response of the system as a function of the perturbation frequency can provide insight into the internal behavior of dipolar structures. In addition, this method is a reliable tool for the optimization of the properties of dielectric materials and the procedure of their preparation.
The data in the complex plane can be represented in any of the four basic formalisms, namely the complex impedance (Z*), complex admittance (Y*), complex permittivity (ε*), and complex electric modulus (M*). They are related by the equations Z* = Z′ - jZ″, Y* = 1/Z* = Y′ + jY″, ε* = ε′ - jε″ = 1/(jωC_0·Z*), and M* = 1/ε* = M′ + jM″ = jωC_0·Z* [52,53], where ω = 2πf is the angular frequency, C_0 is the vacuum capacitance of the measuring cell, and Z′, M′, Y′, ε′ and Z″, M″, Y″, ε″ are the real and imaginary components of the impedance, electric modulus, admittance, and permittivity, respectively.
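For reference, the sketch below converts impedance spectra into the modulus representation, assuming the usual sign conventions Z* = Z′ - jZ″ and M* = jωC_0Z*; the geometric capacitance C_0 and the single parallel RC element used as input are placeholders, not measured data.

```python
import numpy as np

def modulus_from_impedance(freq_hz, Z_real, Z_imag, C0):
    """Convert impedance spectra (Z', Z'') into modulus spectra (M', M''),
    assuming Z* = Z' - j*Z'' and M* = j*omega*C0*Z*."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    Z = np.asarray(Z_real) - 1j * np.asarray(Z_imag)
    M = 1j * omega * C0 * Z
    return M.real, M.imag

# Placeholder data: one parallel RC element, R = 1e6 ohm, C = 1e-10 F.
f = np.logspace(1, 6, 200)
w = 2.0 * np.pi * f
R, C = 1e6, 1e-10
Z = R / (1.0 + 1j * w * R * C)
M_re, M_im = modulus_from_impedance(f, Z.real, -Z.imag, C0=1e-11)
print(M_im.max())   # the M'' peak marks the relaxation frequency of the RC element
```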
Modulus spectroscopy plots are particularly useful for separating the spectral components of materials that exhibit similar resistance but different capacitances. The electrical modulus corresponds to the relaxation of the electric field in the material when the electric displacement remains constant, and represents the real dielectric relaxation process. It was originally introduced by Macedo et al. [52] to study space charge relaxation. M* representation is now widely used when analyzing ionic conductivity [53].
The complex modulus spectra M″ = f(M′) at different temperatures are given in Fig. 12. The spectra are characterized by two semicircular arcs whose pattern changes with increasing temperature. The intercept of the first semicircle with the real axis indicates the capacitance contributed by the grain interiors (on the high-frequency side), whereas the intercept of the second one indicates the capacitance contributed by the grain boundaries (on the low-frequency side). The shape of the curves undergoes a marked change as temperature increases. The observed changes suggest that the grain and grain boundary capacitances decrease with increasing temperature. This observation indicates that the electrical properties of this material are controlled by the temperature/microstructure, and this behavior might be explained by a combined parallel and serial arrangement of grain interiors and grain boundaries.
The frequency dependence of the imaginary part of the electric modulus is shown in Fig. 13. The frequency response was examined in the range from 20 Hz to 2 MHz. In this frequency range, well-known electrical phenomena such as space-charge effects (0-10^2 Hz) and relaxation phenomena (10^2-10^6 Hz) were observed. In the low-frequency range, the charge of the depletion region that forms between two grains is affected, because the oscillation of these charges is most effective there. The perturbation observed in this frequency region is likely to result from an elastic clamping effect: the grain interiors undergo the ferroelectric transformation, while the grain boundary region remains somewhat disordered. Additionally, as can be seen, the spectra are broad and rather complex (see the inset in Fig. 13). In such cases, separating the contribution of the primary process from those of the secondary ones is somewhat problematic. To solve this problem, a Gaussian function was used (see the inset in Fig. 13). The dependence of the position of the M″ peak on frequency at various temperatures may be used to determine the most probable relaxation time τ from the relation τ = 1/(2πf). The dependence of τ for NBT on the reciprocal temperature 1/T (K^-1) is shown in Fig. 14. The plot is partially linear, which means that it follows the Arrhenius equation τ = τ_0·exp(E_a/(k_B·T)), where τ_0 is the pre-exponential factor and E_a stands for the activation energy. The relaxation time τ is thus related to a thermally activated process. The E_a values calculated from the slopes of the lnτ vs. 1/T curves for NBT are 1.25, 0.34, and 0.10 eV for the grain boundaries and 0.76, 0.16, and 0.46 eV for the grain interiors, respectively. Thus, the E_a of the grain boundaries is higher than that of the grain interiors. This suggests a difference in the structure/composition of grain interiors and grain boundaries. The obtained values of the activation energy are similar to those obtained from the DC electrical conductivity measurements. The structure of NBT is a very complex system, especially considering how ions occupy the A-site position. Any non-stoichiometry that might be induced at the A-site during material processing can cause the formation of oxygen vacancies. An obvious consequence of the increased concentration of oxygen vacancies is an increase in ionic conductivity in NBT, as evidenced by the value of the activation energy obtained for the grain boundaries at high temperatures (1.25 eV). For the bulk material in the same temperature range, the calculated E_a is 0.76 eV (values in the range from ca. 0.7 to 0.8 eV have been obtained by other groups [54,55]). Such electrical phenomena in the material may be modelled appropriately by means of an equivalent resistance-capacitance (RC) electrical circuit.
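The relaxation-time analysis described above reduces to reading the M″ peak frequency at each temperature, converting it through τ = 1/(2πf), and fitting lnτ against 1/T; a short sketch with illustrative peak frequencies (not the measured NBT data) is given below.

```python
import numpy as np

k_B = 8.617333262e-5  # eV/K

def relaxation_times(f_peak_hz):
    """Most probable relaxation time from the M'' peak frequency, tau = 1/(2*pi*f)."""
    return 1.0 / (2.0 * np.pi * np.asarray(f_peak_hz))

def arrhenius_Ea(T_kelvin, tau):
    """Slope of ln(tau) vs 1/T gives E_a through tau = tau_0 * exp(E_a/(k_B*T))."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(tau), 1)
    return slope * k_B

# Illustrative peak frequencies read off at three temperatures.
T = np.array([700.0, 750.0, 800.0])        # K
f_peak = np.array([2.0e3, 1.2e4, 5.5e4])   # Hz
tau = relaxation_times(f_peak)
print(f"E_a = {arrhenius_Ea(T, tau):.2f} eV")
```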
Conclusions
A conventional solid-state method was applied to synthesize lead-free NBT ceramics. The structural test results showed rhombohedral symmetry with the space group R3c. σ_DC and σ_AC, the polarization and depolarization currents, the I-U characteristics, and the Seebeck coefficient α of these ceramics were determined. Both σ_DC and σ_AC exhibited a thermally activated character and were characterized by anomalies at T_d and T_m. The activation energy of dielectric polarization was similar to the activation energy of the DC electrical conductivity.
It was found that the AC conductivity follows the power dependence σ_AC ∝ ω^s. The exponent s was in the range of 0-1 and decreased with increasing temperature. Hopping, small polarons, and oxygen vacancies were proposed as the mechanisms underlying electrical conduction in the low-, middle-, and high-temperature ranges, respectively.
It was shown that Ohm's law is satisfied at low voltages; at high voltages, the observed dependences are I ~ U^2, I ~ U^4, and subsequently I ~ U^6.
The Seebeck coefficient measurements showed that, with increasing temperature, the charge carriers change the conduction type (p → n → p) and their density and mobility also change. These measurements also allowed the T_d and T_m temperatures that are specific to NBT to be distinguished.
The complex modulus, relaxation times, and activation energies were determined from impedance spectroscopy measurements. These measurements showed that, as temperature rises, the grain and grain boundary capacitances decrease. In addition, the determined activation energy values revealed differences in the structure/composition of grain interiors and grain boundaries. These values were similar to those determined from the direct current measurements.
"Materials Science",
"Physics"
] |
Estimation of Kullback-Leibler losses for noisy recovery problems within the exponential family
We address the question of estimating Kullback-Leibler losses rather than squared losses in recovery problems where the noise is distributed within the exponential family. Inspired by the Stein unbiased risk estimator (SURE), we exhibit conditions under which these losses can be unbiasedly estimated or estimated with a controlled bias. Simulations on parameter selection problems in applications to image denoising and variable selection with Gamma and Poisson noises illustrate the interest of Kullback-Leibler losses and the proposed estimators.
Introduction
We consider the problem of predicting an unknown d-dimensional vector μ ∈ R^d from its noisy measurements V ∈ R^d. Given a collection of parametric predictors of μ, we focus on the selection of the predictor μ̂ that minimizes the discrepancy with the unknown vector μ. For instance, this includes the problem of selecting the best predictors from the set of Least Absolute Shrinkage and Selection Operator (LASSO) solutions [44] obtained for all possible choices of regularization parameters. To this end, the common approach is to select the μ̂ that minimizes an unbiased estimate of the expected squared loss E||μ − μ̂||², typically with the Stein unbiased risk estimator (SURE) [43]. Such estimators are classically built on some statistical modeling of the noise, e.g., as being distributed within the exponential family. In this context, we investigate the interest of going beyond squared losses by rather estimating a loss function grounded on an information-based criterion, namely, the Kullback-Leibler divergence. We will first recall some basic properties of the exponential family, give a quick review of risk estimation, and motivate the use of the Kullback-Leibler divergence.
Exponential family. We assume that in the aforementioned recovery problem the noise distribution belongs to the exponential family. Formally, the recovery problem can be reparametrized using two one-to-one mappings ψ : R^d → R^d and φ : R^d → R^d such that Y = ψ(V) has a probability measure P_θ characterized by a probability density or mass function with respect to the Lebesgue measure dy of the following form: p(y; θ) = h(y) exp(⟨y, θ⟩ − A(θ)) (1.1), where θ = φ(μ) ∈ R^d. The distribution P_θ is said to be within the natural exponential family. We call θ the natural parameter, Y a sufficient statistic for θ, h : R^d → R_+ the base measure, and A : R^d → R the log-partition function. Classical and important properties of the exponential family include the convexity of A, E[Y] = ∇A(θ) and Var[Y] = ∇∇^t A(θ) (see, e.g., [3]). Here and in the following, E[Y] = ∫ Y dP_θ denotes the expectation of the random vector Y with respect to the measure dP_θ, and Var[Y] = E[(Y − E[Y])(Y − E[Y])^t] is its so-called variance-covariance matrix. Without loss of generality, we consider that Y is a minimal sufficient statistic. As a consequence, ∇A is one-to-one and we can choose φ as the canonical link function satisfying φ = (∇A)^{-1} (as coined in the language of generalized linear models). An immediate consequence is that Y has expectation E[Y] = μ and its variance is a function of μ given by Var[Y] = Λ(μ) where Λ = (∇∇^t A) ∘ φ. The function Λ : R^d → R^{d×d} is the so-called variance function (see, e.g., [36]), also known as the noise level function (in the language of signal processing). Table 1 gives five examples of univariate distributions of the exponential family: two of them are defined on a continuous domain, the other three on a discrete domain.
Risk estimation. We now assume that the predictor μ̂ of μ is a function of Y only, hence we write it μ̂(Y), and we focus on estimating the loss associated to μ̂(Y) with respect to μ. When the noise has a Gaussian distribution with independent entries, SURE [43] can be used to estimate the mean squared error (MSE), or in short the risk, defined as MSE_μ = E||μ − μ̂(Y)||². The resulting estimator, being independent of the unknown vector μ, can serve in practice as an objective for parameter selection. Eldar [15] builds on Stein's lemma [43] to obtain a generalization of SURE valid for some continuous distributions of the exponential family. It provides an unbiased estimate of the "natural" risk, defined as MSE_θ = E||θ − θ̂(Y)||², i.e., the risk with respect to θ = φ(μ). In the same vein, when the distribution is discrete, Hudson [26] provides another result for estimating the "exp-natural" risk: the risk with respect to η = exp θ, where exp : R^d → R^d is the entry-wise exponential. As φ is assumed one-to-one, there is no doubt that if such loss functions cancel then μ̂(Y) = μ. In this sense, they provide good objectives for selecting μ̂(Y). However, within a family of parametric predictors and without strong assumptions on μ, such a loss function might never cancel. In such a case, it becomes unclear what its minimization leads one to select, all the more when φ or exp ∘ φ are non-linear. Furthermore, even when they are linear (e.g., exp ∘ φ = id for Poisson noise), minimizing MSE_μ = E||μ − μ̂(Y)||² might not even be relevant as it does not compensate for the heteroscedasticity of the noise (this will be made clear in our experiments). Estimating the reweighted or Mahalanobis risk given by E||Λ(μ)^{-1/2}(μ − μ̂(Y))||² could be more relevant in this case, but its estimation is more intricate.
Kullback-Leibler divergence. The Kullback-Leibler (KL) divergence [27] is a measure of the information lost when an alternative distribution P_1 is used to approximate the underlying one P_0. Its formal definition is D(P_0 ‖ P_1) = ∫ log(dP_0/dP_1) dP_0. Unlike squared losses, it does not measure the discrepancy between an unknown parameter and its estimate, but between the unknown distribution P_0 of Y and its estimate P_1. As a consequence, it is invariant under one-to-one reparametrizations of the parameters and, hence, becomes a serious competitor to squared losses. Remark that it is also invariant under one-to-one transformations of Y because such transforms do not affect the quantity of information carried by Y. Interestingly, provided P_0 and P_1 belong to the same member of the natural exponential family with respective parameters θ_0 and θ_1, the KL divergence can be written in terms of the Bregman divergence associated with A for the points θ_0 and θ_1, i.e., D(P_0 ‖ P_1) = A(θ_1) − A(θ_0) − ⟨θ_1 − θ_0, ∇A(θ_0)⟩. While squared losses are defined irrespective of the noise distribution, the KL divergence adjusts its penalty with respect to the scales and the shapes of the deviations. In particular, it accounts for heteroscedasticity.
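For concreteness, the following sketch evaluates the KL divergence through the Bregman divergence of the log-partition function in the Poisson case (A(θ) = Σ_i exp θ_i) and checks it against the closed-form Poisson KL divergence; the numerical values are arbitrary.

```python
import numpy as np

def bregman_A(theta0, theta1, A, gradA):
    """Bregman divergence of the log-partition A between natural parameters,
    which equals D(P_theta0 || P_theta1) for a natural exponential family."""
    return A(theta1) - A(theta0) - np.dot(theta1 - theta0, gradA(theta0))

# Poisson case: A(theta) = sum(exp(theta)), gradA(theta) = exp(theta), mu = exp(theta).
A = lambda t: np.sum(np.exp(t))
gradA = lambda t: np.exp(t)

mu0, mu1 = np.array([2.0, 5.0]), np.array([3.0, 4.0])
kl_bregman = bregman_A(np.log(mu0), np.log(mu1), A, gradA)
kl_direct = np.sum(mu0 * np.log(mu0 / mu1) - mu0 + mu1)   # closed form for Poisson
print(kl_bregman, kl_direct)                               # the two values agree
```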
Contributions.
In this paper, we address the problem of estimating KL losses, i.e., losses based on the KL divergence. As the KL divergence is a non-symmetric discrepancy measure, we can define two KL loss functions. The first one will be referred to as the mean KL analysis loss, MKLA = E[D(P_θ ‖ P_θ̂(Y))], as it can be given the following interpretation: "how well might P_θ̂(Y) explain independent copies of Y". The mean KL analysis loss is inherent to many statistical problems as it takes as reference the true underlying distribution. It is at the heart of the maximum likelihood estimator and is typically involved in non-parametric density estimation, oracle inequalities, mini-max control, etc. (see, e.g., [22,17,41]). The second one will be referred to as the mean KL synthesis loss, given by MKLS = E[D(P_θ̂(Y) ‖ P_θ)], which can be given the following interpretation: "how well might P_θ̂(Y) generate independent copies of Y". The mean KL synthesis loss has also been considered in different statistical studies. For instance, the authors of [48] consider this loss function to design a James-Stein-like shrinkage predictor. Hannig and Lee address a very similar problem to ours by designing a consistent estimator of MKLS used as an objective for bandwidth selection in kernel smoothing
problems subject to Gamma [24] and Poisson [25] noise. Table 2 gives a summary of our contributions. It highlights which loss can be estimated and under which conditions on the exponential family. The main contributions of our paper are: 1. provided y → μ̂(y) and the base measure h are both weakly differentiable, MKLS can be unbiasedly estimated (Theorem 4.1); 2. for any mapping y → μ̂(y), MKLA can be unbiasedly estimated for Poisson variates (Theorem 4.2); 3. provided y → μ̂(y) is k ≥ 3 times differentiable with bounded k-th derivative, MKLA can be estimated with vanishing bias when Y results from a large sample mean of independent random vectors with finite k-th order moments (Theorem 4.3).
It is worth mentioning that a symmetrized version of the mean Kullback-Leibler loss: MKLA + MKLS, can be estimated as soon as MKLA and MKLS can both be estimated (e.g., for continuous distributions according to Table 2).
Risk estimation under Gaussian noise
This section recalls important properties of the MSE and the definition of SURE under additive noise models of the form Y = μ + Z where Z ∼ N(0, σ²Id_d) and Id_d denotes the d × d identity matrix. Before turning to the unbiased estimation of MSE_μ, it is important to recall that, for any additive model with zero-mean noise of variance σ²Id_d, provided the following quantities exist, we have MSE_μ = E||Y − μ̂(Y)||² − dσ² + 2 tr Cov(Y, μ̂(Y)), (2.1) where Cov(Y, μ̂(Y)) = E[(Y − E[Y])(μ̂(Y) − E[μ̂(Y)])^t] is the cross-covariance matrix between Y and μ̂(Y). Equation (2.1) gives a variational interpretation of the minimization of the MSE as the optimization of a trade-off between overfitting (first term) and complexity (second term). In fact, σ^{-2} tr Cov(Y, μ̂(Y)) is a classical measure of the complexity of a statistical modeling procedure, known as the degrees of freedom (DOF), see, e.g., [13]. The DOF plays an important role in model validation and model selection rules, such as the Akaike information criterion (AIC) [1], the Bayesian information criterion (BIC) [42], and the generalized cross-validation (GCV) [20].
For linear predictors of the form μ̂(y) = Wy, W ∈ R^{d×d} (think of least-squares or ridge regression), the DOF boils down to tr W. As a consequence, the random quantity ||Y − μ̂(Y)||² − dσ² + 2σ² tr W becomes an unbiased estimator of MSE_μ that depends solely on Y, without prior knowledge of μ. If W is a projector, the DOF corresponds to the dimension of the target space, and we retrieve the well-known Mallows' C_p statistic [35] as well as the aforementioned AIC. The SURE provides a generalization of these results that is not restricted to linear predictors but can be applied to weakly differentiable mappings. A comprehensive account of weak differentiability can be found in, e.g., [16,18]. Let us now recall Stein's lemma [43].
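The following sketch illustrates this unbiased estimate on a toy linear smoother under Gaussian noise: W is a simple moving-average matrix, its trace gives the DOF, and the quantity ||Y − WY||² − dσ² + 2σ² tr W is compared with the (oracle) squared error; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 200, 0.5
mu = np.sin(np.linspace(0, 4 * np.pi, d))       # unknown signal (known here only for checking)
y = mu + sigma * rng.normal(size=d)             # noisy observation

def moving_average_matrix(d, width):
    """A simple linear smoother: a symmetric moving average written as a matrix W."""
    W = np.zeros((d, d))
    for i in range(d):
        lo, hi = max(0, i - width), min(d, i + width + 1)
        W[i, lo:hi] = 1.0 / (hi - lo)
    return W

W = moving_average_matrix(d, width=3)
mu_hat = W @ y

dof = np.trace(W)                               # degrees of freedom of the smoother
sure = np.sum((y - mu_hat) ** 2) - d * sigma**2 + 2 * sigma**2 * dof
true_err = np.sum((mu - mu_hat) ** 2)
print(f"SURE = {sure:.1f}, true squared error = {true_err:.1f}")
```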
Lemma 1 (Stein's lemma). Assume f is weakly differentiable with essentially bounded weak partial derivatives on R^d. If Y ∼ N(μ, σ²Id_d), then E⟨Y − μ, f(Y)⟩ = σ² E[div f(Y)], where div f(y) = Σ_i ∂f_i(y)/∂y_i denotes the (weak) divergence of f.
A direct consequence of Stein's lemma, provided μ̂ fulfills the assumptions of Lemma 1, is that SURE = ||Y − μ̂(Y)||² − dσ² + 2σ² div μ̂(Y) (2.2) satisfies E SURE = MSE_μ. Applications of SURE emerged for choosing the smoothing parameters in families of linear predictors [30], such as for model selection, ridge regression, smoothing splines, etc. After its introduction in the wavelet community with the SURE-Shrink algorithm [11], it has been widely used in various image restoration problems, e.g., with sparse regularizations [2,38,6,37,5,32,39,45] or with non-local filters [46,12,9,47].
Risk estimation for the exponential family and beyond
In this section, we recall how SURE has been extended beyond Gaussian noises towards noises distributed within the natural exponential family.
Continuous exponential family. We first consider continuous noise models, e.g., Gamma noise. To begin, we recall a well-known result derived by Eldar [14], which can be traced back to Hudson [26] in the case of independent entries, and which can be seen as a generalization of Stein's lemma.
Lemma 2 (Generalized Stein's lemma).
Assume f is weakly differentiable with essentially bounded weak partial derivatives on R^d and that Y follows a distribution of the natural exponential family with natural parameter θ. Provided h is also weakly differentiable on R^d, we have E⟨θ, f(Y)⟩ = −E[div f(Y)] − E⟨∇h(Y)/h(Y), f(Y)⟩. Lemma 2, whose proof can be found in [14], provides an estimator of the dot product E⟨θ, f(Y)⟩ that solely depends on Y without reference to θ. As a consequence, the Generalized SURE (as coined by [14]) defined in (3.1) is an unbiased estimator of MSE_θ, i.e., E GSURE = MSE_θ, provided θ̂, h and ∇h are weakly differentiable. Note that omitting the last term in (3.1) leads to the seminal definition of GSURE given in [14], which provides an unbiased estimate of MSE_θ − ||θ||², even when ∇h is not weakly differentiable. The GSURE can be specified for Gaussian noise, and in this case GSURE = σ^{-4} SURE and the "natural" risk boils down to the risk, as θ = μ/σ². In general, such a linear relationship between the "natural" risk and the risk of interest might not be met. For instance, under Gamma noise with scale parameter L (see Table 1), with expectation μ and independent entries, the GSURE takes an explicit form which, as soon as L > 2 and μ̂ fulfills the assumptions of Lemma 2, unbiasedly estimates MSE_θ. We will see in practice that the minima of MSE_θ can strongly depart from those of interest. As the GSURE can only measure discrepancies in the "natural" parameter space, its applicability in real scenarios can thus be seriously limited.
(A random variable follows a Gamma distribution with scale parameter L if it results from the mean of L independent and identically distributed exponential random variables. For this reason, L is often referred to as the number of looks and controls the spread of the distribution as Var[Y] = Λ(μ) = μ²/L. This distribution is widely used to describe fluctuations of speckle in coherent laser imagery [21]. Note also that L > 2 implies that h and ∇h are weakly differentiable; by omitting the last term of GSURE, an unbiased estimate of MSE_θ − ||θ||² can still be obtained.) Discrete exponential family. We now consider discrete noises distributed within the natural exponential family, e.g., Poisson or binomial. Before turning to the general result, let us focus on Poisson noise with mean μ and independent entries, for which the Poisson unbiased risk estimator (PURE), defined as PURE = ||μ̂(Y)||² − 2 Σ_i Y_i μ̂_i(Y − e_i) + Σ_i Y_i(Y_i − 1), unbiasedly estimates MSE_μ, see, e.g., [7,26]. The vector e_i is defined by (e_i)_i = 1 and (e_i)_j = 0 for j ≠ i. The PURE is in fact a consequence of the following lemma, also due to Hudson [26].
Lemma 3 (Hudson's lemma).
Assume Y follows a discrete distribution on Z^d of the natural exponential family with natural parameter θ. Then E[η_i f_i(Y)] = E[(h(Y − e_i)/h(Y)) f_i(Y − e_i)], for all i, holds for every mapping f : Z^d → R^d, where η = exp θ. Hudson's lemma provides an estimator of the dot product E⟨exp θ, f(Y)⟩ that solely depends on Y without reference to the parameter η = exp θ. As a consequence, we can define a Generalized PURE (GPURE) which unbiasedly estimates MSE_η for the discrete natural exponential family. As for GSURE, GPURE cannot in general measure discrepancies in the parameter space of interest, and for this reason its applicability in real scenarios can also be limited. However, under Poisson noise, the "exp-natural" space coincides with the parameter space of interest as η = exp(φ(μ)) = μ, hence leading to the PURE. Another interesting case, already investigated in [26], is that of noise with a negative binomial distribution with mean μ and independent entries, for which the "exp-natural" space does not match that of μ but that of the underlying probability vector p ∈ [0, 1]^d as defined in Table 1 (we have θ_i = log p_i). In such a case, GPURE is an unbiased estimator of E||p̂(Y) − p||². Other related works. It is worth mentioning that there have been several works focusing on estimating mean squared errors in other scenarios. For instance, when Y has an elliptically contoured distribution with a finite known covariance matrix Σ, the works of [28,23] provide a generalization of Stein's lemma that can also be used to estimate the risk associated to μ. In [40], the authors provide a versatile approach that yields unbiased risk estimators in many cases, including all members of the exponential family (continuous or discrete), the Cauchy distribution, the Laplace distribution, and the uniform distribution. The authors of [33] use a similar approach to design such an estimator in the case of the non-central χ² distribution.
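In the scalar Poisson case (h(y) = 1/y!, so h(y − 1)/h(y) = y and η = μ), Hudson's identity reads E[μ f(Y)] = E[Y f(Y − 1)]; the short Monte-Carlo check below, with an arbitrary bounded test function, illustrates it.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 3.5
f = lambda y: 1.0 / (1.0 + y ** 2)       # arbitrary bounded test function

Y = rng.poisson(mu, size=2_000_000).astype(float)
lhs = mu * f(Y).mean()                   # E[ mu * f(Y) ]
rhs = (Y * f(Y - 1.0)).mean()            # E[ Y * f(Y - 1) ]; the Y = 0 terms vanish
print(lhs, rhs)                          # the two Monte-Carlo averages should be close
```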
Kullback-Leibler loss estimation for the exponential family
We now turn to our first contribution that provides, for continuous distributions of the natural exponential family, an unbiased estimator of the Kullback-Leibler synthesis loss.
which concludes the proof.
Example 1.
Under Gamma noise with expectation μ, shape parameter L (as defined in Table 1) and independent entries, SUKLS takes an explicit form which, up to a constant and provided L > 1, unbiasedly estimates MKLS. In our experiments, we will see that minimizing MKLS (or its SUKLS estimate) leads to relevant selections, unlike minimizing MSE_θ (or its GSURE estimate).
Note that the authors of [24] have proposed a consistent estimator of MKLS when L = 1 (they did not study the case where L > 1); their estimator was, however, designed only for kernel smoothing problems.
The resulting quantity, PUKLA, is an unbiased estimator of MKLA, where log denotes the entry-wise logarithm.
Proof. The expression of MKLA follows directly from Table 1 and the previous equation, which concludes the proof since h↓(y)/h(y) = y and θ̂↓(Y) = log μ̂↓(Y).
With such results at hand, only the Poisson distribution admits an unbiased estimator of the mean Kullback-Leibler analysis loss. In order to design an estimator of MKLA for a larger class of natural exponential distributions, we will make use of the following proposition. Proposition 1. For any probability density or mass function y → p(y; θ) of the natural exponential family with parameter θ, the Kullback-Leibler analysis loss associated to y → θ̂(y) can be decomposed as follows: MKLA = E[A(θ̂(Y)) − ⟨Y, θ̂(Y)⟩] + tr Cov(θ̂(Y), Y) + C(θ), where C(θ) = ⟨∇A(θ), θ⟩ − A(θ) is a constant that does not depend on θ̂. Proof. Subtracting and adding ⟨Y, θ̂(Y) − θ⟩ in the MKLA definition leads to the above decomposition, which concludes the proof.
In the same vein as for the decomposition (2.1), Proposition 1 provides a variational interpretation of the minimization of MKLA, valid for noise distributions within the exponential family. Minimizing MKLA leads to a maximum a posteriori selection promoting faithful models with low complexity. It boils down to (2.1) when specified for Gaussian noise. As for the MSE, the fidelity term can always be unbiasedly estimated, up to an additive constant, without knowledge of θ. Only the complexity term tr Cov(θ̂(Y), Y), which generalizes the notion of degrees of freedom, needs to be estimated. Except for the Poisson distribution, none of the previous lemmas can be applied to unbiasedly estimate this term. However, we will show that it can be estimated with a bias, vanishing at a rate depending on both the "smoothness" of θ̂ and the behavior of the moments of Y. Towards this goal, let us first recall the Delta method.
is an infinite sequence of independent and identically distributed random vectors in R d with EZ i = μ, Var[Z i ] = Σ and finite moments up to order
Lemma 4 is a direct d-dimensional extension of [29] (Theorem 5.1a, page 109), that allows us to introduce our biased estimator of MKLA.
. is an infinite sequence of independent random vectors in R d
identically distributed within the natural exponential family with natural parameter θ, log-partition function A, expectation μ, variance function Λ, and finite moments up to order k ≥ 3. As a result, the distribution of Y_n is also in the natural exponential family, parametrized by θ_n = nθ, with log-partition function A_n(θ_n) = nA(θ_n/n), expectation μ, and variance function Λ_n = Λ/n. Provided θ̂_n reads as θ̂_n = nθ̂, and θ̂ : R^d → R^d is k times totally differentiable with bounded k-th derivative, then DKLA estimates MKLA_n with a bias vanishing as n grows, where MKLA_n is the KL analysis loss associated to θ̂_n with respect to θ_n.
It is worth mentioning that Theorem 4.3 can be applied to Gaussian noise, with DKLA boiling down to SURE, as DKLA = (2σ²)^{-1}(SURE − ||Y||² + dσ²). However, the conclusion is not as strong, since by virtue of Lemma 1, DKLA would in fact be an unbiased estimator provided only that μ̂ is weakly differentiable. More interestingly, consider the two following examples.
Example 2.
Gamma random vectors Y_n with expectation μ ∈ (R_+^*)^d and shape parameter L_n = n (as defined in Table 1) result from the sample mean of n independent exponential random vectors with expectation μ (the entries of the vectors are assumed independent). As exponential random vectors have finite moments, provided μ̂ is sufficiently smooth and since φ is continuously differentiable in (R_+^*)^d, Theorem 4.3 applies and the corresponding vanishing-bias guarantee follows.
Example 3.
Consider Y_n, the sample mean of n independent Poisson random vectors with expectation μ ∈ (R_+^*)^d. We have that Y_n, for all n, belongs to the natural exponential family with A_n(θ_n) = n exp(θ_n/n) and θ_n = n log μ (the entries of the vectors are assumed independent). As Poisson random vectors have finite moments, provided μ̂ is sufficiently smooth and since φ is continuously differentiable in (R_+^*)^d, Theorem 4.3 applies, where MKLA_n is defined as above. Interestingly, remark that PUKLA(μ̂, Y) ≈ DKLA(μ̂, Y) as soon as the corresponding approximation conditions hold.
Reliability study
In this section, we aim at studying and comparing the sensitivity of the previously studied risk estimators. Little is known about the variance of SURE: it is in general an intricate problem, addressed in only a few studies [37,31]. Proof. This is a straightforward consequence of the Cauchy-Schwarz inequality.
Proposition 2 allows us to compare the relative sensitivities of the different estimators. Comparing GSURE and SUKLS, one can notice that the bounds are similar, but the first one is controlled by θ̂(Y) while the second one is controlled by μ̂(Y). While it is difficult to make a general statement, we believe SUKLS estimates might be more stable than GSURE since μ̂(Y) is usually better controlled than θ̂(Y), given the non-linearity of the canonical link function φ.
Implementation details for the proposed estimators
In this section, we explain how the proposed risk estimators can be evaluated in practice within a reasonable computation time.
All risk estimators designed for continuous distributions rely on the computation of tr[g(y) (∂f(y)/∂y)] for some mappings g : R^d → R^{d×d} and f : R^d → R^d. For instance, SURE requires computing such a quantity with g(y) = Id_d and f = μ̂ (see Eq. (2.2)). In general, the computation of these terms requires at least O(d²) operations and thus prevents the use of such risk estimators in practice. Fortunately, following [19,38], we can approximate such terms by using Monte-Carlo simulations, thanks to the following relation: tr[g(y) (∂f(y)/∂y)] = E⟨ζ, g(y) (∂f(y)/∂y) ζ⟩ for ζ ∼ N(0, Id_d), (6.1) where the directional derivatives in the direction ζ ∈ R^d can be computed by using finite differences or algorithmic differentiation as described in [10]. This leads in general to a much faster evaluation in O(d) operations.
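A minimal sketch of this Monte-Carlo trace estimator, here with g = Id and forward finite differences, is given below; it is checked against a mapping whose Jacobian trace is known in closed form.

```python
import numpy as np

def mc_trace_jacobian(f, y, n_probes=16, eps=1e-4, rng=None):
    """Monte-Carlo estimate of tr(df/dy) at y using Gaussian probes and
    forward finite differences, as in Eq. (6.1) with g = Id."""
    rng = np.random.default_rng() if rng is None else rng
    acc = 0.0
    for _ in range(n_probes):
        zeta = rng.normal(size=y.shape)
        acc += zeta @ (f(y + eps * zeta) - f(y)) / eps
    return acc / n_probes

# Sanity check on a mapping with a known Jacobian trace: f(y) = tanh(y),
# whose divergence is sum(1 - tanh(y)^2).
rng = np.random.default_rng(2)
y = rng.normal(size=500)
print(mc_trace_jacobian(np.tanh, y, n_probes=64, rng=rng),
      np.sum(1.0 - np.tanh(y) ** 2))
```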
In the Poisson setting, risk estimators rely on the computation of ⟨y, f↓(y)⟩ for some mapping f : R^d → R^d. For instance, PUKLA requires computing such a quantity with f = −log μ̂ (see Theorem 4.2). Again, the computation of such terms requires at least O(d²) operations in general. Based on first-order expansions, we have empirically chosen to perform Monte-Carlo simulations on an approximation in which ζ ∈ {−1, +1}^d is Bernoulli distributed with p = 0.5. In our numerical experiments, this approximation led to O(d) operations and satisfactory results, even though f was chosen to be non-linear. This approximation clearly deserves more attention but is considered here to be beyond the scope of this study.
Numerical experiments
In this section, we will perform numerical experiments showing the interest of the proposed Kullback-Leibler risk estimators in two different applications.
Application to image denoising
We first consider that Y and μ are d dimensional vectors representing images on a discrete grid of d pixels, such that entries with index i are located at pixel location δ i ∈ Δ ⊂ Z 2 . A realization y of Y represents a noisy observation of the image μ. The estimateμ of μ is a denoised version of y.
Performance evaluation. In order to evaluate the proposed loss functions and their estimates, visual inspection will be used to assess the image quality in terms of noise variance reduction and image content preservation. In order to provide an objective measure of performance, taking into account the heteroscedasticity and the tails of the noise, we will evaluate the mean normalized absolute deviation error (MNAE). The MNAE measures to which extent μ̂(Y) might belong to a confidence interval around μ with dispersion related to Λ(μ). The MNAE is expected to be 1 when μ̂(Y) ∼ N(μ, Λ(μ)), and should get closer to 0 when μ̂(Y) improves on Y itself.
Simulations in linear filtering. We consider here that μ̂ is the linear filter μ̂(y) = Wy, where W ∈ R^{d×d} is a circulant matrix encoding a discrete convolution with a Gaussian kernel of bandwidth τ > 0. In this context, we will evaluate the relevance of the different proposed loss functions and their estimates as objectives to select a bandwidth τ offering a satisfying denoising. Figure 1 gives an example of a noisy observation y of an image μ representing fingerprints whose pixel values are independently corrupted by Gamma noise with shape parameter L = 3. We have evaluated the relevance of the natural risk MSE_θ, given by ||μ^{-1} − μ̂(Y)^{-1}||², MKLS and MKLA in selecting the bandwidth τ. Visual inspection of the results obtained at the optimal bandwidth for each criterion shows that the natural risk MSE_θ fails to select a relevant bandwidth while MKLS and MKLA both provide a better trade-off. The natural risk strongly penalizes small discrepancies at the lowest intensities while not being sensitive enough to discrepancies at higher intensities. As the noisy image has several isolated pixel values approaching 0, the natural risk strongly penalizes the smoothing of such isolated structures, preventing a satisfying noise variance reduction. The Kullback-Leibler loss functions take into account the fact that Gamma noise has a constant signal-to-noise ratio. Hence, they do not favor the restoration of either bright or dark structures more, allowing satisfying smoothing for both, as assessed by the MNAE. Finally, estimators of these loss functions, respectively GSURE, SUKLS and DKLA, are given. Note that for L = 3, the Gamma distribution is far from reaching the asymptotic conditions of Theorem 4.3. As a result, the bias is not negligible (it becomes obvious for the lowest values of τ in Figure 1.h). Nevertheless, minimizing DKLA can still provide an accurate location of the optimal parameter for MKLA. (Fig. 2 (a,b,c): Risks and their estimates as a function of the bandwidth in the same setting as in Figure 1 but for Gamma noise with L = 100. The optimal bandwidth τ and the MNAE are indicated. Red shows unbiased estimation and blue biased estimation.) Figure 2 reproduces the same experiment but with Gamma noise with L = 100, i.e., with a much better signal-to-noise ratio. Interestingly, the bias of DKLA becomes much smaller than in the previous experiment. This was indeed expected since, with L = 100, the Gamma distribution fulfills the asymptotic conditions of Theorem 4.3 much better. Remark that the MNAE values are still in favor of the Kullback-Leibler objectives, but the gains are much smaller. In fact, all MNAE values get closer to 1 since noise reduction with signal preservation using linear filtering becomes much harder in such a high signal-to-noise ratio setting.
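The linear filtering setup of this experiment can be sketched as follows: a periodic Gaussian kernel applied through the FFT realizes μ̂(y) = Wy, and tr W (equal to d times the kernel's zero-offset weight) quantifies the filter's complexity. The 1-D signal, the noise level and the bandwidth grid below are illustrative only, and the oracle MSE is printed merely as a sanity check, whereas the paper selects τ with the proposed estimators rather than with the oracle.

```python
import numpy as np

def gaussian_kernel(d, tau):
    """Periodic 1-D Gaussian kernel of bandwidth tau, normalized to sum to one."""
    x = np.minimum(np.arange(d), d - np.arange(d))     # circular distances
    k = np.exp(-0.5 * (x / tau) ** 2)
    return k / k.sum()

def circulant_filter(y, tau):
    """Apply mu_hat(y) = W y, where W is the circulant Gaussian convolution."""
    k = gaussian_kernel(len(y), tau)
    return np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(y)))

def trace_W(d, tau):
    """tr W for the circulant filter equals d times the kernel weight at zero offset."""
    return d * gaussian_kernel(d, tau)[0]

rng = np.random.default_rng(3)
d = 256
mu = 1.0 + 0.5 * np.sin(np.linspace(0, 6 * np.pi, d))
y = mu * rng.gamma(shape=3.0, scale=1.0 / 3.0, size=d)   # Gamma noise with L = 3 and mean mu
for tau in (0.5, 1.0, 2.0, 4.0):
    mu_hat = circulant_filter(y, tau)
    print(tau, np.mean((mu_hat - mu) ** 2), trace_W(d, tau))  # oracle MSE vs. complexity
```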
Simulations in non-linear filtering. We consider here that μ̂ is the non-local filter [4] defined by μ̂(y) = W(y)y with W_{i,j}(y) = exp(−d(P_i y, P_j y)/τ) (7.2), where P_i ∈ R^{p×d} is a linear operator extracting a patch (a small window of fixed size) at location δ_i, d : R^p × R^p → R_+ is a dissimilarity measure (infinitely differentiable and adapted to the exponential family following [8]) and τ > 0 a bandwidth parameter. Remark that, as W(y) ∈ R^{d×d} depends on y, μ̂(y) is non-linear. In this context, we will evaluate again the relevance of the proposed loss functions and their estimates as objectives to select the bandwidth τ. Figure 3 gives an example of a noisy observation y of an image μ representing a bright two-dimensional chirp signal shaded gradually into a dark homogeneous region. The noisy observation y is contaminated by noise following a Gamma distribution with shape parameter L = 3. We have again evaluated the relevance of the natural risk MSE_θ, given by ||μ^{-1} − μ̂(Y)^{-1}||², MKLS and MKLA in selecting the bandwidth parameter. Visual inspection of the results obtained at the optimal bandwidth for each criterion shows that the natural risk fails to select a relevant bandwidth while MKLS and MKLA both provide more satisfying results. As the image μ is very smooth in the darker region, the natural risk favors strong variance reduction, leading to a strong smoothing of the texture in the brightest area. Again, the Kullback-Leibler loss functions find a good trade-off, simultaneously preserving the bright texture and reducing the noise in the dark homogeneous region, as assessed by the MNAE. Finally, estimators of these loss functions, respectively GSURE, SUKLS and DKLA, are given. Figure 4 gives a similar example where the image μ represents a two-dimensional chirp signal shaded gradually into a bright homogeneous region. The image is displayed in log-scale to better assess the variations of the texture in the darkest region. The noisy observation y is corrupted by independent noise following a Poisson distribution. We have evaluated the relevance of the risks MSE_μ, MKLS and MKLA in selecting the bandwidth parameter. Visual inspection of y shows that darker regions are more affected by noise than brighter ones. This is due to the fact that Poisson corruption leads to a signal-to-noise ratio evolving as √μ.
Application to variable selection
We now consider the problem of variable selection in linear regression, i.e., of finding the non-zero components of a vector β ∈ R^q under the assumption that an observed vector y ∈ R^d has expectation μ = Xβ, where X ∈ R^{d×q} is the so-called design matrix. To this aim, we consider the Least Absolute Shrinkage and Selection Operator (LASSO) [44] given, for λ > 0, by β̂(y) ∈ argmin_{β∈R^q} {−log p(y; θ = φ(Xβ)) + λ||β||_1}.
In this case the predictor μ̂ is given by μ̂(y) = Xβ̂(y). The LASSO is known to promote sparse solutions, i.e., such that the number of non-zero entries of β̂ is small compared to q. The level of sparsity is indirectly controlled by the regularization parameter λ: the larger λ is, the sparser β̂ will be. Finding the optimal parameter λ, and then selecting the relevant variables (columns of X) explaining y, is a challenging problem that can be addressed by minimizing an estimator of the risk. In this context, we will evaluate again the relevance of the different proposed loss functions and their estimates as objectives to select a regularization parameter λ offering a relevant selection of variables. Figure 5 and Table 3 provide results obtained on such a linear regression problem where X is an orthogonal matrix and d = q = 16,384. The vector β was chosen such that 28% of its entries are non-zero. We have generated 200 independent realizations y of Y using a Gamma distribution model with scale parameter L = 8. We have again evaluated the relevance of the natural risk MSE_θ, given by ||μ^{-1} − μ̂(Y)^{-1}||², MKLS and MKLA in selecting the regularization parameter. Figure 5 shows the evolution of these objectives as a function of λ. It shows that the KL objectives lead to selecting a larger λ parameter than the natural risk. Performance in terms of average percentages of false negatives (FN: β̂_i = 0 and β_i ≠ 0), false positives (FP: β̂_i ≠ 0 and β_i = 0) and errors (FP or FN) is reported in Table 3. It shows that tuning the parameter λ with respect to the KL objectives leads to lower numbers of errors than with the natural risk. One can observe that the subsequent LASSO estimators work at different trade-offs: the KL objectives favor FN over FP, while the natural risk favors FP over FN. Finally, performances with the estimators of MSE_θ with GSURE, MKLS with SUKLS, and MKLA with DKLA are also given. It can be observed that the risk estimators offer on average results comparable to their oracle counterparts but with a higher variance. Note that the LASSO is not differentiable, so that DKLA is not guaranteed to be asymptotically unbiased (as the conditions of Theorem 4.3 are not fulfilled), which explains the large discrepancies observed between the results obtained by MKLA and DKLA. Nevertheless, even though DKLA is not asymptotically unbiased in this case, variable selection with the LASSO guided by DKLA still yields good results, similar to those obtained when it is guided by the oracle MKLA objective. A last important question is whether our risk estimators are robust against model misspecification, i.e., when the generative model (1.1) is only approximately known. Indeed, Lv and Liu [34] demonstrated the advantage of using the KL divergence principle for model selection problems in both correctly specified and misspecified models. Along these lines, we have also shown in Table 3 the results obtained under misspecification. We have chosen to evaluate the performance of the LASSO guided by the aforementioned risk estimators when the shape parameter L of the Gamma distribution is misestimated such that |1 − L̂/L| = 0.1. We found that the performance of all estimators drops in this case. Nevertheless, their relative performance is preserved: the KL objectives lead to lower numbers of errors than the natural risk.
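To make the selection procedure concrete, the sketch below runs the grid search over λ for an orthogonal design, where the LASSO with a squared-error data fit reduces to entry-wise soft thresholding. The Gamma and Poisson data fits used in this paper do not admit this closed form, so a Gaussian toy problem with SURE as the plug-in objective is used purely for illustration, and `risk_estimate` is a stand-in for GSURE, SUKLS or DKLA.

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form LASSO solution for an orthogonal design and squared-error data fit."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def select_lambda(y, lambdas, risk_estimate):
    """Grid search: pick the lambda minimizing a plug-in risk estimate.
    `risk_estimate(y, beta_hat)` stands in for GSURE / SUKLS / DKLA."""
    scores = [risk_estimate(y, soft_threshold(y, lam)) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]

# Toy illustration with Gaussian noise and SURE as the plug-in objective.
rng = np.random.default_rng(4)
d, sigma = 1000, 1.0
beta = np.zeros(d)
beta[:50] = 5.0                                       # 5% non-zero entries
y = beta + sigma * rng.normal(size=d)

def sure(y, beta_hat):
    dof = np.count_nonzero(beta_hat)                  # DOF of soft thresholding
    return np.sum((y - beta_hat) ** 2) - d * sigma**2 + 2 * sigma**2 * dof

lambdas = np.linspace(0.5, 5.0, 40)
print(select_lambda(y, lambdas, sure))
```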
Conclusion
We addressed the problem of using and estimating Kullback-Leibler losses for model selection in recovery problems involving noise distributed within the exponential family. Our conclusions are threefold: 1) Kullback-Leibler losses have empirically proven more relevant than squared losses for model selection in the considered scenarios; 2) Kullback-Leibler losses can in many cases be estimated unbiasedly, or with controlled bias, depending on the regularity of both the predictor and the noise; 3) even though the estimation is subject to variance and bias, the subsequent selection has empirically proven close to the optimal one associated with the loss being estimated. Future work should focus on understanding under which conditions such behavior can be guaranteed. This includes establishing tighter bounds on reliability, consistency with respect to the data dimension d, and asymptotic optimality results for given classes of predictors. Estimation of Kullback-Leibler losses and other discrepancies (e.g., Bhattacharyya, Hellinger, Mahalanobis, Rényi or Wasserstein distances/divergences) beyond the exponential family and requiring less regularity on the predictor should also be investigated.
"Mathematics"
] |
Biological reproduction variables of the scalloped spiny lobster (Panulirus homarus Linnaeus, 1758) on the west coast of Lampung Province
The scalloped spiny lobster (Panulirus homarus) is one of the dominant lobster species caught by fishermen on the west coast of Lampung Province. Fishing pressure on this lobster appears to be high because the catch fetches a good price on the domestic and export markets. However, little information on the biology of this species on the west coast of Lampung Province is available for management. This study aimed to analyze the biological reproduction variables of the scalloped spiny lobster, i.e., sex ratio, length-weight relationship and relative condition factor, maturity, reproductive potential, and spawning type. Samples were collected by simple random sampling at the main holding center from October 2021 to March 2022. Results showed that the carapace length of the caught P. homarus was in the range of 41-94 mm, the sex ratio was about 1:1.12, both sexes showed a negative allometric growth pattern, and the condition factor and the gonadosomatic index tended to increase from October to December and then decrease. Fully mature gonads (stage III) were found every month in both sexes, and the peak of the spawning season occurred in November-December. Fecundity, as the reproductive potential of P. homarus, was in the range of 762-182,000 eggs, while the size at onset of maturity, or mean size at first spawning, was 63.10 mm carapace length. The spawning pattern tends to be that of a partial spawner. Females of P. homarus below the size at onset of maturity are often caught. It is therefore recommended that a minimum legal size of more than 64 mm carapace length be applied to this fishery.
Introduction
The spiny lobster (Panulirus spp.) is an economically important crustacean in Indonesia because the catch fetches a good price on the domestic and export markets. At least seven lobster species can be found in Indonesian waters, i.e., the scalloped spiny lobster (P. homarus), pronghorn spiny lobster (P. penicillatus), ornate spiny lobster (P. ornatus), painted spiny lobster (P. versicolor), longlegged spiny lobster (P. longipes longipes), stripeleg spiny lobster (P. femoristriga), and mud spiny lobster (P. polyphagus) [1]. All species have been caught by fishers in almost all of their distribution areas, including the Indonesian Fisheries Management Area (IFMA) 572, the part of the Indian Ocean off western Sumatera.
The potential of lobster resources in IFMA 572 is high, as is the level of its utilization [2]. The dominant lobster species obtained in this area include the scalloped spiny lobster (P. homarus). The waters of the west coast of Lampung Province are part of IFMA 572 and have a high potential for lobster resources. This area is also one of the main locations for lobster trading transactions in Indonesia [3]. Although the scalloped spiny lobster does not dominate the potential in this area, it has a high selling value. On the international market, the selling price of lobsters, including scalloped spiny lobsters, ranges from US$ 50-80 per kilogram for individuals of 300 grams to 1 kilogram and about US$ 100 per kilogram for sizes above 1 kilogram [4].
The fishing intensity for lobsters in general, and for scalloped spiny lobsters in particular, using gill nets on the west coast of Lampung is indicated to be high. Intensive lobster catching can affect the balance and existence of lobster stocks in nature, causing a decrease in stocks, an imbalance in the sex ratio, and even the extinction of species [5]. Because of such consequences, efforts to manage lobster resources based on scientific evidence are needed.
Research on the biology, ecology, population dynamics, and fisheries of the scalloped spiny lobster has been carried out in several Indonesian waters [6,7,8,9,10,11], especially in West Aceh as part of IFMA 572 [12]. However, there is little information on the biology of this species on the west coast of Lampung Province, even though this area is part of IFMA 572. The scalloped spiny lobster population on the west coast of Lampung Province may or may not constitute one stock unit within IFMA 572. Therefore, management efforts are needed to maintain the sustainable use of lobster resources in the waters of the west coast of Lampung, and the measures applied need to be adjusted to the biological characteristics of the particular lobster species present in this area. The scalloped spiny lobster is one of the dominant species caught in the waters of the west coast of Lampung, and it is necessary to study its reproductive biology as a basis for resource management so that the sustainability of this lobster population is maintained and long-term community benefits are provided. The purpose of this study was to analyze the biological reproduction variables of the scalloped spiny lobster, i.e., sex ratio, length-weight relationship and condition factor, maturity, gonadosomatic index, reproductive potential, and size at onset of maturity or mean size at first spawning, in the west coast waters of Lampung Province.
Data collection
Monthly sampling was carried out at the main holding center for caught spiny lobsters in Krui Subdistrict, West Coast District of Lampung Province (Fig. 1), from October 2021 to March 2022. Lobsters from the landing sites of the gill-net fishery in the southeastern and northwestern areas were brought in live packing to the main holding center in Krui, which is also the center of the interregional market. Scalloped spiny lobster samples were taken by random sampling. The data collected included sex, carapace length (CL), total weight (W), gonad weight, gonadal maturity stage, fecundity, and the gonad condition of females carrying eggs (berried females). The sex of the scalloped spiny lobster was determined visually based on differences in the genitalia (gonopore), the tip of the fifth pereopod, and the ovigerous setae on the swimming legs (pleopods). Male lobsters have an oval gonopore at the base of the fifth pereopod, and the tip of the fifth pereopod has no pseudo-claws and no ovigerous setae. Female lobsters have a rounded gonopore at the base of the third pereopod, the tip of the fifth pereopod carries pseudo-claws, and ovigerous setae are present on the pleopods [13]. The carapace length (CL) of each sample was measured as the distance from the rostral sinus to the posterior edge of the carapace to the nearest 0.1 mm using a Vernier caliper, and the total length (TL) was measured with a measuring tape. Total weight (W) was measured with a digital balance to the nearest 0.1 g in the field laboratory.
The lobster samples were dissected after measuring CL and W, followed by observation of gonad texture and color and measurement of testis and ovary weight to the nearest 0.01 g using a digital balance. Testis and ovary development was then classified macroscopically into four stages following [14,15], i.e., stage I = immature, stage II = maturing (prematuration), stage III = mature (mature extrusion), and stage IV = spawning (post extrusion). Egg-bearing females were also dissected to observe their ovaries and determine their developmental condition as described above. The fecundity of the scalloped spiny lobster in one spawning season was determined by counting the eggs that had been fertilized and attached to the ovigerous setae; only eggs (ova) with an orange or yellow color were counted [16]. The egg mass on the pleopods was removed from the female's abdomen, preserved in 70% ethanol, and then dried, sieved, and weighed to the nearest 0.001 g in the field laboratory. Subsequently, the eggs in three 0.1 g subsamples were counted, and the average count was scaled to the total weight of the egg mass to obtain the fecundity of an individual sample per spawning [17].
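As a worked illustration of this gravimetric scaling, the short snippet below uses hypothetical counts and weights (not data from this study) to show how the mean subsample count is scaled to the total egg-mass weight.

```python
# Gravimetric fecundity estimate for one berried female (illustrative numbers only).
subsample_weight_g = 0.1
subsample_counts = [152, 147, 160]      # hypothetical egg counts per 0.1 g subsample
total_egg_mass_g = 95.3                 # hypothetical total dried egg-mass weight

mean_count = sum(subsample_counts) / len(subsample_counts)
eggs_per_gram = mean_count / subsample_weight_g
fecundity = eggs_per_gram * total_egg_mass_g
print(f"estimated fecundity: {fecundity:.0f} eggs per spawning")
```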
Data analysis
The sex ratio is the ratio of male to female lobsters, and a chi-square test was applied to test whether it deviated from 1:1. The relationship between carapace length and total weight (LWR) was calculated using the equation of [18], followed by a t-test on the estimated b value [19]. Condition factors were analyzed for both sexes using the formula of [20]. The gonadosomatic index (GSI) was calculated as gonad weight (g) / body weight (g) × 100. The size at onset of maturity, or mean size at first spawning, was estimated from the lengths of the egg-bearing females using the Spearman-Karber method [21]. A sketch of these calculations is given below.
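The following Python sketch illustrates these standard computations on synthetic numbers; it is not the authors' code, and the inputs and the specific formulations (W = aCL^b fitted on log-transformed data, a t-test of b against 3, the relative condition factor Kn = W/(aCL^b), GSI, and a chi-square test of a 1:1 sex ratio) are assumptions made only for the example.

```python
import numpy as np
from scipy import stats

# Synthetic example data (mm carapace length, g total weight, g gonad weight).
cl = np.array([45.0, 52.0, 58.0, 61.0, 67.0, 73.0, 80.0])
w  = np.array([110.0, 150.0, 190.0, 215.0, 260.0, 310.0, 380.0])
gonad_w = np.array([1.1, 1.4, 2.0, 2.3, 3.1, 3.6, 4.5])

# Length-weight relationship W = a * CL^b, fitted as log10(W) = log10(a) + b*log10(CL).
res = stats.linregress(np.log10(cl), np.log10(w))
a, b = 10 ** res.intercept, res.slope

# t-test of H0: b = 3 (isometric growth) against the estimated slope.
t_stat = (b - 3.0) / res.stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(cl) - 2)

# Relative condition factor and gonadosomatic index.
kn = w / (a * cl ** b)
gsi = gonad_w / w * 100.0

# Chi-square test of a 1:1 sex ratio (hypothetical counts of males and females).
males, females = 212, 238
chi2, p_sex = stats.chisquare([males, females], f_exp=[(males + females) / 2] * 2)

print(f"a={a:.4f}, b={b:.3f}, t={t_stat:.2f}, p={p_value:.3f}")
print(f"mean Kn={kn.mean():.2f}, mean GSI={gsi.mean():.2f}, chi2={chi2:.2f}, p={p_sex:.3f}")
```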
Results and Discussion
Scalloped spiny lobsters (P. homarus) caught in the waters of the west coast of Lampung Province during the study period had carapace lengths (CL) from 41 to 93 mm. Males ranged from 42 to 81 mm CL, while females ranged from 41 to 93 mm CL. The dominant size class in the catch was 53-58 mm CL. These lobsters tend to be larger than the scalloped spiny lobsters caught in the southern part of Java, i.e., in Yogyakarta and Pacitan, where sizes range from 28.2 to 85.2 mm CL [7], and in Palabuhanratu Bay, with sizes between 27 and 69 mm CL [11]. However, the size range is smaller than in Tabanan-Bali, with sizes of about 36.0 to 105.6 mm CL [8], and in West Aceh waters, with sizes between 39.0 and 112.0 mm CL [12].
The overall sex ratio of the P. homarus obtained was 1:1.12 and, based on the chi-square test, was not significantly different from 1:1 (Table 1). The balanced proportion of male and female lobsters indicates that both sexes have the same chance of being caught. The ideal sex ratio is when the population is balanced or when females outnumber males; under such conditions the sustainability of the population will be maintained [22]. This condition differs from the results of research on scalloped spiny lobsters in the waters of Ekas Bay, Lombok Island [6], and in the waters of West Aceh [12], where female P. homarus predominate. However, the sex ratio can change over time due to the influence of an increased mortality rate and a decreased growth rate [23].

The carapace length (CL) to body weight (W) relationships of P. homarus captured in the west coast of Lampung waters were W = 0.3674 CL^1.426 for males and W = 0.4066 CL^1.4479 for females (Fig. 2). Based on the t-test applied to the b value, both male and female spiny lobsters had a negative allometric (hypoallometric) growth pattern (b < 3), and there was no significant difference (p > 0.05) between the regression coefficients of male and female lobsters. This result indicates that male and female lobsters gain length and weight in a similar way. Research in other regions of Indonesia (i.e., Yogyakarta, Bali, and Sorong) also shows the same growth pattern [7,8,23]. The same growth pattern in different areas can be caused by similar water characteristics supporting the availability of food and suitable habitat for lobsters.

The mean condition factor tended to increase from October to December for both males and females and then decrease. The highest condition factor for male scalloped spiny lobsters was 1.21, found in November, while for females it was 1.48, in December (Fig. 3). Condition factors that vary temporally indicate the influence of the reproductive period. The increase in the condition factor of female lobsters in November and December is thought to have occurred because the spiny lobsters were in their spawning period. After the spawning season, the condition factor values decreased (January-March). This is supported by the increasing number of female spiny lobsters with GMS IV and berried females.

Testis and ovary development of the scalloped spiny lobster is shown in Fig. 4, and the temporal distribution of gonad development shows only slight variation between males and females. Gonads at the fully mature stage (stage III) were found every month in both sexes, and the percentage of male spiny lobsters with mature gonads (GMS III) dominated from October 2021 to March 2022 (Fig. 5). The same was found in female spiny lobsters, where the percentage of females with GMS III was relatively high every month, especially during November to December. The proportions of both males and females with GMS IV tended to increase from November to January/February, which indicates a high occurrence of spawning.
The average gonadosomatic index (GSI) of male scalloped spiny lobsters was in the range of 1.06-1.34, while the average GSI of females was in the range of 0.74-1.89 (Fig. 6). The highest GSI value for males was found in October, and for females in December. The peak of the spawning season during the observation period appears to have occurred in November-December. This period coincides with the rainy season and the west monsoon. During the rainy season, high rainfall decreases the salinity of coastal waters, encouraging spiny lobsters to come out of the reef and look for partners to breed [25].
Berried (egg-bearing) females were frequently found with gonads at maturity stage III, indicating that the spawning pattern tends to be that of a partial spawner (Fig. 7a). Meanwhile, the percentage of berried females with gonads at maturity stage IV was also high during November-December, which supports the hypothesis regarding the peak of the spawning season. Fecundity, as reproductive potential, ranged between 762 and 182,000 eggs per spawning. The fecundity of female P. homarus in several other areas, such as Bali, Lombok, and India, showed varying results [6,8,26]. Female scalloped spiny lobsters are capable of producing 100,000-900,000 eggs in one incubation period (Chan, 1998). The number of eggs is influenced by several factors, including nutrition related to the quantity and quality of food, variation in the size of female individuals, and the age and average size at first spawning [27].
The size at onset of maturity, or mean size at first spawning, was 63.10 mm CL (Fig. 7b). This is smaller than that found in Ekas Bay, Lombok, which is 77.44 mm CL [6], and in Tabanan-Bali, which is 68.52 mm CL [8]. The size at first sexual maturity correlates positively with the observed maximum size of sexually mature female lobsters [28], and it can vary with time and growth rate. Geographically, variation in the size at first sexual maturity can be influenced by environmental parameters such as temperature and food quality [29]. During the observation period, male and female scalloped spiny lobsters smaller than the size at onset of spawning were often caught. It is therefore recommended that a minimum legal size of more than 64 mm carapace length be applied to this fishery for the sustainability of the resource. This differs slightly from the Regulation of the Minister of Marine Affairs and Fisheries Number 16 of 2022, under which the lobsters (P. homarus) allowed to be caught are those above 60 mm CL or with individual weights above 150 grams.
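A minimal sketch of a Spearman-Karber-type estimate of the mean size at first spawning is given below, assuming ordered carapace-length classes with the proportion of berried females rising from 0 to 1; the class values and proportions are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical CL class values (mm) and the proportion of berried females per class.
cl_classes = np.array([44.0, 50.0, 56.0, 62.0, 68.0, 74.0, 80.0])
prop_berried = np.array([0.00, 0.05, 0.20, 0.55, 0.80, 0.95, 1.00])

# Spearman-Karber estimator of the mean size at first spawning: midpoints of successive
# classes are weighted by the increase in the (monotone) maturity curve, assuming the
# proportion rises from 0 in the smallest class to 1 in the largest.
p = np.maximum.accumulate(prop_berried)
midpoints = (cl_classes[:-1] + cl_classes[1:]) / 2.0
mean_size = np.sum(midpoints * np.diff(p))
print(f"estimated mean size at first spawning: {mean_size:.2f} mm CL")
```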
Conclusion
The scalloped spiny lobsters caught have a balanced sex ratio and a negative allometric growth pattern. Spawning occurred throughout the observation period, with a peak in November-December accompanied by increasing condition factor and gonadosomatic index, and the species tends to be a partial spawner. The average reproductive potential per spawning was high. Many female scalloped spiny lobsters are caught below the size at onset of spawning, so this fishery should be managed with a minimum legal size set at the size at onset of spawning.
Fig. 1. Map of the research location on the west coast of Lampung Province, Indonesia.
Fig. 2. Length-weight relationship of male and female scalloped spiny lobster on the west coast of Lampung Province.
Fig. 3. Condition factors of male and female scalloped spiny lobsters (P. homarus) in the waters of the west coast of Lampung Province.
Fig. 4. Gonad maturity stage (GMS) of male and female scalloped spiny lobster on the west coast of Lampung Province.
Fig. 5. Temporal distribution of gonad maturity stage (GMS) of male and female scalloped spiny lobster on the west coast of Lampung Province.
Fig. 6. Gonadosomatic index (GSI) of male and female scalloped spiny lobster on the west coast of Lampung Province.
Fig. 7. Temporal distribution of berried females with gonad maturity stages III and IV (a) and the size at onset of spawning of the scalloped spiny lobster (b) on the west coast of Lampung Province.
Table 1. Sex ratio of the scalloped spiny lobster (P. homarus) on the west coast of Lampung Province.
"Biology",
"Environmental Science"
] |
Galectokines: The Promiscuous Relationship between Galectins and Cytokines
Galectins, a family of glycan-binding proteins, are well-known for their role in shaping the immune microenvironment. They can directly affect the activity and survival of different immune cell subtypes. Recent evidence suggests that galectins also indirectly affect the immune response by binding to members of another immunoregulatory protein family, i.e., cytokines. Such galectin-cytokine heterodimers, here referred to as galectokines, add a new layer of complexity to the regulation of immune homeostasis. Here, we summarize the current knowledge with regard to galectokine formation and function. We describe the known and potential mechanisms by which galectokines can help to shape the immune microenvironment. Finally, the outstanding questions and challenges for future research regarding the role of galectokines in immunomodulation are discussed.
Introduction
The straightforward mission of the immune system is to protect an organism from external and internal threats. However, the execution of this mission is far from straightforward. It relies on complex biological processes involving different organs, cells, and proteins that aim to recognize, adapt, and neutralize threats. Consequently, a dysfunctional or inadequate immune response can result in pathogen-induced infectious or non-infectious diseases like autoimmune disorders, diabetes, rheumatoid arthritis, and cancer. Regarding the latter, recent immunotherapy developments support the idea that the immune system is capable of eradicating malignant cells [1]. At the same time, the limited effectiveness of immunotherapy also illustrates our current lack of insight into the diverse immuno-modulatory mechanisms that can contribute to immune dysfunction [2,3]. Thus, uncovering the full spectrum of mechanisms that shape an adequate immune microenvironment remains a significant scientific challenge.
Research over the last two decades has revealed that galectins constitute a family of glycan-binding proteins that play a crucial role in immune homeostasis [4]. In 1995, Perillo and coworkers showed that galectin-1 could induce apoptosis of activated human T cells via interactions with N-glycans on CD45 [5]. Toscano and collaborators further established the relevance of glycosylation in galectin-mediated immunomodulation. They observed that distinct T helper cell stimuli resulted in different glycosylation of specific T helper subsets [6]. Consequently, galectin-1 could skew the T helper balance since Th2 cells were less susceptible to galectin-1-induced cell death as compared to Th1 and Th17 cells [6]. The association between immune cell glycosylation and galectin-mediated immunomodulation is broadly recognized [4,7].
Interestingly, recent studies have identified additional mechanisms by which galectins can control specific immune cell functions. These mechanisms involve a connection between galectins and cytokines, another family of proteins with immunoregulatory activity [8,9]. For example, many studies have shown a reciprocal relationship between galectins and cytokines with regard to expression regulation and protein secretion [10][11][12][13][14]. In addition, emerging evidence indicates that galectins and cytokines can also directly interact and form heterodimers. Such galectin-cytokine heterodimers, here referred to as galectokines, can affect the activity of both proteins, indicative of a mechanism that extends beyond transcriptional regulation [15][16][17]. These findings add a new layer of complexity to the regulatory mechanisms that shape the immune response.
The current review provides an overview of the intricate relationship between galectins and cytokines. We discuss the functional consequences of galectokine formation, focusing on immunomodulation. In addition, we highlight the outstanding research questions and challenges concerning unraveling the role of galectokines in (immune) cell biology.
The Cytokine Protein Family
Cytokines constitute a large family of (immuno)regulatory proteins, including interferons, interleukins, chemokines, lymphokines, and the tumor necrosis factor family ( Figure 1a). These relatively small soluble proteins (±5-20 kDa) can be expressed and recognized by almost every cell type, and they can exert paracrine, autocrine, and endocrine functions [18,19]. The regulatory pathways controlling cytokine expression are complex. Important initiators are so-called pattern recognition receptors (PRR) which can recognize, e.g., pathogens, damaged or dying cells. Downstream PRR pathways that subsequently trigger cytokine expression include NF-κB signaling, MAPK signaling, TBK1/IRF3 signaling, and inflammasome signaling [20]. Subsequently, cytokine receptor activation can control the expression and secretion of other cytokines via a plethora of signaling pathways, including the above, as well as Jak/STAT signaling, PI3K/AKT, and others [21][22][23][24].
To initiate responses, cytokines bind to a broad panel of transmembrane receptors (Figure 1b), some of which are specific for a single cytokine, while others are more promiscuous [25][26][27][28]. Since cytokine receptors are also broadly expressed, cytokines can show pleiotropic activity toward different cell types as well as complementary or redundant activity toward specific cells [25,26]. The pleiotropic and redundant activity allows cytokines to display both stimulatory and inhibitory effects. These effects depend on many factors, including the microenvironment, the timing of the release, receptor density, and the presence of competing or synergistic elements. Regarding the latter, many soluble cytokine receptors have been described that can scavenge cytokines, thereby affecting their activity on cell surface cytokine receptors [29][30][31][32].
Over the past 40 years, extensive research on cytokine function and activity has identified these versatile proteins as essential players in nearly every biological field, particularly in immunology [21,[33][34][35]. For example, T helper (Th) cells are known to be effective cytokine producers but with apparent differences between the specific subtypes. More specifically, Th1 cells are known for the production of proinflammatory cytokines, including interferon (IFN)-γ, interleukin (IL)-2, tumor necrosis factor-alpha (TNF-α), and granulocyte-macrophage colony-stimulating factor (GM-CSF). On the other hand, Th2 cells are characterized by the production of anti-inflammatory cytokines like IL-4, IL-5, IL-9, IL-10, and IL-13 [36]. In addition, immune cells can communicate and regulate each other's function and/or activity through cytokine secretion. For example, IL-4 can induce the development of Th2 cells on the one hand and inhibit the expression of proinflammatory cytokines like IL-1, TNF-α, IL-6, and CXCL8 on the other hand. In addition, IL-2 generates cytotoxic T cells but is also a driver of graft-versus-host disease. At the same time, IFN-γ is essential in the immune response against intracellular pathogens but can also underlie the development of autoimmune disease [36].
The examples above only scratch the surface of the complex signaling networks involving cytokines. A full description of immunoregulation by cytokines can fill entire textbooks and is beyond the scope of this review. It suffices to conclude that cytokines are generally considered one of the key protein families that shape and control the immune response.
The Galectin Protein Family
Galectins, formerly known as S-type lectins, are a widely expressed class of lectins, i.e., carbohydrate-binding proteins. Members of the galectin protein family are evolutionarily conserved, sharing a specific amino acid sequence motif in their carbohydrate recognition domain (CRD) and a binding affinity, although not exclusive, for β-galactosides [37,38]. The evolutionarily conserved CRD [39] comprises approximately 130 amino acids, which are organized in five- and six-stranded antiparallel β-sheets oriented in a β-sandwich configuration (Figure 2a) [40,41]. Although galectins can bind to a wide range of glycan ligands, each galectin has a different glycan-binding preference, which contributes to their specific biological activities [42,43]. At the same time, since their glycan ligands are found on many cell types, and different galectins can bind the same ligands, they can show pleiotropic and redundant activity, similar to cytokines.
It has been found that galectins can exert biological functions both intra-and extracellularly (Figure 2c) (For a concise overview, see [40]). Intracellularly, galectins mainly engage in glycan-independent interactions with different cytoplasmic and nuclear proteins to regulate, e.g., signaling pathways, pre-mRNA splicing, apoptosis, and the cell cycle [40,50,51]. Despite the lack of a classical secretion signal and a yet-to-be-resolved secretory mechanism, galectins are also found on the cell surface and in the extracellular environment. Here, galectins can enable signaling by binding to glycans on cell surface receptors, thereby regulating, e.g., the clustering and/or retention of cell surface receptors [52,53]. In addition, extracellular galectin-glycan interactions can facilitate cell-cell interactions and cell-extracellular matrix adhesion [54]. Thus, while cytokines mainly engage in protein-protein interactions with their respective receptors, galectins are capable of binding to target proteins directly or via glycans on target proteins. The latter affects their functionality as glycosylation is a dynamic process.
In line with their versatile functionality, galectins have been linked to a broad range of (patho)physiological processes, including pregnancy [55][56][57], vascular biology [58], platelet biology [59], cancer [7], and immune homeostasis [4,60]. Their role in the latter involves regulating both immunosuppressive and immunostimulatory programs [4]. For example, galectins can serve as pattern recognition receptors by binding to glycans on the surface of pathogens and microorganisms. Translating alarming signals into an innate immune response can help resolve acute inflammation [61][62][63]. Only recently, it was shown that macrophage-derived galectin-9 binds LPS on gram-negative bacteria to enhance bacterial opsonization and stimulate innate immunity [57]. In addition, by interacting with glycoproteins on the surface of specific immune cell types, galectins can, e.g., control the activation, signaling, and survival of T cells, modulate the cytokine balance, shape the B cell compartment, and mediate the suppressive activity of regulatory T cells (for an extensive review see [4]). Known receptors involved in these regulatory functions include several checkpoint proteins, e.g., PD-1, Tim-3, and VISTA [64][65][66]. As such, galectins are now considered immune checkpoint proteins [67]. While an in-depth description of the mechanisms of immunomodulation by galectins is beyond the scope of this review, it is evident that galectins are nowadays considered an essential immunomodulatory protein family that shows distinct but similar activity compared to cytokines. Despite the clear functional parallels between galectins and cytokines, the two families have mainly been considered as two distinct immunoregulatory families. However, increasing evidence suggests a close relationship between galectins and cytokines; this involves not only reciprocal expression regulation but also direct galectin-cytokine interactions that have functional consequences. The following paragraphs will further discuss these direct and indirect relationships between the two protein families.
Indirect Relationship between Galectins and Cytokines
As evident from the above, both cytokines and galectins play key roles in regulating the activity and function of immune cells. Since their immunoregulatory function is dependent on protein availability and levels in the microenvironment, it has been anticipated that members of each family can indirectly regulate the expression and/or secretion of the other. Indeed, as summarized below, there is ample evidence for a reciprocal relationship between galectin and cytokine expression regulation.
Galectin-Mediated Effects on Cytokine Levels
Many studies have reported on the effects of galectins on the secretion of both pro- and anti-inflammatory cytokines by different (immune) cell types (see also [9]). Since it is not feasible to cover all the literature on this topic in the current review, we present a selection of findings to illustrate how cytokine levels are affected by different galectins. Galectin-1 has been shown to target different immune cells and to display broad anti-inflammatory and pro-resolving activities, including, e.g., inhibition of eosinophil and neutrophil trafficking, modulation of T cell function, induction of tolerogenic dendritic cells, and modulation of macrophage polarization [68][69][70]. These activities are usually accompanied by galectin-1-dependent modulation of cytokine expression/secretion. For example, galectin-1 was found to shift the balance of cytokines secreted by T cells; it favors IL-10 and inhibits IFN-γ expression, consequently inhibiting T cell activation and inducing a shift from a Th1 to a Th2 type of response [71][72][73]. Also, dendritic cells exposed to galectin-1 acquired an IL-27-dependent regulatory function, promoting IL-10-mediated T cell tolerance with decreased IFN-γ levels [74]. In line with this, the lack of galectin-1 expression in B cells reduced IL-10 expression upon anti-CD40 stimulation, while TNF-α expression was increased [69]. At the same time, in macrophages, galectin-1 reduced the secretion of proinflammatory cytokines (TNF-α and IL-1β) as well as of the anti-inflammatory IL-10 and the pleiotropic IL-6 [75,76]. In line with the above, mice deficient in galectin-1 expression exhibit a hyperinflammatory phenotype characterized by increased secretion of proinflammatory cytokines, such as IL-12 or TNF-α, and reduced secretion of anti-inflammatory cytokines, such as IL-10 [77][78][79][80]. Consequently, treatment with galectin-1 was found to exert anti-inflammatory activity and to reduce disease severity in different animal models of acute and chronic inflammation, e.g., experimentally induced colitis [81,82], concanavalin A-induced hepatitis [83], or influenza A virus-induced acute lung injury [84].
Similar to galectin-1, a shift in cytokines from a Th1 to a Th2 phenotype has been observed in response to galectin-2. In activated T cells, galectin-2 was found to inhibit the production of IFN-γ and TNF-α and simultaneously increase the secretion of IL-5 and IL-10 [85]. In monocytes and macrophages, galectin-2 induced a proinflammatory phenotype, increasing the expression of proinflammatory genes, including IL12p40, TNF-α, IL-6, and IFN-β [12,86]. In accordance, galectin-2 stimulation of macrophages resulted in gene transcription and presentation of surface proteins consistent with a polarized M1 phenotype. These effects were carbohydrate-binding independent and mediated through the CD14/toll-like receptor (TLR)-4 pathway [86]. Of note, galectin-2 treated monocytes also showed increased IL-10 secretion [4], which mimics the above observation that galectin-1 simultaneously decreased pro-and anti-inflammatory cytokines. Conversely, inhibition of galectin-2 by specific nanobodies was found to reduce the expression of inflammatory cytokines and polarize macrophages toward an anti-inflammatory phenotype, leading to decreased atherosclerosis in hyperlipidemic mice [87,88].
Galectin-3 has also been shown to exert many modulatory functions in the (tumor) immune microenvironment, e.g., reducing tumor-infiltrating lymphocytes, suppressing T cell activation, and inhibiting the expansion of plasmacytoid DCs [89]. Moreover, a role for galectin-3 has been described in several infectious, inflammatory, and autoimmune diseases (for an extensive review, see [90]). Like those of galectin-1 and galectin-2, the regulatory functions of galectin-3 include modulation of cytokine expression and secretion. For example, cell-associated galectin-3 was recently found to trigger the secretion of IL-4 and IL-13 from basophils [91], as well as the secretion of IL-6 and TNF-α from dendritic cells (both plasmacytoid and myeloid) [92]. Likewise, galectin-3 was found to induce the secretion of IL-6 and TNF-α, as well as of GM-CSF, CXCL8, CCL2, CCL3, and CCL5, from fibroblasts [93]. A largely overlapping response was also observed in galectin-3-treated pancreatic stellate cells [94]. In particular, the induction of IL-6 appears to be a typical response to galectin-3, as Silverman and coworkers showed that galectin-3 was required for the induction of IL-6 expression in bone marrow mesenchymal stem cells [95]. Likewise, galectin-3 was described to induce the secretion of IL-6, G-CSF, and GM-CSF from endothelial cells [96,97]. Regarding the latter, comparable results were obtained with galectin-2/-4/-8, indicating that the observed response is not restricted to galectin-3 [10]. Indeed, it has been shown in a TCR mutational colitis mouse model that galectin-4 can trigger IL-6 expression/secretion from activated CD4+ T cells [98]. This depended on distinct intestinal inflammatory conditions and on the experimental model analyzed [98], which could explain why Paclik and coauthors, in a wild-type colitis model, demonstrated that galectin-4 reduced the secretion of IL-6, as well as of TNF-α, CXCL8, and IL-10, by activated T cells [99]. Since it has been shown that blocking galectin-4 in colorectal cancer cells induced the release of IL-6 and other cytokines, including CXCL1, CCL2, CCL5, and CXCL10 [100], it is tempting to speculate that these cytokines are responsible for the effects of galectin-4 on immune cells.
The effect of galectin-7 on cytokine expression and/or secretion is still poorly studied. Luo and collaborators showed an effect of galectin-7 on the Th1/Th2 balance. The authors found that unstimulated T cells did not respond to galectin-7, while activated CD4+ T cells expressed higher levels of IFN-γ and TNF-α, whereas IL-10 levels were reduced [101].
Galectin-8 has also been shown to trigger cytokine expression and release in different cell types (for a recent review, see [102]). For example, galectin-8 was described to induce the proliferation of resting T cells [103,104], which was accompanied by increased expression of IL-2, IFN-γ, and IL-4 [104]. The galectin also triggered B cell proliferation and the production of IL-6 and IL-10 [105]. At the same time, galectin-8 was found to induce cell death of activated T cells and/or hamper the proliferation of activated T cells via increased IL-10 and CTLA-4 expression by regulatory T cells [103,106]. Bone marrow-derived dendritic cells treated with galectin-8 also showed increased secretion of many cytokines, including IL-2, IL-3, IL-6, IL-13, TNF-α, CCL2, CCL12, G-CSF, and GM-CSF [107]. In line with this, galectin-8 knockout mice displayed a systemic reduction in the expression of, e.g., IL-6, TNF-α, and MCP-1 [108]. Recently, it was hypothesized that galectin-8 could also enhance the development of the cytokine storm observed in COVID-19 patients [102].
The effects of galectin-9 on cytokine production and secretion can be both stimulatory and inhibitory. The dual activity appears to depend on the cellular localization of galectin-9 and the respective galectin-9 receptors, particularly Tim-3 (transmembrane immunoglobulin mucin domain 3). For example, galectin-9-treated Th1 cells were shown to undergo apoptosis as a result of the galectin-9/Tim-3 interaction [109,110]. The reduced numbers of Th1 cells resulted in reduced production of IFN-γ and, consequently, inhibition of the immune response [110,111]. However, it was found that, after a wave of T cell apoptosis, galectin-9 can activate and expand Th1 cells and shift the CD4+/CD8+ balance towards a CD4+ phenotype [112]. The resulting activated (helper) T cells affected cytokine levels by producing proinflammatory cytokines IL-2 and IFN-γ [112]. In line with this, it was observed that galectin-9/Tim-3 could generate a Th1 response by increasing the expression of proinflammatory cytokines like TNF-α by monocytes and dendritic cells [109]. After this response, galectin-9 can trigger Tim-3 on Th1 cells to cease the immune reaction [97]. Similarly, Ma et al. described how the galectin-9/Tim-3 interaction could change the production of IL-12 and IL-23 in monocytes and thereby affect the Th1-and Th17 response [110]. Intracellular galectin-9 induced IL-12 expression, which induced IL-2 and IFN-γ expressions to promote the Th1 response. At the same time, galectin-9 can inhibit Tim-3 and IL-23 gene expression, thereby reducing the differentiation of Th17 cells and regulatory T cells [110]. Of note, also independent of Tim-3, galectin-9 can induce the secretion of IFN-γ by Th1 cells and NK cells, as well as TNF-α by Th2 cells [113][114][115]. In addition, it was recently reported that galectin-9 facilitates the trafficking of cytokines to the cell surface in dendritic cells. Galectin-9 depletion resulted in the accumulation of cytokine-containing vesicles in the Golgi complex that eventually underwent lysosomal degradation [116].
It is evident that galectins affect the levels of pro- and anti-inflammatory cytokines. Of note, this regulation extends beyond immune cells, as we and others have shown that galectins can also trigger the release of cytokines from non-immune cells, like platelets [11], fibroblasts [93], pancreatic stellate cells [94], and endothelial cells [96,97]. In addition, it should be noted that most research explores how a single galectin affects a preselected set of cytokines commonly considered key regulators of pro- and anti-inflammatory responses. As such, the current findings are somewhat biased. Additional research combining different galectins and evaluating a broader spectrum of cytokines will provide a better understanding of how galectins can affect cytokine levels in the (immune) microenvironment.
Cytokine-Mediated Effects on Galectin Levels
While the previous section provides ample evidence that galectins can trigger the release of cytokines by different cell types, it has also been shown, albeit less extensively, that cytokines can regulate the expression and secretion of galectins. For example, Imaizumi et al. showed that the proinflammatory cytokine IFN-γ stimulated the expression and production of galectin-9 in vascular endothelial cells, while the anti-inflammatory IL-4 did not [14,117]. The IFN-γ-induced galectin-9 expression mediated interactions between endothelial cells and eosinophils [14]. We also observed that different cytokines, including IFN-γ, triggered the expression of specific galectin-9 splice variants in endothelial cells. In our study, the anti-inflammatory IL-10 also induced galectin-9 expression, albeit to a lesser extent than IFN-γ, while other cytokines (IL-1, TNF-α, VEGF) had no or a slightly inhibitory effect [13]. Recently, Carreca and coworkers reported that secretion of galectin-9 by NK cells was increased upon treatment with IFN-α [118]. However, since IFN-α also induced IFN-γ, the observation could be partly secondary to IFN-γ. In line with this, IL-2/IL-15 treatment of NK cells induced the secretion of neither IFN-γ nor galectin-9 [118].
In mesenchymal stem cells, the expression of galectin-9 could also be induced by combined treatment with TNF-α and IFN-γ, as demonstrated by Kim and collaborators. The authors concluded that TNF-α/IFN-γ-induced galectin-9 is involved in immunosuppression since galectin-9 was shown to induce apoptosis of activated Th1 and Th17 cells, thereby altering the cytokine balance to a more suppressive phenotype [119]. Notably, while galectin-1 expression was not induced in the mesenchymal stem cells by TNF-α/IFN-γ [119], both cytokines have been reported to induce galectin-1 expression in endothelial cells [120]. All this indicates that the cytokine-induced changes in galectin expression/secretion are cell-type specific and can act as a mechanism to potentiate or hamper an immune response. In line with this, IFN-γ (but not IL-4) was shown to induce galectin-9 expression in fibroblasts resulting in increased eosinophil adhesion [121]. At the same time, it triggered galectin-9 secretion from mesenchymal stromal cells, which contributed to the suppression of T cell proliferation [122].
Collectively, these findings show that cytokines can influence galectin levels by affecting protein expression and/or secretion. At the same time, the current studies likely represent only the tip of the iceberg as it can be anticipated that many cytokines will affect galectin expression because of their ability to induce cellular functions in which galectins play a role, e.g., migration, proliferation, and survival. In addition, the effects of different cytokine combinations and levels are still poorly understood. This is relevant since the (immune) microenvironment is characterized by a complex mix of different galectins and cytokines that exert different effects on different cell types. Thus, a significant future challenge is to unravel the reciprocal regulation of expression and secretion by galectins and cytokines.
Direct Interactions between Galectins and Cytokines
The findings above show that galectins and cytokines can indirectly influence each other's activity by affecting expression and secretion levels. As already indicated, understanding the complex mutual expression regulation provides a significant challenge for future research. This is further complicated by recent findings showing that galectins and cytokines can also directly affect each other's function and activity by forming heterodimers. As such, galectin-cytokine heterodimers, here further referred to as "galectokines", add yet another layer of complexity to the mechanisms by which both protein families can control and shape immune responses. The following paragraphs will further describe the recent evidence of galectokine formation, its functional consequences, and possible mechanisms of action.
Galectokines
The possibility of direct interaction between galectins and cytokines was reported in 2004 by Ozaki et al. They found that intracellular galectin-2 could bind to lymphotoxin-alpha (also known as tumor necrosis factor beta) and thereby increase cytokine secretion [123]. Two years later, we identified galectin-1 as the functional receptor for the angiogenesis inhibitor anginex, a synthetic peptide designed to mimic the structure shared by different endogenous angiostatic proteins, including the chemokines CXCL8 and CXCL4 (platelet factor 4) [124,125]. In a follow-up study that showed that the binding of anginex increased the galectin-1 binding affinity for specific glycan ligands up to a thousandfold, it was stated that "... it is hard to believe that an artificial peptide can show such dramatic effects without speculating that there is a natural counterpart in vivo." [126]. Indeed, we recently provided evidence that CXCL4 can heterodimerize with galectin-1, while CCL5 heterodimerizes with galectin-9 [17]. These findings corroborated other studies that recently reported on galectin-cytokine interactions, e.g., galectin-3/IFN-γ [15] and galectin-3/CXCL12 [16]. All these findings indicate that both protein families, which were considered to act distinctly from each other, can team up to extend their biological functionality. Regarding the latter, the formation of galectokines appears to induce bidirectional effects, i.e., galectins can affect cytokine activity, and cytokines can affect galectin activity.
Effects of Galectokine Formation on Cytokine Function
As described previously, cytokines trigger cellular responses via binding to specific transmembrane receptors. A common feature of cytokine-mediated signaling is the ability of cytokines to facilitate receptor dimerization or clustering to trigger intracellular signaling [25,26]. In addition, dimerization or multimerization of cytokines themselves can affect their ability to trigger receptor signaling, particularly chemokine-mediated activation of G-protein coupled receptors [27,127,128]. Consequently, cytokines must be freely available in the microenvironment to engage with each other or their receptor. Interestingly, recent findings suggest that cytokine availability can be influenced by galectokine formation, in which galectins act as cytokine scavenger molecules. Evidence for such a scavenger role was provided by Gordon-Alonso and coworkers, who demonstrated that extracellular galectin-3 could capture IFN-γ as well as IL-12 in the microenvironment [15]. The interaction was glycan-dependent and hampered the ability of IFN-γ to trigger the expression of the anti-inflammatory chemokines CXCL9 and CXCL10 in melanoma tumor cells. Since both chemokines are known to regulate the recruitment and localization of anti-tumor immune cells, capturing IFN-γ by extracellular galectin-3 could provide a means of tumor immune escape. Indeed, blocking galectin-3 in a murine tumor model increased CD8+ T cell infiltration, which was linked to an increase in IFN-γ mediated chemokine expression [15]. Notably, since many endogenous cytokines are glycosylated, it was suggested that other galectins might bind different cytokines to regulate their activity. Indeed, it has been shown that cytokine glycosylation can affect cytokine activity [129,130]. However, to what extent this involves glycosylation-dependent galectokine formation and the full spectrum of glycosylation-dependent galectokines awaits further investigation.
More recently, Eckardt and collaborators provided additional evidence of the immunomodulatory effect of galectokine formation. The authors performed interaction screens using a broad panel of chemokines (46 in total) together with either galectin-1 or galectin-3. Many interactions with chemokines were identified, some of which were specific for galectin-3, while others involved both galectin-1 and galectin-3 [16]. Interestingly, the interactions were glycan-independent, and structural analyses of the galectin-3/CXCL12 galectokine confirmed that the chemokine directly bound galectin-3, opposite the glycan-binding groove, and independent of the presence of a carbohydrate ligand [16]. Galectin-3, but likely also other galectins, can thus bind cytokines in both a glycan-dependent and a glycan-independent manner. It is tempting to speculate that this controls the specific functionality of the resulting galectokine, but this requires additional research. In the study by Eckardt et al., the formation of galectin-3/CXCL12 heterodimers hampered CXCL12-stimulated signaling via the CXCR4 receptor and reduced chemotaxis and recruitment of leukocytes [16]. Similar to the galectin-3/IFN-γ galectokine, these findings support the concept that galectokine formation is a mechanism that can modulate the immune response by regulating cytokine availability and potency. Interestingly, Eckardt et al. suggested that the galectokine did not prevent the binding of CXCL12 to CXCR4 but reduced the efficacy of triggering downstream signals compared to CXCL12 alone. Possibly, this was associated with a reduced ability of CXCL12 to interact with glycosaminoglycans that are involved in proper chemokine presentation [16].
Current research indicates that the interactions of galectins with cytokines can affect the ability of cytokines to activate receptor signaling. The effects appear to involve different mechanisms and can occur dependently or independently of glycan binding. Future research should further explore the reach and consequences of galectokines on the immunoregulatory activity of cytokines.
Effect of Galectin/Cytokine Heterodimers on Galectin Function
While the previous section described how galectins could control the availability and activity of cytokines, accumulating evidence indicates that cytokines also affect the biological activity of galectins. The control of galectin function by cytokines appears to be related to structural changes within the galectin CRD that occur upon heterodimerization. For this, it is important to understand that glycan binding by galectins extends beyond the core glycan binding groove in the CRD. This is exemplified by the observation that galectin-1 binds with higher affinity to more complex glycans than to individual lactosamine units [131,132]. Thus, any obstruction or structural change inside or outside the core binding groove can affect glycan affinity and specificity. As mentioned above, we previously described that a non-endogenous chemokine-like peptide, anginex, could form glycan-independent heterodimers with galectin-1 [124]. Additional research showed that the interaction with anginex affected the binding affinity of galectin-1 for specific glycans. Moreover, this was not restricted to galectin-1 as anginex could also alter the glycan binding affinity of other galectins [126]. More recently, we described that the effect on glycan-binding was also induced after heterodimerization of galectin-1 with chemokine CXCL4 [17]. Like anginex, the heterodimer formation altered glycan-binding affinity, which was accompanied by structural changes in the galectin-1 CRD [17]. This supports the hypothesis that galectokines represent a mechanism to steer the glycan-binding affinity and specificity of galectins. Additional evidence for this hypothesis was provided by Elantak et al. They identified a specific region of the pre-B cell receptor (pre-BCR), i.e., the 5λ-UR motif, as a binding partner of galectin-1 [133]. Like anginex and CXCL4, heterodimerization occurred adjacent to the carbohydrate-binding site of galectin-1. Moreover, it was found that the lactose-binding affinity of galectin-1 was four times lower in the presence of 5λ-UR [133]. Follow-up research by Bonzi and coworkers revealed that the galectin-1/pre-BCR interaction induced local conformational changes in the carbohydrate-binding site of galectin-1 accompanied by a reduction in the glycan binding affinity. Based on these findings, the authors suggested that heterodimerization provided a mechanism to regulate pre-BCR clustering, the checkpoint of B-cell differentiation [134]. While the proposed mechanism awaits confirmation, we provided evidence that galectokine formation can exert immunoregulatory functions. In the study describing the galectin-1/CXCL4 galectokine, we also evaluated the effects of this galectokine on T cell apoptosis. Results indicated that CXCL4 enhanced the apoptotic activity of galectin-1 on activated peripheral blood mononuclear cells (PBMCs), affecting mainly CD8+ T cells [17]. In the same study, galectin-9 was found to heterodimerize with chemokine CCL5 which hampered the pro-apoptotic activity of galectin-9 on activated PBMCs. In the latter case, CD4+ T cells were particularly susceptible to the effects of the galectin-9/CCL5 galectokine [17]. These findings suggested that specific galectokines trigger opposite effects in specific immune cells.
From all the above, an immunomodulatory mechanism emerges in which galectokine formation can fine-tune the glycan-binding affinity of galectins. Furthermore, since glycan-binding is at the core of galectin function, heterodimerization can be hypothesized to represent a novel mechanism underlying the diversification of galectin function.
Summary and Future Perspectives
As described in the current review, there is ample evidence of a reciprocal relationship between galectins and cytokines. This relationship extends beyond transcriptional regulation as galectins and cytokines have been found to form heterodimers. These heterodimers, or galectokines, display functional activity towards different immune cells. While the width and reach of these galectokines are currently unknown, the available literature suggests that the interaction between galectins and cytokines provides a mechanism to regulate and/or fine-tune the immune response. Indeed, evidence shows that specific galectokines can either serve as a mechanism to stimulate or inhibit specific immune cell recruitment and functionality [15][16][17]. At the same time, many outstanding questions must be answered to better understand whether galectokines contribute to immune homeostasis. An important issue to address is the width of galectin-cytokine interactions. We, as well as Eckhardt and colleagues, identified several different galectin-cytokine heterodimers [16,17]. However, heterodimer formation between galectins and cytokines has thus far only been described for galectin-1, galectin-3, and galectin-9. Since galectins share structural similarities, it can be anticipated that other galectins could bind cytokines. In support of this, we did show that the cytokine-based peptide anginex can also interact with galectin-2, galectin-7, and the N-terminal CRDs of galectin-8 and galectin-9 [126]. Further insight into the full spectrum of galectin-cytokine heterodimers thus requires additional research.
Identifying additional galectokines can also shed light on the binding requirements that underlie galectin-cytokine heterodimer formation. As Gordon-Alonso et al. show, cytokine glycosylation can be involved in galectokine formation [15]. Since several cytokines are known to be glycosylated [129,130], it could be hypothesized that the addition or removal of glycans provides a way to regulate cytokine availability and/or activity by the actions of galectins. At the same time, members of the chemokine subfamily appear to be less frequently glycosylated [135], and recent findings indeed confirm that galectins and chemokines can interact without the involvement of glycans. While a better understanding of the binding characteristics and requirements of such glycan-independent interactions still requires further research, some relevant insights have been gained. For example, based on NMR analyses and in silico docking, Eckhardt et al. found that residues in the β6, β8, and β9 strands of galectin-3 interacted with the β1 and β2 strands of CXCL12, while residues in β6, and the loop between β4 and β5, interact with residues in the CXCL12 helix [16]. Regarding the galectin-1/CXCL4 heterodimer, the β6 and β9 strands of galectin-1 mainly interacted with the β1 and β2 strands of CXCL4, while the strands β8 and β9 were found to interact with the C-terminal helix of CXCL4. These findings suggest the presence of both common and specific interaction sites. Given the high structural homology between the chemokines of the CXC family, it appears feasible that galectin-1 has additional chemokine binding partners. The observed non-glycan interactions occur outside the core glycan-binding domain and, therefore, do not block glycan binding. Instead, these glycan-independent interactions appear to induce structural changes that can allosterically modulate the glycan-binding affinity and specificity of galectins. Obtaining additional structural information from other galectin-cytokine heterodimers in the future will increase our understanding of the common and distinct requirements of galectokine formation and how this affects galectin glycan-binding and function.
Another important issue to address involves the biological activity of galectokines in a complex microenvironment. This is especially relevant since galectins and cytokines have also been shown to form heterodimers within their own family, affecting their activity [136][137][138][139][140]. Given their binding promiscuity, it is likely that the extracellular microenvironment contains a complex balance of galectins, cytokines, potential homodimers, and heterodimers, as well as galectokines. Unraveling the biological consequences of such promiscuous relationships represents a significant challenge for future research. From the currently available data, different functional mechanisms have been identified or can be proposed (see Figure 3). It can be anticipated that such mechanisms occur simultaneously in vivo and depend on multiple factors, including the availability and concentration of galectins and cytokines, specific glycoconjugates and receptors, and the presence of different target cells. Untangling such complex networks will provide essential information regarding immunomodulation, both in physiological and pathological conditions. The latter is important because it could help to improve current immunotherapeutic efforts or to develop novel therapeutic approaches. Finally, the formation of galectin-chemokine heterodimers has previously been referred to as "the marriage of chemokines and galectins" [8]. Based on the current findings, it can be concluded that this marriage has, at the very least, an extremely "open" character, given the promiscuous relationships between galectins and cytokines/chemokines. Nevertheless, it is a relationship that holds great promise, as it adds a novel regulatory layer to immune homeostasis and provides opportunities for (immuno)therapeutic interventions in case immune homeostasis has gone awry.
"Biology"
] |
Utilization of Polypropylene in the Production of Metal-Filled Polymer Composites: Development and Characteristics
Metal-filled composites based on polypropylene waste have been successfully obtained by injection molding of metalized polymer raw materials. Using a model polymer, the peculiarities of copper layer formation on the polypropylene surface in chemical metallization solutions were investigated and the main factors influencing this process were established. The rate of copper reduction in chemical metallization solutions is influenced mainly by the concentrations of copper sulfate, sodium hydroxide, and EDTA-Na2. It was shown that the efficiency of the copper plating process also strongly depends on the polymer processing that follows activation. With simple activation alone, it is not possible to obtain metalized raw materials with high efficiency; additional processing of the activated polymer raw material is required to carry out the process efficiently. The amount of copper reduced on the polymer surface can be adjusted by changing the concentration of the components of the chemical metallization solution, as well as the degree of loading of the polymer raw material. Examination of the obtained metalized polypropylene by scanning electron microscopy showed that the copper coating on the polymer particles is formed with a high degree of surface coverage. The formed copper coating is free of copper oxides, which is confirmed by X-ray diffraction studies and analysis of the characteristic X-ray spectrum. The metal-filled composites were characterized in terms of the effect of copper on their mechanical and rheological (MFR) properties. Differential Scanning Calorimetry (DSC) and Thermogravimetric (TG) analysis show a certain effect of the metal on the magnitude of the thermal effects and the rate of weight loss.
Introduction
The significant interest of modern industry in the use of polymer composite materials stems from the set of valuable properties that characterize these materials. In many cases, they are a real alternative to traditional materials, for example in the manufacture of products for structural purposes [1][2][3]. Moreover, new special-purpose materials can be obtained on their basis [4][5][6][7][8][9][10].
The widespread introduction of polymer composite materials in various fields of science and technology is also facilitated by the fact that their production technology offers almost unlimited possibilities for varying their properties. Combining the properties of the polymer matrix and of the filler can provide new materials with the required set of properties and makes it possible to predict these properties already at the stage of obtaining the material [11][12][13][14][15][16][17][18][19][20].
When creating polymer composite materials for special purposes, the use of high-tonnage thermoplastics as the polymer matrix together with metal fillers is promising. Such a combination makes it possible to obtain materials with the required operational, physicomechanical, and physicochemical properties at a relatively low cost. Besides, the use of production waste and waste materials as polymer raw materials will contribute to an even greater cost reduction and, most importantly, to the solution of a number of acute environmental and socio-economic problems. It will also expand the possibilities for their recycling and reuse in the form of high-tech metal-filled polymer composites [21][22][23].
Metal-filled polymer composites can be used as antifriction, heat-conducting, antistatic, and shielding materials [24,25], and can also act as a basis for the creation of highly efficient heat storage systems, in which a significant amount of heat can be accumulated due to the heat of the phase transition while reducing the main drawback of conventional phase-change materials, namely their low thermal conductivity [26][27][28][29][30]. Highly crystalline, high-tonnage polymers such as polyethylene, polypropylene, polyamide, and polyethylene terephthalate are promising in this regard.
In the simplest case, a metal-filled polymer composite consists of metal particles that are evenly distributed in the polymer matrix [31]. The main disadvantage of this system is that at low filler concentrations the particles remain isolated from each other and do not contribute to the conductivity of the system, while the mechanical properties of the system deteriorate sharply with a further increase in concentration [32]. Thus, the creation of polymer composites that combine good conductive (electrical and thermal) and mechanical properties is a difficult task and is of considerable practical interest.
Obtaining metal-filled composite materials with high technological and operational properties requires the development of alternative technological solutions for their production. The technology of obtaining metal-filled polymer composites by the metallization of polymer raw materials with its subsequent processing by standard methods directly into products was used [33][34][35][36]. As a result, the process of combining the components is significantly facilitated and uniform distribution of the metal filler in the polymer matrix is ensured. This technology is a highly efficient, resource-saving technological process and is characterized by a shortened production cycle.
The technology is based on the use of mechanical activation of the polymer surface in order to give it catalytic activity before the deposition of metal in chemical reduction solutions. The use of such an activation technology allows us to avoid the main disadvantages of classical metallization technology: a large number of preliminary operations to prepare the polymer surface to give it catalytic activity and the use of hazardous and expensive reagents [37]. The process of chemical activation of the polymer surface was simplified as a result of simultaneous processing in a ball mill of a polymeric material with a powdered metal activator. As a result of such processing, the metal activator is firmly fixed on the polymer surface and provides the polymer surface with the catalytic activity required for the formation of the base layer of the metal in chemical reduction solutions [38]. The effectiveness of the method of mechanical activation is determined by at least two factors: reduction of the number of technological operations and reduction of the number of expensive and harmful chemicals.
The main task that researchers face in the development of new technological processes is to establish the main factors that affect the process. Information about the process patterns allows it to be carried out under controlled conditions, which guarantees products of the required quality with minimal resource costs. This study is aimed at establishing the patterns of the copper plating process on the polymer surface and at obtaining metalized polymeric raw materials, which is the first step in developing an effective technology for obtaining metal-filled composites. Information about the patterns of metal layer formation on the polymer surface is essential for identifying the main factors that will ensure that the process runs under controlled conditions.
Materials and Obtaining of Metalized Polymeric Raw Materials
Two types of polypropylene (PP) were used as a polymer matrix to study the peculiarities of metallization and production of metal-filled composites: polypropylene brand Moplen HF501N (LyondellBasell, Houston, TX, USA) and polypropylene waste (Figure 1a), which is obtained as a result of mechanical processing of car interior panels. This processing was carried out to ensure the controlled destruction of the panels and was performed at the Technical University of Kosice (Kosice, Slovakia) (Figure 1b).
Zinc powder of super extra fraction (Norzinco GmbH, Goslar, Germany) was used as a metal activator. Activation of the polypropylene surface was performed in a laboratory ball mill with a volume of 4 L with ceramic cylindrical grinding bodies at a rotation speed of 100 rpm. The mill was loaded with polymer and zinc powder at a polypropylene:Zn powder ratio of 50:9 wt.%; the processing time was 1 h. During the rotation of the mill, the activator metal became fixed on the polymer surface.
The next stage is the formation of a metal layer on the activated polypropylene surface as a result of the reduction of copper ions in chemical metallization solutions (Figure 2). Solutions of the following composition were used for metallization: CuSO4·5H2O of "pure for analysis" grade, EDTA-Na2 (C10H14N2Na2O8·2H2O) of "pure for analysis" grade, NaOH of "pure" grade, and stabilized formalin [33,35,37]. The concentrations of the components of the chemical metallization solutions were within the following ranges (mmol/L): CuSO4·5H2O, 48-144; EDTA-Na2, 47-67; NaOH, 250-560; formalin, 366. The concentration of formalin was constant and was not considered as a factor influencing the process. The metallization was carried out with vigorous stirring using a magnetic stirrer; the volume of the chemical metallization solution was 200 mL in all cases.
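For orientation, the reagent masses corresponding to these concentration ranges in the 200 mL working volume can be estimated as in the following sketch; the molar masses are standard handbook values and the helper function itself is only illustrative, not part of the described procedure.

```python
# Approximate molar masses, g/mol (standard handbook values, not taken from the paper)
MOLAR_MASS = {
    "CuSO4*5H2O": 249.7,
    "EDTA-Na2*2H2O": 372.2,
    "NaOH": 40.0,
}

def reagent_mass_g(conc_mmol_per_l, molar_mass_g_per_mol, volume_ml=200.0):
    """Mass of reagent (g) needed for a given concentration and solution volume."""
    moles = conc_mmol_per_l / 1000.0 * (volume_ml / 1000.0)
    return moles * molar_mass_g_per_mol

# Example: lower ends of the concentration ranges quoted above, in 200 mL of solution
for name, c_mmol in (("CuSO4*5H2O", 48), ("EDTA-Na2*2H2O", 47), ("NaOH", 250)):
    print(f"{name}: {reagent_mass_g(c_mmol, MOLAR_MASS[name]):.2f} g per 200 mL")
```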
Kinetics of Metallization
Studies of the kinetics of copper layer formation in chemical metallization solutions were performed by the volumetric method. The volumetric method of studying the kinetics of metallization of the activated polymer surface is based on a peculiarity of the reduction of copper ions in solutions with the complexing agent EDTA-Na2: in such solutions, one mole of hydrogen is released per mole of reduced copper, and its volume is measured [31,33]. To do this, the metallization was performed in an airtight container connected to a measuring tube in which water was displaced by the gas (hydrogen). To illustrate the results of the volumetric studies, average values of at least 5 runs are presented. The average deviation between the results is not more than 5%.
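A minimal sketch of how the measured hydrogen volume can be converted into the mass of copper reduced by the formaldehyde reaction, assuming ideal-gas behaviour at ordinary laboratory conditions; the temperature, pressure, and any water-vapour correction actually applied are not stated in the text and are assumptions here.

```python
R = 8.314      # J/(mol*K), universal gas constant
M_CU = 63.55   # g/mol, molar mass of copper

def reduced_copper_mass_g(h2_volume_ml, temperature_c=20.0, pressure_pa=101325.0):
    """Mass of copper (g) reduced with formaldehyde, from the displaced H2 volume.

    Uses the 1:1 molar ratio between released hydrogen and reduced copper that
    the volumetric method relies on, together with the ideal gas law.
    """
    v_m3 = h2_volume_ml * 1e-6
    n_h2 = pressure_pa * v_m3 / (R * (temperature_c + 273.15))
    return n_h2 * M_CU  # 1 mol of H2 corresponds to 1 mol of reduced Cu

# Hypothetical example: 120 mL of hydrogen collected in the measuring tube
print(f"{reduced_copper_mass_g(120.0):.3f} g of reduced Cu")
```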
Sieve Analysis and Calculation of the Polymer Surface Area
Sieve analysis was used to characterize the particle size distribution of polypropylene brand Moplen HF501N.
The average particle radius (mm) of polypropylene on a certain sieve was calculated from the sieve cell sizes, where r_{n−1} is the size of the previous sieve cell and r_n is the size of the given sieve cell. The surface area of a certain fraction of polypropylene was calculated based on the following considerations:
1. The surface area of one particle was calculated using the average particle diameter of polypropylene on a certain sieve.
2. To calculate the total area, the number of particles in a sample of a certain mass is needed, so:
2.1. the total (monolithic) volume of the polymer was calculated from the mass of the polymer m and the density ρ of polypropylene brand Moplen HF501N (900 kg/m³);
2.2. the volume of one particle of polypropylene was calculated from the average particle radius r;
2.3. the number of particles was obtained as the ratio of the total volume to the volume of one particle.
3. The total area of the particles was obtained as the number of particles multiplied by the surface area of one particle.
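A short sketch of this calculation, assuming that the average particle radius is taken as the mean of the two neighbouring sieve cell sizes and that the particles are treated as spheres; the exact formulas were not reproduced in the extracted text, so this is one consistent reading of the steps above.

```python
import math

RHO_PP = 900.0  # kg/m^3, density of Moplen HF501N polypropylene (from the text)

def fraction_surface_area_cm2(mass_g, sieve_mm, prev_sieve_mm):
    """Estimate the total surface area (cm^2) of one polypropylene fraction."""
    # average particle radius, taken here as the mean of the two sieve cell sizes
    r_m = 0.5 * (prev_sieve_mm + sieve_mm) / 1000.0
    d_m = 2.0 * r_m
    s_particle = math.pi * d_m ** 2                 # 1. area of one spherical particle, m^2
    v_total = (mass_g / 1000.0) / RHO_PP            # 2.1 total (monolithic) volume, m^3
    v_particle = (4.0 / 3.0) * math.pi * r_m ** 3   # 2.2 volume of one particle, m^3
    n_particles = v_total / v_particle              # 2.3 number of particles
    return n_particles * s_particle * 1e4           # 3. total area, converted to cm^2

# Example: 5 g of the 0.7 mm fraction, with 0.5 mm assumed as the previous sieve
print(f"{fraction_surface_area_cm2(5.0, 0.7, 0.5):.0f} cm^2")
```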
Calculation of Metal Content and Metallization Efficiency
To calculate the copper (zinc) content of the metalized (activated) polypropylene raw material, it was weighed to the nearest 0.0005 g, treated with 50% nitric acid and, after filtration, washing, and drying to constant weight, weighed again. The metal content was calculated by the formula w = ((m − m1)/m) × 100%, where m is the mass of the metalized (activated) polymer and m1 is the mass of the polymer after etching, washing, and drying. The metallization efficiency (in %) was calculated as the ratio of the mass of copper on the polypropylene after metallization to the theoretical mass of copper that can be formed by the reduction of all copper ions introduced into the chemical metallization solution.
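A small sketch of these two calculations with hypothetical sample weights; the efficiency helper assumes that, in the theoretical limit, all of the copper introduced as CuSO4 into the 200 mL bath could be deposited.

```python
M_CU = 63.55  # g/mol, molar mass of copper

def metal_content_percent(m_metalized_g, m_after_etching_g):
    """Metal content (%) from the sample mass before and after nitric acid etching."""
    return (m_metalized_g - m_after_etching_g) / m_metalized_g * 100.0

def metallization_efficiency_percent(cu_on_polymer_g, cuso4_mmol_per_l, volume_ml=200.0):
    """Ratio of deposited copper to the theoretical copper introduced with CuSO4."""
    cu_theoretical_g = cuso4_mmol_per_l / 1000.0 * (volume_ml / 1000.0) * M_CU
    return cu_on_polymer_g / cu_theoretical_g * 100.0

# Hypothetical example: 5.600 g of metalized polymer, 5.040 g left after etching,
# plated in 200 mL of solution containing 48 mmol/L of CuSO4*5H2O
cu_deposited = 5.600 - 5.040
print(f"metal content:            {metal_content_percent(5.600, 5.040):.1f} %")
print(f"metallization efficiency: {metallization_efficiency_percent(cu_deposited, 48):.1f} %")
```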
Obtaining a Metal-Filled Polymer Composite
The metal-filled polymer composite was obtained by processing the metallized polypropylene by injection molding. When products are obtained from metallized polypropylene by injection molding, the polypropylene melts and flows. This destroys the layer of metal that covers the polypropylene particles, and the metal becomes evenly distributed in the polymer matrix during the production of the product (in our case, samples for physical and mechanical studies).
Test Methods
The mechanical properties of the composites in tension were investigated according to ISO 527-5:2009 using type 1B samples. Tensile tests were performed using a Zwick/Roell Z010 universal testing machine (Zwick/Roell, Ulm, Germany). Five samples were used for each composite composition. Tensile tests were performed at a speed of 5 mm/min. Samples were obtained on a Demag Ergotech pro 25-80 machine (Wiehe, Germany). The temperatures in the zones of the material cylinder were 190, 220, and 240 °C. The temperature of the mold was 20 °C. The holding-under-pressure time was 10 s, and the cooling time was 10 s. The injection pressure was 80 MPa. A stationary cold-channel two-slot mold was used.
The rheological properties of the metalized polymer raw materials were characterized by the MFR value on an IIRT-AM device ("ASMA-Pribor", Svitlovodsk, Ukraine) at temperatures of 190 and 230 °C with loads of 10 and 5 kg, respectively.
Differential scanning calorimetry and thermogravimetric analysis were performed for the polymer raw material and the metal-filled composite using the SDT Q 600 (TA Instruments, New Castle, PA, USA) in an inert (argon) atmosphere. The heating rate of the samples was 10 K/min. Microscopic examinations were performed using an optical microscope SIGETA Expert 10-300× 5.0 Mpx (SIGETA, Kiev, Ukraine) and a scanning electron microscope-microanalyzer PEMMA-102-02 (JSC "SELMI", Sumy, Ukraine). The range of the accelerating voltage was 0.2-40 keV, the range of magnification was 10-300,000, and the resolution was no more than 5.0 nm. The crystal structure of the samples was analyzed by X-ray diffractometry (XRD), for which a DRON-4-07 X-ray diffractometer (JSC «Bourevestnik», Saint Petersburg, Russia) was used. A copper-anode X-ray source with a Ni filter was used. Investigations were carried out in the range of 2Θ from 4 to 100°.
Metallization of Activated Polypropylene Waste
The use of activated polypropylene waste showed that copper ions are reduced in a solution of chemical metallization ( Figure 3). In this case, the reduction of copper occurs on the activated polypropylene surface, which can be seen based on a photomicrograph of copper polypropylene waste ( Figure 4). Thus, it allowed establishing the fundamental possibility of reduction of copper ions on the activated polypropylene surface to obtain metalized polymeric raw materials.
Analysis of the photomicrograph of polypropylene waste (Figures 1a and 4) obtained as a result of mechanical processing of car interior panels shows that this polymer raw material is very uneven, both in shape and in size. This, in turn, will create significant difficulties in explaining the results of the study of the chemical metallization of such raw materials and in choosing the necessary parameters for its implementation.
Since the process of metallization of the polymer surface in the proposed technology is crucial for obtaining metalized raw materials of the required quality, it was decided to use model polymeric raw materials for studying the kinetics of reduction of copper ions. The use of model polymeric raw materials will allow a more thorough study of the process of metallization of the polypropylene surface, in particular, in relation to the influence of the polymer surface area on the metallization process. Moplen HF501N polypropylene, which is characterized by a wide fractional composition and consists of spherical particles, was chosen as a model polymer. This will allow a thorough analysis of the influence of the area of the activated polymer surface on the patterns of obtaining metalized polymer raw materials.
Sieve Analysis
The sieve analysis showed that the polypropylene brand Moplen HF501N mainly consists of a fraction with a particle size greater than 1 mm (Figure 5), with a significant content of polymer fractions with a particle size of about 0.6 mm. To study the effect of the polymer surface area on the metallization regularities, the following fractions were selected from the sieves: 0.5, 0.7, 1.0, and 1.6 mm, which most fully characterize this polymeric raw material.
Since the same mass of activated polypropylene (5 grams) was used in all cases for the study of metallization kinetics, there is a direct relationship between the particle size of a certain fraction of polypropylene and the area of the activated surface in contact with a chemical copper plating solution.
The calculation results of the dependence of the surface area of the investigated polypropylene mass on the particle size of a certain fraction are given in Table 1.
Table 1. The dependence of the surface area of 5 grams of polypropylene on the particle size. Columns: sieve cell size (mm); average particle diameter of the PP on the sieve (mm); surface area of polypropylene (cm²).
Metallization of Activated Polypropylene Brand Moplen HF501N
Obtaining high-tech metal-containing composites based on polypropylene primarily requires information about the patterns of metal layer formation on the activated polymer surface. The method used here to study the kinetics of metallization of the activated polymer surface in solutions of chemical copper plating is based on the following reaction equation:

Cu2+ + 2HCHO + 4OH− → Cu0 + H2↑ + 2HCOO− + 2H2O,

and takes into account only the amount of copper that is reduced by the reaction with formaldehyde. According to this equation, the reagents for the reduction of copper ions are formalin and sodium hydroxide. Another, competitive reaction of copper ion reduction, which takes place in the proposed method, occurs without the release of hydrogen and is an exchange reaction with zinc:

Cu2+ + Zn0 → Cu0 + Zn2+.

This reaction is designed to create the conditions for the autocatalytic reaction of copper reduction with formaldehyde and occurs only at the initial stage of the process.
Thus, the kinetic curves obtained in the process of the volumetric metallization study show only the amount of copper recovered by the reaction with formaldehyde.
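Because the exchange reaction with zinc consumes Cu2+ without releasing hydrogen, the two deposition routes can be separated, at least approximately, from the volumetric data and the zinc loading. The sketch below follows the stoichiometry written above and treats the zinc-exchange contribution as an upper bound; the numbers in the example are hypothetical.

```python
M_CU, M_ZN = 63.55, 65.38  # g/mol, molar masses of copper and zinc

def copper_by_route(n_h2_mol, zn_on_polymer_g):
    """Split the reduced copper between the formaldehyde and zinc-exchange routes.

    - formaldehyde route: 1 mol Cu per mol of released H2 (measured volumetrically)
    - zinc-exchange route: at most 1 mol Cu per mol of available Zn (upper bound)
    """
    cu_formaldehyde_g = n_h2_mol * M_CU
    cu_exchange_max_g = zn_on_polymer_g / M_ZN * M_CU
    return cu_formaldehyde_g, cu_exchange_max_g

# Hypothetical example: 5.0e-3 mol of H2 measured, 0.25 g of zinc activator on the polymer
cu_f, cu_x = copper_by_route(5.0e-3, 0.25)
print(f"formaldehyde route: {cu_f:.3f} g Cu; zinc-exchange route (upper bound): {cu_x:.3f} g Cu")
```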
The obtained kinetic curves of copper ion reduction on zinc-activated polypropylene depending on the particle size ( Figure 6) showed a significant dependence of the rate of copper ion reduction on the particle size of polypropylene.
In the case of the fraction larger than 1.6 mm, the reduction rate of copper ions is the lowest and is characterized by the largest induction period, while the fraction of polypropylene from the sieve with a cell size of 0.5 mm is characterized by the highest rate of copper ion reduction and the smallest induction period. The polymer fractions obtained from sieves with cell sizes of 0.7 and 1.0 mm occupy an intermediate position and are close in value. This feature can be explained by the different contact area of the zinc-activated polymer surface, which interacts with the chemical metallization solution and on which the reduction of copper ions occurs. The increase in the reduction rate of copper ions with decreasing polypropylene particle size can be explained by the increase in the area of the activated surface: an increase in the area in contact with the chemical precipitation solution is equivalent to an increase in the concentration of the activator metal.
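As a rough quantitative illustration of this argument (a sketch assuming monodisperse spherical particles of the nominal sieve size, not data taken from the paper), the specific surface area of a fixed polymer mass scales inversely with the particle size:

```python
RHO_PP = 900.0  # kg/m^3, density of the model polypropylene

def specific_surface_cm2_per_g(diameter_mm):
    """Specific surface area of monodisperse spheres, cm^2 per gram (S/m = 6 / (rho * d))."""
    d_m = diameter_mm / 1000.0
    return 6.0 / (RHO_PP * d_m) * 10.0  # convert m^2/kg to cm^2/g

for d in (0.5, 0.7, 1.0, 1.6):
    print(f"{d} mm fraction: ~{specific_surface_cm2_per_g(d):.0f} cm^2/g")
```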
The effect of the amount of activator metal on the amount of hydrogen released during the reaction with formaldehyde (which was used to calculate the mass of reduced copper) should also be noted as unexpected. It could be assumed that for smaller fractions of polypropylene the amount of released hydrogen should be smaller compared to fractions with a larger particle size, due to the larger amount of activator metal on their surface: a significant amount of activator metal would promote a deeper exchange reaction with zinc, which takes place without the release of hydrogen. However, the opposite dependence is observed: for the 0.5 mm fraction with a zinc content of 6.5 wt.%, the amount of reduced copper is greater than for the 1.6 mm fraction, for which the zinc content is 1.8 wt.%.
To increase the flexibility and efficiency of the process of polypropylene metallization, a series of studies were conducted on the effect of changes in the concentration of reagents on the rate of copper ions reduction.
An increase in the concentration of sodium hydroxide to 0.56 mol/L leads to some increase in the amount of copper reduced as a result of interaction with formaldehyde (Figure 7). In all cases, at a NaOH concentration of 0.56 mol/L the copper ion reduction reaction proceeds to completion, as evidenced by the complete discoloration of the solution after metallization compared with the blue initial solution. Moreover, the discoloration of the solution occurs at the moment hydrogen evolution ceases, which indicates that in the final stages the reduction of copper ions occurs due to interaction with formaldehyde.
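A back-of-the-envelope check of whether the reagents suffice for complete reduction can be made from the stoichiometry written above (2 mol of HCHO and 4 mol of OH− per mol of Cu2+); this is a sketch for orientation, not a calculation reported in the paper.

```python
def cu_reducible_mmol_per_l(cu_mmol_l, hcho_mmol_l, naoh_mmol_l):
    """How much Cu2+ each reagent could reduce, per litre of solution (mmol/L)."""
    return {
        "Cu2+ available": cu_mmol_l,
        "Cu2+ reducible by HCHO": hcho_mmol_l / 2.0,  # 2 mol HCHO per mol Cu2+
        "Cu2+ reducible by NaOH": naoh_mmol_l / 4.0,  # 4 mol OH- per mol Cu2+
    }

# Example with concentrations used in this work: 48 mmol/L CuSO4,
# 366 mmol/L formalin, 560 mmol/L NaOH
for label, value in cu_reducible_mmol_per_l(48, 366, 560).items():
    print(f"{label}: {value:.0f} mmol/L")
```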
Another factor that has a significant impact on the kinetics of copper ion reduction is the concentration of EDTA-Na2. A decrease in the concentration of the complexing agent increases the rate of copper ion reduction. This is especially true for solutions with an EDTA-Na2 concentration of 47 mmol/L (Figure 8b). This can be explained by a certain loss of stability of the chemical precipitation solutions, which increases the rate of the reduction reaction [33].
When studying the effect of CuSO4, it was found that increasing its concentration to 80 mmol/L most significantly affects the rate of copper ion reduction and its amount (Figure 9). A significant acceleration of the reduction of copper ions by formaldehyde should also be noted for the 1.6 polypropylene fraction in the range of 20-25 min and, less noticeably, for the 1.0 fraction in the range of 12-15 min.
Thus, based on this research, we can conclude that the factors that have the greatest impact on the copper plating process are the concentrations of CuSO4, NaOH, and EDTA-Na2.

At the same time, despite the high speed of the copper ion reduction process, especially for polypropylene fractions with a small particle size, the efficiency of metallization of the activated polymer surface is low (Table 2). The low efficiency is manifested in the formation of a significant amount of sediment, which consists of reduced copper that is not connected with the polymer surface in any way (Figure 10).
The formation of a large amount of sediment can be explained by the weak interaction of a significant part of the activator metal with the polypropylene surface, which under conditions of intense mixing leads to washing of the metal away from the polymer surface. The presence of activator metal that is not bound to the polymer surface in the chemical precipitation solution leads to the reduction of copper ions on free zinc particles, which reduces the efficiency of copper plating of the polypropylene surface.
To increase the efficiency of the metallization process, the possibility was considered of reducing the amount of Zn that is washed away during metallization and causes copper ions to be reduced not on the polypropylene surface but in the volume of the solution, with subsequent precipitation. To do this, the activated polypropylene was washed with water before chemical metallization. This made it possible to separate weakly fixed zinc particles and to obtain a fundamentally new activated raw material (Table 3). The obtained kinetic curves of copper ion reduction on the activated polymer surface devoid of weakly fixed particles show that in this case the metallization has a slightly different character (Figure 11). The dependence of the polypropylene metallization process on the particle size, and hence on the area of the activated polymer surface, is weaker. This confirms the participation of free (washed from the polymer surface) Zn particles in the copper ion reduction reaction; these particles largely determine the features of copper ion reduction in the case of unwashed activated polymer raw material.
The amount of copper that is reduced by the reaction with formaldehyde is higher when using washed activated polypropylene. What is more, the effect of the size of the polypropylene fractions is manifested only in a slightly higher rate of completion of the reaction for smaller fractions, as well as a slightly smaller amount of reduced copper as a result of interaction with formaldehyde.
In addition to changing the concentration of the components of the chemical metallization solutions, the influence of the degree of loading of the activated polypropylene raw material on the regularities of the copper plating process was also investigated (Figure 12). Washed activated polypropylene was used for this. In this case, the decisive factor that influences the appearance of the kinetic curves is the area of the activated surface in contact with the chemical precipitation solution. Increasing the degree of loading (increasing the contact area) leads to an increase in the rate of metallization, as well as to a decrease in the amount of copper that is reduced as a result of interaction with formaldehyde with the release of hydrogen.
The use of washed activated polypropylene to obtain metalized polymeric raw materials showed the high efficiency of this solution (Table 4). The efficiency values of copper plating of such raw materials are significantly higher compared to the results of Table 2; for example, for the 0.7 fraction at concentrations of 0.56 mol/L NaOH and 48 mmol/L CuSO4, the metallization efficiency reached 98.2%. Studies using a scanning electron microscope in contrast mode and identification of the spectrum of characteristic X-rays of the metalized surface show that the obtained metalized polypropylene is characterized by the formation of a copper coating on polymer particles with a high degree of surface coverage (Figures 13 and 14). It should also be noted that the spectra of the characteristic X-rays of the surface of copper-plated polypropylene have no peaks corresponding to oxygen. This allows us to conclude that there are no copper oxides in the coating, which was also noted by other researchers who obtained a copper coating in EDTA-Na2 solutions of chemical metallization [35]. The absence of oxides in the copper coating is also indicated by the diffraction pattern of copper-plated polypropylene, which has no peaks that can be attributed to copper oxides (CuO (35°, 38°, 61°) and Cu2O (29°, 36°, 61°)). Only peaks responsible for the crystal structure of polypropylene and copper are present on the diffraction pattern (Figure 15).

The use of a model polymer provided a convenient way to study the process of chemical metallization of the activated polypropylene surface. This guarantees a controlled and effective influence on this process and makes it possible to choose the most suitable compositions of solutions to obtain metalized polymeric raw materials of the required quality with a controlled and predetermined metal content.

Recommendations on carrying out the process of copper plating of activated polypropylene waste can be formulated based on the conducted research. The metal content of this waste can be adjusted both by the composition of the solution and by the degree of loading of secondary raw materials. Also, to significantly increase the copper content in polymer waste, a method of multiple metallization of the same polymer raw material can be recommended. When the re-metallization method is used, the layer of copper already formed on the polypropylene surface is a very effective activator of the reduction process. Figure 16 shows the kinetic curves of copper ion reduction on the activated surface of polypropylene waste, which are similar to the reduction curves of copper ions on the model polymer.
Properties of Copper-Plated Polypropylene Waste
Photomicrographs of copper-plated samples of polypropylene waste (Figure 17) with different amounts of copper show that the increase in the amount of metal affects the degree of metal coverage of the polymer surface. In the case of samples with a metal content of 20 wt.%, the entire polymer surface is almost completely covered with metal.
The study of the raw material (polypropylene waste) and of the obtained metalized polypropylene waste by DSC showed a certain effect of the metal on the magnitude of the thermal effects (Figure 18). The presence of copper in the composite has almost no effect on the temperature of the maximum endothermic effect caused by the melting of the crystalline phase of polypropylene; however, its value increases slightly. A more significant influence of the presence of copper on the thermal effects is observed in the high-temperature region, where the peak on the DSC curve is more pronounced. Besides, the temperature on the thermogravimetric curve that corresponds to the maximum rate of mass loss for the metalized polymer is 12 °C lower compared to the non-metalized waste, which may indicate its better thermal conductivity.

Samples of copper-plated polypropylene waste were characterized by the MFR (Melt Flow Rate) value (Figure 19). The rheological properties of polymer composites depend on the interaction between the filler and the polymer matrix. This interaction becomes possible as a result of the adsorption of macromolecules on the filler surface, which results in the formation of a polymer shell of a certain thickness around each filler particle. There is also a temperature dependence of the thickness of the adsorption layer of macromolecules on the filler particles. These phenomena can explain the obtained results of the MFR measurements at different temperatures.
Measurement of the MFR at a temperature of 190 °C showed that increasing the copper content increases the melt viscosity (decreases the MFR) (Figure 19a). It can be assumed that in this case the filler particles (copper) are covered with an adsorption layer of polymer, which effectively increases their volume. As the particle and the associated polymer layer move together, the viscosity increases.
The amount of copper has the opposite effect on the MFR value at a temperature of 230 °C (Figure 19b). In this case, the decrease in viscosity can be explained by the combined effect of temperature and shear rate. As the temperature increases, the thickness of the adsorption layer decreases and the mobility of the macromolecules increases. Together with the increased shear rate, this prevents the filled melt from forming a stronger structure than the unfilled polymer: the interaction between the polymer macromolecules and the filler particles is not strong enough to form a more rigid network.
Samples obtained from copper-plated polypropylene waste by injection molding showed that the influence of the amount of metal on the strength properties of the resulting metal-filled composites is insignificant and is manifested in a slight increase in tensile strength and reduced ductility (Table 5). The high strength of the obtained composites can be explained by the formation of a homogeneous structure: because pre-metalized polypropylene waste is used in injection molding, the metal filler is distributed uniformly in the polymer matrix, which provides high strength properties.
Conclusions
Thus, it can be claimed that the proposed method of introducing a metal filler into the polymer matrix by chemical metallization of the surface of the polymer raw material is effective and can be used to obtain high-tech metal-filled polymer composites, including those based on waste polymeric materials. The metal shell formed on the polymer surface, which is destroyed when the polymer melts, ensures easy introduction and uniform distribution of the metal throughout the volume of the material.
Studies on the influence of concentration factors on the process of chemical copper plating of zinc-activated polypropylene allow us to identify the main factors influencing the process of obtaining metalized polypropylene. The amount of reduced copper on the activated polypropylene surface can be adjusted by changing the concentration of CuSO 4 , EDTA-Na 2 , and NaOH, as well as the degree of loading. Such information will allow us to obtain polymeric raw materials of the required quality and to control the metal content in the final metal-filled composite at the stage of obtaining metalized raw materials. The introduction of metal into the polymer matrix in the form of a metal coating formed on the polymer surface guarantees the production of metal-containing polymer composites, which are characterized by uniform distribution of metal in the polymer matrix and high technological and operational properties. Obtaining such metal-filled composites will occur directly during the processing of metalized polymeric raw materials. The properties of the obtained materials can be predicted, as well as changed by the metal content at the stage of the metallization of the polymer surface. | 13,214.6 | 2020-06-01T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Solar Integrated Anaerobic Digester: Energy Savings and Economics
Industrial anaerobic digestion requires low-temperature thermal energy to heat the feedstock and maintain the temperature conditions inside the reactor. In some cases, the thermal requirements are satisfied by burning part of the produced biogas in devoted boilers. However, part of this biogas can be saved by integrating solar thermal energy into the anaerobic digestion plant. We study the possibility of integrating solar thermal energy in biowaste mesophilic/thermophilic anaerobic digestion, with the aim of reducing the amount of biogas burnt for internal heating and increasing the amount of biogas that is further upgraded to biomethane and injected into the natural gas grid. With respect to previously available studies that evaluated the possibility of integrating solar thermal energy in anaerobic digestion, we introduce the topic of economic sustainability by performing a preliminary and simplified economic analysis of the solar system, based only on the additional costs/revenues. The case of the Italian economic incentives for biomethane injection into the natural gas grid, which are particularly favourable, is considered as the reference case. The amount of saved biogas/biomethane, on an annual basis, is about 4-55% of the heat required by the gas boiler in the base case without solar integration, depending on the considered variables (mesophilic/thermophilic operation, solar field area, storage time, latitude, and type of collector). The results of the economic analysis show that economic sustainability can be reached only for some of the analysed conditions and only with the least expensive collector, even though its lower efficiency yields smaller biomethane savings. A future reduction of solar collector costs might improve the economic feasibility. However, when the payback time is calculated excluding the Italian incentives and considering selling the biomethane at the natural gas price, its value is always higher than 10 years. Therefore, the incentive mechanism is of great importance to support the economic sustainability of solar integration in biowaste anaerobic digestion producing biomethane.
Introduction
Biogas is a renewable fuel produced by the anaerobic digestion (AD) of biodegradable organic substrates and its main components are CH 4 and CO 2 . The main sources of biogas are municipal solid waste landfills, digesters of sludge from wastewater treatment plants and industrial anaerobic digestion plants using different feedstocks as biowaste, manure, and energy crops. With reference to 2017, the production of biogas as primary energy in the European Union (EU) was about 16,811,000.6 ton of oil equivalent (toe), mainly generated by AD processes of different substrates (excluding sewage sludge). The main biogas producers were Germany (46.7%), United Kingdom (16.2%), and Italy (11.3%) [1].
Biogas can be directly used in gas turbines, internal reciprocating combustion engines, fuel cells, or organic Rankine cycles [2]. However, upgrading biogas to biomethane, namely by CO2 removal, for use as a vehicle fuel or for injection into the natural gas grid, is already quite common in Northern Europe and is gaining increasing interest worldwide. Biomethane, being renewable, can replace natural gas: its expected future increase in production and use can provide an important contribution to decreasing the heavy dependency of several EU countries on foreign supply. To this aim, several EU countries developed devoted subsidy systems to boost biomethane production and use: a summary of the incentive schemes is reported in Lombardi and Francini [3]. Around 540 biogas plants with biomethane upgrading were active in Europe in 2017 [4]. In Italy, there were about 7 plants producing biomethane in 2018 [5], thanks also to the latest economic incentives promoting the production of biomethane to be injected into the natural gas grid and used for transportation purposes.
Industrial AD requires thermal energy to heat the feedstock and maintain mesophilic or thermophilic process conditions inside the reactor (compensating for heat loss to the environment). It is quite common that the required thermal energy can be recovered from the exhausts of the internal combustion engine fuelled by biogas. However, in some cases the thermal requirements are satisfied by burning part of the biogas in devoted boilers.
Some authors proposed to provide the thermal energy required by AD through solar integration, with different purposes and for various specific applications.
Mahmudul and Rasul [6], in their study on the opportunities for a solar-assisted biogas plant in a subtropical climate in Australia, assessed the possible benefits and challenges of the process, highlighting that it may reach a higher biogas yield by increasing the digester temperature and hence improved possibilities for recovering energy from waste. Another study, on a small fermentation system integrated with solar collectors [7], showed that in Hohhot (China) in October, 2 m2 of solar collectors running for 8 h can provide the heat required by a 6 m3 digester.
Ouhammou et al. [8] studied an up-flow anaerobic sludge blanket (UASB) digester coupled with a solar thermal system, located in western North Morocco, and concluded that it allows saving 100% of the energy consumption for almost ten months per year and 70% for the two cold months per year.
The coupling of anaerobic digestion and solar thermal energy can also be viewed as a way to store solar energy in biogas, as proposed by Zhong et al. [9] for mesophilic digestion and co-digestion of manure; they concluded that the mesophilic case is the preferred system for storing solar energy in methane biogas.
Borrello et al. [10] proposed to integrate a solar collector system, equipped with thermal storage, into a mesophilic anaerobic digester with the aim of continuously providing the required thermal power. Specifically, the considered digester is a wet continuous flow stirred tank reactor (CSTR) with a volume of 320 m3, fed with an organic load of 5 kgVS/m3 d. They concluded that the digester needs a solar field of 560 m2.
Thus, using solar energy as a heat source for AD is an interesting solution for maintaining the required temperature inside the reactor. This is especially true in AD plants where biogas is upgraded to biomethane and biogas boilers are generally used: avoiding, or at least reducing, the biogas self-consumption translates directly into a larger amount of biomethane injected into the grid, which can represent a significant additional revenue, especially when incentives for biomethane injection are granted.
However, the economic aspects of integrating solar thermal energy into AD reactors were not investigated in the previously mentioned studies. Thus, the present work aims to contribute to filling this gap by introducing a preliminary economic analysis.
In particular, this study aims to investigate the possibility of integrating thermal solar collectors into a mesophilic or thermophilic biowaste anaerobic digestion plant equipped with an upgrading section producing biomethane. The amount of thermal energy that can be supplied yearly by the solar source, using different collector types and for different geographical locations, is evaluated. The supplied solar thermal energy allows saving part of the biogas generally used for heating the reactor, which can then be additionally upgraded to biomethane. The amount of yearly saved biomethane is presented, and a preliminary economic evaluation of the additional costs and additional revenues, with respect to the reference case (mesophilic/thermophilic anaerobic digestion without solar integration), is reported in terms of the payback time of the additional investment.
System Description
The studied concept refers to the AD reactor integrated with a solar collector system. Figure 1 shows the layout of the investigated system. Two operating conditions of the anaerobic reactor are studied, namely the mesophilic and thermophilic conditions. The AD plant is fed by about 9800 t/y of source-sorted organic fraction of municipal solid waste (SS-OFMSW), with 70% moisture content, for both operating conditions. Different digester types can be used for the AD of organic wastes; the selected type depends on operational factors, including the nature of the waste to be treated, e.g., its solid content [11]. In this study, the reactor is assumed to be a conventional wet one, based on the CSTR model, working with a total solids (TS) content equal to 12%, 365 days per year. Assuming for the SS-OFMSW a density equal to 1.12 t/m3, the flow rate entering the reactor resulted in 157 t/day (12% TS), equal to 140 m3/day. The expected biogas production rates for the mesophilic and thermophilic conditions are 140 Nm3/h and 168 Nm3/h, respectively, with a 60% volumetric content of CH4. The typical hydraulic retention time (HRT) is 15 and 21 days in the thermophilic and mesophilic conditions, respectively. A summary of the working parameters of the anaerobic digester for both conditions is presented in Table 1.
The reactor is assumed to be built of reinforced concrete, with a cylindrical shape and a flat cover. Its volume is calculated from the inlet volumetric flow rate and the HRT, considering an extra volume of about 17.5%. Technical and geometric data of the reactor, for both operating conditions, are presented in Table 2. Several biogas upgrading technologies are available on the market, each one with specific advantages and drawbacks. However, they are quite similar in terms of consumption and yield [3], with high-pressure water scrubbing (HPWS) being the most popular on a commercial basis [12]. The expected biomethane output flows, for the mesophilic and thermophilic conditions, are 84.00 Nm3/h and 100.80 Nm3/h, respectively.
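For orientation, the digester volume implied by these figures can be reconstructed from the inlet flow rate and the HRT; the 17.5% extra volume comes from the text, while the rounding below is ours and Table 2 remains the authoritative source.

$V_{dig} = \dot{V}_{in} \cdot HRT \cdot (1 + 0.175)$

so that, approximately, $V_{dig} \approx 140 \cdot 21 \cdot 1.175 \approx 3.5 \times 10^{3}\ m^{3}$ in the mesophilic case and $V_{dig} \approx 140 \cdot 15 \cdot 1.175 \approx 2.5 \times 10^{3}\ m^{3}$ in the thermophilic case.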
The thermal system providing heat to the reactor consists of a solar collector field, a storage tank, and a biogas back-up boiler. Different collector types are investigated to produce the hot water that feeds the reactor-heating loop. The collector types utilized in this study are depicted in Figure 2. Three collector designs are selected for the performance and economic comparison analysis: the glazed flat plate collector FKC-2W by Bosch [13]; the flat plate collector FT 226-2V by Bosch [13]; and the evacuated tubular collector VTK 120-2 CPC by Bosch [13]. The Bosch FKC-2W flat solar collector is composed of a pre-moulded tank and an aluminium pick-up plate, with an absorber with a selective inox finish [13]. The FT 226-2V solar collector has a copper and aluminium absorber with PVD coating and a double-twisted geometry of the hydraulic circuit [8]. The third solar collector is the Bosch VTK 120-2 CPC: it provides very high performance throughout the year thanks to the evacuated technology, in which the pipes inside the collector are kept under vacuum [13]. Technical data of the chosen collectors are presented in Table 3.

The incidence angle modifier (IAM) is a function of declination, local hour angle, latitude, longitude, and solar azimuth angle, obtained from the solar radiation and weather data processor in the TRNSYS software (TRNSYS 18, Solar Energy Laboratory, University of Wisconsin, Madison, WI, USA). The IAM is provided by the manufacturer as a set of IAM factors associated with given incidence angles. The value of the IAM, for simulation purposes, can be obtained by interpolation based on the data reported in Table 4. The solar system is coupled with the water thermal storage, assumed to be a shell and tube arrangement, with water as the heat transfer fluid, also circulating in the inside tube.
However, when the solar radiation is not enough, the reactor heat duty is supplied by a backup biogas-fired boiler, operating in parallel with the thermal storage. Biogas and solar energy can be used in combination or independently, depending on irradiation conditions and on the reactor requests.
The aim is to keep the reactor temperature at the desired level (37 °C in the mesophilic condition and 55 °C in the thermophilic condition), minimising the auxiliary biogas consumption. The preferred energy source is, of course, solar radiation but, since it cannot be controlled, the control system maintains the desired inlet temperature conditions by modulating the biogas boiler. It is assumed that the biogas boiler back-up system operates only when the storage tank level is lower than 10% and there is no (or not enough) energy available from the solar collector field at the given moment.
Thermal Modelling
The simulation is performed for each hour of one year, calculating heat losses and gains and assuming the weather conditions of three reference sites in Italy. Milano (site 1), Frosinone (site 2), and Enna (site 3) are considered as reference sites, for which the hourly meteorological data (external temperature, wind speed, direct normal irradiance, global irradiance, and incidence angle) are collected. The three sites are representative of the variation of global solar radiation (G) across the Italian territory. Figure 3 shows that the first site (site 1) falls in an area with a low global irradiation of about 1300 kWh/m2 as a yearly total, while for the second (site 2) and third (site 3) sites the global irradiation values are about 1600 and 1900 kWh/m2 as yearly totals, respectively. The average values of the meteorological data for the reference sites are given in Table 5.

The simulation of the system resolves the energy balances on an hourly basis using two software packages (TRNSYS 18 and Matlab R2018b (R2018b, MathWorks, Natick, MA, USA)). Random availability of the solar source is assumed in the model, this being a very important feature when dealing with renewable sources [18]. As a result, hourly load profiles of the reactor are obtained for each site under study and for the two operating conditions of the anaerobic digester system. The thermal load of the reactor Q_AD.load is the sum of the thermal energy needed to heat the substrate to the required temperature, Q_sub, and the thermal energy required to compensate the heat losses through the external walls of the reactor to the environment, Q_disp (1). The thermal energy lost from the reactor walls to the surroundings by conduction, convection, and radiation is represented as the product of the heat transfer coefficient U_i, the wall area A_i, and the temperature difference between the reactor internal temperature T_dig and the ambient temperature T_Db (2),
where the heat transfer coefficient is given by (3), and α_1.amb is the external heat transfer coefficient; it is a function of the wind speed Ws in m/s, depends on the external conditions [19], and is given by (4). The conductive and internal convective heat transfer coefficients and the heat exchange areas used in the calculations are presented in Table S1 in the Supplementary Materials.
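From the definitions given in this and the following paragraph, the referenced relations can be written explicitly. Equations (1), (2), and (5) follow directly from the stated quantities; the series expression shown here for Equation (3) is an assumed standard form of the overall heat transfer coefficient rather than a quotation from the paper, and the wind-speed correlation of Equation (4) from [19] is not reproduced.

$Q_{AD.load} = Q_{sub} + Q_{disp}$  (1)

$Q_{disp} = \sum_{i} U_{i} A_{i} \left( T_{dig} - T_{Db} \right)$  (2)

$U_{i} = \left( \dfrac{1}{\alpha_{1.amb}} + \sum_{j} \dfrac{\delta_{j}}{\lambda_{j}} + \dfrac{1}{\alpha_{int}} \right)^{-1}$  (3, assumed standard form)

$Q_{sub} = \dot{m} \, c_{p} \, \Delta t$  (5)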
T_dig is the temperature of the digester (37 °C or 55 °C), while T_Db is the dry-bulb temperature supplied by the meteorological stations for each hour of the year. Q_sub is given by (5), where ṁ is the mass flow of the entering substrate, equal to 157 t/day; c_p is the specific thermal capacity of the substrate, equal to 4186 J/kg K; and Δt is the temperature difference between the initial and final temperature of the substrate entering the reactor. Table 6 shows the gross annual energy content of the biogas produced by the anaerobic digester, the annual consumption of biogas fed to the boiler for heating the digester (without solar collectors), and the annual amount of biogas converted to biomethane (cases without solar integration).

Table 6. Annual amount of biogas produced, required by the boiler, and available for biomethane production, at the three considered sites, for mesophilic and thermophilic conditions, without solar system integration.
It is noted that, even though the heat duty is higher in the thermophilic case, the biogas sent to biomethane production is also higher because of the higher specific production. In general, in the thermophilic case without solar systems, about 3.3% more biogas is available for upgrading than in the mesophilic case.
Solar System Design
A parametric analysis is performed, varying both the solar collector field area and the thermal storage size, in order to compare the energy gain and the economic feasibility of the proposed solutions. The area of the solar collector field is calculated for the maximum thermal power demand of the reactor, Q_AD.load.max, varying the design solar irradiation DNI_design from 100 to 800 W/m2 with a step of 25 W/m2 (6), where:
• X is the collector cleanliness factor (minimization criterion); it is equal to 1 in the low temperature range (50-100 °C) [20];
• η_th is the efficiency of the collector, evaluated by Equation (7);
• η_0 is the optical efficiency;
• Δt is the difference between the average fluid temperature in the collectors and the average surrounding temperature [21];
• a_1 and a_2 are the linear and quadratic heat loss coefficients given in Table 3.
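A sketch of the sizing and efficiency relations consistent with the symbols listed above; these are the standard steady-state collector expressions and are assumed here to correspond to Equations (6) and (7) of the paper.

$A_{coll} = \dfrac{Q_{AD.load.max}}{X \cdot \eta_{th} \cdot DNI_{design}}$  (6, assumed form)

$\eta_{th} = \eta_{0} - a_{1}\dfrac{\Delta t}{DNI_{design}} - a_{2}\dfrac{\Delta t^{2}}{DNI_{design}}$  (7, assumed form)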
The IAM values are fitted with a polynomial function reported in Equation (8), based on the data provided by the manufacturer in Table 4. The coefficients of the polynomials obtained for all the considered cases are presented in Table 7.
where θ is the incidence angle (in degrees), a function of the weather conditions for the three considered sites. In the case of the VTK 120-2 CPC collector, the value of the IAM is estimated as the product of the longitudinal and transversal IAM, as in Equation (9).
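Equation (9) follows directly from the sentence above; for Equation (8), only a general polynomial form is shown, since the degree and the coefficients c_k are those given in Table 7 and are not restated here.

$IAM(\theta) \approx c_{0} + c_{1}\theta + c_{2}\theta^{2} + \ldots$  (8, general form)

$IAM = IAM_{long}(\theta) \cdot IAM_{trans}(\theta)$  (9)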
The optical efficiency η_0 and the thermal loss coefficients a_1 and a_2 are determined experimentally. Optical and thermal losses both have an impact on the collector efficiency, and their relative contributions depend mainly on the physical design of the collector. In Figure 4, the solar collector efficiency η_col, calculated according to Equation (10), is illustrated at different solar radiation levels for the three types of considered collectors. Similar efficiency values for evacuated tube collectors are presented by Atkins [22].
Different capacities of thermal storage, expressed in hours, are considered, from 0 to 24 h with a 3 h step. The resulting storage capacities are in the range from 600 to 4830 kWh and 980 to 7850 kWh, for mesophilic and thermophilic conditions, respectively.
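The quoted capacity ranges are consistent with sizing the storage as a fixed reference thermal power multiplied by the storage time; the reference powers below are inferred from the quoted figures and are not stated explicitly in the text.

$E_{storage} = Q_{ref} \cdot t_{storage}$, with $Q_{ref} \approx 4830/24 \approx 200\ kW$ (mesophilic) and $Q_{ref} \approx 7850/24 \approx 327\ kW$ (thermophilic), values close to the winter peak loads reported in the Results section.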
The effect of the thermal energy produced by the solar section is to reduce the biogas consumption for the reactor heating. Thus, the higher the solar energy supplied to the reactor, the higher the amount of biogas that can be sent to the upgrading section to produce biomethane, which, in turn, can be injected into the grid. This benefit is evaluated according to the amount of saved biomethane (expressed as the energy content, in MWh, of the additional biomethane annually produced with respect to the base case mesophilic/thermophilic AD, without solar system). The calculation is carried out on an hourly basis: for each hour of the year, (i) the heat available from the solar system is calculated (including heat stored in or retrieved from the storage); (ii) the AD reactor heat requirement is calculated; and (iii) the saved biogas/biomethane is obtained as the difference between (ii) and (i).
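As an illustration of this hourly bookkeeping, a minimal sketch is given below; the function and variable names are ours, the hourly series are assumed to come from the TRNSYS/Matlab model, and the boiler efficiency is an assumed parameter rather than a value taken from the paper.

def saved_biomethane_mwh(q_solar_kwh, q_load_kwh, boiler_eff=0.9):
    # q_solar_kwh[h]: heat available from the collectors plus storage in hour h
    # q_load_kwh[h]:  digester heat requirement in hour h
    saved_heat = 0.0
    for q_sol, q_dem in zip(q_solar_kwh, q_load_kwh):
        # the boiler only covers the part of the demand not met by the solar section
        saved_heat += min(q_sol, q_dem)
    # heat no longer produced by the boiler corresponds to biogas (hence biomethane) not burnt
    return saved_heat / boiler_eff / 1000.0  # MWh of fuel energy saved per year

Applied to the 8760 hourly values of one site and one operating condition, such a routine returns the annual saving that is then compared with the base-case boiler consumption of Table 6.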
Economic Analysis
An increase in the saved biomethane can be obtained, obviously, with a larger solar field area, and with an increasing amount of saved biomethane the additional revenues from biomethane selling increase as well. On the other hand, for an increasing solar field area, the cost related to the solar section increases too. With this in mind, a preliminary and simplified economic analysis is carried out, with the aim of comparing the additional investment cost for the solar section (for different solar field areas and for the three sites) with the additional revenues from selling the saved biomethane. The analysis does not include the investment costs for the AD and upgrading system, nor the revenues from the biomethane produced in the reference operating condition (i.e., the mesophilic/thermophilic case without solar integration). Thus, the economic evaluation is based only on the additional costs/revenues. Even though this approach is rather simplified, it can provide some interesting clues on whether the addition of the solar section might be sustainable, without altering the AD/biomethane profitability. A whole life cycle cost analysis, including the AD plant investment and operation costs and the overall revenues (biomethane selling and biowaste feed-in tariff), could provide a wider picture of the economic sustainability, but it is out of the scope of this work and is suggested for future analysis.
The total investment cost for the solar section (C inv ) is estimated starting from the specific costs for the solar collectors and for the storage system. The specific costs of the solar collectors are presented in Table 8, according to Bosch catalogue costs 2016 [13]: these costs include the supply of the entire solar thermal kit and exclude installation. The cost of the storage tank is assumed equal to 8.20 Euro/kWh [23]. Tables S21 to S26 in Supplementary Materials report the investment costs for all the analysed solutions (mesophilic/thermophilic; storage hours), with resulting solar field area between 0 and 10,000 m 2 . Maintenance cost for the solar system (C m ) is evaluated as 1.0% of initial investment cost [24]. For the assessment of the annual revenues (B years ) from the biomethane selling, the Italian Ministerial Decree dated 2 March 2018 [25] is considered. According to this decree, the injection into the grid provides a revenue equal to the natural gas price (average of the last available three months values), reduced by 5%, assumed equal to 20.41 Euro/MWh [26]. Additionally, in the specific case of SS-OFMSW, for each 5 GCal of biomethane injected into the grid, a certificate (CIC) is acknowledged, with a value of 375 Euro each, for the first 10 years (corresponding to 64.54 Euro/MWh). The Italian incentives are representative of highly advantageous subsidy, with respect to other European countries [3].
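The per-MWh value of the certificates can be checked from the figures just given (1 Gcal is about 1.163 MWh):

$\dfrac{375\ \text{Euro}}{5 \times 1.163\ \text{MWh}} \approx 64.5\ \text{Euro/MWh}$,

consistent with the 64.54 Euro/MWh quoted above, so the overall unit revenue during the incentivised period is about 20.41 + 64.54 ≈ 85 Euro/MWh of saved biomethane.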
The parameter used to evaluate the investment is the simple payback time (PBT). The PBT is calculated as the ratio between C_inv and the net annual revenue, obtained as the difference between B_years and C_m, as given by (11):
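Equation (11) therefore reads PBT = C_inv / (B_years − C_m). A minimal numerical sketch with the unit revenues quoted above is given below; the investment, maintenance fraction, and saved-biomethane figures in the example call are purely illustrative and are not taken from the paper's tables.

def payback_time(c_inv, saved_mwh, unit_revenue=20.41 + 64.54, maint_frac=0.01):
    # Equation (11): PBT = C_inv / (B_years - C_m)
    b_years = saved_mwh * unit_revenue   # annual revenue from the saved biomethane
    c_m = maint_frac * c_inv             # maintenance, assumed 1.0% of the investment
    return c_inv / (b_years - c_m)

# illustrative figures only: a ~700 kEuro solar section saving ~950 MWh/y gives a PBT of about 9.5 years
print(round(payback_time(c_inv=700_000, saved_mwh=950), 1))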
Results
This section summarizes the main results of the analysis. The total thermal power required by the digester, made up of the two contributions Q_disp and Q_sub, depends mainly on the internal temperature of the anaerobic digester (i.e., mesophilic/thermophilic) and on the external temperature, which changes during the year and across the different geographical locations. The average power required to maintain the mesophilic temperature conditions in the digester varied in the range from 80-100 kW in summer (July) to 190-200 kW in winter (January), while in the thermophilic case it varied in the range from 210-230 kW in summer (July) to 310-340 kW in winter (January). The trend of the average monthly thermal power required during the year is reported in Figure S1 of the Supplementary Materials, showing that the contribution of the heat loss through the reactor surfaces is quite small. Specifically, the results indicate that up to 96.4% (mesophilic conditions) or 97.2% (thermophilic conditions) of the heat required by the digester operation is used to raise the feedstock to the operating temperature, while the remaining part is used to keep the process temperature constant.
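These ranges are consistent with a rough estimate of the substrate heating power from Equation (5), assuming the entering substrate is at ambient temperature; the temperature differences used below are illustrative, as the model actually uses the hourly ambient values.

$Q_{sub} \approx \dot{m} \, c_{p} \, \Delta t = \dfrac{157{,}000\ kg/d}{86{,}400\ s/d} \cdot 4186\ \dfrac{J}{kg\,K} \cdot \Delta t$,

which gives about 91 kW for Δt ≈ 12 K and about 190 kW for Δt ≈ 25 K, of the same order as the reported summer and winter monthly averages for the mesophilic case.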
Saved Biomethane
This section reports the results in terms of the additional amount of biomethane (i.e., saved biomethane) not used for heating the digester. The saved biomethane is calculated as the difference between the actual amount of available biomethane in each mesophilic/thermophilic case and the amount of biomethane available in the base reference case, which is the mesophilic/thermophilic one without solar system (in the different sites). Figure 5 reports the amount of saved biomethane obtained by introducing in the system three different types of solar collectors, with different storage systems, in the three sites in Italy. For graphical reasons, only three levels of storage, namely 6, 12, and 18 h, are presented. Complete results are reported in Tables S3-S20 of the Supplementary Materials. A considerable difference exists between the results of the mesophilic case and the thermophilic one, due to the different hourly production of biomethane (84 Nm3/h and 100.8 Nm3/h, respectively). The areas of the solar collectors in the thermophilic conditions (Figure 5b,d,f) are significantly larger than in the mesophilic case, because the thermophilic reactor requires a larger amount of thermal energy. Saved biomethane reaches 700-750 MWh/year in the mesophilic conditions (Figure 5a,c,e) and 1300-1500 MWh/year in the thermophilic conditions (Figure 5b,d,f), for the most efficient collector (VTK 120-2 CPC) and the longest storage time. However, to reach similar values in the different sites, the required area increases for sites with low annual solar irradiation.

As expected, the saved biomethane fraction (the ratio of the annually saved biomethane to the biogas annually sent to the boiler in the base case without solar integration) increases with the area of the collectors, the storage tank size, and the collector type. Considering a thermal storage of 18 h and an FKC-2W collector surface of about 3300 m2 in site 1, the fractions of biomethane saved annually are equal to 38% (564 MWh/year) and 24% (655 MWh/year) for the mesophilic and thermophilic conditions, respectively.
In addition, under the same conditions (site 1, 18 h and 3300 m 2 ), using the collector VTK 120-2 CPC type, the fractions of saved biomethane increase to 46% (683 MWh/year) and 35% (959 MWh/year) for mesophilic and thermophilic conditions, respectively. The additional biomethane produced by the integrated system might be regarded as a way to store solar energy into an easily deliverable biofuel.
Finally, we can observe that the saved biomethane results obtained using the FKC-2W and FT 226-2V collectors are quite similar, under the same storage capacity conditions, while the use of the VTK 120-2 CPC collector may provide higher savings.
Economic Results
Figure 6 presents the results of the economic assessment in terms of the PBT of the investment. The results are presented assuming the 10-year constraint, as this is the maximum period for receiving the economic incentives for biomethane. The investment costs are higher for solar collector fields with larger surfaces and more hours of thermal storage. On the other hand, by increasing the solar field area and the storage size, more biomethane is saved and additional economic benefits are possible.

Figure 6a,c,e, for the mesophilic conditions, show that the PBT is close to 10 years for some solutions, in particular with the FKC-2W collector model, in sites 2 and 3, for a limited range of areas and storage tank capacities. In this case, the best solution from the economic point of view, corresponding to the minimum PBT, falls in the range between 800 and 1000 m2 (Figure 6c,e), with a storage tank size equal to 12-18 h.
In the thermophilic conditions, as shown in Figure 6b,d,f, the PBT is close to 10 years only in the case of site 3 for the FKC-2W collector model, associated with 12-18 h of storage time. From Figure 6f, it is noted that the minimum PBT falls in the range between 1900 and 2500 m2, with a storage tank size equal to 12-18 h. It is observed that the cost of the collector with higher efficiency (VTK 120-2 CPC), which actually provides a higher biomethane saving, is too high to allow economic sustainability for the studied systems.
Discussion
From these results, it is clear that the solar integration into AD producing biomethane has a limited economic sustainability, strongly affected by the investment cost of the solar system. Uncertainty in the investment cost plays a major role in the results; however, only a future reduction of this investment cost may improve the economic sustainability of the system. For this reason, a sensitivity analysis is performed in order to evaluate the influence of a reduction of the specific cost of the different solar collector types with respect to the previously assumed values. Assuming a reduction of the collector cost by 10% to 70%, the PBT is recalculated, as reported in Figure 7, for two selected collector types, i.e., FKC-2W and VTK 120-2 CPC, in the thermophilic condition with 12 h of thermal storage, for site 3. Table S27 in the Supplementary Materials reports the values of the reduced specific costs of the solar collectors. Figure 7 shows that, by reducing the specific cost, the PBT values obviously decrease, providing some acceptable economic solutions. In the FKC-2W collector case with a 12 h thermal storage size, the PBT generally becomes lower than 10 years when the specific cost is below 189.87 Euro/m2 (a 20% reduction of the collector cost), while in the case of the VTK 120-2 CPC collector with a 12 h thermal storage size, the PBT becomes lower than 10 years only if the specific cost is 231.93 Euro/m2 (a 70% reduction of the collector cost).
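For reference, the base specific costs implied by the two thresholds just quoted follow from the applied reductions, $c_{base} = c_{reduced}/(1 - r)$: about $189.87/0.80 \approx 237\ \text{Euro/m}^{2}$ for the FKC-2W and about $231.93/0.30 \approx 773\ \text{Euro/m}^{2}$ for the VTK 120-2 CPC; these are inferred values, the adopted figures being those of Table 8.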
Finally, we recalculate the PBT for some selected cases (site 3, thermophilic case, FKC-2W and VTK 120-2 CPC collectors), considering only the income from selling the biomethane at the natural gas price, equal to 20.41 Euro/MWh (about 25% of the overall selling price considered in the base case), without the additional incentives specific to the Italian situation. Since the Italian incentives are quite advantageous with respect to other European countries, we expect that results calculated for other, intermediate incentive schemes would lie within the range of our results calculated with and without the Italian incentives, providing a more general interpretation of the presented results.
Of course, the revenues obtainable by selling the biomethane at the natural gas price, without incentives, are also dependent on the trend of the natural gas market, which presented average annual values in the range 19-26 Euro/MWh in the last ten years in Italy, following a decreasing trend [26].
The results in Figure 8 show that the PBT calculated without incentives is always higher than 10 years, as expected, confirming that the incentive mechanism is fundamental to support the economic sustainability of solar integration in AD producing biomethane.
Conclusions
The possibility of integrating thermal solar energy into an anaerobic digestion reactor, with biogas upgrading to biomethane, is studied, considering mesophilic/thermophilic conditions, three different types of solar collectors (Bosch FKC-2W, FT 226-2V, and VTK 120-2 CPC), and three different Italian sites. The influences of the solar collector area and of the thermal storage size are studied.

The integration of the solar source allows reducing the amount of biogas required for heating the digester; thus, more biogas is sent to the upgrading section, producing more biomethane that can be sent to the grid. The saved biomethane amount depends on the studied conditions. In general, it is larger in the thermophilic case than in the mesophilic one; it increases with the area of the collectors, with the storage capacity, with decreasing latitude of the site, and with improved collector technology.

The amount of yearly saved biomethane lies in the range 100-700 MWh/y and 100-1500 MWh/y for the mesophilic and thermophilic cases, respectively. This amount represents a fraction of the heat required by the gas boiler in the base case ranging from 4% to 55%, depending on the considered variables.

The results of the simplified and preliminary economic analysis show that in the mesophilic conditions there are some economically acceptable solutions with a PBT close to 10 years, in particular with the FKC-2W collector type, for areas between 800 and 1000 m2, with a storage tank size equal to 12-18 h, in sites 2 and 3. In the thermophilic conditions, the minimum PBT is 10 years, and it can be obtained with a collector area between 1900 and 2500 m2, with a storage tank capacity of 12-18 h, in site 3.

It is observed that economic acceptability can be reached only using the least expensive collector, even though its lower efficiency yields smaller biomethane savings. A future reduction of solar collector costs might improve the economic feasibility. However, when the payback time is calculated excluding the Italian incentives, its value is always higher than 10 years. Therefore, the incentive mechanism is a fundamental support to the economic sustainability of solar integration in these systems.
Further analysis, based on a complete life cycle costing of the integrated AD/solar system, is however required to better evaluate the main economic indicators for the whole system, also considering the uncertainties associated with costs and revenues and other boundary conditions, such as the impact of climate change on the solar resource.
Finally, the proposed integration can be regarded as a way to store solar energy, considering that the solar energy is the most abundant energy resource with the potential to become a major component of a sustainable global energy solution. | 9,976.4 | 2020-08-19T00:00:00.000 | [
"Engineering"
] |
Optoelectrical Properties of Transparent Conductive Films Fabricated with Ag Nanoparticle-Suspended Emulsion under Various Formulations and Coating Conditions
Transparent conductive films (TCFs) were fabricated through bar-coating with a water-in-toluene emulsion containing Ag nanoparticles (AgNPs). Morphological changes in the self-assembled TCF networks under different emulsion formulations and coating conditions and the corresponding optoelectrical properties were investigated. In preparing various emulsions, the concentration of AgNPs and the water weight fraction were important factors for determining the size of the water droplets, which plays a decisive role in controlling the optoelectrical properties of the TCFs affected by open cells and conductive lines. An increased concentration of AgNPs and decreased water weight fraction resulted in a decreased droplet size, thus altering the optoelectrical properties. The coating conditions, such as coating thickness and drying temperature, changed the degree of water droplet coalescence due to different emulsion drying rates, which also affected the final self-assembled network structure and optoelectrical properties of the TCFs. Systematically controlling various material and process conditions, we explored a coating strategy to enhance the optoelectrical properties of TCFs, resulting in an achieved transmittance of 86 ± 0.2%, a haze of 4 ± 0.2%, and a sheet resistance of 35 ± 2.8 Ω/□. TCFs with such optimal properties can be applied to touch screen fields.
Introduction
Transparent conductive films (TCFs) are optically transparent and electrically conductive. They have been widely applied in industrial fields such as touch screens [1][2][3], organic solar photovoltaics [4], organic light-emitting diodes (OLEDs) [5][6][7], liquid crystal displays (LCDs) [8][9][10][11], and transparent conducting heaters [12]. Indium tin oxide (ITO), which has a high transmission and low electrical resistivity, is a popular material for fabricating TCFs [13]. However, the critical disadvantages of ITO, such as brittleness and high cost, are actively driving the development of alternative materials for applications in flexible electronic devices [14][15][16]. For this purpose, metallic nanostructures, including metal thin films, metal nanogrids, and metal nanowire networks, are fascinating alternatives because of their unique optoelectronic properties. In particular, metal (for example, Ag) nanowire networks have been considered excellent candidates because they exhibit a better performance compared to ITO [17]. In addition, TCFs containing Ag nanowires can be effectively fabricated at low cost via a solution-based roll-to-roll process. However, the high junction resistance resulting from the weak contact between Ag nanowires and surface roughness is the main drawback that requires a solution [18,19]. In this situation, Ag nanoparticles (Ag-NPs) are used to produce Ag-coated films with excellent optical and electrical performance by efficiently decreasing sheet resistance through tight junctions between particles [20,21].
TCFs with AgNPs are typically fabricated through the self-assembly of nanoparticles. After the thin-film coating process with a AgNP-suspended emulsion at atmospheric pressure, random mesh-like network structures composed of AgNPs are formed via spontaneous self-assembly as evaporation proceeds. The driving force for network structure formation originates from the coffee ring effect [22], which is induced by the different evaporation rates of the continuous phase and dispersed phase solvents. Through this operation, both the desired optical transparency and electrical conductivity can be achieved using open cells surrounded with AgNPs and conductive lines composed of AgNPs, respectively. Therefore, network structures can be achieved economically through this inexpensive maskless coating method. This is a promising film fabrication process that ensures superb film properties during mass production [23].
The droplet size of the dispersed phase in the emulsion is the most important factor for generating network structures because AgNPs are deposited along the contact lines of the dispersed phase droplets. This is directly related to the optical and electrical performance of the TCFs.
To determine emulsion stability and droplet size, numerous factors can be used, including the weight ratio of the two solvents [24], nanoparticle concentration [25], structure of surfactants [26], stirring intensity [27], mixing time [28], and mixing temperature [27,29]. In addition, coating conditions, such as coating thickness and drying temperature, are important because they can affect the coalescence degree of droplets and the corresponding final structures of the TCFs by regulating the total drying time. Therefore, the establishment of proper emulsion preparation and coating conditions is indispensable for the optimal performance of TCFs.
Thus far, AgNPs have been mainly studied in combination with other materials, such as graphene nanosheets [30], Ag nanowires [20], and carbon nanomaterials [31]. The performance of TCFs fabricated with only AgNPs has been examined under limited emulsion-formulating conditions, such as the size and shape of AgNPs [32]. Thus, it is important to systematically correlate the overall coating process conditions with the structure and properties of the AgNP-based TCFs.
In this study, the effects of the emulsion formulation and drying conditions on the final optical and electrical properties of TCFs were investigated. Water-in-toluene emulsions were prepared by changing the concentration of AgNPs and the water fraction based on a constant solvent weight. Optical microscopy was used to observe the dispersed phase droplets, that is, water droplets, in various formulations. In the bar-coating stage of emulsions, thin coating films of different thicknesses were produced using different bars and drying temperatures within the convection oven. The final TCF structures were observed using scanning electron microscopy (SEM). The optical properties, such as haze and total transmittance, were measured using a haze meter, and the electrical properties, such as sheet resistance, were measured with a low resistivity meter that had a four-point probe.
Water-in-Toluene Emulsions
First, AgNPs in the range 0.519-1.038 g (AGFA, Mortsel, Belgium, D = 30-80 nm) were added to a mixture of 0.026 g of Span 60 (DUKSAN, Ansan-si, Korea), 2.14 g of cyclohexanone (DUKSAN, Ansan-si, Korea), 0.057 g of BYK 410 (BYK, Wesel, Germany), and 23.03 g of toluene (DAEJUNG, Siheung-si, Korea). Subsequently, an ultrasonic homogenizer (UW 2070, BANDELIN, Berlin, Germany) was used to disperse the AgNPs twice for 30 s each. After the dispersion step, 0.08 g of APS, which is a mixture of hexamethoxymethylmelamine (Sigma-Aldrich, St. Louis, MO, USA) and p-toluenesulfonic acid (Sigma-Aldrich, St. Louis, MO, USA), 0.86 g of Dispo, which is a mixture of 2-amino-1-butanol (Sigma-Aldrich, St. Louis, MO, USA) and polyether siloxane (Sigma-Aldrich, St. Louis, MO, USA), and 23.11 g of DI water were added. Ultrasonic homogenization was conducted three times, each time for 30 s, for emulsification. Next, various emulsions were formed with different water weight fractions (based on the total solvent weight) in the range 0.4-0.55 under conditions where the total solvent and other substances were kept constant. The appropriate concentration of AgNPs was selected based on experimental results, such as network structures and optoelectrical properties. The dispersion and emulsification processes were optimized in the same manner. Through these procedures, water (dispersed phase)-in-toluene (continuous phase) emulsions were produced, and the formulations of the emulsions used in this study are listed in Table 1. Polyethylene terephthalate (PET) films (Higashiyama Film, Nagoya, Japan) of 18 cm width and 27 cm height were coated with primer, which is a mixture of methyl ethyl ketone (DUKSAN, Ansan-si, Korea), methyl isobutyl ketone (DUKSAN, Ansan-si, Korea), hydroxy dimethyl acetophenone (Sigma-Aldrich, St. Louis, MO, USA), and polyethylene glycol diacrylate (Sigma-Aldrich, St. Louis, MO, USA). The 6.86 µm-thick primer coating was applied using a bar-coater (DAO-CO02-SERVO, DAO Technology, Uiwang-si, Korea) at a speed of 167 mm/s to enhance the emulsion coating. The film was then dried at 50 °C for 1 min in a convection oven (OF-02GW, JeioTech, Daejeon, Korea) and UV-cured using a UV-curing benchtop conveyor (LC6B, Fusion UV Systems, Inc., Gaithersburg, MD, USA). Next, the primer-coated PET films were maintained at room temperature for 24 h for sufficient aging.
The coating films using the emulsions listed in Table 1 were fabricated via bar-coating at a speed of 167 mm/s with a wet thickness of 27.4 µm. The films were then dried at 50 °C for 1-2 min and thermally cured at 150 °C for 2 min.
After determining the optimal emulsion formulation, the optimal wet coating thickness and drying temperature were sequentially investigated in the 27.4-96.0 µm thickness range (using the coating bars), and in the temperature range between 30 and 90 °C (controlled using the convection oven [OF-02GW, JeioTech, Daejeon, Korea]). The coating conditions are presented in detail in Table 2.
Formation of Self-Assembled Network Structures in TCFs
The overall process of TCF fabrication is illustrated in Figure 1. First, a conductive water-in-toluene emulsion with AgNPs was coated onto a solid substrate, that is, a primer-coated PET film, via bar-coating. After the coating stage, water droplets with a higher density were distributed on the substrate surface in the form of isolated islands. The AgNPs, which were dispersed in the toluene phase, could not penetrate the water droplets and were thus captured on the surfaces of the droplets. As the more volatile toluene evaporated, the water droplets separated by the continuous phase began to form polygonal shapes. The spaces separating the droplets became thinner, and the capillary pressure caused by drying led to droplet coalescence as evaporation proceeded [33]. Finally, both toluene and water completely evaporated, and the AgNPs spontaneously self-assembled into a micro-sized network structure. Subsequently, a high-temperature thermal treatment was conducted to strengthen the network structures. In this study, the effect of the material and process conditions, such as the concentration of AgNPs, water fraction, coating thickness, and drying temperature, on the final network structures of AgNPs was scrutinized based on this TCF manufacturing mechanism.
Observation of Water Droplets in Water-in-Toluene Emulsions
The size of the water droplets (dispersed phase) in the emulsions was significantly related to the final network structures. The emulsions were slightly diluted with toluene because of the very dark color of the AgNPs. The amount of toluene added was determined so as not to break the water droplets. Water droplets in the water-in-toluene emulsions with different AgNP concentrations and water weight fractions were observed using an optical microscope (LILY MCX500, Micros Austria, Gewerbezone, Austria) at the same elapsed time after loading the emulsions.
Optical and Electrical Properties of TCFs
The total transmittance and haze of the TCFs were measured using a haze meter (NDH 5000, Nippon Denshoku, Tokyo, Japan). Total transmittance is the fraction of incident visible light that is transmitted through the sample, and haze is the ratio of the diffusely transmitted light to the total transmittance. Enhanced optical properties therefore correspond to increased total transmittance and decreased haze. A total of 10 measurements were conducted at various positions on the TCFs, and the average values of these parameters were plotted. The morphologies of the TCFs were observed using SEM (S-4800, Hitachi, Tokyo, Japan) to relate the optical properties to the network structure.
The sheet resistance, which is the bulk resistivity divided by the sheet thickness, was measured using a low resistivity meter with a four-point probe (MCP-T610, Mitsubishi Chemical, Tokyo, Japan). Highly conductive films have low sheet resistance values. The sheet resistance at multiple positions on TCFs was measured 10 times, and the resulting average values were plotted.
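To illustrate how the reported summary values (e.g., the 86 ± 0.2% transmittance, 4 ± 0.2% haze, and 35 ± 2.8 Ω/□ sheet resistance quoted in the abstract) follow from the 10 repeated measurements, the Python sketch below computes the mean and sample standard deviation; the measurement values used here are hypothetical placeholders, not the experimental data.

```python
import statistics

# Hypothetical readings at 10 positions on one TCF sample (placeholder values).
transmittance = [86.1, 85.9, 86.0, 86.2, 85.8, 86.1, 86.3, 85.9, 86.0, 86.2]   # %
haze          = [4.1, 3.9, 4.0, 4.2, 3.8, 4.0, 4.1, 3.9, 4.0, 4.1]             # %
sheet_res     = [34.0, 36.5, 33.2, 37.8, 35.1, 32.9, 36.0, 34.7, 38.1, 33.5]   # ohm/sq

def summarize(name, values, unit):
    mean = statistics.mean(values)
    std = statistics.stdev(values)   # sample standard deviation over the 10 readings
    print(f"{name}: {mean:.1f} +/- {std:.1f} {unit}")

summarize("Total transmittance", transmittance, "%")
summarize("Haze", haze, "%")
summarize("Sheet resistance", sheet_res, "ohm/sq")
```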
Concentration of AgNPs
An increase in the concentration of nanoparticles usually enhances emulsion stability [25,34]. When Ag nanoparticles were dispersed in the emulsion, the interfaces between the water and toluene in the emulsion became covered with AgNPs and a particle layer was formed around each water droplet, which prevented the coalescence of water droplets and stabilized the emulsion. When additional nanoparticles were added, large water droplets were broken down into smaller ones because a greater number of AgNPs could then be adsorbed on the droplet surfaces. Therefore, an increase in the nanoparticle concentration can effectively reduce the average size of the emulsion droplets [35,36], as shown in Figure 2a.
In addition, Figure 2b shows that the size of open cells surrounded by AgNPs decreases with an increase in the concentration of AgNPs. The change in cell size can directly affect the optoelectrical properties of the TCFs. The decrease in open cell size leads to the degradation of the optical properties of the TCFs; that is, the transmittance decreases and haze increases, as displayed in Figure 2c, because the open cell size is normally responsible for the optical properties.
The conductive lines formed with AgNPs are directly responsible for the electrical properties, that is, the sheet resistance. As more AgNPs are added under the given conditions of P-1.12 to P-2.24, the conductive lines become wider because of the dense deposits, as shown in Figure 3a. Note that the conductive lines become wider with an increase in the AgNP concentration, even if the number of smaller droplets increases. Therefore, the probability of current leakage is suppressed with increasing AgNPs, resulting in a decrease in the sheet resistance (Figure 3b). In addition, the extremely high sheet resistance of the P-1.12 sample is caused by both the thin conductive lines and the incomplete network structure of the TCFs, owing to the insufficient concentration of AgNPs.
As a result of the optoelectrical properties, the concentration of AgNPs was found to be in an appropriate range (between 1.51 wt% and 1.89 wt%) because the P-1.12 and P-2.24 samples were not recommended owing to their high sheet resistance and poor optical properties, respectively. The optimal concentration of AgNPs was chosen to be 1.51 wt%, guaranteeing better optical properties.
Water Weight Fraction
An increased water fraction in the emulsion gives rise to larger water droplets existing in the form of isolated islands [24] (Figure 4a). As the droplet size increases, the total surface area of the droplets with adsorbed AgNPs tends to decrease. Therefore, the conductive lines become wider because of the constant concentration of AgNPs, resulting in a decrease in sheet resistance (Figure 5a,b). However, in the W-0.55 case, the sheet resistance is rather high because the network is not tightly formed. The particularly high sheet resistance of the W-0.40 sample is due to the large number of small break-ups of conductive lines, which is caused by the relative insufficiency of AgNPs owing to the increase in the total surface area of the droplets, as shown in Figure 5a. Based on the results shown in Figures 4 and 5, W-0.50 was considered the optimal water fraction, providing good optical properties and low sheet resistance.
Coating Thickness
After determining the optimal emulsion formulation, the wet coating thickness was controlled by changing the diameter of the bar used in the bar-coating process. As the coating thickness increases, the emulsion load increases, and more time is required for drying. Therefore, water droplets are more likely to coalesce into larger droplets as drying continues, which increases the cell size (Figure 6a). However, the optical properties degrade even though the open cell size increases (Figure 6b). This is because, as the coating thickness increases, the increased width of the overall conductive lines becomes a more influential factor for the optical properties than the size of the open cells.
A thicker coating indicates an increase in the loading amount of the emulsion; accordingly, more AgNPs constitute the conductive lines, making them wider and denser, as portrayed in Figure 7a. There is little possibility of current leakage as the coating thickness increases. Therefore, the sheet resistance decreases as the coating thickness increases (Figure 7b).
There is a trade-off between the optical and electrical properties with increasing coating thickness, which means that the optimal coating thickness of TCFs can be determined depending on the properties required for various applications. TCFs for touch screen applications should satisfy a sheet resistance of 30-40 Ω/□ and a transmittance of 85% or more. Therefore, a coating thickness of 27.4 µm was confirmed to be optimal. Notably, for a very thin wet coating (thinner than 27.4 µm, for example), a large number of break-ups in the network structure is expected because of the extremely high drying rate.
Drying Temperature
The drying temperature was controlled using a forced convection oven. The drying temperature predominantly affected the drying rate of the emulsion. The lower the drying rate, the greater the coalescence of water droplets in the emulsion becomes, resulting in larger open-cell regions during drying.
When the drying temperature is low (for example, 30 °C), the drying speed is slow enough to easily induce the coalescence of water droplets. Consequently, the size of open cells becomes relatively large (Figure 8a). As the drying temperature is gradually increased to 70 and then to 90 °C, the cell size decreases because rapid drying prevents the coalescence of droplets. Therefore, the transmittance decreases and haze increases with increasing drying temperature (Figure 8b). A higher drying temperature induces a smaller open cell size, reducing the width of the conductive lines under a constant AgNP concentration (Figure 9a). Therefore, an increased sheet resistance was measured at high drying temperatures because of the higher possibility of conductive line break-ups (Figure 9b). Note that drying temperatures over 70 °C do not significantly change the sheet resistance because the emulsions dry very rapidly at such temperatures.
Based on the above results, a low-temperature range between 30 and 50 °C was found to be an appropriate drying temperature to produce TCFs with good optoelectrical properties. Thus, 50 °C was chosen as the optimal drying temperature because, at this temperature, the overall network structure was formed more closely, reducing the probability of critical break-ups, which can cause the malfunctioning of the final products.
Conclusions
Transparent conductive films (TCFs) were fabricated using a water-in-toluene emulsion containing AgNPs. The concentration of AgNPs and the water weight fraction were varied when preparing emulsions, and the coating thickness and drying temperature were controlled during the bar-coating process. The droplets in the emulsion were observed using an optical microscope, and the morphology of the self-assembled TCFs was observed using SEM. The optical and electrical properties were measured using a haze meter and a low resistivity meter with a four-point probe, respectively. In some cases, the optical properties were determined by the size of the open cells and the width of the conductive lines composed of AgNPs. The electrical properties were affected by the width of the conductive lines and the overall degree of self-assembled network connectivity. The water droplet size was determined to be the most important factor in preparing emulsions because it directly affected the size of the open cells in the TCFs. When the concentration of AgNPs was increased, the droplet size decreased, resulting in degraded optical and enhanced electrical properties. With a fixed concentration of AgNPs, we varied the water weight fraction and observed a decrease in the droplet size at low water weight fractions. This resulted in the degradation of the optoelectrical properties, in contrast to the case with a high concentration of AgNPs. When the drying conditions were changed, the optoelectrical properties of the TCFs changed even when the same emulsion was applied. When the films were thickly coated, the optical properties degraded and the electrical properties were enhanced. As the drying temperature decreased, the optoelectrical properties improved; however, drying at too low a temperature may cause significant defects in the TCFs. In this study, the optimal conditions throughout the TCF fabrication process were identified. Based on the experiments, we concluded that an AgNP concentration of 1.51 wt%, a water weight fraction of 0.50, a coating thickness of 27.4 µm, and a drying temperature of 50 °C were the optimal conditions for fabricating TCFs. Under these conditions, the production of TCFs with optimal optoelectrical properties is expected to benefit various industries, such as the manufacturing of displays and touch screens. To manufacture self-assembled TCFs that exhibit improved stability and flexibility while guaranteeing transparency, it is crucial to establish coating and drying conditions that are customized for the specific display products.
Computational modeling of inhibition of voltage-gated Ca channels: identification of different effects on uterine and cardiac action potentials
The uterus and heart share the important physiological feature whereby contractile activation of the muscle tissue is regulated by the generation of periodic, spontaneous electrical action potentials (APs). Preterm birth arising from premature uterine contractions is a major complication of pregnancy and there remains a need to pursue avenues of research that facilitate the use of drugs, tocolytics, to limit these inappropriate contractions without deleterious actions on cardiac electrical excitation. A novel approach is to make use of mathematical models of uterine and cardiac APs, which incorporate many ionic currents contributing to the AP forms, and test the cell-specific responses to interventions. We have used three such models—of uterine smooth muscle cells (USMC), cardiac sinoatrial node cells (SAN), and ventricular cells—to investigate the relative effects of reducing two important voltage-gated Ca currents—the L-type (ICaL) and T-type (ICaT) Ca currents. Reduction of ICaL (10%) alone, or ICaT (40%) alone, blunted USMC APs with little effect on ventricular APs and only mild effects on SAN activity. Larger reductions in either current further attenuated the USMC APs but with also greater effects on SAN APs. Encouragingly, a combination of ICaL and ICaT reduction did blunt USMC APs as intended with little detriment to APs of either cardiac cell type. Subsequent overlapping maps of ICaL and ICaT inhibition profiles from each model revealed a range of combined reductions of ICaL and ICaT over which an appreciable diminution of USMC APs could be achieved with no deleterious action on cardiac SAN or ventricular APs. This novel approach illustrates the potential for computational biology to inform us of possible uterine and cardiac cell-specific mechanisms. Incorporating such computational approaches in future studies directed at designing new, or repurposing existing, tocolytics will be beneficial for establishing a desired uterine specificity of action.
INTRODUCTION
Computational modeling of an action potential (AP) of an electrically excitable cell was first developed in 1952 with the landmark study of neurons (Hodgkin and Huxley, 1952). Its success led to the development of other models such as the tonic AP in cardiac cells (Noble, 1960) and the bursting AP in β-pancreatic cells (Chay and Keizer, 1983). In the intervening years there has been an enormous advance in our understanding of the cardiac physiome (Noble, 2007; Schmitz et al., 2011; Noble et al., 2012) and computational analysis of electrical excitability now runs hand-in-hand with physiological experimentation in heart research (Crampin et al., 2004; Bassingthwaighte et al., 2009; Masumiya et al., 2009; Nikolaidou et al., 2012; Zhang et al., 2012). Many computational models exist to describe in considerable detail cardiac cell-specific excitation-contraction properties, including the biophysical details of the constituent ion currents and calcium fluxes. These include multicellular tissue and organ maps of spatiotemporal electrical and calcium wave propagation (Rudy, 2000; Zhang et al., 2000; Kleber and Rudy, 2004; Severi et al., 2009; Aslanidi et al., 2011b; Atkinson et al., 2011). Mathematical models are continuously being developed and applied to predicting the risks of pathophysiological phenomena (e.g., the likelihood of dyssynchronous activation and fibrillation) (Benson et al., 2008; Bishop and Plank, 2012; Cherry et al., 2012; Kharche et al., 2012; Behradfar et al., 2014) as well as the potential beneficial effects of drugs and treatments (Levin et al., 2002; Muzikant and Penland, 2002; Davies et al., 2012; Mirams et al., 2012; di Veroli et al., 2013, 2014).

The uterus is also an electrically excitable tissue whose contractile function is determined by episodic spontaneous APs and calcium fluxes. Although our comprehension of the electrophysiological basis of uterine AP formation lags behind that of cardiac muscle, there is an increasing awareness that computational approaches, such as those which have been applied so extensively to cardiac muscle, may foster advances in this matter (Taggart et al., 2007; Aslanidi et al., 2011a; Tong et al., 2011; Sharp et al., 2013). For example, we have developed a biophysically-detailed uterine smooth muscle cell (USMC) model validated against experimental data and it can describe many different uterine AP forms and the corresponding intracellular calcium changes (Tong et al., 2011).
Utilizing computational models for the examination of uterine APs offers the additional possibility of predicting the likely actions of drugs that target ion channels or exchangers with the intention of attenuating premature uterine contractions. Hitherto, such tocolytics have been used clinically without prior extensive in silico assessment of effectiveness. Uterine APs, and the resultant contraction of smooth muscle cells, are markedly dependent upon the activation of a prominent voltage-gated inward (depolarizing) current, the long-lasting L-type calcium current (I CaL ). Nifedipine, an L-type voltage-gated calcium channel blocker, is currently used as a tocolytic treatment (RCOG, 2011). This treatment is useful for delaying labor in the short term (Conde-Agudelo et al., 2011). However, it is not without adverse side effects, in particular on maternal cardiovascular performance (van Veen et al., 2005; Guclu et al., 2006; Gaspar and Hajagos-Toth, 2013), and it is not presently recommended for longer-term use, nor for women with cardiac disease (RCOG, 2011). This alerts one to another necessary consideration for tocolytic drugs intended to limit uterine APs, namely, what possible actions might they have on cardiac electrical excitability?
Another voltage-gated Ca current, in addition to I CaL , that, theoretically, may be a suitable target for developing tocolytic drugs is the short-lasting, transient T-type Ca channel current (I CaT ). T-type calcium currents are believed to play a role in pacemaking in many cell types (Mangoni et al., 2006; Perez-Reyes et al., 2009). I CaT has been observed in uterine tissues (Inoue et al., 1990; Young et al., 1991, 1993; Blanks et al., 2007) and putative blockers of I CaT reduce in vitro uterine contractions (Lee et al., 2009). I CaT has a different current-voltage (I-V) profile from I CaL and, unlike the latter, there is considered to be little I CaT present in ventricular cardiomyocytes, although it has been suggested to contribute to sinoatrial (SA) node APs (Ono and Iijima, 2010; Mesirca et al., 2014).
Our over-arching objective was to utilize computational models of uterine and cardiac cells to theoretically test the possible cell-specific effects of inhibiting voltage-gated Ca entry (I CaL and I CaT ) in a manner that may be anticipated to occur with drugs targeting these pathways. Using publicly-available computational models of uterine and cardiac APs we have performed a series of simulation experiments to investigate the following questions: (i) Are there comparable effects on uterine and cardiac APs of reducing I CaL ? (ii) Are there similar effects on uterine and cardiac APs of reducing I CaT ? (iii) Does combined reduction of I CaL and I CaT have differential actions on uterine and cardiac APs?
UTERINE SMOOTH MUSCLE CELL MODEL
For these simulation studies we used our previously published USMC model (Tong et al., 2011). A schematic of the ionic currents contributing to the model is shown in Figure 1A. The model source code, parameter values and the full description are provided in Tong et al. (2011). All the USMC AP simulations were started at the same resting state. The numerical values of all the dynamical variables at this state (the initial conditions) are provided in the Supplementary Materials.

Figure 1 caption (fragment): The model (Faber and Rudy, 2000). The lollipop sign indicates channels that are voltage-gated. The L-type Ca channel and T-type Ca channel, carrying I CaL and I CaT , respectively, are indicated in red.
CARDIAC CELL MODELS
There are numerous computational models of cardiac cells and many are listed in the CellML model repository (http://models.cellml.org/). They represent different parts of the heart, from the central and peripheral sinoatrial nodal cells (Zhang et al., 2000; Garny et al., 2003), the atrial cardiomyocytes, the atrioventricular node and His-bundle (Inada et al., 2009), and Purkinje fiber cells (Stewart et al., 2009; Li and Rudy, 2011) to ventricular cardiomyocytes (Faber and Rudy, 2000). These cell types can be grouped, rather roughly, into those of a contractile or conducting function. In our preliminary simulations using these cardiac models, we selected the ventricular and sinoatrial nodal cell models that showed the greatest propensity to alter AP form following a reduction of Ca current, thus forming the most sensitive situation for comparing the same manoeuvre with that on the USMC model. For the sinoatrial nodal cell, we used the Garny et al. (2003) rabbit sinoatrial nodal cell model (SAN) and its 0D-capable version configurations for the SAN central cell. A schematic of the ionic currents contributing to the model is shown in Figure 1B. For the ventricular cardiomyocyte, we used as a platform the Faber and Rudy (2000) guinea pig cell model (LRd00) and its M cell configurations. A schematic of the ionic current components of the model is shown in Figure 1C. The model equations, parameter values and initial conditions are listed in the CellML model repository. A copy of the model source code can also be downloaded from http://rudylab.wustl.edu/research/cell/methodology/cellmodels/LRd/code.htm. However, as detailed below in the Results, we had to modify this model in order to incorporate a more robust description of I CaT . The modified LRd00 model was paced for 20 min at 2 Hz to allow it to reach a stable state with its M cell configurations. Then, the values of all the dynamical variables in this stable state were saved: these stable state conditions of the modified LRd00 model are provided in the Supplementary Materials.
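As an illustration of the pacing protocol described above (20 min of pacing at 2 Hz before saving the stable-state variables), the following sketch shows the structure of such a protocol. The right-hand-side function `lrd00_rhs`, the stimulus amplitude, and the state vector are placeholders standing in for the full modified LRd00 equations, which are not reproduced here; the study itself used a fourth-order Runge-Kutta scheme in XPPAUT, whereas a simple forward Euler update is shown for brevity.

```python
import numpy as np

def pace_to_stable_state(rhs, y0, rate_hz=2.0, minutes=20.0,
                         stim_amp=-90.0, stim_dur_ms=0.5, dt_ms=0.002):
    """Pace a cell model at a fixed rate and return the final (stable) state.

    rhs(t, y, i_stim) must return dy/dt for the model; here it is a placeholder
    for the modified LRd00 equations.
    """
    period_ms = 1000.0 / rate_hz
    t_end_ms = minutes * 60.0 * 1000.0
    y = np.asarray(y0, dtype=float)
    t = 0.0
    while t < t_end_ms:
        # Deliver the stimulus during the first stim_dur_ms of every pacing cycle.
        i_stim = stim_amp if (t % period_ms) < stim_dur_ms else 0.0
        y = y + dt_ms * np.asarray(rhs(t, y, i_stim))  # forward Euler step (sketch only)
        t += dt_ms
    return y  # saved as the stable-state initial conditions

# Usage (with a real model definition): y_stable = pace_to_stable_state(lrd00_rhs, y0)
```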
ASSESSMENT OF THE EFFECTS OF REDUCED I CaL AND I CaT
The effects of reduced I CaL and I CaT on the uterine and cardiac APs were assessed and quantified. The I CaL and I CaT were reduced by multiplying their maximal conductances with a scaling factor between 0 and 1 and we assumed the same proportional reduction would occur in all three cell types. As the APs between the three unique cell types have different forms and characteristics, it is difficult to use a common measure to assess their AP behaviors. Therefore, we chose to assess a characteristic that best reflects the main function of each of the cell types: the bursting activities in the uterine cell, the pacemaking ability of the SAN cell and the action potential duration (APD) of the ventricular cell. For the USMC model, after adjusting the levels of I CaL and I CaT , an AP was induced by a 5 s current clamp at −0.5 pA pF −1 and the maximal peak membrane voltage (V peak ) reached after the first initial spike was measured. This assessment gives an indication of the presence or absence of bursting spikes in a USMC AP. For the SAN model, after adjusting the levels of I CaL and I CaT , the model was simulated for 10 s and the pacemaking frequency was measured from the APs in the last 2 s of simulations. For the LRd00 model, an AP was induced by a 0.5 ms current pulse at −90 pA pF −1 and the APD at 90% repolarization level (APD 90 ) was assessed.
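The three assessment metrics can be computed directly from the simulated membrane-potential traces. The sketch below shows one possible implementation; the function names, the spike-detection threshold, and the cut-off separating the initial USMC spike from the later bursting phase are our assumptions, and the APD 90 here is measured from the AP peak, so only the definitions given in the text are taken from the study.

```python
import numpy as np

def scale_conductance(g_max, fraction_blocked):
    """Reduce a maximal conductance by a given fraction (scaling factor in [0, 1])."""
    return (1.0 - fraction_blocked) * g_max

def v_peak_after_initial_spike(t_ms, v_mv, initial_spike_end_ms=1000.0):
    """Maximal membrane potential reached after the initial spike of a USMC AP.

    initial_spike_end_ms is an assumed cut-off separating the initial spike from
    the subsequent plateau/bursting phase.
    """
    t, v = np.asarray(t_ms), np.asarray(v_mv)
    return float(np.max(v[t > initial_spike_end_ms]))

def pacemaking_frequency(t_ms, v_mv, window_ms=2000.0, threshold_mv=0.0):
    """Mean AP frequency from upward threshold crossings in the last window_ms."""
    t, v = np.asarray(t_ms), np.asarray(v_mv)
    keep = t >= (t[-1] - window_ms)
    t, v = t[keep], v[keep]
    crossings = np.where((v[:-1] < threshold_mv) & (v[1:] >= threshold_mv))[0]
    if len(crossings) < 2:
        return 0.0
    return 1000.0 / float(np.mean(np.diff(t[crossings])))  # Hz

def apd90(t_ms, v_mv):
    """AP duration at 90% repolarization, measured here from the AP peak."""
    t, v = np.asarray(t_ms), np.asarray(v_mv)
    v_rest = v[0]
    i_peak = int(np.argmax(v))
    v_90 = v[i_peak] - 0.9 * (v[i_peak] - v_rest)
    below = np.where(v[i_peak:] <= v_90)[0]
    return float(t[i_peak + below[0]] - t[i_peak]) if len(below) else float("nan")
```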
All simulations were computed using XPPAUT Version 6 (Ermentrout, 2002) in a Dell Optiplex PC with an Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz. For the USMC model, all simulations were computed with a fixed time step of 0.02 ms and the Euler method. For the SAN model, all simulations were computed with a fixed time step of 0.1 ms and the fourth-order Runge-Kutta method. For the LRd00 model, all simulations were computed with a fixed time step of 0.002 ms and the fourth-order Runge-Kutta method.
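The fixed-step solvers referred to above can be written generically as follows; this is a minimal sketch of the forward Euler and classical fourth-order Runge-Kutta updates, not the XPPAUT implementation actually used, and the commented usage lines only indicate how the quoted time steps would be supplied.

```python
import numpy as np

def euler_step(f, t, y, dt):
    """One forward Euler step for dy/dt = f(t, y)."""
    return y + dt * f(t, y)

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2.0, y + dt * k1 / 2.0)
    k3 = f(t + dt / 2.0, y + dt * k2 / 2.0)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(f, y0, t0, t_end, dt, step=rk4_step):
    """Integrate with a fixed time step, returning sampled times and states."""
    times, states = [t0], [np.asarray(y0, dtype=float)]
    t, y = t0, states[0]
    while t < t_end:
        y = step(f, t, y, dt)
        t += dt
        times.append(t)
        states.append(y)
    return np.array(times), np.array(states)

# Illustrative usage with the time steps quoted in the text (times in ms):
#   USMC:  integrate(usmc_rhs, y0_usmc, 0.0, t_end, 0.02,  step=euler_step)
#   SAN:   integrate(san_rhs,  y0_san,  0.0, 10000.0, 0.1, step=rk4_step)
#   LRd00: integrate(lrd_rhs,  y0_lrd,  0.0, t_end, 0.002, step=rk4_step)
```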
VALIDATION OF THE CELL MODELS
We first verified the source code of each of the three individual cell models against their respective published results. Both the USMC and the SAN models were validated in this regard. Surprisingly, we found that the LRd00 model could not be so validated in respect of I CaT : the simulated current tracings (Figure 2A), and the corresponding current-voltage (I-V) relationship (Figure 2B), of the LRd00 model did not match the experimental results against which they were originally compared (Balke et al., 1992). The simulations produced current responses at membrane voltages that were much farther to the right (more positive) than observed in the experimental data or, indeed, than anticipated from other I CaT models. When we traced back the original formulation of this I CaT to Zeng et al. (1995), and the experimental values on which its kinetics were based to Droogmans and Nilius (1989), we did not find any typographical errors in the model equations or the experimental values to explain the discrepancy. As an aim of our study was to investigate the effects of altering I CaT , it was essential that the models incorporated the correct formulations. We were left, therefore, with no option but to attempt to modify the I CaT in the LRd00 model such that it did reflect more closely the experimental data. For those interested in the details of this process, our attempts to modify the model were undertaken as follows: we had first tried, but failed, to produce a reasonable I CaT model by either adjusting the LRd00 I CaT equations or reformulating new equations using data from Droogmans and Nilius (1989). We next tried substituting the LRd00 I CaT with a validated I CaT model from Li and Rudy (2011), which was developed for canine Purkinje fiber cells. The simulated current tracings with this canine I CaT resembled the I CaT current tracings in Balke et al. (1992), but the kinetics were too fast and the peak of the I-V relationship was still too positive compared to the experimental data from Balke et al. (1992). Using the I CaT from Li and Rudy (2011) as our template, we corrected the differences between the simulations and the experimental data with the modifications that resulted in Equations (1-8) for the new ventricular I CaT that are listed in the Supplementary Materials. As discussed by Droogmans and Nilius (1989), we also found that the kinetics of the ventricular I CaT currents were best described by a gating product of b²g, where b and g are gating variables for activation and inactivation (Equation 1 in the Supplementary Materials). The I-V data in Balke et al. (1992) were matched with a half-activation at −50 mV in the activation steady-state function (Equation 3 in the Supplementary Materials). Although this half-activation value deviated from the reported value for I CaT (Droogmans and Nilius, 1989), it is similar to values reported from clonal Cav3.1 expression data (Serrano et al., 1999; Hering et al., 2003). As the kinetics of the canine I CaT model were too fast compared to the ventricular cell experimental data used for the LRd00 model (i.e., the data from Balke et al., 1992), we slowed down the activation and inactivation by scaling the time constants (Equations 5, 6 in the Supplementary Materials). The resultant new I CaT model satisfactorily described the ventricular I CaT data (Balke et al., 1992) and we replaced the original LRd00 I CaT with this new I CaT model. The ratio between peak I CaT and I CaL in guinea pig ventricular cells is reported to be between 0.13 and 0.19 (Balke et al., 1992; Masumiya et al., 1998; Zorn-Pauly et al., 2004). Therefore, we chose a value for the maximal conductance of our new I CaT model (ḡ CaT ) so that the ratio of the maximum I CaT : I CaL in the modified LRd00 cell model was 0.15.

FIGURE 2 | Incorporation of a modified T-type calcium current model for LRd00. The original I CaT contained in the LRd00 cardiac ventricular cell model did not behave like a T-type calcium current: the simulated raw current tracings (A) and the I-V plots (B) appeared at membrane potentials farther to the right than the experimental data of Balke et al. (1992) that were used as the source experimental dataset. Simulated current tracings at five different voltage steps from a holding potential of −90 mV (A) and simulated peak current-voltage (I-V) relationships (B) of the original and modified I CaT currents are shown in comparison to the experimental data from Balke et al. (1992). All current tracings were normalized to their maximal peak current from their I-V relationships. The modified I CaT now closely resembles the experimental data. (Experimental tracings and data adapted with permission from Figure 4 of Balke et al. (1992); copyright 1992 The Physiological Society.)
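The structure of the modified I CaT described above can be sketched as follows. Only the features stated in the text are taken from the model (the b²g gating product, the −50 mV half-activation, and the choice of maximal conductance via the 0.15 peak I CaT : I CaL ratio); the slope factors, the inactivation parameters, the time constants, and the simple ohmic driving force used here are placeholder assumptions, since the actual Equations (1-8) are given in the Supplementary Materials of the paper.

```python
import numpy as np

def i_cat(v_mv, b, g, g_cat_max, e_ca_mv=50.0):
    """T-type Ca current with the b^2 * g gating product; ohmic driving force assumed."""
    return g_cat_max * (b ** 2) * g * (v_mv - e_ca_mv)

def b_inf(v_mv, v_half=-50.0, k=6.0):
    """Steady-state activation with half-activation at -50 mV; slope k is assumed."""
    return 1.0 / (1.0 + np.exp(-(v_mv - v_half) / k))

def g_inf(v_mv, v_half=-75.0, k=6.5):
    """Steady-state inactivation; v_half and k are assumed placeholders."""
    return 1.0 / (1.0 + np.exp((v_mv - v_half) / k))

def gate_step(x, x_inf_val, tau_ms, dt_ms):
    """Relax a gating variable toward its steady state with time constant tau_ms."""
    return x + dt_ms * (x_inf_val - x) / tau_ms

# In the paper, g_cat_max was chosen so that the peak I_CaT : I_CaL ratio
# in the modified LRd00 model equals 0.15.
```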
NORMAL USMC, SAN, AND VENTRICULAR APs
USMC APs can take a variety of forms of variable durations. Bursting-type USMC APs, wherein an initial spike is followed by rapid and repetitive fluctuations in membrane potential that persist for the whole AP duration, are often observed. These may assist in the maintenance of long-duration contractions of many tens of seconds that are a feature of the uterus during labor. Therefore, a promising feature of a tocolytic would be one that could dampen, or delay, the bursting in the USMC APs with minimum effects on the APs of the cardiac SAN and ventricular cells. Figure 3A shows the simulated APs from the three cell models under control conditions before any manoeuvre. The USMC model showed a resting membrane potential (RMP) of −55.5 mV before stimulation. During the evoked USMC AP, burstings occurred throughout the AP with a frequency between 2 and 2.54 Hz and the amplitudes of the repetitive spikes were around 40 mV. The V peak after the initial spike was −7 mV. The SAN cell is autorhythmic, with periodic APs occurring without external stimulation. The frequency and amplitude of the pacemaking activities were 3.15 Hz and 75 mV (between −56 and 19 mV), respectively. These values were the same throughout the whole 10 s simulation. The LRd00 ventricular cell showed a comparatively hyperpolarized (negative) RMP at −87 mV. A single AP was evoked in response to a stimulus and the ventricular APD 90 was 165 ms. These quantified characteristics are the reference values before any manoeuvre.
EFFECTS OF REDUCING I CaL IN UTERINE AND CARDIAC CELLS
I CaL is a major depolarization current in USMCs and thus a logical target for tocolytics. We examined the effect of reducing I CaL by 10% (Figure 3B). In the USMC model, this change dampened the initial repetitive spikes of the USMC AP and reduced the duration of the burstings by around 60%. The amplitudes of the remaining, late-onset, bursting spikes were reduced by 30% to about 28 mV and the V peak of the USMC AP was lowered to −14.8 mV. A less severe effect of 10% less I CaL was evident in the APs from the SAN and the LRd00 models. The amplitude of the SAN pacemaking APs was modestly reduced by 12% and the frequency remained unchanged. The LRd00 APD 90 was slightly reduced, by 5%. When I CaL was reduced by 20% (Figure 3C), the burstings of the USMC AP were suppressed completely. However, the SAN cell also stopped pacemaking. The LRd00 APD 90 was slightly reduced, by ∼10%, to 150 ms. These data indicated that although reducing some I CaL suppressed uterine bursting APs, this manoeuvre also affected both the cardiac pacemaker APs and the ventricular AP.
EFFECTS OF REDUCING I CaT IN UTERINE AND CARDIAC CELLS
I CaT may be one of the SAN pacemaking currents in the heart and may also be involved in the generation of APs in USMCs. The effects of our manoeuvres to reduce I CaT in each of the three cell types are shown in Figure 4. The control cases of the three models are displayed in Figure 4A. Reduction of I CaT by 40% (Figure 4B) delayed the onset of the burstings in the USMC AP. However, both the V peak and the amplitude of the later-onset burstings remained the same as the control. With 40% less I CaT , the pacemaking by the SAN cell slowed down to about 2.94 Hz with the peak potential at a slightly lower level of 15 mV. Reduction of I CaT by 40% did not affect the LRd00 APD 90 . When more I CaT was reduced, by 80% (Figure 4C), the onset of the USMC burstings was further delayed, with only one spike appearing at the end of the AP. However, the amplitude of this spike was similar in size to those of the control. Also, the RMP level became more hyperpolarized, at ∼ −57 mV. With this large reduction of I CaT (by 80%), the SAN pacemaking APs slowed down to 2.63-2.7 Hz. The pacemaking potential in the SAN model remained at the same level at −56 mV but the peak potential was lowered to 12 mV. At this level, the SAN model also showed a longer (∼0.5 s) transient during simulation. With 80% less I CaT , the LRd00 APD 90 remained unchanged. Without I CaT (Figure 4D), the RMP level of the USMC became more hyperpolarized, at ∼ −58 mV, and the bursting of the USMC AP stopped. The SAN pacemaking also ceased without I CaT but the LRd00 AP was unaffected. These data indicated that reduction of I CaT suppressed the uterine bursting AP with no impact on the ventricular AP. However, depending on the magnitude of I CaT reduction, there was also an effect on the SAN APs.
EFFECTS OF REDUCING I CaL AND I CaT IN UTERINE AND CARDIAC CELLS
The above data suggested that there was some promise in reducing either I CaL or I CaT and effecting an inhibition of USMC APs with little deleterious action on ventricular APs. The manoeuvres that reduced I CaL or I CaT and had only mild effects on the SAN AP may be tolerable. However, reductions in either current that effected a larger inhibition of uterine APs, as one may desire the action of a tocolytic in our experimental setting to be, also had dramatic actions on SAN AP form. The outcomes of reducing I CaL or I CaT in the USMC model were slightly different: reducing I CaT altered the RMP and the onset of spike bursting whereas changes in I CaL had a predominant action on spike amplitude. This raised the question: could a combined reduction of I CaL and I CaT exert the desired action on USMC APs with little impact on cardiac cell APs? An example is illustrated in Figure 5. The control cases of the three models are shown in Figure 5A and the effects of reducing only I CaL by 10%, or only I CaT by 40%, are illustrated again in Figures 5B,C, respectively. This enables comparison with the effect arising when both I CaL and I CaT were reduced, by 10 and 40%, respectively (Figure 5D). In this case, the burstings of the USMC AP were completely suppressed (Figure 5D).
With this combination, the pacemaking frequency of the SAN APs remained the same at 3.125 Hz and the amplitude was reduced by 20% (from −52 to 7.7 mV). The LRd00 APD 90 was slightly reduced, by ∼4%, to 158 ms. Compared to the cases when either I CaL or I CaT alone was reduced enough to completely suppress the burstings of the USMC AP (Figures 3C, 4D), this combined approach clearly performed better while preserving many of the properties of the cardiac APs.

We plotted two-parameter maps of percentage current inhibitions (I CaL vs. I CaT ) in each cell model to examine the effective ranges of different combinations of I CaL and I CaT reductions on the properties of uterine and cardiac APs (Figure 6). After the initial spike of a USMC AP, the peak of the bursting spikes typically reaches values between −20 and 0 mV whereas, without bursting, the plateau AP voltage usually stayed below −30 mV. Therefore, we assessed the presence or absence of burstings in a USMC AP from the level of V peak as the proportions of I CaL and I CaT were changed, and the results of the parameter plots are color-coded in Figure 6. In Figure 6A, the white region indicates the combinations of I CaL and I CaT that would produce a USMC AP with some burstings and the shaded region indicates the combinations of currents with which the USMC cell would not produce such activity. The USMC AP example shown previously in Figure 5D, with 10% reduction of I CaL and 40% reduction of I CaT , lay just within the non-bursting domain (as indicated by the green star). For the SAN cell, the pacemaker frequency was monitored as the proportions of I CaL and I CaT were changed and the results are shown in Figure 6B. The colored domain indicates pacemaking occurrence and the white region indicates the parameter combinations that would result in no pacemaking. The frequencies of the pacemaking APs were similar throughout the pacemaking zone and the transition between the two regions was steep (not shown). The SAN example shown in Figure 5D, with 10% reduction of I CaL and 40% reduction of I CaT , was within the pacemaking region. For the LRd00 ventricular cell, the APD 90 was monitored as the proportions of I CaL and I CaT were changed and the results are shown in Figure 6C. Changes in the APD 90 occurred only with changes in I CaL , indicating that I CaT did not influence the APD of the ventricular cell model.

By overlapping these parameter maps, we derived the boundaries of combinations of I CaL and I CaT for attaining specific sets of AP forms in the uterine and cardiac cells, as shown in Figure 6D. The colored lines separate different properties of the uterine and cardiac cells obtained from the simulations in Figures 6A-C. The blue line traces the level of V peak at −30 mV in the parameter space of I CaL and I CaT of the USMC cell model and it separates the bursting and quiescent conditions. The red line separates the pacemaking conditions with I CaL and I CaT in the SAN cell model from the non-pacemaking conditions. We allowed for 10% variation in the APD 90 of the LRd00 ventricular cell and this threshold, at 150 ms, is indicated by the gray line. These three criteria were superimposed on the same parameter map of I CaL and I CaT and revealed an overlapping area that satisfied all three conditions. In other words, any combination of reduced I CaL and I CaT within this area can produce our desired outcome for tocolytic treatment, namely, effective suppression of USMC APs with minimal effects on cardiac APs.
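The two-parameter inhibition maps can be generated with a simple grid sweep, re-running each cell model at every combination of fractional I CaL and I CaT block and recording the relevant metric. The sketch below shows only the sweep logic and the acceptance criteria quoted in the text (V peak below −30 mV for USMC quiescence, preserved SAN pacemaking, and LRd00 APD 90 within 10% of control); `run_usmc`, `run_san`, and `run_lrd00` are placeholders for the full model simulations and are not part of the published code.

```python
import numpy as np

# Fractional inhibition grids: 0 = no block, 1 = complete block of the current.
f_cal = np.linspace(0.0, 1.0, 21)
f_cat = np.linspace(0.0, 1.0, 21)

def sweep(run_model, metric):
    """Evaluate metric(run_model(...)) over the 2-D grid of (I_CaL, I_CaT) block fractions."""
    out = np.zeros((len(f_cal), len(f_cat)))
    for i, a in enumerate(f_cal):
        for j, b in enumerate(f_cat):
            trace = run_model(scale_cal=1.0 - a, scale_cat=1.0 - b)
            out[i, j] = metric(trace)
    return out

# Hypothetical assembly of the overlap map corresponding to Figure 6D:
# usmc_vpeak = sweep(run_usmc,  usmc_vpeak_metric)     # mV
# san_freq   = sweep(run_san,   pacemaking_metric)     # Hz
# lrd_apd90  = sweep(run_lrd00, apd90_metric)          # ms
# acceptable = (usmc_vpeak < -30.0) & (san_freq > 0.0) & (lrd_apd90 >= 150.0)
```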
The above experiments have considered the outcomes on AP form when imposing the same proportional reduction of I CaL and I CaT in all three cell types. However, another interesting possibility would be the same absolute quantities of currents being reduced in all three cell types. This consideration arises from examining the range of current densities of I CaL and I CaT reported from the different cell types as noted in Table 1. Cells of the cardiac conduction system tend to have much more I CaT than contractile cardiomyocytes and, when comparing between uterine and cardiac cell types, the total I CaT found in USMCs amounts to around 42% of that reported from guinea pig SAN. We re-performed our simulations to test the effects of reducing the same quantities of currents in each of the three cell type models. This resulted in a transformation of the XY axes of the parameter maps in Figure 6 to current densities and the result is shown in Figure 7. Now, the simulations reveal a larger domain within which one may reduce I CaL and I CaT to produce effective inhibition of USMC model APs with little detriment to cardiac cell model APs. In particular, it is possible to completely block the USMC bursting by large reduction of the USMC I CaT without affecting the SAN pacemaking. Therefore, our first experimental protocol of examining the effects of proportional current reductions likely represents the most stringent case.
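Re-expressing the same proportional-inhibition maps on absolute current-density axes only requires multiplying each block fraction by the baseline peak current density of the corresponding cell type, as taken from Table 1 of the paper. The sketch below uses placeholder baseline densities, since the Table 1 values are not reproduced here.

```python
# Placeholder baseline peak current densities in pA/pF; the actual values are
# those listed in Table 1 of the paper and differ between cell types.
BASELINE = {
    "usmc":  {"i_cal": 5.0,  "i_cat": 1.5},
    "san":   {"i_cal": 8.0,  "i_cat": 3.6},
    "lrd00": {"i_cal": 10.0, "i_cat": 1.5},
}

def absolute_reduction(cell, frac_cal, frac_cat):
    """Convert fractional reductions of I_CaL and I_CaT into absolute reductions
    in peak current density (pA/pF) for a given cell type."""
    base = BASELINE[cell]
    return frac_cal * base["i_cal"], frac_cat * base["i_cat"]

# Example: the 10% I_CaL / 40% I_CaT reduction of Figure 5D expressed per cell type.
for cell in BASELINE:
    print(cell, absolute_reduction(cell, 0.10, 0.40))
```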
DISCUSSION
Muscle contractions are determined by the form and duration of the preceding APs and the corresponding calcium fluxes. I CaL is involved in the depolarization of APs and is a major source of calcium influx in many excitable cell types including uterine smooth muscle and cardiac muscle cells. As it has been shown in experimental studies (Kodama et al., 1997;Terrar et al., 2007;Lee et al., 2009;Young and Bemis, 2009) and in this study, reducing I CaL affects the amplitude and the duration of the AP and/or contractile force in these cell types. These properties made I CaL an obvious choice as a target for tocolytic treatment. Indeed, nifedipine is used clinically as a second line tocolytic for short periods and within tight dosage recommendations; caution in its use is warranted because of the possibility of side effects including palpitations and hypotension (van Geijn et al., 2005;RCOG, 2011). This alerts one to a necessary consideration of the possible actions on cardiac electrical excitability by tocolytic drugs intended to limit uterine APs.
From our simulations, the USMC model seems to be more susceptible to reduced I CaL than the two cardiac (SAN and ventricular) cell models. However, the influence on the cardiac SAN model of reducing I CaL appears rather steep and so, as one attempts to increase the USMC effects by greater I CaL reductions, there too is a potentially damaging influence on SAN activity and, by extension, heart pacing. This may relate to the side effects on cardiac functions and the tight dosage guidelines recommended for nifedipine use as a tocolytic (RCOG, 2011).
With its potential role in facilitating the onset of USMC APs, I CaT is an alternative tocolytic target. The RMP of USMCs, of around −55 mV, is close to the voltage level for the window current from I CaT . However, experimental studies in cardiac SAN cells illustrate that manoeuvres designed to reduce I CaT result in alterations of SAN APs (Hagiwara et al., 1988; Masumiya et al., 1998; Tanaka et al., 2008). In our study, modest reduction of I CaT hyperpolarizes the USMC model RMP and diminishes the bursting spikes with a mild effect on the SAN model cell AP. However, as above for I CaL , further reductions in I CaT with the purpose of abrogating USMC model APs resulted in deleterious actions on the SAN cell model APs.
If this approach is to work, then there will have to be an agent that blocks I CaT with at least as good selectivity as nifedipine shows for I CaL . Mibefradil has been suggested to be a selective T-type Ca channel blocker but there are several studies indicating that it can affect both L-type and T-type calcium channels (Masumiya et al., 1999; Protas and Robinson, 2000; de Paoli et al., 2002). From such a seeming disadvantage, could one utilize the cross-channel inhibition of a particular drug for the purposes described above, i.e., might a single drug beneficially inhibit both I CaL and I CaT ? Mibefradil, at 1 μM, was shown to block 55% of I CaT in rabbit sinoatrial nodal cells (Protas and Robinson, 2000) and also 64% of I CaL ; at 3 μM, it blocked 28% of I CaT and 15% of I CaL in guinea pig ventricular cells (de Paoli et al., 2002); at 10 μM, it inhibited 90% of I CaT in guinea pig ventricular cells (Masumiya et al., 1999) and also 40% of I CaL . If we compare these combinations of proportional inhibitions of I CaL and I CaT against our results in Figure 6, all of these combinations fall outside our desired proportional mix of these two currents for a useful tocolytic treatment. Instead, based on our simulation results, if mibefradil were used as a tocolytic treatment at these concentrations, it might reduce uterine bursting APs but it would likely affect cardiac pacemaking functions as well. Indeed, mibefradil, at 1 μM, was shown to attenuate the contractile forces of uterine muscle strips from late pregnant rats (IC50 ∼1 μM; Lee et al., 2009). However, at the same concentration, mibefradil also reversibly stopped the pacemaking activities of sinoatrial nodal cells (Protas and Robinson, 2000). Thus, it would seem that the dual actions, and limited channel-specific selectivity, of mibefradil on I CaL and I CaT would not be advantageous to our purpose in either in silico or in vitro experimental settings. In addition, it has been reported to have inhibitory actions on a number of non-Ca channels including I Na(1.5) , a major Na current in cardiomyocytes (McNulty and Hanck, 2004). This highlights another issue, therefore, in the prosecution of these studies in the longer term, i.e., that the development of inhibitors with greater selectivity for T-type Ca channels would be a major advance.
In this in silico assessment of potential tocolysis, not only did we consider the effectiveness of action on the uterus, but we also assessed the possible side effects on different cells of the heart. This approach is only possible because the computational models of both organs have been developed with sufficient biophysical detail. The most obvious limitation of this kind of assessment is that it depends on the depth and accuracy of the biophysical details in the computational models. We can only include quantitative details from experimental data, but often we are forced to make assumptions and educated guesses for the unknowns. Indeed, we have seen in our study that close scrutiny of existing models, however well established, can reveal anomalies that need addressing (in our case, the I CaT characteristics in the ventricular cell model) before simulation experiments with particular purposes can be undertaken. This is the benefit of the iterative process. As computational models continue to be tested, and evolved to incorporate new and relevant biophysical details, the accuracy of the quantitative predictions will improve.
FUTURE DIRECTIONS
Our work brings to the fore the notion that, instead of searching for a single tocolytic compound with high specificity, there may be merit in considering a cocktail of more than one compound. The present recommendations in the UK are to avoid combinations of tocolytic drugs for fear of increasing the risk of side effects (RCOG, 2011). However, this caution arises from the limited depth of the background research data and the paucity of currently available tocolytic options. The progression to labor has been described as a modular accumulation of (patho)physiological systems, or MAPS, and it is such modularity that is a major challenge to overcome in seeking to prevent or inhibit labor (Mitchell and Taggart, 2009). The suspicion is that once module A of the labor process is inhibited, there is a module B, or module C, or more that eventually facilitates a similar function and outcome of labor and delivery. If this is so, then the possibilities offered by combination drug strategies must continue to be explored. Our assessment only considered effects at the cell level. However, to fully evaluate the actions of any tocolytic compound, we also need to consider its effects at the tissue and organ levels. Currently, there is no validated computational organ model of the uterus that would serve this purpose well, although it is recognized by many that efforts toward this are required (Aslanidi et al., 2011a; Sharp et al., 2013). We fully recognize that much more "wet" experimentation is needed, particularly in USMCs, to furnish computational model improvement and to test, with increasing rigor and clarity, important hypotheses of relevance to tocolysis.
CONCLUSION
Our approach herein can be regarded as a useful platform to be built upon for assessing the potential of tocolytics that act upon ion channels or electrogenic ion exchangers. Using the in silico approach described above enables future research to assess in parallel the potential benefits of attenuating premature uterine contractions vs. the risks of deleterious actions on cardiac electrical excitation.
Encouraging such in silico assessments at the early stages of laboratory or clinical studies will foster an integrated and iterative process between mathematical modeling and experimentation, such that each informs the other. Related to this, it can alert one to the possibility of otherwise unforeseen drug actions and inform subsequent protocol development for experimentation. Not only will this be beneficial in the examination of new physiological mechanisms and the actions of novel drugs, but it will also be useful, as indicated in this study, for investigating the repurposing of existing drugs. It is to be hoped that similar approaches may form part of forthcoming scientific and clinical strategies.
"Computer Science",
"Medicine"
] |
Hydrothermal Growth and Application of ZnO Nanowire Films with ZnO and TiO2 Buffer Layers in Dye-Sensitized Solar Cells
This paper reports the effects of seed layers prepared by spin-coating and dip-coating on the morphology and density of ZnO nanowire arrays, and thus on the performance of ZnO nanowire-based dye-sensitized solar cells (DSSCs). Nanowire films with a thick ZnO buffer layer (~0.8–1 μm) can improve the open-circuit voltage of the DSSCs by suppressing carrier recombination, but they also reduce the amount of dye loaded on the ZnO nanowires. To further investigate the effect of a TiO2 buffer layer on the performance of ZnO nanowire-based DSSCs, cells with a compact TiO2 layer (~50 nm thick) were compared with ZnO nanowire-based DSSCs without one: the open-circuit voltage and photovoltaic conversion efficiency were improved by 3.9–12.5% and 2.4–41.7%, respectively. This can be attributed to the compact TiO2 layer, prepared by sputtering, which effectively suppressed carrier recombination occurring across both the film–electrolyte interface and the substrate–electrolyte interface.
Introduction
Dye-sensitized solar cells (DSSCs) based on a dye-sensitized wide-band-gap nanocrystalline semiconductor (typically TiO 2 ) film have attracted widespread attention as a potential, cost-effective alternative to silicon solar cells since they were first introduced by O'Regan and Grätzel in 1991 [1]. As one of the key components of dye-sensitized solar cells, the photoelectrode, composed of nanocrystalline semiconductor materials accumulated on a transparent conducting glass, has a very important influence on the photovoltaic performance [2,3]. It is well known that the energy conversion efficiency of DSSCs depends on electron transport in the photoelectrode. Therefore, one-dimensional structures such as rods or wires of semiconductor materials can greatly improve DSSC efficiency by offering direct electrical pathways for photogenerated electrons, thus enhancing electron transport in the photoelectrode. Recently, considerable efforts have been devoted to the synthesis of such 1D materials for use as the photoelectrodes of DSSCs [4-7].
Among various emerging 1D nanomaterials, ZnO, a wide-band-gap (3.37 eV) semiconductor with a large exciton binding energy of 60 meV at room temperature, is a promising alternative to TiO 2 . This is because the band gap and the energetic positions of the valence band maximum and conduction band minimum of ZnO are very close to those of TiO 2 , and because the wurtzite structure of ZnO favors the formation of ordered 1D structures, which moreover offer better electron transport than TiO 2 [4]. Consequently, solar cells using nanowire arrays as photoelectrodes show a higher conversion efficiency than those using disordered ZnO films [4]. To further improve the cell efficiency, the effective approaches currently applied are to control the morphology of ZnO nanostructure films, which can significantly increase dye loading and light harvesting [8,9], and to modify the surface of ZnO nanostructure films, which can suppress carrier recombination [10]. However, the influence of a blocking layer introduced at the base of the ZnO films on the performance of ZnO DSSCs remains a matter of debate [10].
In this study, we report that ZnO nanowire films with high aspect ratios and different thicknesses of the ZnO buffer layer, which forms at the base of the nanowire films during growth, were prepared using different ZnO seed preparation methods. We also show that carrier recombination in ZnO nanowire-based dye-sensitized solar cells can be effectively suppressed, and the photovoltaic conversion efficiency enhanced, by introducing a TiO 2 buffer layer prepared by sputtering.
ZnO Nanowire Array Synthesis
ZnO nanowire arrays were made in aqueous solution using a two-step process described elsewhere [4]. To study the effect of a thin, compact TiO 2 film on the FTO substrate on the solar cell performance of the ZnO array films, the TiO 2 film was deposited at room temperature by reactive DC magnetron sputtering.
Preparation of ZnO Seeds on FTO Substrates
To study the effect of the ZnO crystal seed particles on the morphology and solar cell performance of the ZnO array films, the seeds were coated onto the FTO substrates using two different methods.
Hydrothermal Deposition
ZnO nanowire arrays were grown by placing the ZnO-seeded FTO substrates vertically in solutions containing 25 mM Zn(NO 3 ) 2 , 25 mM hexamethylenetetramine (HMT), and 7.3 mM polyethyleneimine (PEI) at 92.5°C. To maintain a constant nanowire array growth rate, the solutions were refreshed during the reaction period (solution turnover time 2.5 h). Subsequently, the substrates were washed with water/ethanol and annealed at 400°C for 30 min to remove any residual organics.
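As a worked example of the bath composition just described, the sketch below converts the stated molar concentrations into reagent masses for an assumed 0.5 L batch. The hydrate form of the zinc salt and the basis of the PEI molarity are not stated in this excerpt, so anhydrous Zn(NO3)2 and a per-monomer PEI basis are assumptions of this sketch.

```python
# Minimal sketch: back-of-the-envelope reagent masses for the growth bath
# (25 mM Zn(NO3)2, 25 mM HMT, 7.3 mM PEI) for an assumed 0.5 L batch.
# Assumptions not stated in the paper: anhydrous Zn(NO3)2, and PEI molarity
# counted per ethyleneimine monomer unit.

MOLAR_MASS = {            # g/mol
    "Zn(NO3)2 (anhydrous)": 189.4,
    "hexamethylenetetramine": 140.2,
    "PEI (per monomer unit)": 43.07,
}

recipe_mM = {
    "Zn(NO3)2 (anhydrous)": 25.0,
    "hexamethylenetetramine": 25.0,
    "PEI (per monomer unit)": 7.3,
}

volume_L = 0.5  # assumed batch size

for reagent, conc_mM in recipe_mM.items():
    grams = conc_mM / 1000.0 * volume_L * MOLAR_MASS[reagent]
    print(f"{reagent}: {grams:.2f} g in {volume_L} L for {conc_mM} mM")
```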
Cell Assembly
The resulting substrates were immersed in dry ethanol containing 0.3 mM of N719 for 40 min. To assemble the solar cells, a Pt-coated conducting glass was placed on the ZnO nanowire array films separated by a 50-μm-thick membrane spacer. The assembled cell was then clipped together as an open cell. An electrolyte, which was made with 0.1 M LiI (Aldrich), 0.1 M I 2 (Aldrich), 0.6 M dimethylpropylimidazolium iodide (DMPImI, Aldrich) and 0.5 M tert-butylpyridine (Aldrich) in dry acetonitrile (Aldrich), was injected into the open cell from the edges by capillarity.
Characterization
The morphology of the products was characterized with use of field-emission scanning electron microscopy (FE-SEM, Hitachi S-4800). XRD analysis was performed on a powder X-ray diffractometer (Rigaku D/max-2500 diffractometer using CuKα radiation, λ = 0.1542 nm, 40 kV, 100 mA). Photocurrent-voltage measurements were performed using simulated AM 1.5 sunlight with an output power of 100 mW cm -2 .
Effect of ZnO Crystal Seed Particles Prepared by Different Methods
In this study, we found that the different ZnO seed preparation methods strongly influenced the morphology and density of ZnO nanowire arrays, leading to the different performance of the DSSCs based on the ZnO nanowire films. Figure 1 shows the top-view and cross-sectional FESEM images of two samples prepared with 7.3 mM of PEI for 30 h on the FTO substrates with ZnO seed layers prepared by spin-coating and dip-coating. The mean values of the nanowire dimension, the array density and aspect ratio were estimated from a statistical evaluation of FE-SEM images and are summarized in Table 1. In order to avoid possible variations at their top, the diameters of the nanowires were measured slightly below the nanowire tip.
Although they had similar lengths, the diameter distributions of the well-aligned ZnO nanowires differed significantly between samples A and B. The nanowire arrays for samples A1, A2 and A3 had mean diameters ranging from 195 to 210 nm, compared with 120 to 150 nm for samples B1, B2 and B3. The densities of ZnO nanowires for samples A1, A2 and A3 were 2.1, 2.2 and 2.0 × 10^9 wires cm^-2, respectively, which are much higher than those of samples B1, B2 and B3 (1.5, 1.6 and 1.2 × 10^9 wires cm^-2). Different thicknesses of the ZnO buffer layer, which formed at the base of the nanowire films during growth, were obtained with the different ZnO seed preparation methods: ~0.8-1 μm for samples A1, A2 and A3, and ~300-500 nm for samples B1, B2 and B3. From these observations, we conclude that the high density of nanowires achieved is attributable to the larger number of ZnO seeds on the FTO surface produced by repeated spin-coating [11,12], although this seed preparation method results in a greater variation in nanowire diameter. The crystallinity of the grown ZnO nanowire arrays on the FTO substrate was investigated using X-ray diffraction. Because all the samples had very similar XRD patterns, only the XRD pattern of sample B1 is shown in Fig. 2. The diffraction peaks in the XRD pattern can be indexed to the wurtzite hexagonal structure (JCPDS card No. 36-1451).
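A small sketch of the aspect-ratio and density arithmetic summarized in Table 1 is given below. The per-sample nanowire lengths are not listed in this excerpt, so a representative length of about 10 μm is assumed, and the diameters and densities used are midpoints of the ranges quoted above.

```python
# Minimal sketch: recompute nanowire aspect ratios and compare array densities
# from the dimensions quoted in the text. Lengths are not listed per sample in
# this excerpt, so a representative ~10 um length is assumed for illustration.

samples = {
    # name: (mean diameter in nm, density in wires per cm^2)
    "A (spin-coated seeds)": (200.0, 2.1e9),   # diameters 195-210 nm
    "B (dip-coated seeds)":  (135.0, 1.5e9),   # diameters 120-150 nm
}

assumed_length_nm = 10_000.0  # ~10 um; an assumption of this sketch

for name, (diameter_nm, density) in samples.items():
    aspect_ratio = assumed_length_nm / diameter_nm
    print(f"{name}: aspect ratio ~{aspect_ratio:.0f}, density {density:.1e} wires/cm^2")
```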
With respect to the crystallographic orientation, the most intense ZnO peak corresponds to the (0002) plane, indicating a strong preferential orientation along the (0001) direction. This is consistent with the SEM observations (Fig. 1b, d, f, h, j and l). These results reveal that the nanowires, crystallized along the ZnO (0001) direction, were hexagonal prisms vertically aligned on the FTO substrate. The effect of the morphology and density of the ZnO nanowire arrays on DSSC performance was investigated, as shown in Fig. 3, and the parameters of the dye-sensitized solar cells based on the ZnO nanowire array films are listed in Table 2. Although the densities of the nanowire arrays for samples B1, B2 and B3 were lower than those of samples A1, A2 and A3, the short-circuit current density (I sc ) increased from 1.61-1.97 to 1.93-2.25 mA cm -2 , while the open-circuit voltage (V oc ) decreased from 0.57-0.58 to 0.54-0.56 V, when samples B1, B2 and B3 were used as the photoanode instead of samples A1, A2 and A3. The fill factor showed little change for all samples. The DSSCs based on samples B1, B2 and B3 demonstrated a higher energy conversion efficiency (η) of 0.66-0.73%, compared with 0.57-0.69% for the DSSCs based on samples A1, A2 and A3. The increase in I sc may be because the ZnO nanowires with the thin ZnO buffer layer (~300-500 nm) had higher aspect ratios (~55-79) than the ZnO nanowires with the ~0.8-1 μm ZnO buffer layer (~39-48), which increased the amount of dye loaded on the ZnO nanowires [13]. However, the ZnO buffer layer can act as a blocking layer that suppresses carrier recombination occurring across both the film-electrolyte interface and the substrate-electrolyte interface [10]. Therefore, the thin ZnO buffer layer suppresses carrier recombination less effectively than the thick ZnO buffer layer of sample A, resulting in a maximum loss of V oc of ~40 mV.
Effect of TiO 2 Blocking Layer
In this section, we investigated the influence of a dense, thin TiO 2 blocking layer (about 50 nm thick, as can be seen from Fig. 4h, j and l), prepared by sputtering underneath the ZnO nanowire array film, on carrier recombination in ZnO DSSCs. Figure 4 shows the top-view and cross-sectional FE-SEM images of ZnO nanowire arrays prepared with 7.3 mM of PEI for 40 h on the bare FTO substrate and on the TiO 2 -coated FTO substrate with ZnO seeds prepared by dip-coating. The mean values of the nanowire dimensions, array density and aspect ratio are summarized in Table 3. As shown in Fig. 4, fairly well-aligned nanowires, typically 160-170 nm wide with lengths of 10.6-11 μm, and 120-135 nm wide with lengths of 10.8-11.2 μm, grew on the bare and TiO 2 -coated FTO substrates, respectively. The six ZnO nanowire films had a similar ZnO buffer layer thickness (~500 nm). The densities of ZnO nanowires on the bare FTO substrate were 1.4-1.6 × 10^9 wires cm^-2, slightly higher than those on the TiO 2 -coated FTO substrate (1.1-1.5 × 10^9 wires cm^-2). The effect of the TiO 2 blocking layer on the photovoltaic performance of the DSSCs was investigated. The current-voltage characteristics of the DSSCs for the bare and TiO 2 -coated FTO substrates are shown in Fig. 5, and the cell parameters are summarized in Table 4. The DSSC derived from the nanowire arrays with the TiO 2 blocking layer exhibited considerably improved I sc and V oc compared with the DSSC without the TiO 2 blocking layer, whereas the trend in the fill factor was more complex. That is, I sc increased by 4.2-25.1%, from 3.31-3.77 to 3.72-4.14 mA cm -2 , and V oc increased by 3.9-12.5%, from 0.48-0.51 to 0.53-0.55 V. As a result, η was improved by 2.4-41.7%, from 0.72-0.84 to 0.86-1.02%, by introducing the TiO 2 blocking layer. The increase in I sc can be attributed to the increase in the aspect ratio of the nanowire arrays from about 65-67 without the TiO 2 blocking layer to 80-93 with it. V oc is known to depend strongly on the charge recombination reactions taking place at both the film-electrolyte interface and the substrate-electrolyte interface; a larger V oc can be achieved by suppressing those reactions [14]. Therefore, the increase in V oc indicates that the compact TiO 2 layer prepared by sputtering can effectively suppress charge recombination. This clearly shows an effective increase in η upon introduction of the TiO 2 blocking layer.
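The sketch below reproduces the arithmetic behind these figures: cell efficiency from η = (Jsc × Voc × FF)/Pin with Pin = 100 mW cm-2, plus the percentage improvements. The fill factor is not listed in this excerpt, so FF = 0.43 is an assumed value, and the pairing of the example Jsc/Voc values is likewise illustrative rather than taken from Table 4.

```python
# Minimal sketch of the arithmetic behind the quoted cell parameters:
# eta = (Jsc * Voc * FF) / Pin with Pin = 100 mW cm^-2 (AM 1.5), and the
# percentage improvements. FF = 0.43 and the Jsc/Voc pairing below are
# assumptions of this sketch, not values from Table 4.

P_IN = 100.0  # incident power density, mW cm^-2

def efficiency(jsc, voc, ff, p_in=P_IN):
    """Power conversion efficiency (fraction) from Jsc [mA cm^-2], Voc [V], FF."""
    return jsc * voc * ff / p_in

def percent_increase(old, new):
    return (new - old) / old * 100.0

bare = {"jsc": 3.77, "voc": 0.51, "ff": 0.43}  # without TiO2 blocking layer
tio2 = {"jsc": 4.14, "voc": 0.55, "ff": 0.43}  # with ~50 nm TiO2 blocking layer

eta_bare = efficiency(**bare)
eta_tio2 = efficiency(**tio2)
print(f"eta without TiO2 layer: {eta_bare:.2%}")
print(f"eta with TiO2 layer:    {eta_tio2:.2%}")
print(f"Jsc increase: {percent_increase(bare['jsc'], tio2['jsc']):.1f}%")
print(f"Voc increase: {percent_increase(bare['voc'], tio2['voc']):.1f}%")
print(f"eta increase: {percent_increase(eta_bare, eta_tio2):.1f}%")
```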
Conclusion
In summary, the work presented here shows that the different ZnO seed preparation methods strongly influenced the morphology and density of the ZnO nanowire arrays. The nanowire film grown from ZnO seeds prepared by dip-coating had a thin ZnO buffer layer (~300-500 nm thick), which suppresses carrier recombination less effectively than the thick ZnO buffer layer (~0.8-1 μm thick) obtained with spin-coating, resulting in a maximum loss of V oc of about 40 mV. To further investigate the effect of a TiO 2 buffer layer on the performance of ZnO nanowire-based DSSCs, a TiO 2 blocking layer (about 50 nm thick) was prepared on the FTO substrate, underneath the ZnO nanowire array film, by sputtering. The two ZnO nanowire films, with and without the compact TiO 2 buffer layer (~50 nm), had a similar ZnO buffer layer thickness (~300-500 nm) and were used to assemble DSSCs. By introducing the compact TiO 2 layer (~50 nm thick), the open-circuit voltage and photovoltaic conversion efficiency of the ZnO DSSCs were improved by 3.9-12.5% and 2.4-41.7%, respectively. This can be attributed to the compact TiO 2 layer effectively suppressing carrier recombination occurring across both the film-electrolyte interface and the substrate-electrolyte interface.
"Materials Science",
"Chemistry"
] |
Disseminated Fungal Infection and Fungemia Caused by Trichosporon asahii in a Captive Plumed Basilisk (Basiliscus plumifrons)
Trichosporon spp. are heavily arthroconidiating fungi that are widely distributed in nature. Owing to their similar fungal morphology, confusion among Trichosporon spp., Geotrichum spp., and Nannizziopsis spp. in reptiles is apparent and cannot be overlooked. Few reptile Trichosporon isolates have been examined using the newer speciation criteria, and information on Trichosporon asahii in reptiles remains scarce. In the present study, we report a case of disseminated fungal infection and fungemia caused by T. asahii in a captive plumed basilisk (Basiliscus plumifrons). Multiple 0.2–0.5 cm, irregularly shaped, ulcerative nodules on the left hind foot were observed. The animal died after failing to respond to treatment. Microscopic evaluation revealed a fungal infection that primarily affected the left hind foot and right lung lobe, with fungal embolisms in the lung and liver. The DNA sequences of the ITS regions and the D1/D2 gene from the fungal culture, and of the ITS regions from formalin-fixed paraffin-embedded (FFPE) lung tissue, completely matched those of T. asahii. The current report describes the first confirmed case of disseminated fungal infection and fungemia caused by T. asahii in a captive plumed basilisk.
The Trichosporon genus has undergone extensive revision, and the species T. beigelii has been replaced by several species [1,2,10]. However, only a few reptile Trichosporon isolates have been identified using the newer speciation criteria [3,9]. In a previous survey of 127 reptiles, 4 cutaneous Trichosporon isolates from 2 green iguanas (Iguana iguana), a blood python (Python curtus), and a ball python (Python regius) were identified as T. asahii, but no detailed information on the clinical presentation and pathological findings was provided [4]. In addition, due to the similar fungal morphology, confusion among Trichosporon spp., Geotrichum spp., and Nannizziopsis spp. is possible. In the present study, we report a case of disseminated fungal infection and fungemia caused by T. asahii in a captive plumed basilisk (Basiliscus plumifrons), which can provide information for a more accurate diagnosis of T. asahii infection.
Case
A captive, adult, male plumed basilisk was housed in Xpark (Taiwan Yokohama Hakkeijima Inc.). The lizard was kept in a 145 × 100 × 120 cm glass tank with soil and running water for 5 months. The lizard showed anorexia, and multiple 0.2-0.5 cm, irregularly shaped, ulcerative nodules were observed on the left hind foot (Figure 1). Dorsoventral (DV) vertical radiographic views of the left hind foot demonstrated that the nodules on the second to third phalanges of the second digit and the second phalanx of the third digit had increased opacity with bone lysis (Figure 2). The radiographic appearance of the right lung lobe revealed mild, irregularly shaped foci of increased opacity, which were interpreted as pulmonary nodules or an overlapping of the heart. No significant findings were detected in the other organ systems. Lateral horizontal radiographic views, which are usually recommended to evaluate the lung in reptiles, were not performed due to the animal's condition. Antibiotic treatment (ceftazidime: 5 mg/kg SC q48h for 7 days) was given, but the lizard did not respond to treatment and died. A complete necropsy with standardized procedures was performed. Gross examination revealed a diffusely consolidated right lung lobe with multiple variably sized grey-to-tan nodules. No significant findings were detected in the left lung lobe or other internal organs. Representative tissue samples, including the brain, foot, and pluck (heart; gastrointestinal tract; liver; pancreas; spleen; kidneys; and gonads), were collected, fixed in 10% neutral buffered formalin, and submitted to Pangolin International Biomedical Consultant Ltd. for pathological examination. Fresh tissue samples from the digital nodules were submitted for fungal culture with subsequent mycological analysis.
Fungal Species Identification
Fresh samples from the digital nodule were obtained and inoculated on inhibitory mold agar with chloramphenicol and gentamicin (ICG™) (Creative CMP ® , Taiwan) and Mycosel™ agar (BD Difco™, Detroit, MI, USA), and then incubated at 25°C. The fungi grown from the primary culture were used for subsequent molecular identification.
Polymerase chain reactions (PCRs) using primer sets targeting the internal transcribed spacer (ITS) regions and the partial 28S ribosomal DNA (D1/D2) gene were performed, with subsequent DNA sequencing [11,12]. In order to identify the fungal species affecting the lung, a PCR with subsequent DNA sequencing using formalin-fixed paraffin-embedded (FFPE) lung tissues was performed. DNA was extracted from 80 µm sections of tissue using the QIAamp DNA FFPE Tissue Kit (Qiagen, Valencia, CA, USA) and used as PCR templates [13]. Primer sets targeting the ITS regions of T. asahii (TAF: 5′-CGC ATC GAT GAA GAA CGC AG-3′ and TAR: 5′-GCG GGT AGT CCT ACC TGA TT-3′), designed by using Primer3 (http://www.ncbi.nlm.nih.gov/tools/primer-blast/, 1 September 2021), were used. The obtained DNA sequences were compared with DNA sequences available in GenBank using the Basic Local Alignment Search Tool (BLAST) server from the National Center for Biotechnology Information.
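For illustration, the sketch below computes the length, GC content, and a rough Wallace-rule melting temperature for the two primers quoted above. The Wallace rule (Tm = 2(A+T) + 4(G+C)) is only a crude approximation used here for orientation; it is not the design method reported by the authors.

```python
# Minimal sketch: GC content and a rough Wallace-rule melting temperature for
# the T. asahii ITS primers quoted in the text.

primers = {
    "TAF": "CGCATCGATGAAGAACGCAG",
    "TAR": "GCGGGTAGTCCTACCTGATT",
}

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def tm_wallace(seq):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C) degrees C (rough approximation only)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

for name, seq in primers.items():
    print(f"{name}: {len(seq)} nt, GC {gc_content(seq):.0%}, Tm ~{tm_wallace(seq)} C")
```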
Histopathology
In the left hind foot, the dermis and hypodermis were extensively expanded, with infiltration extending to the underlying phalangeal bones, by variably sized, discrete to coalescing granulomas composed of a central area of eosinophilic cellular debris admixed with mixed inflammatory cells, surrounded by epithelioid macrophages and multinucleated giant cells, and rimmed by concentric layers of fibrous connective tissue (Figure 3A). Large numbers of erythrocytes, necrotic cell debris, and mixed inflammatory cells, with varying numbers of fungal elements, multifocally filled the central lumen and faveolar spaces of the right lung lobe (Figure 3B). Small- to large-sized blood vessels in the lung and liver were occluded by thromboemboli composed of fibrin, necrotic cell debris, degenerate mixed inflammatory cells, and variable numbers of fungal hyphae and arthroconidia (Figure 3C-D). With PAS and JMS staining, variable numbers of fungal hyphae, which were PAS- and JMS-positive, branched, septate, and approximately 2 to 4.5 µm in diameter, with rare arthroconidia, were scattered throughout the granulomas and thromboemboli (Figure 3E-F). No infectious microorganisms were detected with the B&B Gram and ZN stains.
Fungal Identification
The DNA sequences of the ITS regions (accession number: OL468573) and the D1/D2 gene (accession number: OL468574) from the fungal culture, and of the ITS regions (accession number: OL441035) from the FFPE lung tissue, completely matched those of T. asahii.
Discussion
The microscopic evaluation of the submitted tissues revealed a disseminated fungal infection that primarily affected the left hind foot and right lung lobe, with fungal embolisms in the lung and liver. These findings indicate that the causative fungus is angioinvasive and causes disseminated lesions in multiple organs. Furthermore, molecular identification of the fungal species by PCR and DNA sequencing was performed on DNA extracted not only from the fungal colony isolated from the foot lesions but also from the FFPE lung tissue. The DNA sequencing results demonstrated that the fungi observed in the foot lesions and lung were T. asahii. Together, the histomorphology, PCR, and DNA sequencing results confirmed the final diagnosis of disseminated fungal infection and fungemia caused by T. asahii.
Dermatomycosis is a term referring to fungal infection of the skin and cutaneous adnexa, and the stages of dermatomycosis can be classified as superficial, deep, and disseminated, based on the invasiveness of the fungi [11,12]. Systemic mycosis can occur with or without concurrent dermatomycosis, and severe dermatomycosis can progress to systemic mycosis [3,14]. It is common for systemic mycosis to initially involve the lungs, where it is either confined to the lungs or disseminated to other organ systems [3,14]. The spreading of fungal elements to the visceral organs can occur hematogenously, or by direct infiltration through the serosa and coelomic membranes to the adjacent organs [3,14]. Pneumonia caused by fungal infection is relatively common in tortoises and crocodilians, but it has been reported in a variety of reptile species [3]. Although the contributing factors for establishing a fungal infection, including the thermotolerance of the fungi, host immune status, and host species, have been investigated in studies conducted in humans and plants, such factors are largely unexplored in reptiles [9,15]. Nevertheless, deep dermatomycosis and disseminated fungal infections have been reported in stressed and immunocompromised reptiles subjected to long-distance transportation and suboptimal environmental conditions [9,15].
In this case, although fungemia and systemic spreading were observed histopathologically, the primary location and mechanism by which T. asahii established the initial infection with subsequent systemic spreading remain undetermined. The possible mechanisms include, but are not limited to: (1) aspiration pneumonia in the right lung lobe with subsequent angioinvasion and spreading to the foot; (2) a fungal infection of the skin of the left hind foot with subsequent bone invasion, angioinvasion, and spreading to the lung; and (3) simultaneous fungal infection of the foot and lung with subsequent angioinvasion. Since only the right lung lobe was affected by T. asahii, and the lesion was more consistent with bronchopneumonia than with interstitial pneumonia, a diagnosis of aspiration pneumonia rather than a disseminated fungal infection affecting the lung is more likely. Aspiration pneumonia refers to pneumonia caused by the aspiration of foreign materials into the lungs through the airways [16]. The mechanism of fungal spreading from the lung to the foot in this case may be similar to that of feline lung-digit syndrome (FLDS), a syndrome describing the clinical progression of cutaneous metastases on the digits arising from a primary pulmonary carcinoma [17]. In that condition, multiple digits are commonly affected, which is consistent with our case. Therefore, although other possibilities cannot be completely excluded, aspiration pneumonia with subsequent systemic spreading is the most likely underlying mechanism in this case.
In addition to the gross and histopathological findings, radiographic examination can be worthwhile for identifying disseminated fungal infection in reptiles. In general, lizard lungs are more easily evaluated on lateral horizontal radiographic views [18]. The utility of the DV vertical view is usually limited by the overlapping of the lung with other coelomic organs (such as the heart and liver), but it is the only view that allows the two lung lobes to be evaluated individually [18]. Based on the experience from this case, DV vertical and lateral horizontal radiographic views of the lung and other internal organs may be considered if a fungal infection of the feet and digits is suspected.
In the present study, the current captive environment was recently established (i.e., a new facility), and the lizard was introduced from another facility. Several possible contributing factors can compromise immune function and lead to a disseminated fungal infection, including transportation, environmental change, and different housemates. As for the source of the current T. asahii, since T. asahii is widely distributed in the environment, the lizard in this study could have been infected in the new facility. However, it is also possible that the lizard was infected by T. asahii in the previous facility (an old facility) as a subclinical/latent infection, with subsequent development of the disseminated infection in the current facility.
In conclusion, the current report describes the first confirmed case of disseminated fungal infection and fungemia caused by T. asahii in a captive plumed basilisk and discusses the underlying mechanism of fungal spreading. This case report can provide information for the accurate diagnosis of a disseminated fungal infection, which is important to gain a better understanding of the pathogenesis of fungal infections and to improve the animal welfare of reptiles in captivity.
Institutional Review Board Statement: Ethical review and approval were waived for this case report, since all samples taken were used for the diagnostic/medical procedures.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the large file size and the lack of sustainable storage space.
"Biology"
] |
Plant HKT Channels: An Updated View on Structure, Function and Gene Regulation
HKT channels are a plant protein family involved in sodium (Na+) and potassium (K+) uptake and Na+-K+ homeostasis. Some HKTs underlie salt tolerance responses in plants, while others provide a mechanism to cope with short-term K+ shortage by allowing increased Na+ uptake under K+ starvation conditions. HKT channels form a functionally versatile family divided into two classes, based mainly on a sequence polymorphism in the selectivity filter of the first pore loop. Physiologically, most class I members function as sodium uniporters, and class II members as Na+/K+ symporters. Nevertheless, even within these two classes there is a high functional diversity that, to date, cannot be explained at the molecular level. The high complexity is also reflected at the regulatory level. HKT expression is modulated at the levels of transcription, translation, and protein functionality. Here, we summarize and discuss the structure and conservation of the HKT channel family from algae to angiosperms. We also outline the latest findings on gene expression and the regulation of HKT channels.
Introduction
More than 25 years ago, in 1994, the first HKT channel, TaHKT2;1, was cloned from wheat roots [1]. HKT proteins were initially characterized as high-affinity K+ (potassium) transporters, leading to the HKT acronym. It was soon demonstrated that HKT channels can also transport other cations, namely sodium (Na+) ([2]; for a concise synopsis see [3]).
HKTs are part of the Trk/Ktr/HKT family and share a characteristic structure made up of four transmembrane domain-pore domain-transmembrane domain units (MPM 1 -MPM 4 ) (Figure S1). In phylogenetic analyses, the HKT family splits into two subgroups: class I and class II [4-6]. Class assignment is generally defined by the selectivity filter sequence and physiological features. Four glycine residues distributed over the pore loops form the selectivity filter. Classes I and II are differentiated by the presence of either a serine residue (SGGG) or a glycine residue (GGGG) in the first pore loop, which mostly translates into class-specific differences in ion conduction [7]. However, exceptions to this classification have been found, questioning its accuracy [5,8-11].
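The class-assignment rule described above can be phrased as a simple lookup on the four filter residues; the sketch below does exactly that. As noted in the text, exceptions exist, so this is a first-pass heuristic rather than a definitive classifier.

```python
# Minimal sketch of the SGGG/GGGG class-assignment rule described in the text.
# Real HKT sequences include exceptions, so this is only a first-pass heuristic.

def hkt_class(filter_residues: str) -> str:
    """filter_residues: the four selectivity-filter residues in order P1-P4, e.g. 'SGGG'."""
    if len(filter_residues) != 4 or filter_residues[1:] != "GGG":
        return "atypical filter (manual inspection needed)"
    if filter_residues[0] == "S":
        return "class I (typically a Na+ uniporter)"
    if filter_residues[0] == "G":
        return "class II (typically a Na+/K+ symporter)"
    return "atypical filter (manual inspection needed)"

for filt in ("SGGG", "GGGG", "SGGA"):
    print(filt, "->", hkt_class(filt))
```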
Generally, class I HKTs (SGGG) are characterized as Na+ uniporters, while class II HKTs (GGGG) mediate Na+ and K+ symport. Class II proteins are also affected by the external ion composition, which has repeatedly been shown to modulate ion conduction.
HKT Sequence and Structure
To review and consolidate the vast existing functional and structural information on the HKT family, we first explore the protein similarity and divergence of HKT family members from an evolutionary perspective. This examination allows us to revisit the association between the degree of conservation of structural motifs involved in substrate selectivity and recognition and the functional plasticity observed in HKT-mediated ion transport.
Evolution of the HKT Family
HKT channel sequences have been identified in plant species from algae to angiosperms. While angiosperm HKTs are well studied, experimental data on evolutionarily older HKTs are scarce [5]. Figure 1 illustrates the phylogenetic relationship of HKT channels from nonflowering plants, gymnosperms, and angiosperms, classified according to the species tree of The Plant Genomics Resource Phytozome 12 [39]. The evolution of HKT proteins was mostly linear in non-seed-making plants until the family split into two subgroups in seed-making plants: class I and class II HKTs [6]. It has been hypothesized that class I and II HKT genes originated through gene duplication and divergence, and evolved separately [40]. Monocotyledons have representatives of both classes, while in dicotyledons only class I HKTs have been identified, and class II has not been established (Figure 1 and Figure S1). This may suggest that class II evolved exclusively in monocotyledons. Nevertheless, based on the currently available data, it cannot be conclusively ruled out that the separation of the two classes occurred before the separation of mono- and dicotyledons, with a subsequent loss of class II in dicotyledons. Interestingly, class II members seem closer, sequence-wise, to non-seed-making plant members, while class I HKTs show new sequence features (see below). In the following, we will use the HKT phylogeny and the underlying sequence and structural features to discuss published functional data and place it in a broader context. This comparison provides intriguing insights into the history of nonflowering plants, gymnosperms, and angiosperms.
Figure S1. While there was mostly a gradual evolution of HKTs until the occurrence of seed plants, in monocotyledon angiosperms they split into two sub-branches. Class II HKTs can only be found in monocotyledons, while class I HKTs can be found in both monocotyledons and dicotyledons.
Pore Domains Bear a High Degree of Conservation
Several structural features of HKT channels have been studied so far. These include conserved key residues forming part of the selectivity filter and pore domains, residues in the second transmembrane domain of unit 4 (U4M2), the intracellular loop between units 1 and 2, as well as an extracellular cation coordination site (for a summary, see [9,14,24]). Nevertheless, the HKT family is, functionally speaking, exceptionally diverse, and key functional differences are not yet understood at the molecular level. Therefore, the primary sequences of HKTs hold additional peculiarities worth investigating. Figure 2 illustrates a high degree of amino acid sequence conservation in the four pore domains of the distinct evolutionary groups. The pore domain of unit 4 (P4) appears to be the most conserved over time. This observation was also highlighted by Diatloff and colleagues while comparing HKT primary sequences with yeast Trk sequences [41]. Nine residues are conserved in more than 80% of the HKT sequences examined, defining the sequence motif EVxSAYGNVG (Figure 2A, All sequences, P4; Figure S2). Some deviations from this P4 motif are evident in monocotyledon and non-seed-making plant sequences (algae to ferns). In contrast to P4, the first three pore domains appear less conserved over time when all sequences are taken into account. P1 contains three residues (D[…]SA), P2 contains six (FxxFxxxSxFxNxG), and P3 contains three (F[…]RxxG) that are conserved in more than 80% of sequences. Additionally, monocotyledons, together with non-seed-making plants, show a higher sequence variety than the other groups.
Figure 2. Figure S2 and the phylogenetic tree in Figure S1 were generated by WebLogo 3.7.4 [42,43]. (A) Logos of the four pore domains P1-P4 for sequences of algae to ferns, gymnosperms, monocotyledons, dicotyledons, and all 73 sequences together. The illustration above the sequence logos represents an approximation of the secondary structure of each pore domain, indicating the pore helix and pore loop. Numbers in parentheses next to the names of evolutionary groups indicate the number of sequences used for the generation of the logos. Please note that the sequence logo for gymnosperms was generated with only three sequences. For the sequence logos based on all sequences, the percentage of the most frequent amino acid per position is indicated. Numbers above amino acids are percentages. The asterisk indicates residues present in 100% of the sequences. A quotation mark (") indicates that two amino acids appeared with the highest frequency in the alignment (e.g., 50" indicates that the two most frequent amino acids each appear in 50% of the sequences). S/G1, G2, G3, and G4 indicate the selectivity filter residues. A motif switch in P1, including the selectivity filter residue, is shaded, as are the selectivity filter residues in P2 to P4. Dashed boxes mark amino acids six, nine or ten, and thirteen positions upstream of the selectivity filter residues. Residues are coloured according to their chemical properties: polar residues (G, S, T, Y, C) in green, neutral residues (Q, N) in purple, basic residues (K, R, H) in blue, acidic residues (D, E) in red, and hydrophobic residues (A, V, L, I, P, W, F, M) in black. The number signs below the logos indicate positions for which mutations are available. Mutations are described in the following references: #1 [7], #2 [7], #3 [2], #4 [44], #5 [2], #6 [45], #7 [41], #8 [41]. (B) Sequence logos of the four pore regions specified for early angiosperms, class I, and class II HKTs. Highlights as in A.
However, defined positions relative to the selectivity filter residues are often highly conserved. Amino acids upstream of the selectivity filter at position -6, -9/-10, and, in part, -13 are conserved (dashed boxes in Figure 2). In three of the four units, a serine residue is highly conserved at the sixth position upstream of the selectivity filter residues. It is only in P3 that this position is not conserved, and, at least in seed-making plants, it is predominantly conserved as a valine residue. However, in angiosperms, the neighboring position -5 is conserved as an asparagine (N).
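The kind of per-position conservation calculation that underlies these statements (and the WebLogo figures) can be sketched as follows. The toy alignment is invented purely to show the computation; the actual figures were built from the 73 aligned HKT sequences.

```python
# Minimal sketch of a per-column conservation calculation of the kind that
# underlies the sequence logos discussed in the text. The toy alignment below
# is hypothetical and serves only to illustrate the computation.

from collections import Counter

toy_alignment = [   # hypothetical, equal-length pore-domain fragments
    "EVISAYGNVG",
    "EVLSAYGNVG",
    "EVVSAYGNVG",
    "EVISAYGSVG",
]

def conservation(alignment):
    """Return, per column, the most frequent residue and its frequency."""
    ncols = len(alignment[0])
    result = []
    for i in range(ncols):
        counts = Counter(seq[i] for seq in alignment)
        residue, n = counts.most_common(1)[0]
        result.append((i + 1, residue, n / len(alignment)))
    return result

for pos, residue, freq in conservation(toy_alignment):
    flag = "conserved (>80%)" if freq > 0.8 else ""
    print(f"pos {pos:2d}: {residue} {freq:.0%} {flag}")
```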
Similarly, the residue at the 9th (P3 and P4) or 10th (P1 and P2) position upstream of the selectivity filter is conserved as a phenylalanine (F) in the first three pore domains and as a glutamate (E) in the fourth pore domain. Interestingly, in P4, position -10 holds a conserved phenylalanine (F) in non-seed-making plants; this residue disappears in dicotyledon angiosperms but is conserved in clade II monocotyledons (Figure 2B). Position -13 upstream of the selectivity filter is highly conserved in P1 and P2. In P3, this position is somewhat conserved as a valine (V), similar to position -6 in P3. In P4, position -13 is mostly conserved as an asparagine (N), except in clade I monocotyledons, early angiosperms, and non-seed-making plants.
Substitution of some of the mentioned residues altered the conduction behavior. For example, expressing wild-type TaHKT2;1 in yeast produced Na+ toxicity and resulted in growth inhibition [2]. Using this yeast screening system, two TaHKT2;1 mutants that revoked the sensitivity of the yeast towards high-salinity conditions were identified. One was TaHKT2;1-A240V; residue A240 corresponds to the sixth position upstream of the selectivity filter glycine in P2. Exceptionally, TaHKT2;1 and HcHKT2;1 contain an alanine at this position instead of the otherwise conserved serine. Substitution of alanine by valine at this position (i.e., A240V) altered the functional characteristics of TaHKT2;1 [2]. As described for class II HKT channels, TaHKT2;1 transports Na+ and K+ ions. The A240V mutation resulted in reduced Na+ transport and increased K+ uptake. Mutation of position -5 upstream of the selectivity filter residue in P3 (TaHKT2;1-N365S) reduced Na+ transport at low and high Na+ concentrations, also indicating the importance of this position for ion conduction [45]. Mutations of residues at positions -9 (TaHKT2;1-E464Q) and -10 (TaHKT2;1-F463L) upstream of the selectivity filter in P4 were identified based on sequence analysis and characterized in yeast uptake experiments. Mutation of the highly conserved glutamate (E) at position -9 in P4, or of the phenylalanine (F) at position -10, which is conserved in clade II monocotyledons and evolutionarily older HKTs, resulted in reduced Na+ transport. TaHKT2;1-F463L additionally showed a reduced affinity for K+ transport [41].
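For bookkeeping of such mutants, the mutation labels used above can be parsed into structured records, as in the sketch below. The list of mutations is taken from the text; the parsing convention itself is an assumption of this sketch.

```python
# Minimal sketch: parse the mutation labels used in the text (e.g. 'TaHKT2;1-A240V')
# into small records so that wild-type residue, position, and substitution can be
# handled programmatically.

import re

MUT_RE = re.compile(r"^(?P<channel>.+)-(?P<wt>[A-Z])(?P<pos>\d+)(?P<mut>[A-Z])$")

mutations = [
    "TaHKT2;1-A240V",   # -6 of the P2 filter glycine; less Na+, more K+ uptake
    "TaHKT2;1-N365S",   # -5 in P3; reduced Na+ transport
    "TaHKT2;1-F463L",   # -10 in P4; reduced Na+ transport, lower K+ affinity
    "TaHKT2;1-E464Q",   # -9 in P4; reduced Na+ transport
]

for label in mutations:
    m = MUT_RE.match(label)
    if m is None:
        print(f"{label}: does not match the expected notation")
        continue
    d = m.groupdict()
    print(f"{d['channel']}: {d['wt']}{d['pos']} -> {d['mut']}")
```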
Altogether, these mutations demonstrate that the conserved positions mentioned above are key determinants of ion conduction characteristics. Structurally, in three-dimensional space, these residues are located in the pore helix, which sits behind the pore loop that lines the conduction pathway and contains the selectivity filter residues. Figure 3 illustrates their position relative to the selectivity filter residues. Most of the residues are oriented towards the pore loop, quite likely serving a stabilizing function. Interestingly, the residue at position -6 relative to the selectivity filter in P3 (a valine (V) in plant HKTs) is oriented away from the filter. However, the neighboring residue at position -5 (a conserved asparagine (N) in plant HKTs), whose substitution altered Na+ affinity, is instead oriented towards the pore loop (Figure 3B, orange and purple residues). The structural data are based on the crystal structures of KtrB from Bacillus subtilis and TrkH from Vibrio parahaemolyticus. Mutating residues behind the pore loop may directly affect the shape of the pore region above the selectivity filter, thereby impacting the transport characteristics of HKT channels, as demonstrated by the mutations described above. It will be interesting to see whether mutations of these potentially stabilizing residues can alter HKT selectivity and conduction characteristics, and whether this can be exploited to modulate HKT function and thereby improve Na+ and K+ homeostasis in plants. Also compelling to investigate is why the fourth pore domain, which has been shown to play a role in ion transport, exhibits a higher level of conservation than the other pore domains.
A Selectivity Filter Motif in the First Pore Region Changed
It has been established that the nature of the residue of the first pore loop forming the selectivity filter determines the cation selectivity of HKT channels [7,46]. Generally, a serine residue is associated with Na + -selective channels, while a glycine substitution at this position results in permeation of multiple cations. Residues in the P2 to P4 region of the selectivity filter are highly conserved throughout all evolutionary stages (G in P2: 100%, P3: 98%, P4: 97%, gray boxes in Figure 2A). In contrast, the serine/glycine substrate specificity determined by P1 at the selectivity filter experienced a change, where the Na + -selective configuration (SGGG) appears to have evolved later. Non-seed-making plants exclusively contain a glycine residue in P1, while a serine appears at this position for the first time with the emergence of gymnosperms (Figure 2A, P1). To our knowledge, no experimental data is available to document the transport properties of non-seed-making plant HKTs and whether or not this serine/glycine substitution translates into the same functional differences described for angiosperms. Monocotyledon sequences contain both configurations, while dicotyledons solely contain a serine residue. Sequence analysis reveals that the selectivity filter residue in P1 is part of a broader motif that evolved from TGL in non-seed-making plants to SSM in class I monocotyledons and dicotyledons. The highest variability in this motif is seen in class II monocotyledons, which may partially account for the high functional variety within this subgroup (Figure 2B, P1).
The functional importance of the residue at position 3 of the selectivity filter motif has been described in ion selectivity for AtHKT1;1 and TaHKT2;1, albeit weaker than the effect of mutations of the second position, which corresponds to the established selectivity filter residue [7]. It will be interesting to investigate whether substitutions in the first position residue of the motif result in any functional alterations.
Monocotyledons Are Sequence- and Function-Wise Versatile
Among angiosperms, monocotyledons demonstrate higher HKT sequence variability than dicotyledons. This pattern is not surprising since monocotyledons express two different classes of HKT proteins, while only class I HKTs have been identified in dicotyledons so far [4]. Separating monocotyledons into class I HKTs, class II HKTs, and early monocotyledons reveals class-specific sequence differences that are often highly conserved within a class (Figure 2B). In class II, the pore helix residues (approximate position indicated in Figure 2A) upstream of the selectivity filter residues are highly conserved. In contrast, in class I, these sequences show less conservation and are more diverse. This is exemplified by the class I residues D67, T73, and V75 (Y67, L73, and T75 in class II) in P1, and T699, F703, and T704 (L699, S703, and V704 in class II) in P2 (the numbering refers to the position in the multiple sequence alignment, Figure S1 and Figure S2). In P3 of class I HKTs, on the other hand, only two positions in the helical structure are highly conserved (F861 and N865), while the pore domain helix in class II HKTs is well conserved (NAxFMxV). In general, the pore domain's helical region shows a higher grade of conservation in class II HKTs compared to class I HKTs.
Also, the pore loops contain intriguing residues worth investigating, particularly around the selectivity filter. Similar to the SSM/TGL motif in P1 described above, amino acids in close proximity to the selectivity filter glycine residues are conserved (Figure 2). Some conserved residues are even class- or clade-specific. For example, in P3, a glutamate (E) is conserved next to the selectivity filter glycine only in seed-making plants. An asparagine (N) next to the same glutamate is highly conserved in class II HKTs of seed-making plants and, to some extent, in non-seed-making plants. Another class-specific conservation is a leucine (L) residue next to the selectivity filter glycine in P2 of class II HKTs, which is conserved as phenylalanine (F) in class I HKTs. Mutation of this leucine to a phenylalanine in wheat HKT (TaHKT2;1-L247F) resulted in reduced Na + currents and a reduced negative influence of high Na + concentrations on K + transport in yeast uptake experiments [2].
Two positions upstream of the selectivity filter glycine in P2 is an asparagine (N), which is conserved in monocotyledons and present in many, but not all, dicotyledons. Ali et al. mutated this residue in the Arabidopsis thaliana HKT1;1 to an aspartate (D) (AtHKT1;1-N211D), which is the residue present at this position in the dicotyledon halophyte Thellungiella salsuginea HKT1;2 (TsHKT1;2) [44]. TsHKT1;2 is classified as a class I transporter due to a serine in the first pore loop, although it showed K + uptake ability when expressed in yeast [10]. Interestingly, AtHKT1;1-N211D showed improved K + transport relative to Na + , and transgenic Arabidopsis plants expressing AtHKT1;1-N211D demonstrated increased tolerance towards saline conditions. It is compelling that monocotyledon HKTs contain a conserved asparagine at this position, as does AtHKT1;1, independent of the class and therefore of the functional properties (Na + uniport vs. Na + -K + symport).
The EVxSAYGNVG motif identified earlier in the pore domain of unit 4 is highly conserved in class I HKTs but only moderately conserved in the other two monocotyledon groups. Interestingly, an additional sequence motif (namely NxIF) appears to exist upstream of this motif and is only present in class II HKT sequences. The asparagine (N) in this motif is present in 74% of the HKT sequences examined and is conserved in dicotyledons and gymnosperms. The last amino acid of this motif, phenylalanine (F), is conserved in non-seed-making plants, gymnosperms, and class II HKTs, but diverges over time, becoming less frequent in class I HKTs and dicotyledons. Interestingly, this amino acid was mutated to leucine in a wheat class II channel (TaHKT2;1-F463L), as mentioned above [41]. The mutant showed not only altered Na + transport, as observed for most HKT mutants described so far, but additionally had a reduced affinity for K + in yeast uptake experiments, which is a known feature of class II HKTs.
Structure of the Second Transmembrane Segment in Unit 3
Plant HKT channels are homologues of bacterial and fungal Ktr and Trk channels [9,47]. The bacterial TrkH from Vibrio parahaemolyticus and KtrB from Bacillus subtilis have been functionally characterized to a great extent, and crystal structures are available for both proteins [48][49][50][51][52]. For the Trk/Ktr family, the presence and function of an internal loop in the center of the second transmembrane domain of unit 3 (U3M2) have been well-described and experimentally investigated [52]. This loop protrudes into the ion conduction pathway and can be pulled away from the pathway in response to conformational changes of regulatory proteins bound to Trk/Ktr on the intracellular side. Given these structural and conformational changes, it is hypothesized that this loop plays a role in channel gating [52,53].
In HKT proteins, this loop structure has not attracted much interest so far as it is generally considered to be absent. Sequence alignment and secondary structure predictions of selected HKT sequences (one per evolutionary group) indicate that HKT sequences indeed do not align with the internal loop of KtrB and TrkH (Figure 4). The sequence lengths between the first half of the second transmembrane domain of unit 3 (U3M2a) and the beginning of the first transmembrane domain of unit 4 (U4M1) are also shorter in HKT proteins relative to those in KtrB and TrkH (except for Mar-pol-HKT1, Phy-pat-HKT1, and Pic-abi-HKT2) ( Figure 4C).
Although these observations may suggest that the loop is absent in plant HKT proteins, the sequence between U3M2a and U4M1 is generally not predicted as a helix, or in the best of cases, the confidence for the helix prediction is low (Figure 4B). In general, the prediction confidence in this region is low regardless of which secondary structure is predicted. A low confidence value results from increased probabilities for more than one secondary structure. For instance, if a probability of 0.45 for coil, 0.4 for helix, and 0.15 for sheet is calculated, the final prediction will be coil but with a low confidence value, due to the almost equally high probability for helix at this position (secondary structure prediction server Porter5.0, Figure 4B). Therefore, the predominantly low confidence in the secondary structure prediction gives an ambiguous view of the structural organization of U3M2b in plant HKT channels.
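As an illustration of how such a confidence value can arise from competing class probabilities, the following minimal Python sketch mimics the example given above. It is hypothetical: Porter5.0's actual scoring formula is not reproduced here, and a simple margin between the two best classes is assumed as a proxy for the reported 0-9 score.

def predict_with_confidence(probs):
    """probs: mapping of the three states 'H', 'E', 'C' to probabilities (summing to ~1)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    (best_state, best_p), (_, second_p) = ranked[0], ranked[1]
    # 0 = ambiguous (two classes nearly tied), 9 = unambiguous prediction
    confidence = round(9 * (best_p - second_p))
    return best_state, confidence

# Example from the text: coil 0.45, helix 0.40, sheet 0.15
print(predict_with_confidence({"C": 0.45, "H": 0.40, "E": 0.15}))  # -> ('C', 0): coil, but with low confidence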
In conclusion, the secondary structure prediction does not reliably suggest that U3M2b forms a helix, nor that the internal loop is simply eliminated in plant HKT. Neither is U3M2b confidently predicted as a loop or coil. Instead, the structural organization of U3M2b in plant HKT channels appears ambiguous and deserves to be reconsidered and investigated.
Figure 4. Secondary structure of the second transmembrane-spanning segment of the third unit. Secondary structure analysis of sequences representing each phylogenetic group. Sequences were aligned with MUSCLE [54]. Sequences used in this analysis: Ara-tha-HKT1;1 (Arabidopsis thaliana, AT4G10310), Ory-sat-HKT1;1 and Ory-sat-HKT2;2 (Oryza sativa, Os04g51820 and Q93XI5), Mus-acu-HKT4 (Musa acuminata, GSMUA_Achr1T23660_001), Pic-abi-HKT2 (Picea abies, MA_54668g0010), Azo-fil-HKT2 (Azolla filiculoides, Azfi_s0027.g023651), Sel-moe-HKT1 (Selaginella moellendorffii, PACid_15414191), Phy-pat-HKT1 (Physcomitrella patens, Pp3c1_15810V3), Mar-pol-HKT1 (Marchantia polymorpha, Mapoly0009s0076) and Cha-bra-HKT1 (Chara braunii, g19013). Additionally, sequences of Vibrio parahaemolyticus (Vib-par-TrkH) and Bacillus subtilis (Bac-sub-KtrB) are included for reference. (A) Schematic representation of the Trk/Ktr/HKT family protein structure. Structures in dashed lines (unit 0 and the internal loop in unit 3) are only present in some members of the family. Unit 0 is found in bacterial Trk channels only, and the internal loop is found in Trk and Ktr channels but not in plant HKT. (B) The secondary structure was determined with Porter5.0, which was recently rated among the best-performing secondary structure prediction programs [55,56]. The prediction is based on three classes: helix structures represented by H in blue (DSSP classes H, G, I), strand structures represented by E in red (DSSP classes E, B), and coil and turn structures represented by C in yellow (DSSP classes S, T). Limits for transmembrane domains, loop and turn are approximated based on the secondary structure of TrkH and KtrB extracted from the respective crystal structures (TrkH: 6V4L, KtrB: 4J7C) using the visualization software VMD [57]. The secondary structure corresponds to the DSSP classes and colour code as described above, with turns (T) coloured in green and isolated bridges (B) in white. Below each prediction is the confidence score for the prediction, ranging from 0 (lowest confidence) to 9 (highest confidence). (C) Illustration of the sequence length between the end of the first half of the second transmembrane domain of unit 3 (U3M2a) and the beginning of the first transmembrane domain of unit 4 (U4M1). The length of the internal loop in TrkH and KtrB is marked in darker grey. Numbers next to the bars indicate the absolute length of the sequence between the indicated transmembrane domains.
HKT Gene Expression and Regulation
In addition to the functional diversity originating from structural variation, gene and protein regulation and localization of the transporter expression at particular developmental periods and specific plant tissues impart an additional layer of functional versatility. The following section summarizes the current literature regarding the regulation and localization of the expression of HKT family members in different plant species.
Gene Expression Regulation and Protein Localization
HKT channels are expressed in all plant parts, from roots to shoots and leaves and even flowers (Table 1). They are often, but not exclusively, found in vascular tissue, mainly the xylem parenchyma. With one exception, all HKT proteins are localized to the plasma membrane. OsHKT1;3 is the only family member described so far that localizes to the Golgi membrane (Rosas-Santiago 2015). The proposed role of the Na + -selective OsHKT1;3 is the transport of Na + into the cytoplasm, functioning as an alternative shunt conductance for proton pumps in the Golgi apparatus. HKT gene expression is often affected by stress conditions, such as high sodium or low potassium concentrations. However, there seems to be no general pattern (Table 1). High Na + concentration, for example, increases the gene expression of some HKT members, while it decreases the expression of others. Similarly, in some species, gene expression is upregulated in shoots and downregulated in roots, while the opposite effect is observed in other species. This shows that the expression and regulation of the HKT family are complex and equip plants with versatile mechanisms to react to stress situations.
Regulation of AtHKT1;1 Gene in Arabidopsis
While the function, structure, and evolutionary relationships of HKT proteins have been described, the molecular mechanisms regulating HKT expression are less well understood. Studies of Arabidopsis, rice, and other plant species have identified several transcription factors that modulate HKT gene expression (Table 2). The first transcription factor elucidated was AtbZIP24 (Arabidopsis thaliana basic leucine Zipper 24), which was shown to modulate AtHKT1;1 (A. thaliana High-Affinity K + Transporter 1;1) expression. Arabidopsis mutants with RNAi-mediated AtbZIP24 repression showed an increase in AtHKT1;1 transcript levels compared to wild-type Arabidopsis, indicating that AtbZIP24 functions as a negative regulator of AtHKT1;1 expression [79]. Subsequent studies identified three additional Arabidopsis transcription factors-ARR1 (Arabidopsis Response Regulator 1), ARR12 (Arabidopsis Response Regulator 12), and ABI4 (Abscisic Acid Insensitive 4)-that modulate AtHKT1;1 expression [80,81]. ARR1 and ARR12 were identified due to altered sodium accumulation phenotypes observed in Arabidopsis mutants of response regulators of the cytokinin signal transduction pathway. Additionally, ARR1 and ARR12 expression were rapidly induced by cytokinins. The expression of AtHKT1;1 was decreased by cytokinin treatment, while AtHKT1;1 expression was increased in the arr1 arr12 double mutant background. These observations indicated that ARR1 and ARR12 are negative regulators of AtHKT1;1 and that cytokinins have a role in salt responses in Arabidopsis, as cytokinins affected the expression of ARR1, ARR12, and AtHKT1;1 [80]. The Arabidopsis transcription factor ABI4 and the abscisic acid signal transduction pathway are also involved in HKT expression regulation. Arabidopsis abi4 mutants have increased salt tolerance due to higher AtHKT1;1 expression and lower levels of sodium ion accumulation. Overexpressing ABI4 resulted in salt hypersensitivity due to lower AtHKT1;1 expression and an increased accumulation of sodium ions. Additionally, ABI4 was shown to interact with the AtHKT1;1 promoter through in planta chromatin immunoprecipitation and electrophoresis mobility shift assays. These results indicated that ABI4 is a negative regulator of AtHKT1;1 expression and that abscisic acid is involved in salt responses in Arabidopsis [81].
OsMYB106 is another MYB transcription factor that also regulates an OsHKT gene. OsMYB106 interacts with OsBAG4 (O. sativa BCL-2-Associated Athanogene 4) and OsSUVH7 (O. sativa Suppressor of Variegation 3-9 Homolog 7) to form a complex that positively regulates OsHKT1;5 (O. sativa High-affinity K + Transporter 1;5) expression. The promoter of OsHKT1;5 was shown to be methylated at specific DNA sequence patterns (CHG and CHH, with H being either A, T, or C), indicating that methylation plays a role in transcriptional regulation. OsSUVH7, which functions as a DNA methylation reader, was found to bind to the promoter of OsHKT1;5 at CHG and CHH sites. OsBAG4, which functions as a chaperone regulator, was shown to bind OsSUVH7 directly. Finally, OsMYB106, OsBAG4, and OsSUVH7 formed a stable complex on the promoter of OsHKT1;5, increasing OsHKT1;5 expression, indicating that the complex is a positive regulator of OsHKT1;5 expression [83].
A final transcription factor that has been shown to positively modulate rice HKT gene expression is Osbhlh035, a modulator of OsHKT1;3 (O. sativa High-affinity K + Transporter 1;3) and OsHKT1;5. Rice osbhlh035 mutants cannot recover from salt stress treatment and have an overaccumulation of sodium ions in their shoot tissue. Additionally, the expression of OsHKT1;3 and OsHKT1;5 is reduced in the osbhlh035 mutant, compared to wild-type rice plants. These results indicated that Osbhlh035 positively regulates OsHKT1;3 and OsHKT1;5 gene expression [82].
Regulation of HKT Gene Expression in Other Plant Species
A poplar transcription factor, PalERF109 (P. alba Ethylene Response Factor 109), was identified as a regulator of PalHKT1 (P. alba High-affinity K + Transporter 1) expression. This transcription factor was identified due to its rapid increase in expression after salt treatment. Overexpression of PalERF109 resulted in increased salt tolerance and increased PalHKT1 expression. These results indicated that PalERF109 is a positive regulator of PalHKT1 expression and that ethylene is involved in salt responses in poplar [84].
Non-Transcription Factor-Mediated HKT Gene Regulation
The barley transporter HvHKT2;1 (H. vulgare High-affinity K + Transporter 2;1) was shown to have intron-retaining and exon-skipping variants in barley. These variants included HvHKT2;1-e (retained first exon region), HvHKT2;1-i1 (retained first intron), and HvHKT2;1-i2 (retained second intron). Salt treatments of barley resulted in a change in these variants' ratios, with a drastic increase in HvHKT2;1-i1 as salt stress increased. Additionally, the expression of HvHKT2;1-i (all introns retained) in the trk1 trk2 yeast (Saccharomyces cerevisiae) strain defective in K + uptake allowed for growth in media containing different concentrations of K + ions. These results indicated that different intron-retaining and exon-skipping HKT variants play a role in salt responses in barley [85].
Conclusions
The high functional variability of HKT channels and other transporters involved in Na + and K + homeostasis demonstrates the complexity and diversity of Na + detoxification and usage strategies. Further studies are necessary to fully decipher the different systems, especially their position and role within the cellular machinery. HKT proteins are regulated on many levels, including the transcriptional and translational levels, and directly at the protein level. Therefore, it is essential to continue to study gene and protein regulation to fully understand the range of influence of saline stress- and K + starvation-related effects and the role of HKT channels in these stresses.
Although the current trend is towards investigating and understanding gene expression regulation in response to environmental factors, the primary structure of HKT proteins still holds many interesting, unexplored features worth investigating, as they determine functionality. Some of these differences may account for or be related to the high functional diversity observed in this protein family, especially the class II subgroup. Deciphering novel structural and regulatory features will improve our understanding of how versatility is determined among HKT channels, which is crucial if we are to improve the salt tolerance of plants.
"Biology",
"Environmental Science"
] |
Dispersion-theoretical analysis of the electromagnetic form factors of the nucleon: Past, present and future
We review the dispersion-theoretical analysis of the electromagnetic form factors of the nucleon. We emphasize in particular the role of unitarity and analyticity in the construction of the isoscalar and isovector spectral functions. We present new results on the extraction of the nucleon radii, the electric and magnetic form factors and the extraction of $\omega$-meson couplings. All this is supplemented by a detailed calculation of the theoretical uncertainties, using bootstrap and Bayesian methods to pin down the statistical errors, while systematic errors are determined from variations of the spectral functions. We also discuss the physics of the time-like form factors and point out further issues to be addressed in this framework.
Introduction
Nucleons and electrons are the constituents of everyday matter with nucleons accounting for essentially all of its mass. The mass of the nucleon as a bound state of quarks and gluons, on the other hand, arises from the complicated strong interaction dynamics of quarks and gluons in Quantum Chromodynamics (QCD) [1]. The electromagnetic form factors of the nucleon describe the structure of the nucleon as seen by an electromagnetic probe. As such, they provide a window on strong interaction dynamics over a large range of momentum, for recent reviews see, e.g. Refs. [2,3]. Moreover, they are an important ingredient in the description of a wide range of observables ranging from the Lamb shift in atomic physics to the strangeness content of the nucleon [4,5]. At small momentum transfers, they are sensitive to the gross properties of the nucleon like the charge and magnetic moment as well as the radii. At large momentum transfer, in contrast, they probe the quark substructure of the nucleon as described by QCD.
A new twist was recently added to this picture by measurements of the proton charge radius in muonic hydrogen. The proton charge radius was first indirectly measured in the Nobel prize winning electron scattering experiments by Hofstadter et al. [6,7]. While electron scattering was the method of choice to refine the proton radius in the decades following these pioneering experiments, the Lamb shift in electronic hydrogen and muonic hydrogen is also sensitive to the proton radius [8]. The electronic Lamb shift measurements as well as most electron scattering experiments gave the so-called large radius, $r_E^p \simeq 0.88$ fm, which was also the value given by CODATA [9]. It then came as a true surprise to most researchers when the first measurement of the muonic hydrogen Lamb shift, which has a larger sensitivity to $r_E^p$ because of the much larger muon mass, led to the so-called small radius, $r_E^p = 0.84184(67)$ fm, differing by 5σ from the CODATA value [10]. At about the same time, a high-precision electron-proton scattering experiment performed at the Mainz Microtron (MAMI) reinforced the large radius [11]. This glaring discrepancy in such a fundamental quantity, which was believed to have been understood for a long time, became known as the "proton radius puzzle". It led to much experimental and theoretical activity dedicated to uncovering its cause, either physics beyond the standard model, or more mundane reasons, such as an underestimation of the experimental uncertainties. Recent experiments on the electronic Lamb shift [12,13,14] and a novel measurement of electron-proton scattering at unprecedentedly small momentum transfer [15] now point to the latter reason. With the exception of Ref. [13], all of these new determinations of $r_E^p$ consistently give a small proton radius. Consequently, the newest edition of the CODATA compilation lists the proton charge radius as $r_E^p = 0.8414(19)$ fm [16]. A short review of the current status is given in Ref. [17]. The important role of dispersion theory in solving this "puzzle" will be discussed below.
This paper is structured as follows: In Sec. 2, we briefly review earlier dispersion-theoretical analyses of the electromagnetic nucleon form factors. The complete formalism to perform such analyses is given in Sec. 3, where all basic definitions are given and the various contributions to the spectral functions, the central objects of the dispersive method, are discussed in detail. Furthermore, constraints on the nucleon form factors and two-photon corrections to the electron-proton scattering cross section are presented. Finally, we display in detail methods to determine the theoretical uncertainties, both the statistical and the systematic ones. In Sec. 4, we display the results of our new dispersion-theoretical analysis of the electromagnetic form factors in the space-like region, including novel determinations of the various radii, the form factors as well as the ω-meson couplings. Then, we consider the extension to the form factors in the time-like region and discuss the physics encoded in these. We end with a brief summary and an outlook in Sec. 5. In the appendices, we give further details on the extraction of neutron form factors from light nuclei as well as on the construction of the continuum contributions to the spectral functions. We also collect the various parameters of our best fit discussed in the main text.
Short history of dispersive analyses of the nucleon form factors
Here, we briefly review earlier work using dispersion theory to analyze the electromagnetic structure of the nucleon. To be more precise, we only consider investigations that explicitly include the two-pion continuum, which generates the ρ-meson in the isovector part of the spectral function in addition to a very important uncorrelated two-pion contribution, as first discussed by Frazer and Fulco [18,19,20]. For other work on dispersion relations applied to the nucleon electromagnetic form factors, we refer the reader to the review Ref. [21].
The first groundbreaking work was done by the Karlsruhe group in 1976 [22]. Here, electron-proton (ep) cross section data supplemented by neutron form factor data from elastic and quasi-elastic electron-deuteron scattering were fitted. Besides the two-pion continuum, the spectral functions contained the ω-meson plus additional isoscalar and isovector poles and normalization constants for the data sets. It should be noted that the ep data base was pruned in the sense that, in case of inconsistencies between data sets, only one was retained. A dozen fits with a varying number of vector meson poles and excluding various subsets of data were performed. The best fit (fit 8.2) featured 8 resonance parameters. Theoretical errors were estimated from the variations in the different fits. The resulting proton radii are tabulated in Tab. 1 and the neutron radii in Tab. 2. The neutron magnetic radius could not be determined very precisely at that time. Also notable were the sizable φNN couplings, where N denotes the nucleon, at odds with expectations from the OZI rule [23,24,25].
In 1995, the Bonn-Mainz group (MMD) rejuvenated the dispersion-theoretical approach to the nucleon electromagnetic form factors, as many new form factor results had become available and perturbative QCD had firmly established the behavior of the form factors at large momentum transfer [26]. Fits were performed to the existing form factor data basis of the Bochum group (updated from Refs. [27,28]). The two-pion continuum was still based on the Karlsruhe-Helsinki pion-nucleon (πN) partial wave amplitudes $f_\pm^1(t)$, but the ρ-ω mixing visible in the pion vector form factor was included for the first time. The best fits were obtained with three additional isovector poles and one additional isoscalar pole (besides the ω and the φ). It was found that the onset of perturbative QCD was not seen in these data, and the radii and vector meson couplings were consistent with the findings of the Karlsruhe group, see Tab. 1 and Tab. 2. Remarkably, these dispersive fits could not be made consistent with the then best existing value for $r_E^p$ from ep scattering, $r_E^p = (0.862 \pm 0.012)$ fm [29]. Further, the large deviation from the OZI rule of the φ couplings was confirmed. One year later, the sparse and not very precise existing data on the proton and neutron form factors in the time-like region were included, which revealed some inconsistencies in the time-like data basis for the neutron [30].
In view of new data on the proton and neutron form factors, in particular the first polarization transfer measurements at Jefferson Lab at a squared momentum transfer of a few GeV$^2$ [31,32], the MMD work was updated, with a particular emphasis on the magnetic radius of the proton and the neutron in [33]. In this work, no error analysis was performed.
A significant improvement of the dispersion relation (DR) analysis was performed in Ref. [34] (BHM). Not only was the data basis enlarged, but also the description of the isoscalar spectral function was improved by including the $\bar KK$ [35,36] and the πρ [37] continua. Furthermore, the 2π continuum was updated in view of new data for the pion vector form factor [38]. All data from the space-like and the time-like regions were included in the fit. In these fits, besides the mentioned continua, the isoscalar spectral function featured the ω, the φ and two poles, whereas the 2π continuum was supplemented by 5 effective isovector poles. The uncertainties were calculated by large-scale Monte Carlo samplings of all solutions with a $\chi^2$/dof in the range $[\chi^2_{\rm min}, \chi^2_{\rm min} + 1.04]$, where $\chi^2_{\rm min}$ refers to the best fit, corresponding to the 1σ confidence level in the p-value. Different from all earlier fits, the neutron charge radius squared was not included as a constraint. Nonetheless, the extracted value of $(r_E^n)^2$ came out consistent with determinations from low-energy atom-neutron scattering, see Tab. 2.
In that paper, it was stated that the then accepted proton charge radius determined from the Lamb shift in electronic hydrogen, $r_E^p = 0.88\ldots0.90$ fm, see Ref. [39] (and references therein), was inconsistent with the dispersion analysis of the electron scattering data, thus previewing what was later called the "proton radius puzzle". The various radii came out consistent with earlier DR determinations, see Tab. 1 and Tab. 2. The same spectral functions were also used to extract the strength of two-photon corrections from the difference of data obtained by Rosenbluth separation and direct polarization transfer measurements [40]. The so-determined two-photon corrections came out comparable to direct calculations available in the literature, such as Refs. [41,42,43].
The high-precision data with $Q^2 \le 1$ GeV$^2$ that emerged from MAMI-C in 2010 [11,44] called for a further update of the DR analysis. A first DR analysis in Ref. [45] utilized the same continua as BHM with the ω, the φ and three/five effective isoscalar/isovector poles. The fit was done to the reconstructed MAMI cross section data in the one-photon approximation and simultaneously to the neutron form factor data. The uncertainties were obtained by varying the continua within reasonable ranges, namely the 2π continuum by 5% and the $\bar KK$ and πρ continua by 20%. Again, a small proton charge radius, $r_E^p = 0.84$ fm, emerged, and the other radii also agreed with earlier DR determinations, very different from the values quoted in [11].
This work was further improved in various aspects in Ref. [46]. Here, only proton data were investigated, but two-photon corrections to the cross section were calculated and systematically applied to the MAMI-C data, overcoming some inconsistencies in older approaches to this problem. Furthermore, to extract the statistical error due to the fitting procedure, a bootstrap approach was implemented. The spectral function was the same as in [45], but in addition, normalization constants for the various data sets were included (in total 31 new parameters) and the $\chi^2$ definition was augmented by the correlation matrix. This method constituted an improvement over earlier error determinations. The uncertainties in the radii were somewhat increased compared to earlier determinations, see Tab. 1 and Tab. 2. The measured proton form factor ratio data for $Q^2 < 1$ GeV$^2$ [47,48] were not included in the fits but were well described.
The work of Ref. [46] was extended by including neutron space-like form factor data as well as the then existing data for the proton and the neutron in the time-like region in [49]. The emphasis of this work was to understand the structures seen by the BaBar collaboration [50] in the region from threshold up to the highest measured momentum transfers. These structures (and similar but less pronounced ones in $e^+e^- \to \bar nn$) could be explained by including a φ(2170) meson as well as the N∆ and ∆∆ thresholds.
A significant improvement of the isovector spectral functions was reported in Ref. [51], based on the high-precision analysis of pion-nucleon scattering in the framework of the so-called Roy-Steiner equations [52]. This work also featured a detailed investigation of the corresponding isospin-breaking effects in the pion form factor and the pion-nucleon P-wave amplitudes. The spectral functions given there serve as input for any DR analysis.
The most recent DR analysis in [53] was triggered by the PRad data [15], which measured ep cross sections at extreme forward angles corresponding to unprecedentedly small momentum transfers. In Ref. [53], fits to the PRad as well as to the combined PRad and MAMI-C data were performed. The best fit to the combined data featured 5 isoscalar and 5 isovector poles, while the PRad data could be well described with 2+2 poles only. Again, the low-$Q^2$ data for $\mu_p G_E^p/G_M^p$ were not included in the fit but could be well described. The error analysis was improved compared to earlier DR work: the bootstrap method was used to determine the "statistical" error, while the "systematic" error was obtained from varying the number of effective poles, e.g. in the combined analysis the range from (2+2) to (7+7) isoscalar + isovector poles was covered. This led to the very precise proton radii given in Tab. 1. It was pointed out that the statistical error in the PRad analysis is underestimated, consistent with the earlier findings of Ref. [54]. In Ref. [53], no uncertainties on the proton form factors were given and no neutron data were analyzed. In this review, we will fill this gap and present detailed results on these topics. Also, a Bayesian approach to calculate the statistical errors will be presented and compared to the bootstrap method.
It is remarkable how little the values of the nucleon em radii extracted with dispersion relations have fluctuated over time, despite a dramatic improvement in the data base and a number of theoretical improvements, related in particular to the isoscalar and isovector spectral functions and the calculation of the theoretical uncertainties. There has also been some related work in the so-called dispersively improved chiral perturbation theory, see [55,56,57,58]. The extracted proton charge radius is consistent with our result, but as noted in Ref. [59], this approach is subject to uncertainties in the ρ-region, different from the exact representation used in the papers discussed above.
We end this section by noting that the so-called strangeness form factors of the nucleon can also be calculated (under certain assumptions) using the DR results for the isoscalar vector mesons, see e.g. Refs. [60,61,62,36]. For more details on this interesting topic, see the reviews [4,5].
Definitions
The electromagnetic (em) structure of the nucleon is determined by the matrix element of the vector current operator for the light quarks $q = (u, d, s)^T$ with the charges $\mathcal{Q} = {\rm diag}(2, -1, -1)/3$ (in terms of the elementary charge), sandwiched between nucleon states as depicted in Fig. 1. Denoting a nucleon state with four-momentum p as $|p\rangle$ (for ease of notation, we do not display the corresponding spin or helicity index), with the help of Lorentz and gauge invariance and assuming CP invariance, this matrix element can be expressed in terms of two form factors, where m is the nucleon mass (which can be either the neutron, the proton or the isospin-averaged mass) and $t = (p' - p)^2$ the four-momentum transfer squared. For the analysis of data in the space-like region, it is convenient to use the variable $Q^2 = -t > 0$. The scalar functions $F_1(t)$ and $F_2(t)$ are the Dirac and Pauli form factors, respectively. They are normalized at t = 0, with $\kappa_p = 1.793$ and $\kappa_n = -1.913$ the anomalous magnetic moments of the proton and the neutron, respectively, in units of the nuclear magneton, $\mu_N = e/(2m_p)$. The magnetic moment of the proton and the neutron is thus given by $\mu_p = 1 + \kappa_p$ and $\mu_n = \kappa_n$, respectively. For the theoretical analysis, it is often convenient to work in the isospin basis and to decompose the form factors into isoscalar (s) and isovector (v) parts, where i = 1, 2. The experimental data are usually given in terms of the Sachs form factors, where $\tau = -t/(4m^2)$. In the Breit frame, $G_E$ and $G_M$ may be interpreted as the Fourier transforms of the charge and magnetization distributions, respectively. The nucleon radii are defined via the low-t expansion of the form factors, where F(t) is a generic form factor. In the case of the electric and Dirac form factors of the neutron, $G_E^n$ and $F_1^n$, the expansion starts with the term linear in t and the normalization factor F(0) is dropped. Note that the slopes of $G_E^n$ and $F_1^n$ are related, with $m_n$ the neutron mass. We remark that alternative information on the proton charge radius can be obtained from Lamb shift measurements in electronic as well as muonic hydrogen, see e.g. the reviews [63,64].
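For the reader's convenience, the standard relations that the definitions above refer to can be summarized as follows. These are the textbook conventions consistent with the normalizations and with $\tau = -t/(4m^2)$ as quoted above; the labeling of the paper's own equations is not reproduced here:
\begin{align}
F_1^p(0) &= 1, \quad F_1^n(0) = 0, \quad F_2^p(0) = \kappa_p, \quad F_2^n(0) = \kappa_n, \\
F_i^{s,v}(t) &= \tfrac{1}{2}\left[F_i^p(t) \pm F_i^n(t)\right], \quad i = 1,2, \\
G_E(t) &= F_1(t) - \tau\,F_2(t), \qquad G_M(t) = F_1(t) + F_2(t), \\
F(t) &= F(0)\left[1 + \frac{t}{6}\langle r^2\rangle + \ldots\right], \qquad
\langle r^2\rangle = \frac{6}{F(0)}\left.\frac{{\rm d}F(t)}{{\rm d}t}\right|_{t=0}, \\
\left.\frac{{\rm d}G_E^n(Q^2)}{{\rm d}Q^2}\right|_{Q^2=0} &= \left.\frac{{\rm d}F_1^n(Q^2)}{{\rm d}Q^2}\right|_{Q^2=0} - \frac{\kappa_n}{4m_n^2}\,.
\end{align}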
In the space-like region with t < 0, the form factors are real-valued quantities. This is different in the time-like region with t > 0. By their very definition, at the nucleon-antinucleon threshold, $t_{\rm thr} = 4m^2$, they fulfill the relation $G_E(4m^2) = G_M(4m^2)$ for both the proton and the neutron. In the physical region $t > 4m^2$, the FFs are complex-valued quantities. In Fig. 2, we sketch an exemplary form factor (here: $G_M^p(t)$) for all values of t. More precisely, the modulus of the form factor is depicted. For the space-like region, the threshold is located at t = 0, whereas the corresponding threshold in the time-like region is $t = 4m^2$. In between these two thresholds, the various vector meson poles (plus continua) build up the spectral function to be discussed in detail below. This region cannot be observed. We note that for the form factors in the time-like region, an additional complication arises due to the strong near-threshold nucleon-antinucleon interactions, which will be considered in Sec. 4.4. The colored area between the two dashed lines at t = 0 and $t = 4m^2$ in Fig. 2 is the unphysical region where the form factor cannot be observed [30].
Elementary cross section and polarization transfer
The form factors (FFs) cannot be measured directly but are encoded in observables related to electron scattering. Consider for definiteness electron-proton (ep) scattering, where the four-momenta $p_i$ are subject to the constraint $p_1 + p_2 = p_3 + p_4$. At first order in the electromagnetic fine-structure constant α, the Born approximation, the differential cross section can be expressed through the Sachs FFs, where ε is the virtual photon polarization, θ is the electron scattering angle in the laboratory frame, and $({\rm d}\sigma/{\rm d}\Omega)_{\rm Mott}$ is the Mott cross section, which corresponds to scattering off a point-like particle, with $E_1$ ($E_3$) the energy of the incoming (outgoing) electron. Two quantities out of the energies, momenta and angles suffice to determine this cross section and are related for such an elastic process, specifically in the laboratory frame with the initial nucleon at rest and neglecting the electron mass. In experiment, the differential cross section is usually given for a fixed total energy as a function of the scattering angle, so that a small scattering angle corresponds to a small momentum transfer. This is exactly the reason why a precise determination of the em radii is so difficult. At large momentum transfer, the contribution from the magnetic FF dominates the cross section. The contributions from the electric and the magnetic form factor can be read off from the reduced cross section $\sigma_R$ defined in Eq. (11). The reduced cross section $\sigma_R$ depends linearly on ε for a given $Q^2$, with slope $G_E^2(Q^2)$ and intercept $\tau G_M^2(Q^2)$. This is called the Rosenbluth separation [65]. Two-photon corrections to this cross section will be discussed in Sect. 3.11. Also, to investigate the neutron FFs, one measures electron scattering off a light nucleus like deuterium or $^3$He. This requires, however, some accurate few-body technique to disentangle the neutron contribution from the scattering cross section, as discussed briefly in App. A.
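Assuming the standard one-photon-exchange (Rosenbluth) expressions, which is what Eq. (11) and the surrounding definitions describe, the relations read (the exact arrangement in the paper's own equations may differ):
\begin{align}
\frac{{\rm d}\sigma}{{\rm d}\Omega} &= \left(\frac{{\rm d}\sigma}{{\rm d}\Omega}\right)_{\rm Mott}
\frac{\varepsilon\,G_E^2(Q^2) + \tau\,G_M^2(Q^2)}{\varepsilon\,(1+\tau)}\,, \qquad
\varepsilon = \left[1 + 2(1+\tau)\tan^2\frac{\theta}{2}\right]^{-1}, \quad \tau = \frac{Q^2}{4m^2}\,, \\
\left(\frac{{\rm d}\sigma}{{\rm d}\Omega}\right)_{\rm Mott} &= \frac{\alpha^2\cos^2(\theta/2)}{4E_1^2\sin^4(\theta/2)}\,\frac{E_3}{E_1}\,, \qquad
E_3 = \frac{E_1}{1 + (2E_1/m)\sin^2(\theta/2)}\,, \qquad Q^2 = 4E_1E_3\sin^2\frac{\theta}{2}\,, \\
\sigma_R &\equiv \varepsilon\,(1+\tau)\,\frac{{\rm d}\sigma/{\rm d}\Omega}{({\rm d}\sigma/{\rm d}\Omega)_{\rm Mott}} = \varepsilon\,G_E^2(Q^2) + \tau\,G_M^2(Q^2)\,.
\end{align}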
In early ep scattering experiments, it was found that the form factors could be well approximated by the dipole form, $G_{\rm dip}(Q^2)$, with $G_E^n(Q^2) = 0$ in this approximation. Employing these dipole FFs in the integrated cross section Eq. (11) defines the so-called dipole cross section, $\sigma_{\rm dip}$. Often, the form factors or the measured cross sections are given relative to $G_{\rm dip}(Q^2)$ and $\sigma_{\rm dip}$, respectively.
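The dipole parameterization referred to here is the standard one, with the conventional dipole mass squared of 0.71 GeV$^2$:
\[
G_{\rm dip}(Q^2) = \left(1 + \frac{Q^2}{0.71\,{\rm GeV}^2}\right)^{-2}, \qquad
G_E^p \approx G_{\rm dip}, \quad G_M^p \approx \mu_p\,G_{\rm dip}, \quad G_M^n \approx \mu_n\,G_{\rm dip}, \quad G_E^n \approx 0\,.
\]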
A method to directly measure the form factor ratio $G_E/G_M$ in polarized electron scattering off the proton, $\vec{e}\,p \to e\,\vec{p}$ (or similarly for scattering off the deuteron or $^3$He), has been proposed in Refs. [66,67]. A simultaneous measurement of the two recoil polarizations (longitudinal, $P_l$, and transverse, $P_t$) allows one to measure directly the ratio $G_E/G_M$. While this only determines the form factor ratio (and not the individual FFs), many systematic uncertainties cancel out, which makes this observable an important benchmark for any theoretical form factor calculation. Let us briefly discuss the determination of the form factors in the time-like region. They can be extracted from the cross section data for $e^+e^- \leftrightarrow \bar pp$ and $e^+e^- \to \bar nn$ for the proton and the neutron, respectively. As only very few differential cross section data exist in the time-like region, a separation of $G_E$ and $G_M$ is often not possible and one either makes an assumption like e.g. $G_E = G_M$ in the analysis of the data or one extracts the effective form factor $|G_{\rm eff}|$, discussed below. For a review on the nucleon em form factors in the time-like region, see Ref. [2].
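In the notation used above, the recoil-polarization relation takes the standard form (sign conventions for $P_t$ vary between references):
\[
\frac{G_E}{G_M} = -\frac{P_t}{P_l}\,\frac{E_1 + E_3}{2m}\,\tan\frac{\theta}{2}\,.
\]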
We now consider the process $e^+e^- \to \bar pp$ in more detail. It is convenient to choose the center-of-mass (CM) frame, i.e., $p_{1,2} = (E, \pm \mathbf{k}_e)$ and $p_{3,4} = (E, \pm \mathbf{k}_p)$.
The photon momentum q then determines the center-of-mass energy by $q^2 = (p_1 + p_2)^2 = t = E_{\rm CM}^2 = (2E)^2$. In the metric used here, time-like q implies positive $q^2$. The three-momenta $\mathbf{k}_e$, $\mathbf{k}_p$ appear in the phase-space factor $\beta = k_p/k_e$, which in the limit of neglecting the electron mass yields the velocity of the proton, and $m_p$ is the proton mass. We denote the emission angle of the proton by θ. The differential cross section in the one-photon-exchange approximation is then given in terms of the electric and magnetic Sachs form factors, and $C(q^2)$ is the Sommerfeld-Gamow factor that accounts for the Coulomb interaction between the final-state particles. Integrating over the full angular distribution gives the total cross section, which defines the effective form factor $G_{\rm eff}$. For neutrons, the formulas are equivalent except for the Sommerfeld-Gamow factor, which is not present in that case. Beyond the Coulomb final-state interactions, higher-order QED corrections are usually neglected. For the time-reversed process, the phase space factor is inverted. Taking into account the angular dependence of $\bar pp$ production, one can express the differential cross section via the angular asymmetry A, see Ref. [68], which can be determined from the FF ratio $R = |G_E/G_M|$. The dispersion integral is calculated for the space-like value of t also indicated by a cross in Fig. 3.
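For orientation, the standard one-photon-exchange expressions for $e^+e^- \to \bar pp$ in the CM frame, which the text above refers to, are summarized below; the paper's own equations may organize the prefactors differently, e.g. in how $C(q^2)$ is attached:
\begin{align}
\frac{{\rm d}\sigma}{{\rm d}\Omega} &= \frac{\alpha^2\beta\,C(q^2)}{4q^2}\left[(1+\cos^2\theta)\,|G_M(q^2)|^2 + \frac{4m_p^2}{q^2}\sin^2\theta\,|G_E(q^2)|^2\right], \\
\sigma &= \frac{4\pi\alpha^2\beta\,C(q^2)}{3q^2}\left[|G_M(q^2)|^2 + \frac{2m_p^2}{q^2}|G_E(q^2)|^2\right], \qquad
|G_{\rm eff}|^2 = \frac{|G_M|^2 + \frac{2m_p^2}{q^2}|G_E|^2}{1 + \frac{2m_p^2}{q^2}}\,, \\
\frac{{\rm d}\sigma}{{\rm d}\Omega} &\propto 1 + A\cos^2\theta\,, \qquad
A = \frac{q^2|G_M|^2 - 4m_p^2|G_E|^2}{q^2|G_M|^2 + 4m_p^2|G_E|^2} = \frac{\tau - R^2}{\tau + R^2}\,, \quad R = \left|\frac{G_E}{G_M}\right|, \ \ \tau = \frac{q^2}{4m_p^2}\,.
\end{align}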
Dispersion relations and spectral decomposition
Dispersion relations (DRs) are based on unitarity and analyticity. Here, DRs relate the real and imaginary parts of the electromagnetic nucleon form factors. Let F(t) be a generic symbol for any one of the four independent nucleon form factors. We write down an unsubtracted dispersion relation of the form of Eq. (26), where $t_0$ is the threshold of the lowest cut of F(t) (see below) and the $i\epsilon$ defines the integral for values of t on the cut. The convergence of an unsubtracted dispersion relation for the form factors has been assumed. For proofs of such a representation in perturbation theory, see Ref. [69] (and references therein). One could also use a once-subtracted dispersion relation, since the normalization of the form factors at t = 0 is known. However, in what follows, we will only employ the unsubtracted form given in Eq. (26). Most importantly, by Eq. (26) the electromagnetic structure of the nucleon can be related to its absorptive behavior. In Fig. 3 we display the analytic structure underlying the dispersion integral in Eq. (26). The various ingredients (continuum cuts, vector meson poles) will be discussed in detail below. The imaginary part Im F entering Eq. (26) can be obtained from a spectral decomposition [70,71]. For this purpose, consider the electromagnetic current matrix element in the time-like region (t > 0), which is related to the space-like region (t < 0) via crossing symmetry. This matrix element is given in terms of the momenta p and $\bar p$ of the nucleon and antinucleon created by the current $j_\mu^{\rm em}$, respectively. The four-momentum transfer squared in the time-like region is $t = (p_3 + p_4)^2$.
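The unsubtracted dispersion relation referred to as Eq. (26) is of the standard form
\[
F(t) = \frac{1}{\pi}\int_{t_0}^{\infty} \frac{{\rm Im}\,F(t')}{t' - t - i\epsilon}\,{\rm d}t'\,,
\]
which is what is meant by the statement that the $i\epsilon$ defines the integral for values of t on the cut.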
Using the LSZ reduction formalism, the imaginary part of the form factors is obtained by inserting a complete set of intermediate states as in Eq. (28) [70,71], where N is a nucleon spinor normalization factor, Z is the nucleon wave function renormalization, and $\bar J_N(x) = J_N^\dagger(x)\gamma^0$, with $J_N(x)$ a nucleon source. This decomposition is illustrated in Fig. 4. It relates the spectral function to on-shell matrix elements of other processes, as detailed below.
The states $|\lambda\rangle$ are asymptotic (observable) states of momentum $p_\lambda$. They carry the same quantum numbers as the current $j_\mu^{\rm em}$: $I^G(J^{PC}) = 0^-(1^{--})$ for the isoscalar component and $I^G(J^{PC}) = 1^+(1^{--})$ for the isovector component (29) of the current $j_\mu^{\rm em}$. Here, I and J denote the isospin I = 0, 1 and the angular momentum J = 1 of the photon, whereas G, P and C give the G-parity, parity and charge conjugation quantum numbers, respectively. Furthermore, these currents have zero net baryon number. Because of G-parity, states with an odd number of pions only contribute to the isoscalar part, while states with an even number contribute to the isovector part. For the isoscalar part the lowest mass states are 3π, 5π, ..., $\bar KK$, $\bar KK\pi$, ... (30), and for the isovector part they are 2π, 4π, ... .
Associated with each intermediate state is a cut starting at the corresponding threshold in t and running to infinity. As a consequence, the spectral function Im F(t) is different from zero along the cut from $t_0$ to ∞, with $t_0 = 4\,(9)\,M_\pi^2$ for the isovector (isoscalar) case.
Fig. 5. Two-pion cut contribution to the isovector form factors, given in terms of the pion vector form factor $F_\pi^V$ (represented by A) and the $\pi\pi \to \bar NN$ P-waves $f_\pm^1$ (represented by B). Solid, dashed, and wiggly lines denote nucleons, pions, and the external photon, respectively, while the dash-dotted line indicates the cutting of particle propagators.
The spectral functions are the central quantities in the dispersion-theoretical approach. Using Eqs. (27,28), they
can in principle be constructed from experimental data. In practice, this program can only be carried out for the lightest two-particle intermediate states.
The longest-range, and therefore at low momentum transfer most important, continuum contribution comes from the 2π intermediate state, which contributes to the isovector form factors [72]. A novel and very precise calculation of this contribution has recently been performed in Ref. [51], including the state-of-the-art pion-nucleon scattering amplitudes from dispersion theory, as detailed below. In the isoscalar channel, the inclusion of the $\bar KK$ [35,36] and ρπ continua [37] was first introduced in Ref. [34] in the dispersive analysis of the em form factors. These important ingredients are discussed in more detail below. Apart from the continua, there are also single vector-meson pole contributions. As will become clear in the following, the contributions from the continua and the poles are sometimes strongly intertwined, e.g. the ρ-meson pole is indeed generated as part of the 2π continuum, as has been known for a long time [18,19,20].
Two-pion continuum
In the isovector channel, the lowest continuum contribution is given by two-pion exchange as depicted in Fig. 5. Therefore, the unitarity relations for the nucleon form factors read [20], where $F_\pi^V(t)$ is the vector form factor of the pion and the $f_\pm^1(t)$ are the P-wave πN partial waves in the t-channel. Watson's theorem ensures that the left-hand side of the equations stays real, as long as the same ππ phase shift is used in the calculation of the pion form factor and the $\pi\pi \to \bar NN$ partial waves. Therefore, in the most recent determination of the two-pion continuum, the same three variants of the phase shift $\delta_1^1$ were used in the data fits for $F_\pi^V(t)$ as in the Roy-Steiner analysis of pion-nucleon scattering [52]. The full consistency among all ingredients entering the unitarity relation that was achieved in Ref. [51] was a key improvement over earlier calculations, and thus the representation of the two-pion continuum given there will be discussed in what follows.
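Schematically, and only as a sketch of the structure rather than the paper's exact normalization, the two-pion unitarity relations denoted Eq. (32) have the form
\[
{\rm Im}\,G_E^v(t) = c_E(t)\,q_t^3\,\left[F_\pi^V(t)\right]^*\,f_+^1(t)\,, \qquad
{\rm Im}\,G_M^v(t) = c_M(t)\,q_t^3\,\left[F_\pi^V(t)\right]^*\,f_-^1(t)\,, \qquad
q_t = \sqrt{t/4 - M_\pi^2}\,,
\]
valid on the cut $t \ge 4M_\pi^2$. The kinematic prefactors $c_{E,M}(t)$ depend on the normalization conventions chosen for the $\pi\pi \to \bar NN$ partial waves and are therefore not reproduced here; the essential point is that the combination $[F_\pi^V]^* f_\pm^1$ is real below the 4π threshold by Watson's theorem.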
It is important to discuss the range of validity of the 2π approximation to the unitarity relation. Strictly speaking, the 4π threshold opens at $\sqrt{t} = 4M_\pi \simeq 0.56$ GeV, but it is well known from phenomenology that the 4π contribution is completely negligible below the ωπ threshold at $\sqrt{t} \simeq 0.92$ GeV, see e.g. [73], and only becomes sizable once the ρ′, ρ′′ resonances are excited. This can also be understood from chiral perturbation theory, where the 4π contribution first appears at three-loop order [74]. For this reason, the two-pion cut gives a reliable approximation to the isovector spectral functions in this low-mass region. Let us now discuss in more detail the various ingredients entering Eqs. (32). We start with the vector (em) form factor of the pion. The ππ intermediate states produce the unitarity relation with the ππ P-wave phase shift $\delta_1^1$. Eq. (34) reflects Watson's final-state theorem [75], which states that the phase of $F_\pi^V$ has to coincide with the ππ scattering phase shift (up to integer multiples of π). Neglecting higher intermediate states, unitarity determines $F_\pi^V(t)$ up to a polynomial P(t) in terms of the Omnès factor $\Omega_1^1$.
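The elastic unitarity (Watson) relation and the Omnès representation referred to as Eqs. (34) and (35) have the standard form; possible refinements used in Ref. [51], such as isospin-breaking or inelastic corrections, are not shown here:
\begin{align}
{\rm Im}\,F_\pi^V(t) &= F_\pi^V(t)\,\sin\delta_1^1(t)\,e^{-i\delta_1^1(t)}\,\theta(t - 4M_\pi^2)\,, \\
F_\pi^V(t) &= P(t)\,\Omega_1^1(t)\,, \qquad
\Omega_1^1(t) = \exp\left[\frac{t}{\pi}\int_{4M_\pi^2}^{\infty}{\rm d}t'\,\frac{\delta_1^1(t')}{t'\,(t'-t-i\epsilon)}\right].
\end{align}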
In fact, the representation of Eq. (35) provides a very efficient and accurate parameterization of the experimental data, up to the distortions due to ρ-ω mixing. This isospin-violating effect can be included via a modification of $F_\pi^V(t)$ that involves the ω mass $M_\omega$ and width $\Gamma_\omega$. The parameters α and ε, where ε parameterizes the strength of the ω-ρ mixing, are fit to recent form factor data below $\sqrt{t} = 1$ GeV, see Refs. [77,78,79], using the same ππ phase shifts as in the RS analysis [52]. The latter have been determined from Roy and Roy-like equations by the Bern [80] and the Madrid-Cracow [81] groups. To get a better handle on the uncertainty estimate for the final spectral functions from the pion vector FF, a variant of the Bern phase shift was also considered in Ref. [51]. It includes effects from the ρ′ and the ρ′′ in an elastic approximation [82].
Next, we discuss the t-channel partial waves $f_\pm^1(t)$, given by the standard partial-wave projection [19], with the t-channel scattering angle $z_t = (s - u)/(4p_t q_t)$, the Legendre polynomials $P_J$, and the momenta $q_t = \sqrt{t/4 - M_\pi^2}$ and $p_t = \sqrt{t/4 - m^2}$. Further, the standard decomposition of the πN scattering amplitude $T(\pi^a(q) + N(p) \to \pi^b(q') + N(p'))$ in the isospin limit has been used, where a, b are isospin indices, the $\tau^a$ are isospin Pauli matrices, I = ± refers to isoscalar/isovector amplitudes, and $s = (p+q)^2$, $t = (p'-p)^2$, $u = (p-q')^2$ are the Mandelstam variables subject to the constraint $s + t + u = 2(m^2 + M_\pi^2)$. The best way to determine the pion-nucleon scattering amplitudes is undoubtedly via dispersion relations, as they allow for a systematic continuation from the physical region into the unphysical ones and further make best use of the existing scattering data. The most modern and accurate investigations are based on the Roy-Steiner (RS) equation analysis of the Bonn group, developed and performed in Refs. [83,84,85,52,86] (for earlier work by the Karlsruhe-Helsinki group, see e.g. Refs. [87,88]). The RS equations, originally developed in [89,90] (and references therein), are hyperbolic DRs, which have a number of advantages compared to other formulations (such as fixed-t DRs). They combine all physical regions, display an explicit s ↔ u crossing symmetry, require the absorptive parts only in regions where the corresponding partial wave expansions converge, and, further, a judicious choice of the parameter a allows one to increase the range of convergence. The RS equations have a limited range of validity, bounded by $\sqrt{s_m}$ and $\sqrt{t_m}$, the so-called matching points for the s- and t-channel partial waves, respectively. The required inputs to solve the RS equations are the S- and P-waves above the matching point, the higher partial waves (D-, F-, ...) and the inelasticities. An important constraint are the pion-nucleon scattering lengths deduced from pionic hydrogen and pionic deuterium [91] (for a recent update, see [92]). The output of the RS equations are the so-called subthreshold parameters, which allow one to reconstruct the scattering amplitude in the unphysical region, such as the $f_\pm^1(t)$ in the pseudophysical region required for the isovector spectral functions. Some basic definitions of the πN scattering amplitude in the unphysical region are given in App. B. It should also be mentioned that the results of the RS analysis were given with theoretical uncertainties, which to our knowledge was the first time that a dispersive analysis of pion-nucleon scattering provided such uncertainties; for details see [52].
In Ref. [51], the isospin-violating effects beyond the ρ-ω mixing in the pion form factor were also worked out, leading to an improved representation of the unitarity relations, Eq. (32); the result is Eq. (40) below. For a more detailed discussion of this representation, the reader is referred to Ref. [51].
Putting the pieces together, the isovector spectral functions divided by t², based on Eq. (40), are shown in Fig. 6. These nicely exhibit the ρ-resonance at √t = 0.77 GeV as well as a remarkable enhancement on the left shoulder of the resonance. This shows that the ρ is indeed generated by unitarity [18] and thus no explicit ρ-meson is required in the isovector spectral function. The visible enhancement on the left shoulder of the ρ can be traced back to the fact that the partial wave amplitudes f^1_±(t) have a singularity on the second Riemann sheet [88] (originating from the projection of the nucleon pole terms in the invariant pion-nucleon scattering amplitudes) located very close to the physical threshold at t_0 = 4M_π². The isovector form factors inherit this singularity (on the second sheet), and its closeness to the physical threshold leads to the pronounced enhancement between √t = 0.3−0.6 GeV shown in Fig. 6. This issue will be taken up below.
The uncertainties displayed in Fig. 6 originate from three different sources: 1) the subthreshold parameters b^−_00, b^−_01, a^−_00 and a^−_01 (as defined in App. B), 2) the pion-pion phase shift δ^1_1(t) and 3) the data for the pion form factor F_π^V(t). The uncertainty of the subthreshold parameters from the RS analysis is in fact the dominating effect below 1 GeV. We note that the effect of the ρ-ω mixing is small, as the comparison of the black and red dashed lines in Fig. 6 shows. Note also that this consistent inclusion of isospin-breaking effects in the pion em form factor and the πN partial waves constitutes a major achievement compared to earlier analyses. The two-pion continuum contribution to the isovector form factors is displayed below in Fig. 8.
Based on the DR, Eq. (26), it is straightforward to derive sum rules for the normalizations and radii of the isovector form factors. These were first considered in Ref. [72] for the various nucleon radii, see also [51]; here κ_v denotes the isovector magnetic moment of the nucleon. Note that the sum rules for the radii remain unchanged if a once-subtracted dispersion relation is used instead of the unsubtracted one. Cutting the integrals at Λ = 2m, one obtains the estimates quoted in Eq. (43). It is remarkable that just using a simple ρ-exchange, e.g. in a Breit-Wigner or a Gounaris-Sakurai form [93], the corresponding isovector radii would be sizeably underestimated (by about 40%), as inspection of Fig. 6 reveals. Thus, any dispersive analysis that does not include the full two-pion continuum but only the ρ-resonance in the isovector spectral function below 1 GeV will simply miss important physics. We will come back later to these sum rules.
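For orientation, the generic form of such sum rules (the unsubtracted DR evaluated at t = 0 and its first derivative) is sketched below. This is the standard textbook structure and is not copied from the source's equations, whose exact normalization conventions may differ.

```latex
% Generic normalization and radius sum rules following from an
% unsubtracted dispersion relation F(t) = (1/pi) \int Im F(t')/(t'-t) dt'.
\begin{align}
  F_2^v(0) = \kappa_v &= \frac{1}{\pi}\int_{4M_\pi^2}^{\infty}
      \frac{dt}{t}\,\operatorname{Im} F_2^v(t), \\
  \langle r^2\rangle_i^v &= \frac{6}{\pi}\,\frac{1}{F_i^v(0)}
      \int_{4M_\pi^2}^{\infty}\frac{dt}{t^2}\,\operatorname{Im} F_i^v(t),
      \qquad i = 1,2 .
\end{align}
```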
Three-pion continuum
The lowest isoscalar continuum is given by 3-pion exchange as depicted in Fig. 7. There, A refers to the γ → 3π transition amplitude, which at low energies is given by the anomalous Wess-Zumino-Witten Lagrangian [94,95], and B corresponds to the 3π → N̄N amplitude. An analysis of this contribution based on unitarity alone does not exist, but it has been shown in chiral perturbation theory at leading [96] and subleading [97] orders that there is no enhancement on the left wing of the ω resonance. This argument is outlined in App. C. Thus, the usual inclusion of the ω as a vector meson pole is justified. In the case of the φ, the situation is, however, more complicated, as discussed next.
KK continuum
The first important continuum contribution to the isoscalar spectral function is the one from KK states, as evaluated in Refs. [35,36] from an analytic continuation of KN scattering data.

Fig. 7. Three-pion cut contribution to the isoscalar form factors, given in terms of the γ → 3π (represented by A) and the 3π → N̄N (represented by B) transition amplitudes. Solid, dashed, and wiggly lines denote nucleons, pions, and the external photon, respectively, while the dash-dotted line indicates the cutting of particle propagators.

The KK contribution to the imaginary part
of the isoscalar form factors is given by the unitarity relations of Refs. [35,36], with p_t = √(t/4 − m²) and q_t = √(t/4 − M_K²), where M_K is the charged kaon mass. Further, F_K^V(t) is the kaon form factor, whereas the b^{1/2,±1/2}_1(t) are the J = 1 partial wave amplitudes for KK → N̄N [35,36]. Having determined these imaginary parts, the contribution of the KK continuum to the form factors is obtained from the dispersion relation Eq. (26).
The b^{1/2,±1/2}_1(t) in the above equations are the kaon-nucleon partial wave amplitudes with total angular momentum J = 1 (for definitions, see App. D). For t ≥ 4m² the partial waves are bounded by unitarity. In the unphysical region 4M_K² ≤ t ≤ 4m², however, they are not constrained by unitarity. In Ref. [35], the amplitudes b^{1/2,±1/2}_1(t) were calculated in this region from an analytic continuation of KN scattering data. Strictly speaking, this calculation provides an upper bound on the spectral function, since one replaces the amplitudes and the form factor in Eqs. (44,45) by their absolute values. The striking feature in the spectral function is a clear φ resonance structure just above the KK threshold. The resonance emerges in the partial wave amplitude b^{1/2,1/2}_1 as well as in the kaon form factor F_K. In contrast to the 2π continuum, there is no strong enhancement on the left wing of the φ resonance, which sits directly at the KK threshold. This is completely analogous to the lowest-lying isoscalar resonance, the ω(782), which also does not exhibit any enhancement on its left shoulder.
The resulting contribution to the nucleon form factors can be parameterized by a pole term at the φ mass [34], with residua a^{KK}_1 = 0.1054 GeV² and a^{KK}_2 = 0.2284 GeV². As a consequence, the contribution of the KK continuum to the electromagnetic nucleon form factors can conveniently be included in the analysis via Eq. (48). The form factor contributions from Eq. (48) are also shown in Fig. 8.
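A minimal numerical sketch of such an effective pole term is given below. The functional form F_i^{KK}(t) ≈ a_i^{KK}/(M_φ² − t) is an assumption about Eq. (48), based on the generic pole parameterization used later for the vector mesons; the residua are the values quoted above and the φ mass is the PDG value.

```python
# Sketch of an effective phi-pole parameterization of the KK continuum,
# F_i(t) ~ a_i / (M_phi^2 - t); the exact form of Eq. (48) may differ.
M_PHI = 1.019  # phi meson mass in GeV (PDG value, used as a placeholder)
A_KK = {1: 0.1054, 2: 0.2284}  # residua in GeV^2, as quoted in the text

def f_kk(i: int, t: float) -> float:
    """KK continuum contribution to F_i at momentum transfer t (in GeV^2)."""
    return A_KK[i] / (M_PHI**2 - t)

# Space-like example: t = -Q^2 with Q^2 = 0.5 GeV^2.
for i in (1, 2):
    print(f"F_{i}^KK(t=-0.5 GeV^2) = {f_kk(i, -0.5):.4f}")
```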
ρπ continuum
Another important contribution to the isoscalar spectral function is the correlated ρπ exchange, which was investigated in the Bonn-Jülich nucleon-nucleon interaction model in Ref. [98]. Since in that work cancellations between the φ-exchange contribution and this correlated ρπ exchange were found, the ρπ contribution to the isoscalar spectral function was worked out in Ref. [37].

Fig. 9. Vector meson dominance: The photon only couples through vector mesons, V = ρ, ω, φ, . . ., to the nucleon.

This continuum contribution was evaluated in terms of a dispersion integral, which in turn can be represented by an effective pole term
for a fictitious ω′ meson with a mass M_ω′ = 1.12 GeV [37], with residua a^{ρπ}_1 = −1.01 GeV² and a^{ρπ}_2 = −0.04 GeV². In the form factor analysis, one uses this effective pole instead of the full spectral function.
There is very little sensitivity in the dispersive fits to a^{ρπ}_2, which can vary between −0.04 and −0.4 GeV² without affecting the outcome of the fit. If the ω′ pole is treated as a real resonance, the latter value is consistent with f_ω′ ∼ 10 for a^{ρπ}_1 = −1.01 GeV², if the coupling constants g^i_{ω′NN} (i = 1, 2) from Ref. [37] are used as input (for a precise definition of these couplings, see Sec. 3.8).
In Fig. 8, we show the contribution of the 2π, KK, and ρπ continua to the electromagnetic nucleon form factors F_1 and F_2. The 2π continuum contributes to the isovector form factors, while the KK and ρπ continua contribute to the isoscalar form factors. The KK and ρπ contributions have opposite sign and partially cancel each other. The dominant contribution to F_1^s comes from the ρπ continuum, while for F_2^s the KK contribution is larger. While the KK and ρπ contributions can be represented by simple pole terms, the expressions for the 2π continuum, Eq. (40), are more complicated. This is related to the strong enhancement close to the 2π threshold on the left wing of the ρ resonance discussed above. Finally, note that these continuum contributions enter as independent input in the dispersive analysis. They are not fitted to cross section or form factor data.
Vector meson poles
In the simplest picture, the photon couples to the nucleon through vector mesons only (i.e. there is no direct photon-nucleon coupling), the so-called vector meson dominance (VMD) picture, see e.g. [99,100,101,102], as depicted in Fig. 9. In this picture, the form factors take the simple form of a sum of vector meson pole terms, and the couplings f_V can be deduced from the leptonic decay widths V → e⁺e⁻. Also, we have identified s_1, s_2 with the ω, φ and v_1 with the ρ. Each such vector meson comes with two couplings, the vector coupling a^V_1 and the tensor coupling a^V_2. One also employs the ratio of the tensor to the vector coupling, κ_V. While for some resonances these couplings can be deduced from nucleon-nucleon scattering data, in the dispersive analysis they are considered as fit parameters (with the exception of the ρ, which is completely determined from the 2π continuum). In the pure VMD picture with only ρ and ω vector mesons contributing, one can relate the tensor-to-vector coupling ratio to the isovector and isoscalar anomalous magnetic moments of the nucleon. However, extracting κ_ρ from the two-pion continuum leads to a larger value, κ_ρ ≈ 6, consistent with extractions from nucleon-nucleon scattering, see e.g. the discussion in [26]. The corresponding imaginary part, i.e. the contribution to the spectral function, for any (zero-width) vector meson is proportional to a delta function at the squared meson mass. As already discussed in Sec. 3.4, the ρ-meson is entirely generated by the two-pion continuum, so that an explicit ρ will never appear in the spectral function. In contrast, the lowest isoscalar mesons are the ω and the φ, which are explicitly taken into account. As noted before, the related 3π continuum has a very small nonresonant contribution that can be safely neglected [96], see also App. C. Also, in the isoscalar region around 1 GeV, we consider the KK and ρπ continua, which tend to cancel, and an additional residual φ pole. Because of the complicated structure of the isoscalar spectral function around 1 GeV, it is no longer possible to extract useful φNN couplings, as was done in earlier works, where one just had the φ-pole in this region. The large φ-couplings found in these earlier studies are clearly an artifact of the simplified isoscalar spectral function assumed in this region.
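To illustrate the pole structure just described, here is a small sketch that sums zero-width vector meson poles of the generic form a^V_i/(M_V² − t) and reads off the tensor-to-vector ratio as the ratio of residua, κ_V = a^V_2/a^V_1 (an assumption consistent with the pole parameterization, not a formula quoted from the source). The masses and residua in the example are placeholders, not fitted values.

```python
from dataclasses import dataclass

@dataclass
class VectorMeson:
    name: str
    mass: float  # GeV
    a1: float    # vector residuum, GeV^2
    a2: float    # tensor residuum, GeV^2

    @property
    def kappa(self) -> float:
        """Tensor-to-vector coupling ratio, taken here as a2 / a1."""
        return self.a2 / self.a1

def vmd_form_factor(mesons, t: float, which: int = 1) -> float:
    """Sum of zero-width pole terms a_i^V / (M_V^2 - t) for F_1 (which=1) or F_2."""
    total = 0.0
    for m in mesons:
        residuum = m.a1 if which == 1 else m.a2
        total += residuum / (m.mass**2 - t)
    return total

# Placeholder poles, only to demonstrate the bookkeeping (not fit results).
poles = [VectorMeson("omega", 0.783, a1=0.8, a2=0.1),
         VectorMeson("phi", 1.019, a1=-0.5, a2=0.2)]
print([f"{m.name}: kappa={m.kappa:.2f}" for m in poles])
print("F_1^s(t=-1 GeV^2) =", vmd_form_factor(poles, t=-1.0, which=1))
```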
Structure of the spectral functions
As discussed above, the spectral function can at present only be obtained from unitarity arguments and experimental data for the lightest two-particle intermediate states (2π and KK). Furthermore, the ρπ continuum contribution has been calculated in the Bonn-Jülich N N model.
The remaining contributions to the spectral function can be parameterized by vector meson poles. On the one hand, the lower mass poles can be identified with physical vector mesons such as the ω and the φ. The higher mass poles, on the other hand, are simply an effective way to parameterize higher mass strength in the spectral function. These effective poles at higher momentum transfers appear in the isoscalar (s_1, s_2, . . .) and isovector (v_1, v_2, . . .) channels. It should be noted that we are dealing with an ill-posed problem here [103,104], which means that increasing the number of poles will from some point on not improve the description of the data. Therefore, the strategy has always been to use as few poles as possible. We come back to this issue in Sec. 3.12.
Putting all pieces together, the spectral function has the general structure given in Eqs. (56,57): the continua plus ω, φ and effective vector meson poles. For the light isoscalar vector mesons, the residua in the pole terms can be related to their couplings. Only rough estimates exist for these, see [37]. Note that the dominant vector ωNN coupling is taken to be positive, consistent with one-boson exchange in the nucleon-nucleon interaction. These ranges are used as constraints in the fits. The masses of the effective poles (s_1, s_2, . . . , v_1, v_2, . . .) are fitted to the data. We remark that to ensure the stability of the fit [104], we demand that the residua of the vector meson poles are bounded, |a^V_i| < 5 GeV² (this can also be considered a naturalness argument for the couplings), and that no effective poles with masses below 1 GeV appear. Furthermore, the masses of these effective poles should also be smaller than 5 GeV. We generally do not include widths for the effective poles. However, if one wants to mimic the imaginary part of the form factors in the time-like region, one can e.g. allow for a large width for the highest mass effective pole (see, e.g., Ref. [34]). A cartoon of the resulting (isoscalar and isovector) spectral functions is shown in Fig. 10. The vertical dashed line separates the phenomenologically well-constrained low-mass region from the effective vector meson poles at higher masses.
Constraints
The number of parameters in the spectral function (i.e. the various meson couplings a V i (i = 1, 2) and the masses of the effective poles) is reduced by enforcing various constraints.
The first set of constraints concerns the low-t behavior of the form factors. We enforce the correct normalization of the form factors as given in Eq. (3). The nucleon radii, however, are not included as a constraint. The exception to this is the squared neutron charge radius, which in some dispersive fits has been constrained to the value from low-energy neutron-atom scattering experiments [106,107]. In the new fits discussed later, we implement this constraint using the high-precision determination of the neutron charge radius squared, Eq. (58), based on a chiral effective field theory analysis of electron-deuteron scattering [108,109]. Another set of constraints arises at large momentum transfers. Perturbative QCD (pQCD) constrains the behavior of the nucleon electromagnetic form factors for large momentum transfer. Brodsky and Lepage [110] worked out the behavior for Q² → ∞, Eq. (59), in terms of the leading order QCD β-function. The anomalous dimension γ ≈ 2 depends only weakly on the number of flavors N_f [110]. The power behavior of the form factors at large Q² can be easily understood from perturbative gluon exchange. In order to distribute the momentum transfer from the virtual photon to all three quarks in the nucleon, at least two massless gluons have to be exchanged. Since each of the gluons has a propagator ∼ 1/Q², the form factor has to fall off as 1/Q⁴. In the case of F_2, there is an additional suppression by 1/Q², since a quark spin has to be flipped. The analytic continuation of the logarithm in Eq. (59) to time-like momentum transfers −Q² ≡ t > 0 yields an additional term, ln(−t/Λ²) = ln(t/Λ²) − iπ for t > Λ². Employing the Phragmén-Lindelöf theorem [88], it follows that the imaginary part has to vanish in the asymptotic limit. Taking these facts into account, the proton effective FF can be described for large time-like momentum transfer t by the form of Ref. [111], with the parameters from a fit to data prior to the 2013 measurement by the BaBar collaboration [50], given as A = 72 GeV⁻⁴ and Λ = 0.52 GeV. The power behavior of the form factors leads to superconvergence relations, with n = 0 for F_1 and n = 0, 1 for F_2. These will be employed in the current analysis. In earlier DR analyses, modifications of the superconvergence relations were used, including e.g. some higher order corrections. These should, however, be abandoned, as the data are simply not sensitive to such corrections. We note that these superconvergence relations had already been used in Ref. [22], i.e. before the pQCD analysis. Consequently, the number of effective poles in Eqs. (56,57) is determined by the stability criterion mentioned before, that is, we take the minimum number of poles necessary to fit the data. The number of free parameters is then strongly reduced by the various constraints (unitarity, normalizations, superconvergence relations). These constraints can be implemented as so-called "hard constraints" or "soft constraints". In the former case, one solves a system of algebraic equations relating the various parameters (couplings, masses), thus reducing the number of free parameters in the fit (for an explicit representation, see e.g. [26]). In the latter case, the χ² is augmented by a Lagrange multiplier enforcing the corresponding constraints, see Sec. 3.12. Both options are viable and have been used.
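For concreteness, the superconvergence relations implied by the 1/Q⁴ and 1/Q⁶ fall-off have the standard form sketched below (t_0 denotes the lowest threshold of the respective isospin channel). This is the generic structure of the relations referred to in the text, not a verbatim copy of the source's equations.

```latex
% Generic superconvergence relations following from the pQCD power behavior:
\begin{equation}
  \int_{t_0}^{\infty} dt\; t^{\,n}\, \operatorname{Im} F_i(t) = 0, \qquad
  \begin{cases}
     n = 0 & \text{for } F_1,\\[2pt]
     n = 0,1 & \text{for } F_2 .
  \end{cases}
\end{equation}
```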
It is straightforward to enumerate the number of fit parameters, which is given by the couplings and masses of the vector mesons, N_V = 4 + 3N_s + 3N_v, with N_s (N_v) the number of effective isoscalar (isovector) poles and the 4 representing the ω and φ couplings, minus the number of constraints, given by N_C = 4 + 6 + 1, referring to the low-t constraints, the high-t constraints and the neutron charge radius squared, respectively. If the latter is not included, N_C = 10. Putting pieces together, we have in total N_V − N_C free parameters.
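The bookkeeping above can be written out explicitly; the tiny helper below simply evaluates N_V − N_C for a given pole configuration (the name n_fit is ours, introduced only for this illustration).

```python
def n_fit(n_s: int, n_v: int, include_neutron_radius: bool = True) -> int:
    """Number of free fit parameters for n_s isoscalar and n_v isovector
    effective poles: N_V = 4 + 3*n_s + 3*n_v (omega/phi couplings plus a
    mass and two residua per effective pole) minus the constraints
    N_C = 4 (low-t) + 6 (superconvergence) [+ 1 (neutron radius)]."""
    n_params = 4 + 3 * n_s + 3 * n_v
    n_constraints = 4 + 6 + (1 if include_neutron_radius else 0)
    return n_params - n_constraints

# Example: the 6+4 pole configuration of the best fit discussed later.
print(n_fit(6, 4))  # -> (4 + 18 + 12) - 11 = 23
```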
Two-photon effects
The interest in two-photon corrections was triggered by the high precision measurements of the form factor ratio G_E/G_M using the polarization transfer method reported in Refs. [31,32]. These results were found to be in striking disagreement with the world data based on the Rosenbluth separation. Even after removing inconsistencies from the Rosenbluth data base [112], this discrepancy remained, triggering a flurry of works on two-photon corrections beyond the work of Refs. [113,114,115], which neglected, however, the effects of the structure of the nucleon in the calculation of the two-photon box and crossed-box diagrams, see Fig. 11. These diagrams were calculated in various approaches, like in hadronic models, using generalized parton distributions or using dispersive methods, see e.g. Refs. [116,117] for reviews and very recent work in Ref. [118]. Here, we concentrate on the work presented in [46], because the two-photon corrections given there have been applied in the DR analyses since then. The corrections to the electron-proton cross section at order α³ are given by the interference of the one-photon-exchange amplitude M_1γ with the amplitudes from vacuum polarization, vertex corrections, self-energy corrections and the two-photon-exchange amplitude M_2γ, and additionally by the contribution from Bremsstrahlung. The main data set that is considered in the DR analyses already contains a set of calculations of such corrections by Maximon and Tjon [115]. This calculation contains improvements over earlier works by Mo and Tsai [114], but still uses a soft-photon approximation, which is particularly relevant for the two-photon exchange (TPE) contribution. This TPE contribution to the corrected cross section can be expressed through a factor of (1 + δ_2γ) multiplying the one-photon cross section. We briefly discuss the soft-photon approximation by Maximon and Tjon, since only the difference between any new evaluation of the 2γ corrections and this approximation is required for the purification of the ep scattering data.
Ref. [115] separates the IR-divergent part of the TPE amplitude by considering the poles in the photon propagators, i.e. one vanishing photon momentum. The resulting factor δ_2γ,IR depends logarithmically on an infinitesimal photon mass λ and on the incoming (outgoing) electron energy E_1 (E_3). The logarithmic infrared singularity in λ is canceled by a term in the Bremsstrahlung correction, so that the complete cross section is λ-independent. The same cancellation takes place if both δ_2γ,IR and the Bremsstrahlung correction are calculated in the older approximation scheme by Mo and Tsai. In Ref. [46], the interference between the 1γ-amplitude and the 2γ-amplitude was calculated, with M_1γ given in terms of the conventional lepton spinors u(p) and the elastic nucleon vertex Γ^ν(q) from Eq. (2) (the metric tensor from the photon propagator has already been contracted in this notation). The 2γ-amplitude contains the corresponding lepton tensor, whereas the hadronic tensors for nucleon and ∆ intermediate states are given separately. Here, Γ^{µα}_{γ∆→N}(p, k) is the transition vertex in terms of the Rarita-Schwinger spinor field Ψ^{(a)}_µ(p) [119]. Various parameterizations of this transition matrix element exist, see e.g. Refs. [120,121]. The corresponding electric G_E, magnetic G_M and Coulomb G_C transition form factors can be related to the helicity amplitudes measured in pion electroproduction off the nucleon, see e.g. Ref. [122]. Accounting for this momentum dependence also in the ∆Nγ vertices in the box and crossed-box diagrams was the main improvement in Ref. [46] compared to some earlier calculations. Further, in the denominator of the photon propagator for the pure nucleon graph, one includes an infinitesimal photon mass λ to regulate the infrared divergences. The loop containing the ∆ is not IR divergent because of the mass of the ∆.
The S_F and S_αβ in Eqs. (68,69) are the conventional spin-1/2 and spin-3/2 propagators, respectively. The calculation of the crossed-box graph proceeds accordingly. In Fig. 12, the ε-dependence at Q² = 3 GeV² is shown, which allows for a comparison to previous calculations such as Ref. [42]. (Displayed in that figure is the difference of the calculation in [46] to the soft-photon approximation by Maximon and Tjon [115]; the result of Ref. [42] is the red solid line.) In this case, the dependence on the nucleon FF parameterization largely cancels out. Using the pole fit parameterization from Ref. [42] indeed reproduces their result. Lowering the Q²-value in the calculation decreases the nucleon TPE correction. For the intermediate ∆, the situation is different: one finds a stronger dependence on the FF parameterizations. In Fig. 13 the results of the calculation that employs the helicity amplitudes obtained from data on electroproduction of nucleon resonances [123] are displayed. These corrections can be parameterized conveniently by a set of FFs, determined in Ref. [122] and used in Ref. [124] for a similar calculation, albeit without realistic NFFs. This form of the γN∆ vertex does not deviate significantly from recent data and is numerically well treatable. These results are similar to the ones of Ref. [125], where different transition form factors are employed. As stated before, the sum of δ_2γ,N and δ_2γ,∆ from Ref. [46] constitutes the two-photon corrections employed in the DR analysis of the Bonn-Darmstadt group. Their effect on the high-precision data from Mainz [11] is displayed in Fig. 14. Note that the original data contain an approximation of the two-photon correction given by the formula of Ref. [126],
where Z is the nuclear charge (here, Z = 1). As pointed out in Ref. [127], this approximation is only valid for Q² → 0 and has the wrong sign in some kinematical regions. Thus, this contribution is subtracted from the data and the two-photon corrections from Ref. [46] are added. The differences are quite visible. The corrections from Ref. [46] will also be employed in the calculations presented in the next section. Nevertheless, an updated calculation of these corrections would be welcome.
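The "purification" step described here — removing the approximate correction contained in the published data and applying the TPE correction of Ref. [46] — can be sketched as follows. Since the corrections are small, they are treated multiplicatively via (1 + δ) factors in this illustration; the function name and the input arrays are ours, and the actual analysis may combine the factors differently.

```python
import numpy as np

def repurify_cross_sections(sigma_published: np.ndarray,
                            delta_approx: np.ndarray,
                            delta_tpe_new: np.ndarray) -> np.ndarray:
    """Replace an approximate TPE correction already applied to published
    cross sections by an improved one.

    Assumes corrections enter as (1 + delta) factors and are small, so that
    removing the old correction and applying the new one amounts to
    multiplying by (1 + delta_new) / (1 + delta_old).
    """
    return sigma_published * (1.0 + delta_tpe_new) / (1.0 + delta_approx)

# Toy example with made-up numbers (for illustration only):
sigma = np.array([1.000, 0.950, 0.900])       # published, arbitrary units
delta_old = np.array([0.002, 0.003, 0.004])   # approximate correction
delta_new = np.array([0.004, 0.001, -0.002])  # nucleon + Delta TPE
print(repurify_cross_sections(sigma, delta_old, delta_new))
```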
Fit strategies and error analysis
In this section, we briefly describe how the fits of the spectral functions to data are performed and how the statistical and systematic errors can be determined. First, we discuss the quality of the fits, which is measured in terms of the total (traditional) χ², Eq. (74). There, the C_i are the cross section data at the points (Q²_i, θ_i) and C(Q²_i, θ_i, p) are the cross sections for a given FF parameterization with the parameter values contained in p. Moreover, the n_k are normalization coefficients for the various data sets (labeled by the integer k), while σ_i and ν_i are their statistical and systematical errors, respectively. A more refined definition of the χ² is given in [46] in terms of the covariance matrix V_ij = σ_i σ_j δ_ij + ν_i ν_j. This latter definition accounts for the correlations between the various data points. A fit to form factor data uses the same definitions, except for the absence of the normalization factors. One also considers the reduced χ², normalized to the number of degrees of freedom, with N_D the number of fitted data points and N_F the number of independent fit parameters, see Sec. 3.10.
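A compact sketch of such an objective function is given below. It assumes the simple uncorrelated form χ² = Σ_k Σ_i (n_k C_i − C_model)²/(σ_i² + ν_i²); the exact definition used in the analyses (Eq. (74) and the covariance-matrix variant of Ref. [46]) may differ in detail, and all function and variable names here are ours.

```python
import numpy as np

def chi2_cross_sections(datasets, model, params):
    """Toy chi^2 for normalized cross-section data.

    datasets: list of dicts with arrays 'Q2', 'theta', 'C' (measured cross
              sections), 'sigma' (statistical errors), 'nu' (systematic
              errors) and a float 'n' (fitted normalization of the set).
    model:    callable model(Q2, theta, params) -> predicted cross sections.
    """
    total = 0.0
    for ds in datasets:
        pred = model(ds["Q2"], ds["theta"], params)
        resid = ds["n"] * ds["C"] - pred
        total += np.sum(resid**2 / (ds["sigma"]**2 + ds["nu"]**2))
    return total

def chi2_reduced(chi2, n_data, n_free):
    """Reduced chi^2, normalized to the number of degrees of freedom."""
    return chi2 / (n_data - n_free)

# Toy usage with a single data set and a linear "model":
toy = [{"Q2": np.array([0.1, 0.2]), "theta": np.array([30.0, 40.0]),
        "C": np.array([1.00, 0.90]), "sigma": np.array([0.01, 0.01]),
        "nu": np.array([0.005, 0.005]), "n": 1.0}]
print(chi2_cross_sections(toy, lambda q2, th, p: 1.0 - p[0] * q2, params=[0.5]))
```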
As noted in Sec. 3.10, the various constraints on the form factors can be implemented algebraically (hard constraints) or by modifying the χ² (soft constraints). The latter type of constraints is implemented as additive terms to the total χ² that grow steeply as a quantity departs from its desired value x; the strength parameter p regulates the steepness of this exponential well and helps to stabilize the fits [128,34]. One method to estimate the fit (statistical) errors is the bootstrap procedure, see e.g. Ref. [129]. One simulates a large number of data sets by randomly varying the points in the original set within the given errors, assuming their normal distribution. Let us consider the radius extraction. In that case, one fits each of these data sets separately, extracts the radius from each fit and considers the distribution of these radius values, which is sometimes denoted as the bootstrap distribution. The artificial data sets represent many real samples. Therefore, this radius distribution represents the probability distribution that one would obtain from fits to data from a high number of measurements. The precondition for using this method are independent and identically distributed data points, which is fulfilled when the χ² sum does not depend on the sequential order of the contributing points. For n simulated data sets, the errors thus scale with 1/√n. However, to get a more realistic uncertainty, we exclude one percent of the data points from the sample and can thus determine the lowest and highest value of the extracted radius. The same procedure can, of course, also be applied to the full form factors. In Fig. 15, we again use the 71 PRad data points to show the bootstrap extraction of the proton charge radius and its statistical uncertainty based on 1000 samples. The extracted errors thus read (a similar plot is obtained for the magnetic radius) [53]

δ(r_E^p)_stat. = ±0.012 fm , δ(r_M^p)_stat. = ±0.005 fm . (77)

We note that the bootstrap error for r_M^p for the PRad data given in [53] is corrected here.
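The bootstrap procedure just described can be summarized in a few lines of code. The sketch below resamples the cross sections within their quoted errors, refits, and reads off the central 99% interval of the resulting radius distribution; fit_radius stands for the full dispersive fit and is only a placeholder here.

```python
import numpy as np

def bootstrap_radius(Q2, C, err, fit_radius, n_samples=1000, rng=None):
    """Bootstrap estimate of a radius and its statistical uncertainty.

    Q2, C, err : arrays of momentum transfers, cross sections and total errors.
    fit_radius : callable (Q2, C_sample) -> extracted radius (placeholder for
                 the actual dispersive fit).
    Returns the mean radius and the central 99% interval of the bootstrap
    distribution (i.e. one percent of the samples is excluded in the tails).
    """
    rng = rng or np.random.default_rng(0)
    radii = []
    for _ in range(n_samples):
        C_sample = rng.normal(C, err)           # vary points within errors
        radii.append(fit_radius(Q2, C_sample))  # refit each pseudo data set
    radii = np.array(radii)
    lo, hi = np.percentile(radii, [0.5, 99.5])
    return radii.mean(), lo, hi

# Toy usage: the "radius" here is just the slope of a straight-line fit.
Q2 = np.linspace(0.01, 0.06, 20)
C = 1.0 - 11.0 * Q2
print(bootstrap_radius(Q2, C, err=0.005 * np.ones_like(C),
                       fit_radius=lambda q, c: np.polyfit(q, c, 1)[0],
                       n_samples=200))
```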
Another statistical tool to estimate the error intervals of our model parameters is the Bayesian approach, see e.g. Ref. [130] (and references therein). In contrast to the interpretation of probabilities in the classical (also called frequentist) approach, where the probability is the frequency of an event to occur over a large number of repeated trials, the Bayesian method uses probabilities to express the current state of knowledge about the unknown parameters, which allows one to estimate the uncertainty as a statement about the parameters. The key ingredients of a Bayesian analysis are the prior distribution, which quantifies what is known about the model parameters prior to the data being observed, and the likelihood function, which describes the information about the parameters contained in the data. The prior distribution and the likelihood can be combined to derive the posterior distribution by means of Bayes' theorem,

p(paras|data) = p(data|paras) p(paras) / p(data) ,

where "paras" denotes the parameters. It is the main goal of a Bayesian statistical analysis to obtain the posterior distribution of the model parameters. The posterior distribution contains the total knowledge about the model parameters after the data have been observed. From a Bayesian perspective, any statistical inference of interest can be obtained through an appropriate analysis of the posterior distribution. For example, point estimates of parameters are commonly computed as the mean of the posterior distribution, and interval estimates can be calculated by producing the end points of an interval that correspond to specified percentiles of the posterior distribution. A powerful and easy-to-implement method to access the posterior distribution is the Markov Chain Monte Carlo (MCMC) algorithm. A systematic illustration of the application of Bayesian analysis can be found in Ref. [131].
As an example, we implement a Bayesian analysis for the fit to the PRad data where the 2s+2v configuration of the spectral function is used. The likelihood function is taken as a Gaussian in the χ² objective function defined in Eq. (74). Here, p contains the model parameters, D = {d_i} denotes the PRad data points, and N is the normalization constant. Two different prior distributions, shown in Fig. 16, are considered to test the stability of the statistical outputs obtained from our Bayesian analysis. We apply a particular MCMC sampling algorithm called ParaMonte [132] to acquire a Monte Carlo sample from the posterior distribution. The obtained posteriors of the parameters m_{v_1} and a^ω_1 are taken as an example to show the equivalence of the normal and uniform priors used, as shown in Fig. 17. The statistical estimates of the form factor and radius errors from our Bayesian analysis are discussed in the next section.
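For readers who want to reproduce the idea without the ParaMonte package, a generic random-walk Metropolis sampler over a log posterior of the form −χ²/2 + log prior is sketched below; this is not the algorithm used in the analysis, just the simplest MCMC that produces a posterior sample under the same assumptions.

```python
import numpy as np

def metropolis_sample(log_post, x0, step, n_steps=50_000, rng=None):
    """Random-walk Metropolis sampler.

    log_post : callable returning log posterior = -chi2/2 + log prior.
    x0       : starting parameter vector.
    step     : proposal widths (per parameter).
    """
    rng = rng or np.random.default_rng(1)
    x = np.asarray(x0, float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy usage with a Gaussian "posterior" (stand-in for -chi2/2 + log prior):
chain = metropolis_sample(lambda p: -0.5 * np.sum(p**2), x0=[0.0, 0.0],
                          step=np.array([0.5, 0.5]), n_steps=5000)
print(chain.mean(axis=0), chain.std(axis=0))
```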
Next, we discuss the extraction of the systematic uncertainties, which is always the most difficult task. Our strategy is similar to what was already done in Ref. [22], namely to vary the number of isoscalar and isovector poles around the values corresponding to the best solution such that the total χ² does not change by more than 1%. An example of this is given in Tab. 3, taken from Ref. [53]. Here, only the PRad data [15] are considered. The best fit corresponds to 2 isoscalar and 2 isovector poles, so we can read off the systematic errors in this case as [53]

δ(r_E^p)_syst. = ±0.001 fm , δ(r_M^p)_syst. = +0.018/−0.012 fm .
We note that while the absolute χ² does not change, the reduced one worsens as the number of fit parameters increases. As expected, the systematic error is larger for the magnetic radius, since at low Q² the electric FF dominates.
More detailed results will be given below.

Table 3. Fit to the PRad data with varying numbers of isoscalar (s) and isovector (v) effective poles. Given are the total and the reduced χ² and the resulting values for the proton radii. The * marks the best solution which defines the central values for the radii.
Physics results
In this section, we display a number of physics results, in particular we discuss fits including proton polarization transfer and neutron form factor data, and present new uncertainty analyses, thus extending and deepening the work of Ref. [53]. We also discuss the inclusion of data for the time-like form factors and the related physics. First, however, we want to sharpen and validate our toolbox to pin down the errors on the example of the PRad data.
Detailed analysis of the PRad data
The PRad data [15] are given at two beam energies, E = 1.1, 2.2 GeV, covering squared momentum transfers in the range Q 2 = 2 · 10 −4 − 6 · 10 −2 GeV 2 , in total 71 differential cross section data points. Using this data set, we will make a detailed comparison of the bootstrap and the Bayesian methods to extract the statistical uncertainty. The extraction of the systematic uncertainty for these data was already discussed in Sec. 3.12. Before continuing, it is worth noting that from the proton data alone, the isospin of a given pole is not determined. One can, however, simply assign a given number of isoscalar and isovector poles besides the continuum contributions, which have a given isospin, as well as the ω and φ mesons. This ambiguity will be resolved once neutron data are also fitted, see Sec. 4.2.
We consider first the Bayesian analysis described in Sec. 3.12. We assume two sets of priors, the normal and the uniform distributions depicted in Fig. 16. In both cases, the constraints on the various couplings and masses discussed in Sec. 3.10 are already included. Note further that in the case of the uniform distribution, the prior for the unknown mass m_{v_1} is biased towards smaller values. The resulting proton em radii are equal within 3 significant digits for these two different prior distributions, see Tab. 4. Note that the systematic errors of these data have already been discussed in Sec. 3.12.
Next, we compare the radius extraction from the Bayesian and the bootstrap method, which is shown in Fig. 18 for the proton charge radius.

Table 4. Statistical uncertainty in the proton electromagnetic radii from the PRad data using two different Bayesian distributions and the bootstrap approach.

As can be read off from this figure and also seen in Tab. 4, the results are very similar,
with the bootstrap having slightly larger errors, which is due to our conservative choice of considering the 99% quantile. The resulting normalized form factors G_E(Q²)/G_dip(Q²) and G_M(Q²)/(µ_p G_dip(Q²)) as well as their uncertainties for the two methods are shown in Fig. 19. The differences are negligible. The form factor ratio µ_p G_E^p/G_M^p measured below Q² = 1 GeV² [47,48] is also well described, as already displayed in Fig. 2 of Ref. [53]. Note also that the magnetic form factor does not display any bump-dip structure below Q² < 1 GeV², as found in the MAMI analysis [44].
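The normalization to the dipole form factor used in such plots conventionally employs the standard dipole G_dip(Q²) = (1 + Q²/0.71 GeV²)⁻²; whether Fig. 19 uses exactly this scale is an assumption, but the helper below makes the convention explicit.

```python
def g_dipole(q2_gev2: float) -> float:
    """Standard dipole form factor with the conventional scale 0.71 GeV^2."""
    return (1.0 + q2_gev2 / 0.71) ** (-2)

# Example: normalize a form factor value at Q^2 = 0.3 GeV^2 to the dipole.
print(0.52 / g_dipole(0.3))  # 0.52 is a made-up G_E value for illustration
```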
Having shown the equivalence of both methods here, in what follows we will stick to the bootstrap procedure, which is easier to implement in case of large data sets with a larger number of fit parameters.
Fits to proton and neutron data
We are now in the position to analyze the full data set.
To be more precise, for the proton we fit to the cross section data from PRad [15] and from MAMI-C [44] as well as to the polarization transfer data from Jefferson Lab [133,134,135,136] above Q² = 1 GeV² (note that the data from Refs. [31,32] are updated in Refs. [133,136], respectively, and thus do not appear in the data base), together with the neutron form factor world data base already used in [45]. The size of the data base and the Q²-ranges we are fitting are provided in Tab. 5. We also include the constraint on the neutron charge radius squared, updated to the latest value given in Eq. (58) from Ref. [108]. Ultimately, we need to reassess the neutron data base by performing chiral EFT analyses of electron scattering off the deuteron and (polarized) ³He. This, however, goes beyond the scope of the present work.

Table 5. Data base used in the fits.
Before showing the results of the best fit and the corresponding statistical and systematic uncertainties, it is worth pointing out that we made extensive searches for solutions with altogether 36 combinations of isoscalar (is) and isovector (iv) poles, ranging from 3 + 3 to 8 + 8 is+iv poles, with the reduced χ² varying by less than 5%, in most cases even by less than 1%. We noticed that the fits with a larger number of is than iv poles turned out to be slightly better.
The best solution has 6 + 4 is+iv poles, and the fits to the ep cross section data with Q² < 1 GeV², the proton form factor ratio with Q² > 1 GeV², the electric form factor of the neutron and the magnetic form factor of the neutron are shown in Figs. 20, 21, 22, 23, respectively. The corresponding central values for the various vector meson masses, vector meson couplings and the normalization constants of the MAMI and PRad data are collected in Tab. 6 in App. E. It is remarkable that while the isoscalar spectral function requires a number of high-mass effective poles, the effective isovector poles all have masses below 2.3 GeV. We note that the vector coupling of the residual φ comes out small, consistent with expectations from the OZI rule. The tensor coupling is, however, quite large, but the next effective isoscalar pole has a comparable tensor coupling of opposite sign, see also the discussion in Sec. 4.3. We also note that the various normalization factors deviate from one by less than 1%.

Fig. 23. Best fit to the neutron magnetic form factor data. For notations, see Fig. 20.

Let us now discuss the predictions of and the physics related to these fits. First, we extract the various radii from these fits, where the first error is statistical (based on the bootstrap procedure explained in Sec. 3.12) and the second one is systematic, based on the variations in the spectral functions discussed before. The values for the radii are completely consistent with earlier determinations, cf. Tables 1, 2, but with a much improved uncertainty estimation. Clearly, given the data set we fitted, the systematic uncertainty is largest for the neutron magnetic radius. The statistical uncertainty is small for all three radii. It is also interesting to give the radii of the Dirac and Pauli form factors in the isospin basis, in particular for the comparison with lattice QCD results. The reason for this is that the isoscalar FFs receive contributions from the so-called disconnected diagrams, which are notoriously difficult to calculate. These radii follow directly from the fits. Similarly, the electric and magnetic radii in the isospin basis are (using the conventions given in Eqs. (6,7))

r_E^s = 0.773 ± 0.002 +0.002/−0.003 fm, r_E^v = 0.900 ± 0.002 ± 0.002 fm, r_M^s = 0.801 ± 0.008 +0.010/−0.038 fm,

We note that the central values in Eq. (83) lead to squared isovector radii of (1/2)(r_E^v)² = 0.405 fm² and µ_v(r_M^v)² = 1.72 fm², which are perfectly consistent with the sum rule estimates in Eq. (43) but have, of course, much smaller uncertainties. It is also interesting to compare the isovector radii with a recent state-of-the-art lattice QCD calculation at physical pion masses [137]. While the lattice value of the isovector charge radius is consistent with ours, the lattice value for the isovector magnetic radius is smaller than ours, that is, there is some tension. It remains to be seen what future lattice calculations will give. As in earlier fits [46,53], the data for the proton form factor ratio µ_p G_E^p/G_M^p for Q² < 1 GeV², which do not participate in the fit, are well described, see the inset in Fig. 21. This points towards consistency between the two-photon corrected cross section data and the ratio data, which are not affected by such corrections. The situation is, however, different for larger momentum transfers. In Figs. 24, 25 we display G_E^p(Q²) and G_M^p(Q²), which did not participate in the fits.
Because the proton form factor ratio tends to zero at Q² ≈ 8 GeV², marked deviations from the dipole form are observed. Only at very large momentum transfer does the fall-off required by pQCD set in. More precisely, we find that Q⁴ F_1^{p,n}(Q²) starts to level off beyond 30 GeV², whereas that is not yet the case for Q⁶ F_2^{p,n}(Q²). Clearly, in this region of momentum transfer, more data are needed to pin down the form factors more precisely and to eventually see the onset of perturbative QCD. This is entirely consistent with earlier findings, see e.g. Refs. [26,34].
Note that the long-range part of the Breit-frame charge and magnetization distributions that follows from the Sachs form factors can be interpreted in terms of a "pion cloud" and some additional short-range contributions from the ρ and other short-ranged physics. However, we emphasize that this separation is scale-dependent and thus not unique. A general discussion of the pion cloud can be found in Refs. [138,139].
Vector meson couplings
As noted before, in the earlier DR analyses, in the isoscalar spectral function below 1 GeV, only the ω and the φ mesons were retained and thus using Eq. (51), one was able to extract the vector and the tensor couplings of these mesons. However, we have shown that in the region of the φ, the isoscalar spectral function is much more complicated, preventing one from extracting φ-meson couplings.
In what follows, we will thus only consider the ω-meson, for which the vector and tensor couplings can be extracted from the fitted residua. The earlier fits, which had no restrictions on the residua, led to a large vector and a small tensor coupling in Ref. [22] and Ref. [26], respectively. This vector coupling is sizeably larger than the determination using forward dispersion relations in nucleon-nucleon scattering, g^1_{ωNN} = 10.1 ± 0.9 [105]. This smaller value is, however, inconsistent with the approximate dipole behavior of F_1^s(Q²) [140]. Note, however, that in one-boson-exchange models of the NN interaction, one typically finds values of g^1_{ωNN}(M_ω²) ≈ 20, which for typical strong form factors translates to g^1_{ωNN}(0) ≈ 10 [141]. Starting with the work of BHM [34], the isoscalar spectral function was considerably improved. In that work, the vector coupling was still large, but the tensor coupling could not be pinned down so precisely. Values within this range were also found in the analysis of the MAMI-C data combined with the proton form factor data for Q² > 1 GeV² and the neutron FF data base [45], where only central values were given. In the analyses that concentrated mostly on the high-precision ep data from MAMI-C and PRad, the ω couplings took the values of Ref. [46] and Ref. [53], respectively. Finally, we present the results of our new fits, including the statistical and the systematical uncertainty. For the central values, the tensor-to-vector coupling ratio is small, κ_ω = 0.06. We note that the uncertainties on the vector coupling are modest; they are much larger for the suppressed tensor coupling. Similar to the findings in BHM, the sign of the tensor coupling is not determined and the range of allowed values is sizeable.
Time-like form factors and final-state interactions
Before discussing the DR fits including the data from the time-like region, it is worth noting two very intriguing experimental findings related to the cross section σ(e + e − → pp) (and the reversed reaction) and the corresponding effective form factor |G p eff |. First, as shown in the upper panel of Fig. 26, there is a strong enhancement in the close-to-threshold region, as comparison with the phase space behavior (normalized to the data at about 50 MeV excess energy) clearly reveals. Note also that due to the Coulomb interaction between the proton and the antiproton, the cross section does not start at zero. No such effects are seen in σ(e + e − → nn). We note that such threshold enhancements are also observed in other processes like e.g. J/Ψ → xpp, Ψ (3686) → xpp with (x = γ, ω, ρ, π, η) and B + → K + pp, see e.g. Refs. [146,147,148]. Second, extending further out in momentum transfer, the BaBar data [50] and also the BESIII data [142] exhibit some oscillating structures, most pronounced for invariant masses M pp below 2.5 GeV, see the lower panel in Fig. 26. The corresponding neutron data from FENICE [145] and SND [149] are less precise than the proton data, but show a similar behavior for q 2 4 GeV 2 . For recent fits to the timelike proton effective FF accounting for these structures, see [150].
DR fits including space- and time-like data were performed in Refs. [30,34,49]. Here, we focus on the work done in the latest paper. Though that work investigated some issues related to FSI only in an exploratory way, it provides the most detailed information on the physics contained in the time-like FFs. In this work, the spectral function was enlarged to account for the coupling to the newly established φ(2170) vector meson, as well as for baryonic triangle graphs with virtual NNπ, N∆π and ∆∆π intermediate states, the first of these giving a simple representation of the strong final-state interactions (FSI), see the discussion below.
In these fits, the differential cross sections and the ratio G_E/G_M from polarization observables on the scattering side, in addition to the effective FF and |G_E/G_M| on the production side, were included for the proton. In the case of the neutron, G_E and G_M from scattering data and again the effective FF on the production side were considered. The spectral function included the 2π, KK and ρπ continua, the ω- and φ-contributions and three/five isoscalar/isovector poles restricted to the mass range M_V = 1 . . . 1.8 GeV. In addition, the new vector resonance was taken at a mass of 2.125 GeV and its width was determined in the fit, which turned out to be Γ = 0.088 GeV. Good agreement with the existing data, as shown in Fig. 27 for the proton and neutron effective form factors and in Fig. 28 for the proton form factor ratio in the space-like and the time-like region, was obtained. In particular, the SND data for the neutron effective FF show a very similar behavior to the proton effective FF over a large range, which can be well described in this approach. However, the range around the φ(2170) calls for further neutron measurements to allow for a determination of the isospin of the structures in G_eff^p. Let us now discuss the various threshold effects in the time-like data, in particular the strong enhancement at the pp threshold. This was first observed at LEAR in the inverse reaction pp → e⁺e⁻ [152] and substantiated by the BaBar collaboration [153], which provided data for e⁺e⁻ → pp down to the threshold region. As noted before, this threshold enhancement was also observed in a number of other production reactions such as J/ψ and B decays. Several explanations involving unobserved meson resonances or scenarios that involve NN bound states (baryonium) have been put forward, see e.g. Ref. [146]. More conventional but plausible interpretations of this phenomenon were given in terms of the FSI between the proton and the antiproton, employing either a Migdal-Watson approximation or meson-exchange models of various levels of sophistication to describe the pp interaction, see e.g. the earlier works Refs. [154,155,156,157,158,159]. The latest and arguably most sophisticated approach to this phenomenon employs simple point-like form factors, whose energy dependence is entirely given by the proton-antiproton FSI (or the pp initial-state interactions in the annihilation process) [160]. The nucleon-antinucleon interaction is based on chiral effective field theory at NLO [161] and NNLO [162]. In this approach, the steep rise of the effective FF for energies close to the pp threshold is explained solely in terms of the pp interactions, cf. Fig. 29, consistent with the findings in Refs. [159,163,164,165,166,167,168,169,170,171]. Also the existing experimental information (differential cross sections, form factor ratio) is quantitatively described in this approach. In addition, predictions for various spin-dependent observables, which can be tested in the future with PANDA at FAIR, are also given in that work. Note, however, that this framework is only applicable to the threshold region, that is up to excess energies of about 100 MeV. Triangle graphs with virtual N∆π and ∆∆π states were also considered in Ref. [49], picking up an idea of Rosner [172], in order to approximate possible cusp effects. However, the vertices are not well known for these kinematics, so the calculation was reduced to the scalar loop integral, parameterized in terms of a fitted strength parameter f_{N∆/∆∆} ∼ O(1) multiplying each loop structure.
The explicit momentum dependence of the vertices was accounted for in terms of overall form factors, with Λ_{N∆/∆∆} the respective fitted cut-off parameters. Interestingly, performing fits with these structures instead of the explicit poles discussed before leads to an equally good description of the proton effective FF, very similar to what is shown in Fig. 27 (upper panel). Furthermore, in Ref. [49] the nucleon FFs were also discussed in the region t_0 = 4M_π² < t < t_thr = 4m_p², which is not accessible by direct measurements, but only by analytic continuation in t = q² = −Q². In fact, an additional particle emission from the initial-state proton can lower the energy of the (virtual) proton to reach below the threshold, as discussed in Ref. [173] for the process pp → e⁺e⁻π⁰. To get insight into the unphysical region, it is instructive to use a DR for the logarithm of the form factor, see e.g. Refs. [174,175,176,177,178]. In principle, this also allows for a separation of the FF phase δ(t) and modulus in the representation G(t) = |G(t)| e^{iδ(t)}. A once-subtracted DR for the function ln G(t) is written in Eq. (93), where the first (subtraction) term vanishes due to the normalization G_E(0) = G_M(0)/µ_p = 1. Experimental information on this integral equation (93) is available in the space-like region t < 0 on G(t) and in the time-like region for t > t_thr on the modulus |G(t)|. The solution of this integral equation is not straightforward; it requires additional information to be included (as the problem is ill-conditioned). One possible solution was proposed in Refs. [176,177], namely to consider the integral contributions to the logarithm ln |G(t)| in the space-like region, using definite values for the known part above t_thr, cf. Eq. (94). In Ref. [49], as input for the left-hand side of Eq. (94), the discretized result of a simultaneous fit to data in all accessible regions was used. A typical result for the modulus of G_E^p(t) is shown in Fig. 30. The enhancement just below the production threshold could signal the appearance of a broad baryonium state, but clearly more precise data on the time-like nucleon FFs are required to come to a definite conclusion here.
Summary and outlook
This paper served two purposes. First, we have reviewed the dispersion-theoretical approach to the electromagnetic form factors of the nucleon, with particular emphasis on the constraints posed by unitarity and analyticity on the spectral functions. Second, we have performed new fits including recent high-precision data on electron-proton scattering for squared momentum transfers Q² < 1 GeV², the proton form factor ratio in the range Q² ≈ 1 · · · 8.5 GeV² and the world data base on the neutron electric and magnetic form factors, including the recent accurate extraction of the neutron charge radius squared from chiral nuclear EFT. We have also sharpened the toolbox to determine the statistical and systematical uncertainties. This led to a number of new results concerning the various nucleon electromagnetic radii, the form factors and the ωNN couplings. We would like to stress again that DRs have always found a small proton charge radius, r_E^p ≈ 0.84 fm, with a slightly larger proton magnetic radius, r_M^p ≈ 0.85 fm. As before, we find that the neutron magnetic radius is the largest, r_M^n ≈ 0.87 fm. Consistent with earlier analyses, the onset of pQCD is not seen in the existing form factor data. We have also discussed our present understanding of the physics in the time-like region, where a strong enhancement of the cross section for e⁺e⁻ → pp (and in its reversed process pp → e⁺e⁻) is observed, which can be understood in terms of proton-antiproton final-state interactions (or initial-state interactions for the reversed process). Furthermore, there are interesting oscillating structures in the cross section that require additional poles and/or thresholds.
Clearly, there are a number of issues that require more data and/or further investigations: -For the neutron data base, a thorough analysis of the existing electron-deuteron and electron-³He scattering data based on chiral effective field theory and including two-photon corrections should be performed. This would allow one to consistently analyze the proton and neutron form factors based on the dispersive approach applied directly to cross section data. -A new combined analysis of the space-like data (as done here) with the time-like data should be performed, including the improved knowledge of the pp final-state interactions obtained in the last decade. This would also sharpen the predictions for future measurements with PANDA at the FAIR facility. -Data on ep scattering or the polarization transfer at Q² ≳ 10 GeV² are urgently needed to investigate the onset of perturbative QCD. It will also be interesting to find out whether the form factor ratio really crosses zero, as the present data seem to indicate.
-It would also be useful to improve our understanding of the spectral functions in the unphysical region, see Figs. 2 and 30. This requires more work based on logarithmic dispersion relations such as Eq. (93). Finally, let us point out that the dispersion-theoretical approach to the nucleon electromagnetic form factors has matured and become a precision tool to analyze electron scattering and form factor ratio data. In the future, it will also be extended to analyze the upcoming muon-proton scattering data from the MUSE [179] and AMBER [180] experiments.
A Neutron form factors from light nuclei
As there are no free neutron targets, the neutron form factors must be extracted from electron scattering off light nuclei. In this appendix, we briefly outline how this can be achieved in case of the simplest nucleus, the deuteron. The extension to systems with 3 or 4 nucleons is similar but more complicated.
The deuteron is a spin-1 particle. Using Lorentz invariance, time-reversal invariance as well as parity and current conservation, the matrix element of the deuteron em current can be written in terms of three scalar functions, see e.g. [67,181] (as usual, in units of the elementary charge e). Here, p′, p are the deuteron four-momenta, λ′, λ the corresponding helicities, m_d = 1.8756 GeV is the deuteron mass and Q² ≡ −k_µ k^µ = −(p′ − p)² ≥ 0. Furthermore, the polarization four-vectors ξ_µ are subject to the constraints ξ_µ(p, λ) p^µ = 0 and ξ*_µ(p′, λ′) p′^µ = 0. Instead of the scalar functions G_i(Q²) (i = 1, 2, 3), one often uses the deuteron charge (G_C), magnetic (G_M) and quadrupole (G_Q) form factors, which are linear combinations of the G_i involving η = Q²/(4m_d²). These form factors are normalized at Q² = 0 in terms of the deuteron charge, the deuteron magnetic moment, µ_d = 0.857 µ_N, and the deuteron quadrupole moment, Q_d = 0.286 fm².
Fig. 31. Electromagnetic response of the deuteron. Diagram a) depicts the IA, which is sensitive to the nucleon em FFs (black circle), whereas diagram b) denotes the so-called two-body corrections (depicted by the shaded box) as explained in the text. The hatched triangles denote a deuteron wave function and solid (wiggly) lines represent nucleons (photons).
In the one-photon exchange approximation, the unpolarized elastic electron-deuteron (ed) scattering cross section in the laboratory (lab) frame is given in terms of the structure functions A(Q²) and B(Q²), with E the energy of the incoming electron, θ the electron scattering angle in the lab frame and Q² ≥ 0 the squared momentum transfer. The structure functions A(Q²) and B(Q²) are quadratic combinations of the three form factors of the deuteron, see the sketch below. As can be seen, from the unpolarized cross section alone one cannot disentangle the charge and the quadrupole form factors. This can be achieved by considering polarization data; e.g. the tensor analyzing power T_20(Q², θ) is sensitive to a different combination of the three deuteron FFs.
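For completeness, the standard relations between the cross section, the structure functions and the deuteron form factors (as given in standard treatments of deuteron electromagnetic structure) are reproduced below; the source's equations presumably use the same conventions, but this block is quoted from the textbook form rather than copied from the paper.

```latex
% Standard structure functions of elastic ed scattering, eta = Q^2/(4 m_d^2):
\begin{align}
  \frac{d\sigma}{d\Omega} &= \left(\frac{d\sigma}{d\Omega}\right)_{\rm Mott}
      \left[ A(Q^2) + B(Q^2)\tan^2\frac{\theta}{2} \right], \\
  A(Q^2) &= G_C^2(Q^2) + \frac{8}{9}\,\eta^2\, G_Q^2(Q^2)
            + \frac{2}{3}\,\eta\, G_M^2(Q^2), \\
  B(Q^2) &= \frac{4}{3}\,\eta\,(1+\eta)\, G_M^2(Q^2).
\end{align}
```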
Using a chiral expansion (or a meson-exchange model in older times), these deuteron FFs can now be related to the single-nucleon form factors as depicted in Fig. 31. The so-called impulse approximation (IA), where the photon couples to one nucleon, is depicted in graph a). This contribution is evidently sensitive to the single-nucleon em FFs. To be precise, one takes the proton FFs as given and is then entirely sensitive to G_E^n and G_M^n, see below. There are, however, a number of corrections, as displayed by graph b) in Fig. 31. These include one- and two-pion exchange currents as well as photon-four-nucleon vertices (or heavy-meson exchanges in the older language). The status of the nuclear currents based on a systematic evaluation of the few-body wave functions and the current operators is given in Ref. [182]. Consider for example the charge density that generates the charge form factor. As the deuteron is an isoscalar, its em response is entirely sensitive to the isoscalar nucleon form factors. Modulo higher order corrections, at leading order in the chiral expansion the charge density gives rise to form factors involving the isoscalar nucleon FF multiplied by integrals of the type ∫₀^∞ dr (u²(r) + w²(r)) j_0(kr/2), where one works in the Breit frame with k² = Q², the direction of the photon momentum is taken along the positive z-axis and k = |k|. Also, u(r) and w(r) are the deuteron S- and D-wave functions, normalized to one, see e.g. [141], and j_0(x), j_2(x) are the conventional (spherical) Bessel functions. Corrections to this leading order result can be worked out straightforwardly; these are given for a chiral EFT approach in Ref. [109] (and references therein).
B Pion-nucleon scattering in the unphysical region
Here, we briefly discuss the subthreshold expansion of the πN amplitudes, which proceeds in terms of the variables ν = (s − u)/(4m) and t around ν = t = 0, with expansion coefficients of the form a^±_{mn} multiplying ν^{2m} t^n, where the upper/lower entry corresponds to I = ±, and the a^±_{mn}, b^±_{mn} are the subthreshold parameters. The Ā, B̄ are the Born-term-subtracted amplitudes, defined as X̄^±(ν, t) = X^±(ν, t) − X^±_pv(ν, t), X ∈ {A, B} (102), where the X^±_pv are the pseudovector Born (nucleon pole) terms. Here, the subscript 'pv' refers to the pseudovector πN coupling as required from chiral symmetry and g denotes the πN coupling constant, to be identified later with g_c for the charged-pion vertex. In the RS analysis of πN scattering, the value g_c²/(4π) = 13.7(0.2) [91] has been used. This value is in line with the most recent determination from nucleon-nucleon scattering [183,184]. More details on the subthreshold expansion of the πN scattering amplitude are given in Refs. [88,52].
C Analysis of the three-pion contribution
Here, we briefly review the arguments of Ref. [96] that there is no strong enhancement on the left shoulder of the ω resonance, the lowest vector meson in the three-pion channel (for a recent update, see Ref. [97]). The imaginary parts of the isoscalar electromagnetic form factors open at the three-pion threshold $t_0 = 9M_\pi^2$. The three-pion cut contribution involves the transition amplitudes 'A' and 'B', where 'A' refers to the $\gamma \to 3\pi$ and 'B' to the $3\pi \to \bar N N$ transition, respectively, and $d\Gamma_3$ is the measure on the invariant three-body phase space. This can be explicitly worked out in baryon chiral perturbation theory [185], where to leading order in the small parameter $p$, namely at order $O(p^7)$, the two-loop diagrams shown in Fig. 32 can contribute to the isoscalar imaginary parts; however, graph (d) vanishes because of an isospin factor of zero. The isoscalar imaginary parts in the heavy nucleon limit $m \to \infty$ can be given in compact form (note that this corresponds to switching off all higher order corrections starting at $O(p^8)$); for the magnetic form factor,
$$
\mathrm{Im}\, G_M^S(t) = \frac{g_A\, m}{(8\pi)^4 F_\pi^6}
\Big\{ L(t)\,\big[\, 3t^2 - 10\,t M_\pi^2 + 2 M_\pi^4
+ g_A^2 \left( 3t^2 - 2\,t M_\pi^2 - 2 M_\pi^4 \right) \big]
+ W(t)\,\big[\, t^3 + 2\,t^{5/2} M_\pi - 39\,t^2 M_\pi^2 - 12\,t^{3/2} M_\pi^3
+ 65\,t M_\pi^4 - 50 \sqrt{t}\, M_\pi^5 - 27 M_\pi^6
+ g_A^2 \big( 5 t^3 + 10\,t^{5/2} M_\pi - 147\,t^2 M_\pi^2 + 36\,t^{3/2} M_\pi^3
+ 277\,t M_\pi^4 - 58 \sqrt{t}\, M_\pi^5 - 135 M_\pi^6 \big) \big] \Big\},
$$
where $L(t)$ and $W(t)$ are kinematical functions expressed in terms of the kinematical variables of the two independent pions (the third one can be expressed in terms of these and the nucleon momenta). The behavior near threshold $t_0 = 9M_\pi^2$ of the imaginary parts for finite pion mass, Eqs. (105), (106), corresponds to a stronger growth than pure phase space. This feature indicates (as in the isovector case) that in the heavy nucleon mass limit $m \to \infty$ normal and anomalous thresholds coincide. In order to find these singularities for finite nucleon mass $m$, an investigation of the Landau equations is necessary [186]. By using standard techniques [186] one finds one anomalous threshold of diagrams (a) and (b) at
$$
\sqrt{t_c} = M_\pi \left( \sqrt{4 - M_\pi^2/m^2} + \sqrt{1 - M_\pi^2/m^2} \right),
$$
which is very near to the (normal) threshold $t_0 = 9M_\pi^2$ and indeed coalesces with $t_0$ in the infinite nucleon mass limit. Note that diagram (d) does not possess this anomalous threshold, but only the normal one.
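As a quick numerical cross-check of the anomalous-threshold formula as reconstructed above (a sketch; the physical masses used here are standard values assumed for illustration, not quoted from the source):

```python
import math

M_pi = 139.57   # charged-pion mass in MeV (assumed value)
m_N  = 938.92   # nucleon mass in MeV (assumed value)

# Anomalous threshold of diagrams (a) and (b), as reconstructed above
sqrt_tc = M_pi * (math.sqrt(4 - M_pi**2 / m_N**2)
                  + math.sqrt(1 - M_pi**2 / m_N**2))
tc_in_units_of_Mpi2 = (sqrt_tc / M_pi) ** 2

print(f"t_c = {tc_in_units_of_Mpi2:.2f} M_pi^2")   # ~8.90, very close to t_0 = 9 M_pi^2
```

The result, $t_c \approx 8.9\,M_\pi^2$, is consistent with the value quoted in the following paragraph.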
The resulting spectral distributions weighted with $1/t^2$ are shown in Fig. 33. Very different from the isovector spectral functions discussed in Sec. 3.4, they show a smooth rise and are two orders of magnitude smaller than the corresponding isovector ones, cf. Fig. 6. There is indeed no enhancement of the isoscalar electromagnetic spectral function near threshold. Even though the isoscalar and isovector electromagnetic form factors behave formally very similarly concerning the existence of anomalous thresholds $t_c$ very close to the normal thresholds $t_0$, the influence of these on the physical spectral functions is rather different in the two cases. Only in the isovector case is a strong enhancement visible. This is presumably due to the different phase space factors, which are different powers of $(t - t_0)$ for the isovector and isoscalar case, respectively. In the latter case, the anomalous threshold at $t_c = 8.9 M_\pi^2$ is thus effectively masked. This justifies the standard DR approach of only taking the ω meson as the lowest pole in the isoscalar spectral function. | 23,060.8 | 2021-06-11T00:00:00.000 | [
"Physics"
] |
Architecture and basic conditions for organizing video hosting based on a p2p-network
The paper discusses the principles of developing a software package based on a distributed registry (proprietary blockchain) for performing applied tasks within a decentralized video portal. Such a structure is designed to distribute user-generated content and follows the principles of a "reverse economy", rewarding proactive users of the service by sharing the video portal's advertising revenue among all participants in the advertising chain.
Introduction
The paper proposes the principle of an architectural solution for a software package based on a proprietary blockchain registry, designed to encourage user activity. This approach makes it possible to implement one of the applied functions of the blockchain: monitoring the chronology of consensus execution.
Unlike popular blockchain platforms, the proprietary system operates on the server side rather than the client side, which eliminates the burden on users' computing power.
The novelty of the proposed approach lies in the consensus of the distributed registry system, which makes it possible to distribute advertising revenues in real monetary units among all participants in the content distribution chain (advertising aggregator, web portal, author, audience). The creation of a video portal within this project is a secondary but mandatory task, intended to demonstrate the system's performance.
The basis of the software architecture is a distributed ledger system functioning as a proprietary cryptographic system; hereinafter, we will refer to it as the "blockchain". This name refers to its architectural features: it is established through a peer-to-peer network (P2P network) and based on the principle of distributing tasks and responsibilities between its nodes. The nodes of such a network are equal [1][2][3].
There is no hierarchy in a peer-to-peer network: all its nodes are interconnected, simultaneously providing and consuming data on equal terms. This makes the P2P network potentially resilient to individual node failures, since there is no single point of failure in such an architecture. Importantly, all nodes in a peer-to-peer network are equal in rights, but their roles may differ. In particular, in the Bitcoin peer-to-peer network, clients vary greatly in functionality, from "full nodes" that store the entire history of the blockchain and can serve any block on request, to "light nodes" that store only headers and the last few blocks. This principle of role distribution was taken as a basis in our design.
The main nodes of the system are: service users; promotional videos; comments in discussions; blocks with stored results of quantitative and qualitative analysis of user activity; user interests (which change over time); and advertising income from commercial views, which is distributed among all stakeholders of the service (Figure 1). The implementation of such an architecture is demonstrated in the startup VideoStake, which is currently being developed by DSTU students (headed by I.S. Kalashnikov). The interaction scheme of the nodes is shown in Figure 2.
Backend components
The peculiarities of organizing a peer-to-peer structure are closely related to the problem of finding information in it [4][5][6]. The search begins with the node that wants to find a file generating a request containing the attributes of this file. The task of the search algorithm is to ensure that this request is delivered in the most efficient way to the node(s) that actually hold the desired file. The main stages of the search process, regardless of the specific algorithm, are: 1) formation of a search query; 2) sending the search query; 3) local execution of the request; 4) sending the search result; 5) processing of the search results. Search algorithms differ mainly in the mechanism of forwarding and delivering the search query to the node with the desired information. Based on how the nodes of the structure are interconnected and how information is searched in such a network (Figure 2), the structured P2P network approach was chosen for the implementation of the project. Structured peer-to-peer systems use routing algorithms to find nodes. The routing algorithm determines exactly where the target node should be located in the overlay structure. This process is closely related to the geometry of the P2P network and to the connectivity or information stored at each node (Fig. 3).
In this case, the most convenient choice is a DHT (Distributed Hash Table)-based routing algorithm, which uses the hash of the node identifier to form a uniformly distributed identifier space [7,8].
The file identifier is generated using the same hash function. Hence, node IDs and data IDs fall into the same ID space. Files are usually stored at the closest node with a node ID greater than or equal to the file ID. This approach allows any node to find a specific file by its name. If the file being looked for is not found on the node with the closest identifier, then such a file does not exist on the network (Fig. 4). In our case, the SHA-256 algorithm is used. An example of a class that implements this stage is shown in Figure 5.
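As an illustration of this scheme (a minimal sketch, not the actual VideoStake class shown in Figure 5; the names and the ring-based successor rule are assumptions), node and file identifiers can be derived with SHA-256 and a file assigned to the closest node with an ID greater than or equal to the file ID:

```python
import hashlib
from bisect import bisect_left

def sha256_id(key: str, bits: int = 32) -> int:
    """Map a node address or file name into a common ID space (truncated SHA-256)."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % (2 ** bits)

class DhtRing:
    """Toy DHT: a file lives on the closest node whose ID is >= the file ID (wrapping around)."""
    def __init__(self, node_addresses):
        self.node_ids = sorted(sha256_id(addr) for addr in node_addresses)

    def responsible_node(self, file_name: str) -> int:
        file_id = sha256_id(file_name)
        idx = bisect_left(self.node_ids, file_id)
        # Wrap around the identifier ring if no node ID is >= the file ID.
        return self.node_ids[idx % len(self.node_ids)]

ring = DhtRing(["node-a:9001", "node-b:9002", "node-c:9003"])
print(ring.responsible_node("promo_video_42.mp4"))
```

Because node and file identifiers share one uniformly distributed space, any node can locate the node responsible for a file from the file name alone, without a central index.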
Frontend components
For the web portal and related user functions of the VideoStake service, a web interface was developed that meets all modern requirements and principles of video hosting. Figure 6 shows a screenshot of the user authorization panel.
After logging into their account, the user can watch videos of interest (Figure 7). Information about the funds earned through views then accumulates in their personal account.
In the user's personal account, it is possible to track the dynamics of the accumulation of funds earned by users who watch advertising videos (Figure 8).
Practical implementation
Given the dynamic nature of digital technologies, it is important for practitioners and academics to understand why users do or do not intend to continue using a service [9][10][11]. The performance of the software complex was studied as part of a group user-experience experiment. Table 1 shows the result of a comparative calculation of income per video channel with 4 pieces of video content uploaded per month. The comparison shows the higher effectiveness of native advertising compared with classical ads that interrupt the video-watching experience. A possible operator for native ads is "Mirriad", a UK company that inserts advertising by editing the video canvas in real time using computer vision (CV) [12]. In the experiment, virtual credits were used in place of real currency. | 1,499.4 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Business"
] |
Absence of dynamical localization in interacting driven systems
Using a numerically exact method we study the stability of dynamical localization to the addition of interactions in a periodically driven isolated quantum system which conserves only the total number of particles. We find that while even infinitesimally small interactions destroy dynamical localization, for weak interactions density transport is significantly suppressed and is asymptotically diffusive, with a diffusion coefficient proportional to the interaction strength. For systems tuned away from the dynamical localization point, even slightly, transport is dramatically enhanced, and within the largest accessible system sizes a diffusive regime is only pronounced for sufficiently small detunings.
Introduction
The last decade has witnessed an increasing interest in isolated out-of-equilibrium (OOE) quantum systems both in theory and experiment. In particular, periodically-driven, or Floquet many-body systems have become one of the most active areas in condensed matter theory and many-body physics, due to the exciting possibilities of finding novel OOE phases of matter, without an analog in static systems.
Driven interacting systems typically heat up, until they approach a nonequilibrium steady state (NESS) "locally identical" to a featureless state of maximal Gibbs entropy. That is, local observables cannot differentiate between the two states [1,2,3].
This "local identity" between the NESS and the maximal Gibbs entropy state is broken when conservation laws exist, for example in noninteracting systems [4,5]. The systems still approach a NESS, but this NESS is "locally identical" to a state which maximizes the Gibbs entropy under all the constraints imposed by the conserved quantities. A large body of work has concentrated on "Floquet engineering" of such noninteracting systems, namely using periodic driving as a tool to create effective time-independent Hamiltonians of desirable but difficult to implement static Hamiltonians [6,7,8].
For interacting systems it is possible to avoid the featureless NESS by the addition of a quenched disorder. In the absence of driving, these systems are many-body localized (MBL) [9,10,11,12,13,14,15,16], and for sufficiently high driving frequencies they do not heat up indefinitely [15,16]. Such systems were also shown to host nontrivial eigenstate order, corresponding to a new OOE phase of matter, colloquially dubbed a "time-crystal" [17] and subsequently studied in both experimental and theoretical studies [18,19,20,21].
In the present work we explore a different potential avenue to suppress the indefinite heating of interacting driven systems by starting with a noninteracting translationally invariant system which is dynamically localized [22,23,24,25,26] (see also the closely related study of a disordered system [27], which appeared while our work was in preparation). The special form of the driving term in dynamically localized systems leads to a frozen stroboscopic dynamics and therefore to disorder-free spatial localization. Whether adding interactions to such disorder-free localized systems results in heating, and what is the nature of the transport are the main questions we consider in our work. We would like to stress that the mechanism behind this localization, while of quantum nature, is not due to Anderson localization, as happens in related disorder-free kicked rotor systems which unfortunately are also dubbed "dynamically localized" [28,29,30]. The interacting versions of such systems were recently studied, but are not of direct relevance to our work [31,32].
The structure of this work is as follows: We present the model in Section 2, briefly survey dynamical localization adapted to the many-body context in Section 3 and in Section 4 present our main results showing that interactions destroy dynamical localization and cause diffusive dynamics with a diffusion coefficient proportional to the interaction. We end with our conclusions, a brief discussion of the implications of our results for the locality of Floquet effective Hamiltonians and an outlook.
Model
In this work we consider a driven Hamiltonian composed of a static part and a time-periodic drive (Eq. (1)). Here L is the length of the lattice, $\hat c^\dagger_m$ creates a spinless fermion at site m, $\hat n_m$ is the number operator, J is the hopping amplitude and U is the interaction strength. In the driving protocol (Eq. (3)), $T = 2\pi/\omega$ is the period of the driving, ω is the driving frequency and A is the amplitude of the drive. This model corresponds to a constant potential gradient with a strength proportional to A, the sign of which is flipped every half period.
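The displayed equations are missing from the extracted text; a plausible form consistent with the description above (the nearest-neighbour range of the hopping and interaction terms is an assumption) is
$$
\hat H(t) = \hat H_0 + A(t) \sum_{m=1}^{L} m\, \hat n_m , \qquad
\hat H_0 = -\frac{J}{2} \sum_{m} \left( \hat c^\dagger_m \hat c_{m+1} + \mathrm{h.c.} \right)
          + U \sum_{m} \hat n_m \hat n_{m+1} ,
$$
with $A(t) = +A$ during the first half of each period and $A(t) = -A$ during the second half.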
Dynamical localization
For ∆ = 0, the model (1) is noninteracting and exactly solvable [22,23]. For specific ratios of A/ω it exhibits dynamical localization, namely, an initially localized density excitation does not spread over time. For the convenience of the reader, in this Section we adapt the derivation of dynamical localization of Ref. [23] to the many-body setting. For this purpose it is instructive to perform a time-dependent unitary transformation which eliminates the drive. This can always be achieved, since setting $|\psi\rangle = \hat V(t)\,|\phi\rangle$ and using the Schrödinger equation, $i\partial_t |\psi\rangle = \hat H |\psi\rangle$, it is easy to show that $|\phi\rangle$ satisfies $i\partial_t |\phi\rangle = \hat{\tilde H}(t)\, |\phi\rangle$, with a correspondingly transformed Hamiltonian. Therefore, if the original Hamiltonian contains a term of the driving form, this term can be eliminated by an appropriate choice of $\hat V(t)$. While the original driving term is thereby eliminated, it appears that nothing is gained, since the transformed Hamiltonian is still time-dependent. For a spatially uniform and temporally periodic force on the system, further progress can be made. Under such a drive the destruction and creation operators transform into phase-rotated operators $\hat a_m$, and the number operators are invariant under this transformation, $\hat n_m = \hat c^\dagger_m \hat c_m = \hat a^\dagger_m \hat a_m$. Therefore a generic Hamiltonian consisting of a hopping term and an arbitrary function $g(\{\hat n_i\})$ depending only on the number operators is transformed to one in which the time-dependence appears only in the hopping term. For simplicity we now restrict the discussion to a single band by setting $h_{nm} = \tfrac{J}{2}(\delta_{n,m-1} + \delta_{n,m+1})$. The total current $\hat J$ can be diagonalized in terms of $\hat f_k = \sum_n \exp(ikn)\, \hat a_n / \sqrt{L}$. For $g(\{\hat n_i\}) = 0$, namely in the absence of interactions and external potentials, the Hamiltonian can be diagonalized simultaneously with $\hat J$, and therefore the transformed Floquet operator can also be computed. Although the current is not conserved, its expectation value can be readily calculated. Dynamical localization occurs when the total number of particles transported in one period vanishes, that is, when the integral of the current over one period yields zero for all k. In this case the Floquet operator (14) reduces to the identity and its spectrum collapses such that all its eigenvalues are degenerate, making the stroboscopic dynamics trivial. Concretely, for the driving we use in this work (3), the accumulated drive phase is $\mathcal{A}(t) = A|t|$ (for $|t| < T/2$), and the corresponding integral vanishes for $e^{iAT/2} = 1$, i.e. $A/\omega = 2n$, $n \in \mathbb{Z}$. When an external potential or interactions are present, the Hamiltonian cannot be diagonalized in the basis of $\hat J$ and the above derivation does not apply, since quasimomentum is not conserved. Notwithstanding, one might still expect some residual suppression of transport, since using the periodicity of $\exp[i\mathcal{A}(t)] = \sum_n C_n \exp[i\omega n t]$ one can cast the generic Hamiltonian (11) into a form which includes a static Hamiltonian with a hopping term renormalized by $C_0$. This description is however oversimplified, due to the presence of a nontrivial driving term which is coupled to the hopping. In the following Section we will explore what happens to transport in an interacting model where the first term is set to vanish, namely $C_0 = 0$.
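The dynamical localization condition can be checked numerically. The sketch below (an illustration, not code from the paper) integrates $e^{i\mathcal{A}(t)}$ with $\mathcal{A}(t) = A|t|$ over one period and confirms that the zero harmonic $C_0$ vanishes when $A/\omega = 2n$:

```python
import numpy as np

def C0(A, T=5.0, num=20001):
    """Zero Fourier harmonic of exp(i*A(t)) with A(t) = A*|t| on (-T/2, T/2)."""
    t = np.linspace(-T / 2, T / 2, num)
    phase = np.exp(1j * A * np.abs(t))
    return np.trapz(phase, t) / T

T = 5.0
omega = 2 * np.pi / T
for n in (1, 2):
    print(f"A = {2*n}*omega: |C0| = {abs(C0(2*n*omega, T)):.2e}")   # ~0 -> dynamical localization
print(f"A = 1.5*omega: |C0| = {abs(C0(1.5*omega, T)):.2e}")          # nonzero away from the DL point
```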
Results
We study density transport in a driven and interacting system. For this purpose we calculate the density-density correlation function (Eq. (19)), which describes the spreading of a density excitation in the system. Here Z is the dimension of the relevant Hilbert space, and the operators are written in the Heisenberg picture with respect to the driven Hamiltonian. Intuitively this describes the spreading of a density excitation in the presence of driving. Here, we fix the number of fermions to $N = (L-1)/2$, and use odd sizes L, such that an excitation at the center is at the same distance from the left and right (open) boundaries. To compute the excitation profile we stroboscopically evolve two initial states over n periods, $|\psi_R(n)\rangle = \hat U(nT)\,(\hat n_0 - \tfrac12)\,|\psi_0\rangle$ and $|\psi_L(n)\rangle = \hat U(nT)\,|\psi_0\rangle$, using a numerically exact method. The method is based on projecting the time evolution operator onto an orthonormal basis of a truncated Krylov space spanned by the initial state, which is updated at each time step (for a pedagogical review of the method, see Sec. 5.1.2 of Ref. [12]). We then calculate the matrix element $C_x(nT) = \langle \psi_L(n)|\, \hat n_x - \tfrac12\, |\psi_R(n)\rangle$. For the driving protocol we use in this work (3), the propagator $\hat U(nT)$ over n periods simplifies (Eq. (20)), where $\hat H_\pm$ corresponds to the driven Hamiltonian (1) in the first and second halves of the period and $\mathcal{T}$ is the time-ordering operator. The initial state $|\psi_0\rangle$ is sampled randomly from the Haar measure, which due to Lévy's Lemma approximates the trace in (19) with an error which is exponentially small in the system size [33,12]. This computational scheme allows us to reach system sizes as large as L = 31 sites, which corresponds to a Hilbert space dimension of 300,540,195 (see Ref. [12] for more details on the numerical procedure).
[Figure caption fragment: "(see Eq. (24)). The colored lines indicate system size L = 31 and the grey lines system size L = 27, simulated with the same parameters. The interaction strength is fixed at ∆ = 0.5."]
To characterize the transport we calculate the mean-square displacement (MSD) of the density excitation (Eq. (21)) and the time-dependent diffusion coefficient (Eq. (22)) [34,35,12,36]. We note in passing that within linear response theory, namely when the drive is taken to be a small perturbation to the equilibrium state, this diffusion coefficient computed in the limit $t \to \infty$ corresponds to the value calculated from the Kubo formula. In this work we however do not consider the linear response regime; our driving amplitudes are large and take the state of the system very far from equilibrium. Therefore there is no reason to expect that the diffusion coefficient calculated from (22) and the Kubo formula coincide. In fact it is not even clear a priori whether transport in such a case will be diffusive. The dynamical localization of noninteracting systems, which was discussed in the previous Section, serves as an example where the diffusion coefficient (depending on the drive amplitude) is either zero or infinite and there is either no transport or the transport is ballistic. We now proceed to examine the stability of dynamical localization to the addition of interactions. To this end we set the system to the noninteracting dynamical localization point by setting the amplitude of the drive accordingly (Eq. (23)). Throughout this work we fix the period of the drive to be T = 5, such that the frequency ω = 2π/T ≈ 1.256 is smaller than the one-particle and many-body bandwidths, to allow effective energy redistribution in the system via single- and many-particle rearrangements.
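The definitions in Eqs. (21) and (22) are not preserved in the extracted text; standard choices consistent with the discussion (the exact normalization used in the paper is an assumption) are
$$
\langle x^2(t) \rangle \;=\; \frac{\sum_x x^2\, C_x(t)}{\sum_x C_x(t)} ,
\qquad
D(t) \;=\; \frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\,\langle x^2(t) \rangle .
$$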
With the system at the noninteracting dynamical localization point, the left panel of Fig. 1 shows sub-ballistic spreading of the density excitation, indicating that dynamical localization is unstable under the addition of interactions. The right panel shows fixed-time cuts through the excitation profile. Interestingly, even after relatively long times these profiles do not approach a Gaussian form, suggesting that the density-density correlation function $C_x(t)$ is not well described by a diffusion equation. We also added an isoline at $C_x(t) = 0$ (the background of $C_x(t)$ is slightly negative due to the trivial correlation imposed by particle number conservation), which shows a peculiar feature at short times. We attribute this feature to short-time transient dynamics, which roughly corresponds to the location of the peak in the time-dependent diffusion coefficient D(t) shown in Fig. 2. The asymptotic transport is only reached at later times.
To characterize the nature of transport, and to estimate the finite-size effects in the system, we calculate the MSD (21) for different interaction strengths and several system sizes (not all shown for readability). From the left panel of Fig. 2 it is clear that while dynamical localization is destroyed by the addition of interactions, for weak interactions transport is still significantly suppressed, as signalled by a slow growth of the MSD, which allows us to reach very long times before our results are affected by finite-size effects. This time becomes shorter for stronger interactions, as can be inferred from the deviation of the MSD between the various system sizes. After a short period of fast transient transport the MSD appears to enter a diffusive regime, as can be seen from the linear growth of the MSD. To see this more clearly we compute the time-dependent diffusion coefficient (22) in the middle panel of Fig. 2. It shows a clear saturation to finite plateaus, which become longer for larger system sizes (while the height of the plateaus remains the same) and indicate that the departure from the plateau at later times is a finite-size effect. The asymptotic diffusion coefficient is extracted from the height of the plateaus for different interaction strengths and is presented in the right panel of Fig. 2. While the overall dependence on the interaction strength is nonlinear, for sufficiently weak interaction strengths we find D ∼ ∆.
We now fix the interaction strength to ∆ = 0.5, such that we are able to reach a pronounced diffusive regime within our simulation times for $A/A_0 = 1$, and detune away from the dynamical localization point by slightly changing the amplitude of the drive. We characterize the strength of the detuning from the dynamical localization point by the parameter δ defined in Eq. (24). Transport is much faster in this regime even for a few percent detuning from the dynamical localization point, requiring very large system sizes to capture the asymptotic transport. Both the MSD and the time-dependent diffusion coefficient show diffusive behavior for small detuning δ = ±0.03, while for larger (in magnitude) detuning δ = −0.06 it appears that our results do not capture the asymptotic transport even for the largest system size L = 31.
Finally we examine the broadening of the Floquet spectrum upon the addition of interactions. As explained in Sec. 3, the spectrum of the operator $\hat U(T,0)$ in (20) collapses to a single point at the noninteracting (∆ = 0) dynamically localized point δ = 0, resulting in a trivial stroboscopic dynamics. A perturbative reasoning suggests that a finite interaction will lift this massive degeneracy and broaden the quasienergy spectrum. We have confirmed this expectation by evaluating the quasienergies $\epsilon_\alpha$, defined as $\hat U(T,0)\,|\alpha\rangle = \exp(-i\epsilon_\alpha T)\,|\alpha\rangle$, by exactly diagonalizing $\hat U(T,0)$ up to system sizes L = 14. The width of the spectrum was calculated from the standard deviation $\sigma_\omega$ of the $\epsilon_\alpha$ (after the quasienergies were shifted to the center of the band to avoid spurious widening due to wrapping around the Floquet Brillouin zone). The results are shown in Fig. 4, where we see that the spectrum broadens upon the addition of interactions; however, the location of the minimal (but still finite) width remains at zero detuning, δ = 0 (see Eq. (24)). This is consistent with our observation that the slowest transport occurs at this point (Fig. 3).
Discussion
In this work we have examined the stability of dynamical localization to the addition of interparticle interactions. We have found that while dynamical localization is destroyed by interactions, transport is significantly suppressed when the interacting system is at the noninteracting dynamically localized point (see right panel of Fig. 4 where the minimum of the width occurs at zero detuning, δ = 0 for all considered interaction strengths ∆). The slow transport allows us to find clear evidence of diffusive transport with a diffusion constant proportional to the interaction strength, indicating that scattering is dominated by interparticle collisions. We have also studied the dependence of the width of the quasienergy spectrum on the interaction strength. While the spectrum is completely degenerate for zero interactions, leading to trivial stroboscopic dynamics, for finite interaction the degeneracy is lifted and the width appears to be proportional to ∆ before it saturates to its maximum value fixed by the driving frequency. From Fig. 4 one can see that for a fixed interaction strength the width increases with system size. This leads us to speculate that in the thermodynamic limit an arbitrarily weak interaction causes the spectrum to occupy the full allowed quasienergy range. While it is tempting to conclude that the width of the spectrum corresponds to the rate with which the dynamics unfolds, this is not true in general. For example, a finite width occurs for Floquet-MBL systems while the dynamics is frozen [15,16]. We therefore can only conclude that when the spectrum broadens dynamical localization may be destroyed, but the precise nature of the underlying dynamics has to be assessed by other means (as we have done in Figs. 2 and 3).
One interesting question on which our work might have some bearing is the locality of the effective Hamiltonian $\hat H_F$. This quantity is defined from $\hat U(T,0)$ in (20) as $\hat U(T,0) = \exp(-i \hat H_F T)$ and is the generator of the stroboscopic time evolution. It has been speculated that for generic Floquet systems which are not MBL the effective Hamiltonian $\hat H_F$ is nonlocal [37,1]. Since it is generally believed that the spreading of correlations in systems with nonlocal static Hamiltonians is superballistic [38,39,40,41], one may wonder whether the diffusive transport we observe in this work points toward a local effective Hamiltonian. We would like to argue that, due to the ambiguity in the definition of the effective Hamiltonian, the question is not well posed. The ambiguity follows from the fact that $\hat U(T,0)$, and therefore all stroboscopically measured observables, are invariant under the transformation $\hat H_F \to \hat H_F + \omega \sum_\alpha n_\alpha\, |\alpha\rangle\langle\alpha|$, where $|\alpha\rangle$ are the eigenstates of $\hat U(T,0)$ and $n_\alpha \in \mathbb{Z}$. Since in general the projectors $|\alpha\rangle\langle\alpha|$ are nonlocal objects, this renders the locality of $\hat H_F$ a property which cannot be inferred from physical observables. While we believe that the discussion of the locality of $\hat H_F$ is irrelevant as far as physical observables are concerned, it remains an interesting question whether the ultimate fate of driven interacting systems is encoded in the locality of the corresponding Floquet operator $\hat U(T,0)$, which is a physical quantity and whose locality can be assessed, for example, using the operator entanglement entropy [42]. We leave this question for future work. | 4,377.8 | 2017-06-28T00:00:00.000 | [
"Physics"
] |
SPInS, a pipeline for massive stellar parameter inference: A public Python tool to age-date, weigh, size up stars, and more
Stellar parameters are required in a variety of contexts, ranging from the characterisation of exoplanets to Galactic archaeology. Among them, the age of stars cannot be directly measured, while the mass and radius can be measured in some particular cases (binary systems, interferometry). Stellar ages, masses, and radii have to be inferred from stellar evolution models by appropriate techniques. We have designed a Python tool named SPInS. It takes a set of photometric, spectroscopic, interferometric, and/or asteroseismic observational constraints and, relying on a stellar model grid, provides the age, mass, and radius of a star, among others, as well as error bars and correlations. We make the tool available to the community via a dedicated website. SPInS uses a Bayesian approach to find the PDF of stellar parameters from a set of classical constraints. At the heart of the code is a MCMC solver coupled with interpolation within a pre-computed stellar model grid. Priors can be considered, such as the IMF or SFR. SPInS can characterise single stars or coeval stars, such as members of binary systems or of stellar clusters. We illustrate the capabilities of SPInS by studying stars that are spread over the Hertzsprung-Russell diagram. We then validate the tool by inferring the ages and masses of stars in several catalogues and by comparing them with literature results. We show that in addition to the age and mass, SPInS can efficiently provide derived quantities, such as the radius, surface gravity, and seismic indices. We demonstrate that SPInS can age-date and characterise coeval stars that share a common age and chemical composition. The SPInS tool will be very helpful in preparing and interpreting the results of large-scale surveys, such as the wealth of data expected or already provided by space missions, such as Gaia, Kepler, TESS, and PLATO.
Introduction
Stellar ages, masses, and radii (hereafter stellar parameters) are indispensable basic inputs in many astrophysical studies, such as the study of the chemo-kinematical structure of the Milky Way (i.e. Galactic archaeology), exoplanetology, and cosmology. Indeed, stellar parameters have long been used to answer questions on how stars populating the different structures, that is the discs, bulge, and halo, in our Galaxy were formed and evolve, and to decipher in-situ formation, migration, and mergers. In this context, stellar parameters are the basis of stellar age-metallicity and age-velocity relations, the stellar initial mass function (IMF), or the stellar formation rate (SFR) [see Haywood (2014) for a review]. Also, the ages of the oldest stars provide a robust lower limit to the age of the Universe. Recently, with the discovery of several thousands of exoplanetary systems, it has become evident that no characterisation of the internal structure and evolutionary stage of planets is possible without a precise determination of the radius, mass, and age of the host stars (see e.g. Rauer et al. 2014).
Today, the availability of observations from large-scale astrometric, photometric, spectroscopic, and interferometric surveys has made the demand for very precise and accurate stellar parameters acute. With Gaia (Gaia Collaboration et al. 2018) and large spectroscopic surveys being conducted in parallel (see details and references in Sect. 2 of Miglio et al. 2017), the number of stars with precise astrometry, kinematics, and abundances will increase by more than three orders of magnitude. With high-precision photometry from space-borne missions such as CoRoT (Baglin et al. 2006), Kepler (Borucki et al. 2010), K2 (Howell et al. 2014), TESS (Ricker et al. 2015), and in the future PLATO (Rauer et al. 2014), thousands of exoplanets have been and will be discovered. For planetary host stars of F, G, K spectral type on the main sequence or on the red giant branch, it is and will be possible to extract the power spectrum of their solar-like oscillations from the observed light curve, providing asteroseismic constraints to their modelling. Asteroseismology, therefore, will give access to precise and accurate masses, radii, and ages for these stars (see e.g. Silva Aguirre et al. 2015). However, for cold M-type stars, which are optimal candidates for hosting habitable planets, the availability of asteroseismic constraints is less probable. In this context, to fully exploit these rich data harvests and ensure scientific returns, we need modern numerical tools that are able to infer the stellar parameters of very large samples of stars. Soderblom (2010) reviewed the various methods that can be applied to age-date a star, pointing out that a given method (i) in most cases is only applicable to a limited range of stellar masses or evolutionary stages, (ii) can provide either absolute or relative ages, and (iii) is sometimes only applicable to very small stellar samples. Moreover, the precision and accuracy tightly depend on the age-dating method. Here, we focus on the so-called isochrone placement method (Edvardsson et al. 1993) that has long been used for age-dating and weighing stars in extended regions of the Hertzsprung-Russell (hereafter H-R) diagram. This method only requires having stellar evolutionary models available and is rather straightforward. It can provide ages and masses when other more powerful techniques, such as asteroseismology, are not applicable. It can also serve as a reference when several age-dating methods are applicable. The precision depends on the star's mass and evolutionary state. Basically, the method consists in inferring the age and mass of an observed star with measured effective temperature, absolute magnitude, and metallicity (hereafter classical data), or any proxy for them, by looking for the theoretical stellar model that best fits the observations (e.g. Edvardsson et al. 1993; Ng & Bertelli 1998). The adjustment can be performed in different ways. The simplest way to proceed is to select the appropriate isochrone by a $\chi^2$-minimisation, that is by searching for the isochrone point that is closest to the star's location in the related parameter space (see e.g. Ng & Bertelli 1998). However, the selection of the right isochrone may be difficult in regions of the H-R diagram where the isochrones have a complex shape. In such regions, the evolutionary state of the star cannot be determined unambiguously and the star's position can equally be fitted with several isochrone points of different ages, masses, and metal contents.
To improve the age-dating procedure, Pont & Eyer (2004) proposed a Bayesian approach that, by adding prior information about the stellar and Galactic properties, allows the procedure to choose the most probable age. The technique has been refined and improved by Jørgensen & Lindegren (2005), da Silva et al. (2006), von Hippel et al. (2006), Takeda et al. (2007), Hernandez & Valls-Gabaud (2008), and Casagrande et al. (2011), for the earlier papers, and reviewed by Valls-Gabaud (2014) and von Hippel et al. (2014). In this context, the work by da Silva et al. (2006) gave birth to the PARAM web interface for the Bayesian estimation of stellar parameters. Since then, a few public stellar age-dating codes with different specificities have been released: BASE-9, which allows the users to infer the properties of stellar clusters and their members, including white dwarfs (von Hippel et al. 2006); UniDAM, which can be used to exploit large stellar surveys (Mints et al. 2019); stardate, which considers constraints from gyrochronology (Angus et al. 2019); and MCMCI, which is dedicated to the characterisation of exoplanetary systems (Bonfanti & Gillon 2020).
In this work, we present and make public 6 a new tool based on Python and Fortran, named SPInS (standing for Stellar Parameters Inferred Systematically). SPInS is a modified version of the AIMS (Asteroseismic Inference on a Massive Scale) pipeline 7 . The AIMS code has been described and evaluated by Lund & Reese (2018) and Rendle et al. (2019). AIMS is able to perform a full asteroseismic analysis and can estimate stellar parameters from two sets of observations: classical data and detailed asteroseismic constraints (individual oscillation frequencies or a combination thereof). While AIMS is essentially an asteroseismic tool, SPInS is not intended to handle detailed seismic data but rather focuses on classical or mean observed stellar data (these will be explained in the following sections). This greatly simplifies the procedure with a substantial gain in computational time and occupied disc space. In particular, SPInS only uses the standard outputs of stellar evolution models and does not need to be provided with the detailed calculations of the oscillation spectrum of the models.
SPInS was initially created in 2018 to be used in hands-on sessions during the 5th International Young Astronomer School held in Paris. The goal of SPInS is to estimate stellar ages and masses, as well as other properties and their error bars, in a probabilistic manner. This tool takes in a grid of stellar evolutionary tracks and applies a Monte Carlo Markov Chain (MCMC) approach in combination with a multidimensional interpolation scheme in order to find which stellar model(s) best reproduce(s) the observed luminosity L (or any proxy for it, such as the absolute magnitude in a given band $M_b$), effective temperature $T_{\rm eff}$ (or any colour index), and observed surface metal content [M/H]. The latter can be replaced or complemented by other data derived from observations, such as the surface gravity log g, the mass or radius, or both (for stars in eclipsing, spectroscopic, or visual binaries, or with interferometric measurements), or asteroseismic parameters (the frequency at maximum power, the mean large frequency separation inferred from the pressure-mode power spectrum, etc.). The advantage of this approach is that it provides a full probability distribution function (hereafter PDF) for any stellar parameter to be inferred, thereby accounting for multiple solutions when present. It also allows the user to incorporate in the calculation various priors (i.e. a priori assumptions), such as the initial mass function (IMF), the stellar formation rate (SFR), or the metallicity distribution function (MDF). SPInS can be used in two operational modes: characterisation of a single star or characterisation of coeval groups, including binaries and stellar clusters. SPInS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been written in Fortran in order to speed up calculations.
The paper is organised as follows. In Sect. 2, we explain the Bayesian approach used in SPInS. In Sect. 3, we present some results obtained with SPInS for a set of fictitious stars with particularly noticeable locations in the H-R diagram. In Sect. 4, we compare results obtained by SPInS with those derived by Casagrande et al. (2011). In Sect. 5, we show inferences on properties of stars observed either in interferometry or in asteroseismology. In Sect. 6, we use SPInS to study coeval stars belonging to a binary system and an open cluster. Finally, we draw some conclusions in Sect. 7.
Overview
SPInS uses a Bayesian approach to find the PDF of the stellar parameters from a set of observational constraints. At the heart of the code is a MCMC solver based on the Python EMCEE package (Foreman-Mackey et al. 2013) coupled to interpolation within a pre-computed grid of stellar models. This allows SPInS to produce a sample of interpolated models representative of the underlying posterior probability distribution. We recall that the posterior probability distribution can be obtained from the priors and the likelihood via Bayes' theorem, $P(\theta|O) \propto P(O|\theta)\,P(\theta)$, where O are the observational constraints and θ the model parameters. In other words, this theorem provides a way of calculating the probability distribution for model parameters given a set of observational constraints, as represented by the likelihood function P(O|θ), and priors P(θ). The next two sections will deal with these two terms in more detail.
The priors
The priors represent our a priori knowledge of how the model parameters should behave. For instance, one expects a higher number of low-mass stars than high-mass stars, and this can be expressed as a prior on the mass with a higher probability at low mass values. As described in the following subsections, the priors will apply to the following stellar parameters, given that we will be working with BaSTI stellar model grids (Pietrinferni et al. 2004, 2006): mass, age, and metallicity. Of course, the choice of model parameters that are included in the priors depends on the parameters that describe the grid being used with SPInS.
Initial mass function
The initial mass function (IMF) was first introduced by Salpeter (1955). It provides a convenient way of parametrising the relative numbers of stars as a function of their mass in a stellar sample (see e.g. the review by Bastian et al. 2010).
The number dN(m) of stars formed in the mass interval [m, m + dm] reads dN(m) = ξ(m) dm, where ξ(m) is the IMF. SPInS can handle two forms of the IMF: a one-slope version (Eq. 2) and a two-slope version (Eq. 3). The one-parameter version (Eq. 2) reproduces the IMF introduced by Salpeter (1955) for an appropriate choice of its slope (Eq. 4). The two-parameter version (Eq. 3) can be used to implement the canonical IMF from Kroupa et al. (2013, section 9.1), which is suitable for stars in the solar neighbourhood, with the corresponding slopes (Eq. 5). Any other form of the IMF can easily be added to the SPInS program.
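The explicit expressions (Eqs. 2-5) are not preserved in the extracted text; a plausible sketch of the standard forms, with Salpeter's and Kroupa's canonical slopes quoted from the literature rather than from the source, is
$$
\xi(m) \propto m^{-\alpha}
\quad \text{(one-slope version, Eq. 2; Salpeter: } \alpha = 2.35\text{)},
$$
$$
\xi(m) \propto
\begin{cases}
 m^{-\alpha_1}, & m_{\rm low} \le m < m_{\rm break},\\
 m^{-\alpha_2}, & m \ge m_{\rm break},
\end{cases}
\quad \text{(two-slope version, Eq. 3; Kroupa: } \alpha_1 = 1.3,\ \alpha_2 = 2.3,\ m_{\rm break} = 0.5\,M_\odot\text{)}.
$$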
Stellar formation rate
We restrict our working age domain to an upper limit of 13.8 Gyr, that is roughly the age of the Universe, and we used a uniform truncated stellar formation rate (SFR, Eq. 6): constant for ages between 0 and 13.8 Gyr, and zero beyond. This translates into a prior on the age of a star.
Metallicity distribution function
We assume that the metallicity measurements are or will be available for the stars we want to age-date. Therefore, we do not introduce an a priori assumption on what their metallicity should be (see the discussion in Jørgensen & Lindegren 2005). Thus, by default, we adopt a flat prior on the metallicity [M/H] distribution function (MDF). However, any prior on the MDF can be introduced into SPInS. As an example, in Sect. 4, we introduced the prior adopted by Casagrande et al. (2011) and given in their Appendix A to correct for metallicity biases found in the Geneva-Copenhagen Survey.
The likelihood function
The likelihood function is used to introduce observational constraints. Typically, these include constraints on classic observables, such as the luminosity, effective temperature, and metallicity. However, as will be shown in the following, constraints on other observables may be used, such as the absolute magnitude in any photometric band, colour indices, asteroseismic indices, the radius, and whatever parameters are available with the grid of models being used with SPInS (as described in Sect. 2.5). These constraints take the form of probability distributions on the value of the parameter. This leads to the following formulation for the likelihood function: $P(O|\theta) = \prod_i P_i(O_i)$, where the $P_i$ represent the probability distributions on individual parameters resulting from the observational constraints, and $O_i$ the values of those parameters obtained for a given set of model parameters θ. The probability distributions $P_i$ are typically normal distributions, although other options are available with SPInS.
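As an illustration of how such a posterior could be sampled (a minimal sketch using the EMCEE package mentioned above; the observables, their values, the IMF-like prior, and the stellar-model interpolator `grid_predict` are hypothetical placeholders, not SPInS code):

```python
import numpy as np
import emcee

# Hypothetical observational constraints: (value, sigma) for Teff [K], log L/Lsun, [M/H]
obs = {"teff": (5800.0, 70.0), "logL": (0.02, 0.05), "MH": (0.0, 0.1)}

def grid_predict(theta):
    """Placeholder for interpolation in a stellar-model grid: (mass, age, [M/H]) -> observables."""
    mass, age, mh = theta
    return {"teff": 5777.0 + 500.0 * (mass - 1.0),
            "logL": 0.4 * (age - 4.6) / 4.6,
            "MH": mh}

def log_prior(theta):
    mass, age, mh = theta
    if not (0.5 < mass < 10.0 and 0.0 < age < 13.8 and -2.27 < mh < 0.4):
        return -np.inf
    return -2.35 * np.log(mass)          # Salpeter-like IMF prior; flat in age and [M/H]

def log_likelihood(theta):
    model = grid_predict(theta)
    return -0.5 * sum(((model[k] - v) / s) ** 2 for k, (v, s) in obs.items())

def log_posterior(theta):
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

ndim, nwalkers = 3, 32
p0 = np.array([1.0, 4.6, 0.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)   # posterior samples of (mass, age, [M/H])
```

Marginal PDFs of any inferred parameter, and of derived quantities, follow directly from histograms of these samples.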
Variable changes
One of the features of SPInS is to allow variable changes. For instance, one may have observational constraints on $\sqrt{L}$ rather than L, or may have a prior on $\log_{10} M$ rather than M. SPInS allows such variable changes for a handful of elementary functions. Of course, such changes need to be taken into account in a self-consistent way; in other words, the underlying probability distribution should not be altered. Accordingly, variable changes on observed parameters are treated differently than those on model parameters. To understand this, we recall the relationship between probability distributions after a change of variables (Eq. 8). We then introduce this relation into Bayes' theorem and assume for simplicity that there is a single observed parameter and a single model parameter, with the prior and likelihood function applying to f(θ) and g(O), respectively, instead of θ and O. As can be seen from the resulting expression, a change of variables on an observed constraint does not lead to any modification of the way the probability is calculated, because the corrective terms cancel out. In contrast, applying a prior to a different variable than the one used in the grid requires multiplying by the term $\mathrm{d}f(\theta)/\mathrm{d}\theta$. SPInS accordingly takes this term into account using analytic derivatives of the elementary functions used in the variable change.
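The change-of-variables relation (Eq. 8) is the standard one from probability theory; written explicitly for a variable $y = f(x)$ (the notation here is ours, not necessarily that of the source),
$$
P_Y(y)\,\mathrm{d}y = P_X(x)\,\mathrm{d}x
\quad\Longleftrightarrow\quad
P_Y\bigl(f(x)\bigr) = P_X(x)\,\left|\frac{\mathrm{d}x}{\mathrm{d}y}\right| .
$$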
Grids of stellar models
SPInS can easily include any set of evolutionary tracks or isochrones available in the literature or calculated by the user. In this work, we used the BaSTI stellar evolutionary tracks available at http://albione.oa-teramo.inaf.it/index.html and described in Pietrinferni et al. (2004, 2006). We chose to use these data rather than more recent ones in order to make comparisons with previous works. In the BaSTI database, many sets of stellar tracks, all in the mass range $M \in [0.5\,M_\odot, 10\,M_\odot]$, are available. These models are well-suited to age-dating stars of different kinds: they cover evolutionary stages running from the zero-age main sequence (we do not use here the additional pre-main sequence grid provided for a narrower range of mass) to advanced stages, including the red-giant and horizontal branches, and a metal abundance range $Z \in [0.0001, 0.04]$, where Z is expressed in mass fraction. This interval of Z-values corresponds to number abundances of metals relative to hydrogen $[\mathrm{M/H}] \in [-2.27, +0.40]$, where $[\mathrm{M/H}] = \log(Z/X) - \log(Z/X)_\odot$ and X is the hydrogen mass fraction. The value of $(Z/X)_\odot$ depends on the solar mixture under consideration. GN93's solar mixture (Grevesse & Noels 1993) has a value $(Z/X)_\odot = 0.0245$.
On the BaSTI website, the following grids are available:
- Canonical grid: it corresponds to standard stellar models that do not include gravitational settling, radiative accelerations, convective overshooting, or rotational mixing, but otherwise are based on recent physics, as detailed in Pietrinferni et al. (2004). The models are based on GN93's solar mixture.
- Non-canonical grid: the difference with the canonical grid is that models in this grid account for core convective overshooting during the H-burning phase, which may have a non-negligible impact on age. As described in Pietrinferni et al. (2004), in these models convective mixing is extended over 0.2 pressure scale-heights above the Schwarzschild core for a stellar mass higher than $1.7\,M_\odot$, no overshooting is considered for a mass lower than $1.1\,M_\odot$, and a linear variation is assumed in-between.
For each evolutionary track in the BaSTI database, the variation of the luminosity, effective temperature, absolute magnitude $M_b$, and colour indices is provided as a function of the age and mass of the model star for several photometric systems. We here use the tracks given in the Johnson-Cousins system, which provide colours, but other photometric systems are available in the BaSTI database. All these quantities can be used indifferently in SPInS.
In addition, we considered four quantities that can be inferred from BaSTI models straightforwardly: the photospheric radius R calculated from the Stefan-Boltzmann law, the surface gravity (or its decimal logarithm log g), and the frequency at maximum amplitude $\nu_{\rm max,sc}$ and the mean large frequency separation of pressure modes $\Delta\nu_{\rm sc}$ expressed via asteroseismic scaling relations. The latter (Eqs. 10-12) are explained in Brown et al. (1994); Kjeldsen & Bedding (1995); Belkacem et al. (2011). In Eqs. 10, 11, and 12, G is the gravitational constant, M the mass of the star, $T_{\rm eff,\odot} = 5777$ K, the subscript 'sc' stands for scaling, and $\nu_{\rm max,\odot} = 3090\,\mu$Hz and $\Delta\nu_\odot = 135.1\,\mu$Hz are the solar values in Huber et al. (2011). In the following, $\nu_{\rm max,sc}$ and $\Delta\nu_{\rm sc}$ will be referred to as seismic indices.
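The scaling relations themselves (Eqs. 10-12) are not preserved in the extracted text; their standard forms, quoted from the general asteroseismology literature and thus an assumption about the exact expressions used here, are
$$
\nu_{\rm max,sc} = \nu_{\rm max,\odot}\,
\frac{M/M_\odot}{(R/R_\odot)^2 \sqrt{T_{\rm eff}/T_{\rm eff,\odot}}},
\qquad
\Delta\nu_{\rm sc} = \Delta\nu_\odot\,
\sqrt{\frac{M/M_\odot}{(R/R_\odot)^3}}
\;\propto\; \sqrt{\frac{G M}{R^3}} .
$$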
Interpolation in the grids
As was the case for the AIMS code, SPInS uses a two-step process for interpolation. This then allows the MCMC algorithm to randomly select any point within the relevant parameter space. The first part of the interpolation concerns interpolation between evolutionary tracks. The second part concerns interpolation along the tracks, that is as a function of age. These are described in the following subsections.
Interpolation between evolutionary tracks
Interpolation between evolutionary tracks amounts to interpolating in the parameter space defined by the grid parameters, excluding age. In this parameter space, each track corresponds to a single point. As a first step, a Delaunay tessellation is carried out for this set of points via the Qhull package 9 (Barber et al. 1996) as implemented in SciPy 10 . As a result, the parameter space is subdivided into a set of simplices (i.e. triangles in two dimensions, tetrahedra in three dimensions, etc.). Then, for any point within the convex hull of the tessellation, SPInS searches for the simplex which contains the point and carries out a linear barycentric interpolation on the simplex. The advantage of such an approach is that the grid of stellar models can be completely unstructured, thus providing SPInS with a greater degree of flexibility. Furthermore, fewer tracks are linearly combined during the interpolation process thus potentially saving computation time compared to multilinear interpolation in Cartesian grids of the same number of dimensions.
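To make this step concrete, here is a minimal sketch (not SPInS code; the grid points and values are made up) of linear barycentric interpolation on a Delaunay tessellation built with SciPy's Qhull wrapper:

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy track parameters (mass, [M/H]) and an associated quantity on each track (e.g. log Teff)
points = np.array([[0.8, -0.5], [1.0, -0.5], [1.2, -0.5],
                   [0.8,  0.0], [1.0,  0.0], [1.2,  0.0]])
values = np.array([3.74, 3.76, 3.80, 3.73, 3.75, 3.79])

tri = Delaunay(points)                      # Delaunay tessellation of the grid (Qhull)

def interpolate(x):
    """Linear barycentric interpolation at point x inside the convex hull."""
    x = np.asarray(x, dtype=float)
    simplex = int(tri.find_simplex(x.reshape(1, -1))[0])
    if simplex == -1:
        raise ValueError("point outside the convex hull of the grid")
    T = tri.transform[simplex]              # affine map to barycentric coordinates
    b = T[:2].dot(x - T[2])
    bary = np.append(b, 1.0 - b.sum())      # weights of the simplex vertices
    return values[tri.simplices[simplex]].dot(bary)

print(interpolate([0.93, -0.2]))            # value interpolated between neighbouring tracks
```

Because only the vertices of one simplex enter each interpolation, the grid may be completely unstructured, as stated above.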
Interpolation along evolutionary tracks
The second part of the interpolation focuses on age interpolation along evolutionary tracks. Interpolation along a track is achieved by simple linear interpolation between adjacent points thus leading to piecewise affine functions for the various stellar parameters as a function of age. What is more difficult is combining age interpolation with interpolation between tracks. As opposed to AIMS, SPInS uses two variables for the age: the physical age and a dimensionless age parameter. The purpose of the age parameter is to provide equivalent evolutionary stages on different tracks for the same value of this parameter. This then allows SPInS to combine models at the same evolutionary stage when interpolating between tracks thus improving the accuracy of the interpolation. Nonetheless, from the point of view of the MCMC algorithm, it is the physical age which is relevant, that is to say the MCMC algorithm will sample the physical age (thus bypassing the need for a corrective term as in Eq. 8). This is particularly important when fitting multiple coeval stars, that is with a common physical age. Hence, SPInS is constantly going back and forth between these two age variables. Sampling as a function of physical age while interpolating in terms of the age parameter is not straightforward as illustrated in Fig. 1. In the plot, the interpolated track is halfway between the two original tracks for fixed values of the age parameter (although, SPInS can, of course, also interpolate using other interpolation coefficients). The same is not true for fixed stellar ages: the interpolated track is not halfway between the original tracks for fixed stellar ages, as can be seen for instance with the vertical dashed line at the target age. Hence, one cannot simply find the age parameters on the original tracks for the target stellar age and interpolate between these to obtain the age parameter of the interpolated model. One solution would be to interpolate the entire track and then search for the age parameter directly on it. This is, however, not the most efficient approach computationally as most of the track is not needed and would probably considerably slow down SPInS given that this operation would need to be performed for each set of parameters tested by the MCMC algorithm. The solution implemented in SPInS consists in a dichotomic search as a function of the age parameter combined with a direct resolution once the interval is small enough to only contain a single affine section of the interpolated track. For the sake of efficiency, this part is written in Fortran.
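A minimal sketch of the dichotomic (bisection) search described above, assuming each track stores the physical age as a piecewise-linear function of a common dimensionless age parameter s (the names and data layout are illustrative, not taken from SPInS):

```python
import numpy as np

# Two neighbouring "tracks": physical age (Gyr) tabulated on a common age-parameter grid s
s_grid = np.linspace(0.0, 1.0, 11)
age_track_a = 10.0 * s_grid ** 1.5          # toy monotonic age(s) relations
age_track_b = 12.0 * s_grid ** 1.2
w = 0.5                                      # interpolation weight between the two tracks

def age_of_s(s):
    """Physical age on the interpolated track at age parameter s (combination at fixed s)."""
    return (1 - w) * np.interp(s, s_grid, age_track_a) + w * np.interp(s, s_grid, age_track_b)

def s_of_age(target_age, tol=1e-10):
    """Bisect on s until the bracket lies inside one affine segment (or is tiny), then solve linearly."""
    lo, hi = 0.0, 1.0
    while np.searchsorted(s_grid, lo) != np.searchsorted(s_grid, hi) and hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if age_of_s(mid) < target_age else (lo, mid)
    a_lo, a_hi = age_of_s(lo), age_of_s(hi)
    return lo + (hi - lo) * (target_age - a_lo) / (a_hi - a_lo)

s_star = s_of_age(6.0)
print(s_star, age_of_s(s_star))              # recovers the target physical age of 6 Gyr
```

This mirrors the idea that the MCMC samples the physical age while the interpolation itself is carried out at fixed age parameter.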
Fitting multiple stars
Fig. 1. Schematic plot illustrating how age interpolation works in SPInS. The two solid lines correspond to two neighbouring stellar evolutionary tracks which are involved in the interpolation. The horizontal hatch marks indicate that the interpolation takes place horizontally (i.e. models with the same age parameter rather than physical age are linearly combined). The dotted line shows the interpolated track. The vertical blue dashed line corresponds to the target age and the yellow dot to the interpolated model.
As explained earlier, SPInS can simultaneously fit multiple coeval stars, such as those expected in binary systems or stellar clusters. Accordingly, a set of model parameters is obtained for each star, with, however, the possibility of imposing common parameters such as age and metal content. Individual likelihood functions are defined for each star, whereas the same set of priors is applied to the model parameters for each star. For common parameters, the prior is only applied once, unless the user specifically configures SPInS to apply it to each star (which amounts to raising the prior to the power $n_{\rm stars}$, where $n_{\rm stars}$ is the number of stars). Hence, the overall posterior probability is obtained as the product of the likelihood functions and priors applied to the parameters of each star, apart from those of the common parameters, which are only applied once. Finally, for stellar samples sharing the same age, an isochrone file may be produced covering the whole mass interval spanned by the stellar models used by SPInS. Fitting multiple stars is advantageous as it can lead to tighter constraints on common parameters (e.g. Jørgensen & Lindegren 2005).
Typical calculation times
Computation times depend on a number of factors, such as the number of stars being fitted, $n_{\rm stars}$, and the number of dimensions of the grid (excluding age), $n_{\rm dim}$, as well as on various MCMC parameters such as the number of iterations (both burn-in and production), $n_{\rm iter}$, the number of walkers, $n_{\rm walk}$, and the number of temperatures if applying parallel tempering, $n_{\rm temp}$. Typical computation times for individual stars, $(n_{\rm stars}, n_{\rm dim}, n_{\rm iter}, n_{\rm walk}, n_{\rm temp}) = (1, 2, 400, 250, 10)$, are of the order of 1 min when using four processes on a Core i7 CPU. When fitting 92 stars simultaneously from the Hyades cluster using age and metallicity as common parameters, $(n_{\rm stars}, n_{\rm dim}, n_{\rm iter}, n_{\rm walk}, n_{\rm temp}) = (92, 2, 600, 250, 10)$, the computation time was around 1.5 hours. However, convergence is slower in such a situation given the higher number of dimensions from the point of view of the MCMC algorithm. Hence, 20 000 burn-in plus 200 production iterations were needed, leading to a computation time of roughly 75 to 150 hours using four processes (although this was carried out on a slightly slower processor).
[Caption fragment near Table 1: BaSTI stellar evolution tracks, some of which are labelled by their initial mass, are shown, as well as isochrones, the ages of which are indicated in the legend starting from the youngest.]
Parameter inference for a set of fictitious stars
As a case study, corresponding to the most common demand for age and mass inference on a survey-wide scale, we first focus on determining the properties of a small set of fictitious stars with solar metallicity, spread across the H-R diagram. The stars' properties (luminosity, effective temperature, metallicity, and their error bars) used as inputs to SPInS are listed and explained in Table 1 and its caption. The positions of the stars in the H-R diagram are shown in Fig. 2. We used the solar-scaled non-canonical BaSTI stellar model grid (cf. Sect. 2.5), as well as the IMF from Kroupa et al. (2013) given in Eqs. 3 and 5, a uniform truncated SFR (Eq. 6), and a uniform MDF, as priors. For each star, the inferred age and mass are listed in Table 1. Depending on the position of a star in the H-R diagram, the solution may be subject to an important degeneracy which is revealed in the posterior probability distribution function (see the thorough discussions in, e.g. Jørgensen & Lindegren 2005; Takeda et al. 2007). We show in Fig. 3 several PDFs, each illustrating a different typical morphology, which we examine below.
- Firstly, low-mass stars on the main sequence (hereafter MS) lie in a region where the isochrones are crowded and can thus be fitted by practically any isochrone. This is the case of star SF1, close to the Sun, whose age is very ill-defined (see the PDF in the top row, left panel of Fig. 3).
- Secondly, more massive stars, either on the MS or fully installed on the subgiant branch, have a rather well-defined age and their PDF shows a single peak. This is the case for star SF12 shown in the top row, right panel of Fig. 3, but also for MS stars SF3, SF8, SF11, SF14, SF15, SF17, and subgiants SF5 and SF19. For stars close to the zero-age MS such as star SF2, that is barely evolved stars, the one-peak PDF (not shown) is very asymmetric. It is truncated close to age 'zero' of the evolutionary tracks, meaning that for these stars, tracks including the pre-main-sequence phases should be used. Moreover, since the region where star SF2 lies is still crowded with isochrones, its PDF shows a very long tail towards high ages, up to about 8 Gyr.
- Thirdly, for stars of mass M ≳ 1.2 M⊙, which had a convective core during the MS and are now lying in the so-called hook region, in the vicinity of the end of the MS, several ages are possible. However, these ages are not equally probable because of the different amounts of time spent either before the red point of the evolutionary track (i.e. the minimum of T eff on the hook), or in the second contraction phase before the blue point (i.e. the maximum of T eff on the hook), or at the very beginning of the subgiant phase. This translates into a PDF generally showing two peaks, as can be seen in the bottom row, left panel of Fig. 3 for star SF7; the same behaviour is seen for stars SF4, SF9, and SF18. For star SF20, located close to the helium-burning region where the star undergoes blue loops, the PDF also shows two peaks, one of them being very weak.
- Fourthly, stars lying close to the red giant branch either show a more or less well-defined peak in their PDF (such as stars SF13 and SF16) or a flattened PDF (such as star SF6, shown in the bottom row, right panel of Fig. 3, and star SF10).
Several indicators of a parameter, for instance the age, can be used. In Fig. 3, we show the median, the mean, and the posterior mode values given by SPInS for stars SF1, SF6, SF7, and SF12. The estimator of the mode is the maximum a posteriori (MAP). However, in the case of star SF7, the age PDF is multimodal, showing two maxima. Figure 3 shows that the mode provided by SPInS is close to the second maximum when, intuitively, one would have taken the mode to be the value at the maximum of the higher peak. This is because the mode, as calculated by SPInS, corresponds to the parameter set yielding the maximum posterior probability given the observations (see Eq. 1) obtained in the grid's parameter space (i.e. in the mass-age-metallicity space for this particular example). In contrast, the histogram showing the age PDF is obtained after an integration (i.e. marginalisation) with respect to mass and metallicity. For star SF7, the secondary peak in the age histogram corresponds to a higher but narrower peak (especially in terms of the variables mass and metallicity) in the original grid parameter space, thus explaining why it is lower after marginalisation. On the other hand, the mass is well-determined for all stars, with PDFs mostly presenting one or two peaks.
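The distinction between the MAP and the marginalised mode can be made concrete with a few lines of code; this is a schematic example on a regular (mass, age, metallicity) grid with hypothetical variable names, not SPInS's actual internals:

```python
import numpy as np

def map_and_marginal_mode(posterior, mass_grid, age_grid, feh_grid):
    """`posterior` is a 3-D array of (unnormalised) posterior values on the
    (mass, age, metallicity) grid.  The MAP is the grid point maximising the
    full 3-D posterior, whereas the marginal age PDF is obtained by summing
    over mass and metallicity; its peak may differ from the age of the MAP
    point when a lower but broader ridge dominates after marginalisation."""
    i, j, k = np.unravel_index(np.argmax(posterior), posterior.shape)
    map_estimate = (mass_grid[i], age_grid[j], feh_grid[k])
    age_pdf = posterior.sum(axis=(0, 2))      # marginalise over mass and [Fe/H]
    marginal_mode_age = age_grid[np.argmax(age_pdf)]
    return map_estimate, marginal_mode_age
```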
Parameter determination for stars in the Geneva-Copenhagen Survey
In this section, we aim at testing and validating the SPInS tool by characterising the stars of the Geneva-Copenhagen Survey (GCS). The GCS is a compilation of observational and stellar model-inferred properties of stars belonging to the solar neighbourhood. The first version of the GCS was presented and made public by Nordström et al. (2004, hereafter GCS04). It provides a complete, magnitude-limited (V < 8.3), and kinematically unbiased sample of 16 682 nearby F and G dwarf stars. Many data can be found in the catalogue, of which the Hipparcos parallax, metallicity, effective temperature, and Johnson V-magnitude are of interest here. Later on, the data in the catalogue were assessed and refined by Holmberg et al. (2007). Then, Casagrande et al. (2011, hereafter GCS11) improved the accuracy of the effective temperatures on the basis of the infrared flux method, and consequently improved the metallicity scale. They also provided a proxy for the [α/Fe] ratio and the reddening E(B-V). Each version of the GCS also provides the age and mass of the stars, and related uncertainties, derived by means of a Bayesian analysis.
In order to compare the results of SPInS with those of Casagrande et al. (2011), we used SPInS to determine the ages and masses of stars in the GCS. We adopted, as far as possible, the assumptions made by these authors, namely:
- We used the following observational constraints for each star: logarithm of the effective temperature log T eff, absolute V-magnitude in the Johnson band M V, and metallicity [M/H] (not [Fe/H]).
- We took the same source for stellar models, that is the solar-scaled canonical BaSTI grid described in Sect. 2.5, taken from their website (Pietrinferni et al. 2004). However, the grid used by Casagrande et al. (2011) was specially prepared and is finer than the one available on the website. Furthermore, in contrast with Casagrande et al., we did not use the isochrones but the evolutionary tracks, which are the direct products of stellar evolution calculations.
- As was done in Casagrande et al. (2011), we adopted stellar evolution models calculated for a solar-scaled mixture, but we re-scaled the metallicity to mimic the α-element enrichment. Since Casagrande et al. (2011) do not explicitly give the re-scaling relation they adopted, we adopted the most commonly used relation derived by Salaris et al. (1993) 11. However, we checked that only minor differences are obtained if the re-scaling of Nordström et al. (2004) 12, applied in the GCS04, is used instead.
- To calculate the absolute magnitude M V of each star, we used the Johnson V-magnitude provided in the GCS09 and the Hipparcos parallax provided in the GCS11 (i.e. the so-called new Hipparcos reduction from van Leeuwen 2007). We corrected the absolute magnitudes for the effects of extinction following Cardelli et al. (1989), with E(B-V) taken from the GCS11 (see the sketch after this list).
- We assumed a Gaussian distribution in log T eff, M V, and [M/H]. We therefore did not implement in SPInS the particular treatment of the magnitude distribution adopted by Casagrande et al. (2011) to take into account the skewness of the magnitude distribution that appears when the relative parallax error exceeds 10 per cent. Therefore, the present comparisons will not be valid for stars with high parallax errors.
- We adopted the same priors on the IMF, SFR, and MDF. More precisely, we took the IMF from Salpeter (1955) as given by Eqs. 2 and 4, a uniform truncated SFR (Eq. 6), and the prescription of Casagrande et al. (2011) for the particular MDF of the stars in the GCS11 (see their Appendix A).
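As referred to in the list above, the sketch below shows the standard way of converting an apparent V magnitude, a parallax and a reddening into an extinction-corrected absolute magnitude (distance modulus plus A_V = R_V E(B-V), with R_V = 3.1 as in Cardelli et al. 1989); the function name is illustrative and the exact numerical treatment applied to the GCS stars may differ.

```python
import numpy as np

def absolute_v_magnitude(v_apparent, parallax_mas, ebv, r_v=3.1):
    """Absolute V magnitude from the apparent magnitude, the parallax
    (in milliarcseconds) and the reddening E(B-V), using the distance
    modulus and A_V = R_V * E(B-V)."""
    distance_pc = 1000.0 / parallax_mas
    a_v = r_v * ebv
    return v_apparent - 5.0 * np.log10(distance_pc / 10.0) - a_v
```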
Ages
We present in Fig. 5 the age residuals between the ages obtained by SPInS (mean ages) and those of the GCS11 (referred to as 'expected' ages in their terminology, but which also correspond to mean values). The ages of 14 757 stars out of 16 682 could be determined. For the remaining stars, either the parallax, effective temperature, or metallicity was unavailable. In Fig. 5 (left panel), we only retained the stars for which the age-dating is considered to be of good quality under the criteria of Casagrande et al. (2011), that is the relative error on age is lower than 25 per cent and the absolute age error is lower than 1 Gyr. This represents a total of 5040 stars. The comparison is very satisfactory since the differences in ages between SPInS and the GCS11 mostly remain lower than 25 per cent. In Fig. 5 (right panel), we considered all stars with a determined age and a parallax error lower than 10 per cent (to minimise the possibility of a skewness in the V-magnitude distribution, see the discussion in Sect. 4 above). Even if the differences between the SPInS and GCS11 ages are larger in that case, there are only 503 stars out of 10 865 (that is, less than 5 per cent) showing an age difference larger than 25 per cent.
In Fig. 6, we show the mass residuals between the masses (mean values) derived by SPInS and those in the GCS11 (expected values corresponding to mean ones). As for the ages, the masses of 14 757 stars out of 16 682 could be determined. In the figure, we only retained stars for which we consider the mass to be of good quality, that is the relative error on mass is lower than 10 per cent. This represents a total of 12 704 stars. The comparison is very satisfactory since the differences in mass between SPInS and GCS11 values mostly remain lower than 10 per cent. The disparity between SPInS and GCS11 results is less pronounced for the masses than for the ages because, for a given set of stellar evolutionary tracks, the mass degeneracy in the Hertzsprung-Russell diagram is less marked than the degeneracy affecting the ages, in particular at the end of the main sequence.
11 In Salaris et al. (1993), the solar-scaled Z is re-scaled as Zα according to Zα = Z × (0. […]
Radii, surface gravities, and mean seismic parameters
In addition to the ages and masses of the stars in the GCS11, SPInS provided several interesting stellar properties related to stellar evolutionary tracks or that can easily be derived from them. In particular, we show in Fig. 7 the Kiel diagram (log T eff − log g), the mass-radius relation, and the asteroseismic diagram (ν max,sc − ∆ν sc) based on seismic indices. The discussion of the results is beyond the scope of this paper, although some well-known trends can be highlighted, in particular the position of stars as a function of their metallicity in the Kiel diagram, resulting from their different internal structures. It is worth pointing out that the combination of such diagrams, involving large stellar samples, can be very valuable when used in studies of Galactic Archaeology (Miglio et al. 2009), thus making SPInS a very interesting tool in this respect.
Stars observed in interferometry
With interferometry, the angular diameters of stars can be measured, which in turn gives direct access to their radii, provided their distance is known. These measurements are therefore independent of stellar models (except for the limb-darkening of the stellar disc, which has to be corrected for on the basis of stellar model atmospheres). Ligi et al. (2016) obtained the radii of 18 stars (eleven of them being exoplanet hosts) from interferometry, together with their bolometric fluxes from photometry, which allowed them to infer the effective temperatures. Starting from these data, metallicities taken from the literature, and model isochrones, they applied a Bayesian method with flat priors to infer the mass and age of each star.
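For reference, the two basic relations underlying such measurements, converting a limb-darkened angular diameter and a distance into a linear radius, and a bolometric flux plus an angular diameter into an effective temperature, can be sketched as follows (illustrative Python with standard physical constants; Ligi et al.'s actual processing is more involved):

```python
import numpy as np

SIGMA_SB = 5.670374e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]
PC_IN_M = 3.0857e16        # one parsec in metres
R_SUN_M = 6.957e8          # solar radius in metres
MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def radius_from_interferometry(theta_ld_mas, distance_pc):
    """Linear radius (in solar radii) from the limb-darkened angular
    diameter (mas) and the distance (pc): R = (theta/2) * d."""
    theta_rad = theta_ld_mas * MAS_TO_RAD
    return 0.5 * theta_rad * distance_pc * PC_IN_M / R_SUN_M

def teff_from_fbol_and_theta(f_bol, theta_ld_mas):
    """Effective temperature from the bolometric flux received at Earth
    [W m^-2] and the angular diameter: F_bol = sigma * Teff^4 * (theta/2)^2."""
    theta_rad = theta_ld_mas * MAS_TO_RAD
    return (4.0 * f_bol / (SIGMA_SB * theta_rad**2)) ** 0.25
```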
By using Ligi et al.'s radii, effective temperatures, and metallicities as input constraints for SPInS, we inferred the masses and ages of the stars. We used the solar-scaled non-canonical BaSTI grid including convective core overshooting, and we assumed flat priors on mass and metallicity, but a uniform truncated prior on age (Eq. 6). In Fig. 8, we compare SPInS masses and ages with the values from Ligi et al. (2016). Overall, the comparison is satisfactory, except for one star, HD 167042. If we exclude this star, the mean mass (respectively age) difference is 4 (respectively 19) per cent and the maximum difference is 15 (respectively 84) per cent. For HD 167042, shown with pink diamonds in Fig. 8, SPInS's mass is much smaller than the one found by Ligi et al. (2016) while SPInS's age is much larger. Understanding the origin of the difference is beyond the scope of this paper. However, Ligi et al. pointed out that, for this star, their results are not consistent with the models. Moreover, we point out that it may currently be difficult to characterise this K1IV subgiant. Indeed, doubts remain about its effective temperature, which is found to be 4547 ± 49 K when combining interferometry and photometry (Ligi et al. 2016) and 4983 ± 10 K when using high-resolution spectroscopy (Maldonado & Villaver 2016).
In the future, large samples of stars with angular radii measured by interferometry will be available. In particular, the CHARA/SPICA project 13, based on the new visible interferometric instrument CHARA/SPICA currently under design (Mourard et al. 2018), aims at constituting a homogeneous catalogue of about a thousand angular diameters of stars spanning the whole H-R diagram, including hosts of exoplanetary systems and stars observable in asteroseismology. SPInS will enable a rapid characterisation of the fundamental parameters of these stars (mass, age), thus opening the way to an in-depth analysis of their internal structure, planet characterisation, etc.
13 https://lagrange.oca.eu/fr/spica-project-overview
Solar-like oscillators
SPInS has not been designed for the purpose of delivering a precise asteroseismic diagnosis. To perform a detailed asteroseismic analysis, one can use, for instance, the public AIMS tool described in Lund & Reese (2018) and Rendle et al. (2019). However, when individual frequencies cannot be extracted from the pressure-mode oscillation spectrum, SPInS can give some characteristics of a star provided the seismic indices ν max,obs or ∆ν obs, or both, have been estimated from observations. Indeed, SPInS can take ν max,obs and ∆ν obs as input constraints which, through the scaling relations (Eqs. 11 and 12), provide hints on the stellar mass and radius, provided the effective temperature is known. In the following, we study two stellar samples (an artificial one and a real one), for which SPInS inferences of mass, radius, and age based on ν max,obs and ∆ν obs can be compared with the results of careful inferences based on individual oscillation frequencies.
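For completeness, a commonly used inversion of the scaling relations is sketched below (e.g. Kjeldsen & Bedding 1995); the solar reference values given as defaults are typical literature choices and may differ slightly from those adopted in Eqs. 11 and 12.

```python
def mass_radius_from_scaling(nu_max, delta_nu, teff,
                             nu_max_sun=3090.0, delta_nu_sun=135.1,
                             teff_sun=5777.0):
    """Invert the usual asteroseismic scaling relations,
        nu_max  ~ M R^-2 Teff^-1/2   and   delta_nu ~ M^1/2 R^-3/2,
    to obtain the mass and radius in solar units.  Frequencies are in muHz,
    the effective temperature in K."""
    nu_r = nu_max / nu_max_sun
    dnu_r = delta_nu / delta_nu_sun
    t_r = teff / teff_sun
    radius = nu_r * dnu_r**-2 * t_r**0.5
    mass = nu_r**3 * dnu_r**-4 * t_r**1.5
    return mass, radius
```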
Artificial stars: Reese et al. (2016)'s hare and hounds sample
As a first case study, we consider ten artificial stars, built and studied in the hare-and-hounds exercise of Reese et al. (2016). To build each star, a stellar model was calculated for a given mass and age. Then, starting from this model, a hare group simulated observational quantities of an artificial star, that is its oscillation frequencies and classical parameters. Finally, the results were communicated to several hound teams, who applied distinct optimisation methods to characterise the stars on the basis of these constraints. We list the properties of the ten stars in Table 2. Their positions in the H-R diagram are shown in Fig. 1 of Reese et al. (2016). We applied SPInS to these stars taking their 'observed' 14 luminosity, effective temperature, metallicity, ν max,obs, and ∆ν obs as input constraints. We used the solar-scaled non-canonical BaSTI grid including convective core overshooting and we took flat priors on mass, age, and metallicity. In Fig. 9, we show how the masses and radii inferred both by SPInS and by the teams that participated in the exercise of Reese et al. (2016) reproduce the true properties of the artificial stars. With SPInS, the mean differences with respect to the artificial stars are 2.8 per cent on the predicted mass and 1.1 per cent on the radius. The maximum differences are for Blofeld and, to a lesser extent, Diva. For Blofeld, the difference is of about 11 per cent on mass and of 3.7 per cent on radius. It is worth noting that both of these simulated stars have the same mass (M obs = 1.22 M⊙) and are both on the subgiant branch, but have different chemical compositions. Moreover, the BaSTI grid models (Pietrinferni et al. 2004) have not been calculated with the same input physics and parameters as Reese et al. (2016)'s models. Moreover, Blofeld includes atomic diffusion, a different solar mixture, and a truncated atmosphere. We also note that Diva is one of the least well-fitted stars in Reese et al. (2016).
Table 2. Simulated properties for the set of artificial stars of Reese et al. (2016). These properties are taken from their Table 1, except for ∆ν obs, which we calculated as the least-squares mean of the individual frequencies of radial modes given in […]
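The least-squares estimate of ∆ν mentioned in the caption of Table 2 can be obtained, for instance, as the slope of a straight-line fit to the radial-mode frequencies as a function of radial order; the snippet below shows an unweighted version of such a fit (the exact weighting used by the authors is not specified here).

```python
import numpy as np

def large_separation(radial_orders, frequencies):
    """Estimate the large frequency separation Delta_nu as the slope of a
    least-squares straight-line fit nu_n ~ Delta_nu * (n + eps) to the
    radial-mode (l = 0) frequencies."""
    slope, _intercept = np.polyfit(radial_orders, frequencies, 1)
    return slope
```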
Overall, as can be seen in Fig. 9, except for the cases of Blofeld and Diva, which in all likelihood result from identified differences in stellar models, the masses and radii inferred by SPInS compare very well with those inferred from a thorough asteroseismic diagnosis based on individual oscillation frequencies. This confirms the power of the scaling relations to quite reasonably infer the mass and radius of solar-like oscillators (Chaplin et al. 2014).
As for the age, we show in Fig. 9 that the situation is not as good. Indeed, the scaling relations do not constrain this parameter tightly, and the age inference is highly sensitive to the input physics of the stellar models (see e.g. […]). With SPInS, we find a mean difference of 28 per cent on age for the ten stars, while the mean difference obtained with the pipelines in Reese et al. (2016) is 23 per cent. We get a maximum difference on age of 143 per cent for Blofeld. Even if, in this particular study, the ages are very well recovered by SPInS for seven artificial stars out of ten, real stars host far more subtle physical processes than stellar models are able to describe. Therefore, individual oscillation frequencies, if available, or some combinations thereof, should always be preferred to the seismic indices when a precise and accurate age estimate is being sought (see for instance the study of the CoRoT target HD 52265 by […]).
We also would like to point out that the scaling relations are much less efficient at predicting masses, radii, and ages when the luminosities of the stars are not known. This was checked by applying SPInS to the ten stars using only T eff , [Fe/H], ν max,obs , and ∆ν obs as input constraints and removing the constraint on luminosity. In that case, the mean errors on the predicted masses, radii, and ages are higher, with values of 6, 2, and 64 per cent respectively and maximum errors of 19, 7 and 300 per cent for Blofeld. This favours combining all possible classical and asteroseismic parameters to characterise stars, and reinforces the need for precise luminosities from the Gaia mission and radii from interferometry or eclipsing binary light curves.
Real stars: the Kepler LEGACY sample
In the same vein, we now consider 66 stars belonging to the Kepler seismic LEGACY sample (e.g. Lund et al. 2017). Each star has at least 12 months of Kepler short-cadence data. Therefore, these stars are among the solar-like oscillators observed by Kepler that have the highest signal-to-noise ratios. As a consequence, their individual oscillation frequencies inferred by Lund et al. (2017) are among the most precise to-date for solar-like pulsators while their effective temperatures and metallicity are also available. Silva Aguirre et al. (2017) performed a thorough modelling of the stars, with different optimisation methods implemented in six pipelines. All pipelines took into account the complete set of oscillation frequencies, either individual frequencies or frequency separation ratios, or a combination thereof (see Silva Aguirre et al. 2017, for details).
We have analysed these stars with SPInS in a simplified way, by considering as observational constraints T eff, [Fe/H], log g, ∆ν, and ν max. We used the solar-scaled non-canonical BaSTI grid including convective core overshooting. As for the priors, we adopted the two-slope IMF from Eq. 3 with Kroupa et al. (2013)'s coefficients (Eq. 5), a uniform truncated SFR (Eq. 6), and a flat prior on the MDF. We then compared SPInS inferences with those reported in Silva Aguirre et al. (2017).
In the left panel of Fig. 10, we show the residuals (R SP − R LEG) between the radius of each star inferred by SPInS (R SP) and the values R LEG obtained from full optimisations as reported in Silva Aguirre et al. (2017). In the right panel of Fig. 10, we show the residuals (M SP − M LEG) between the masses. If we exclude the result of the GOE pipeline for star KIC 7771282 (shown by pink diamonds at R SP ≈ 1.66 R⊙, R LEG,GOE ≈ 1.4 R⊙ and at M SP ≈ 1.3 M⊙, M LEG,GOE ≈ 0.8 M⊙ in the panels of Fig. 10), which is well outside the range found by the others, the maximum differences on the radius between SPInS and the six pipelines, over the 66 stars, range from 3.5 to 9 per cent, while the mean differences are in the range of 1.3-2.5 per cent. As for the mass, maximum differences are in the range of 16-26 per cent, while mean differences are in the range of 5-6.5 per cent. Finally, for the ages, we find larger mean differences, ranging from 25 to 32 per cent. To get a clearer picture, if we consider the objectives of the PLATO mission (Rauer et al. 2014), that is to reach uncertainties of less than 2 per cent on the radius, 10 per cent on the mass, and 10 per cent on the age of an exoplanet host star in order to be able to characterise its exoplanet correctly, there are three stars for which SPInS's radius is outside the interval corresponding to the extreme values provided by the six pipelines by more than 2.5 per cent, no star with a mass outside the pipeline mass range by more than 10 per cent, and 16 stars with SPInS's age outside the pipeline age range by more than 15 per cent. Therefore, overall, SPInS's results compare rather satisfactorily with tight asteroseismic inferences of stellar radii and masses, even if ∆ν sc and ν max,sc do not perfectly represent the observations. Indeed, the ability of ∆ν sc to reproduce ∆ν obs relies on asymptotic developments and, as estimated by Belkacem et al. (2011, 2013), the overall departures between the two can reach up to 5 per cent on the main sequence. Regarding the seismic index ν max,sc, the relation between ν max,sc and ν max,obs is not straightforward. It is intimately related to the acoustic cut-off frequency, a function of T eff and log g, but other properties of the surface layers also play a role, which generates biases, in particular on the main sequence because of the T eff dispersion (see Balmforth 1992; Chaplin et al. 2008; Belkacem et al. 2011, 2013). As a consequence, as pointed out by Silva Aguirre et al. (2015), the use of scaling relations when individual frequencies are unavailable may lead to wrong estimates of the radius and the mass. Concerning the ages, predictions based on seismic relations are very coarse and SPInS's ages are often far from being a tight inference, which lends more credence to the words of caution of Chaplin et al. (2014) regarding age estimates based on scaling relations.
To summarise, SPInS can be a very efficient tool for ensemble asteroseismology (e.g. Chaplin & Miglio 2013), that is, to size up, weigh, and age-date large samples of stars with observed values of ∆ν obs and ν max,obs. However, the reliability of the results will depend on the mass and evolutionary state of the studied stars, and as such must be carefully assessed before drawing any conclusions. Furthermore, SPInS can also be very useful in providing first estimates of a star's mass and age, to be used as initial conditions for refined optimisation methods based on individual oscillation frequencies.
We point out that the asteroseismic indices used in SPInS do not necessarily have to be obtained via scaling relations. Instead, they can be calculated with stellar oscillation codes, thus increasing their accuracy, and supplied along with the global properties of the model. This opens up the possibility of using many different types of seismic indicators such as the large and small frequency separations for pressure modes, frequency separation ratios, and the period spacings for gravity modes. With these quantities, deciphering the age or evolutionary state in advanced stages is accessible with SPInS, thus paving the way to further in-depth studies based on individual oscillation frequencies.
Fig. 9. Comparison of the masses, radii, and ages inferred by different techniques with the true values of the ten artificial stars of Reese et al. (2016). Full symbols with error bars correspond to SPInS inferences with ν max,obs and ∆ν obs taken as seismic constraints. Open symbols are for the results of eight pipelines where a full seismic diagnosis based on individual frequencies has been performed (see Reese et al. 2016, for details).
Parameter determination for coeval stars
One attractive feature of the SPInS tool is its ability to deal with stellar groups sharing some common properties. Well-known examples are stars that are members of stellar clusters or of binary systems, for which it can be assumed that they share the same age and initial chemical composition. We illustrate the performance of SPInS with a couple of case studies below.
Stellar clusters
As a case study, we consider the Hyades, which has long been known to be the nearest open cluster, hosting about 300 members. Hipparcos made it possible to determine secure individual parallaxes of ∼ 300 Hyades members, and the cluster remained the only one for which this was true until the delivery of Gaia DR1 (first data release, Gaia Collaboration et al. 2017). The distance to the centre of mass of the Hyades, as determined by Perryman et al. (1998) from Hipparcos data, is 46.34 ± 0.27 pc, based on 134 stars within 10 pc of the centre. Later on, Dravins et al. (1997) […] metric ground-based measurements. Lebreton et al. (2001) then estimated the age of the cluster from eye-fitting of isochrones calculated with a metallicity of [Fe/H] = 0.14 ± 0.05, as determined by Cayrel de Strobel et al. (1997). They derived an age of 625 Myr (respectively 550 Myr) on the basis of stellar models with (respectively without) convective core overshooting, calculated with the Cesam code (Morel & Lebreton 2008).
Starting from the same observed sample as used by Lebreton et al. (2001), we re-inferred the properties of the cluster using SPInS. As for the priors, we took the IMF from Salpeter (1955) as given by Eqs. 2 and 4, a uniform truncated SFR (Eq. 6), and a flat prior on the MDF. The colour-magnitude diagram of the Hyades is presented in Fig. 11. The age of the cluster is found to be 640 ± 7 Myr on the basis of BaSTI stellar models including convective core overshooting, while it is 543 ± 6 Myr if models with no overshooting are used instead. These results are in excellent agreement with those obtained by Lebreton et al. (2001). Furthermore, the common metallicity of the cluster stars inferred by SPInS is [M/H] = 0.094 ± 0.003 with overshooting, while it is [M/H] = 0.092 ± 0.003 without overshooting. These values are lower than the observed value of 0.14 ± 0.05, but remain within the error bars.
As a by-product, the age-dating of the 92 coeval cluster members has also provided inferences on their individual masses. We therefore drew the mass-luminosity relation (hereafter M-L relation) of the cluster, as shown in Fig. 12. Furthermore, it is possible to compare this relation independently with the observed one. Indeed, there are several binary systems in the Hyades whose dynamical masses have been derived from orbit analysis. We have inventoried eight binary systems. Five of them (HIP 20019, HIP 20087, HIP 20661, HIP 20885, HIP 20894) have been studied for several decades now. Their M-L relation has been compared with the results of stellar models by Lebreton et al. (2001) and revisited by Torres (2019). Also, a few years ago, Beck et al. (2015) detected solar-like oscillations in the giant star HIP 20885A (θ1 Tau A). From the oscillation power spectrum, they inferred the large frequency separation and the frequency at maximum power, which allowed them to improve the precision on the star's mass. More recently, the properties of two new systems have been derived by G. Torres and collaborators: the binary system 80 Tau, that is HIP 20995 (Torres 2019), and the triple system HIP 20916. Also, Halbwachs et al. (2016, 2020) obtained the individual masses of the components of HIP 20601, combining interferometry with the PIONIER instrument at ESO's VLTI and spectroscopy with the SOPHIE spectrograph at the Haute-Provence Observatory. We therefore have in hand 17 stars with known individual masses. Their positions in the M-L plane shown in Fig. 12 fit very well the M-L relation provided by SPInS. Conversely, we ran SPInS with this sample of 17 stars, taking this time their mass, absolute V-magnitude, and metallicity as observational inputs, with the constraint that they share the same age and metallicity. SPInS provided an age of 615 ± 95 Myr using the solar-scaled non-canonical BaSTI grid. This age, although less precise than the age of 640 ± 7 Myr derived from the colour-magnitude diagram positions of the 92 Hipparcos stars because it is based on a smaller sample with poor coverage of the MS turn-off, is nevertheless in very good agreement with it. SPInS therefore offers many possibilities for studying and comparing coeval ensembles and can be very interesting for all kinds of studies of the dynamics and evolution of the Galaxy.
Binary stars
Binary systems have long provided solid tests of stellar evolution theory, particularly when their components are sufficiently far apart not to undergo mass transfer, since they consist of two stars with different masses that can generally be assumed to share the same age and initial chemical composition. Different quantities may be accessible depending on whether the system is seen as a visual or interferometric binary, a spectroscopic binary (SB), or an eclipsing binary (EB). Of particular interest are systems that combine the double-lined spectroscopic character, known as SB2, with EB properties, which allows us to infer both the individual masses and radii. In this section we apply SPInS to one binary system and compare its inferences with results from the literature.
AI Phe, a double-lined, eclipsing binary
AI Phe is a double-lined, detached eclipsing binary system composed of a main-sequence and a subgiant star. A few years ago, Kirkby-Kent et al. (2016) thoroughly characterised the system by obtaining the masses and radii of the components from spectroscopic and photometric measurements. Then, using these masses and radii, together with effective temperatures and metallicities from the literature and stellar evolution models, they estimated the age of the system to be A = 4.39 ± 0.32 Gyr. Recently, Maxted et al. (2020) revisited the masses and radii on the basis of the light curves provided by the TESS mission (Ricker et al. 2015). Starting from these results, compiled in Table 3, and using the BaSTI solar-scaled non-canonical model grid, a uniform truncated SFR (Eq. 6), and a flat prior on both the MDF and the IMF, SPInS provided a common age for the two stars of A = 4.38 ± 0.35 Gyr, which is in excellent agreement with Kirkby-Kent et al. (2016)'s value. However, we point out that Kirkby-Kent et al.'s analysis is more in-depth, since they examined the effects of the initial helium abundance and of the mixing-length parameter for convection on the results. Because we used a pre-computed BaSTI grid, we did not have the possibility of varying these parameters. This may explain why SPInS is able to reproduce the observed masses and radii but not the effective temperatures, which come out cooler than the observed ones but remain within the error bars (see Table 3). In order to improve the fit, we would have to run SPInS with a stellar model grid including more stellar parameters.
Conclusion
We have presented SPInS, a Python-Fortran tool dedicated to the inference of stellar properties in various observational situations. SPInS is a spin-off of AIMS, a sophisticated tool focusing on thorough asteroseismic inferences using stellar models together with their individual oscillation frequencies. SPInS is simpler than AIMS in the sense that it only requires standard outputs of stellar models and no individual oscillation frequencies to operate. It can be applied to age-date, weigh, and size up stars or groups of stars, as well as to make predictions on their expected global or mean properties such as asteroseismic indices, with a considerable gain in computing time compared to AIMS. SPInS aims to be user-friendly and can run on any computer cluster or even laptop.
We have first presented the fundamentals of SPInS as well as its inputs and outputs. As a prerequisite, SPInS needs to have a stellar evolution model grid available. In the public version of SPInS we provide the solar-scaled and α-enhanced, canonical and non-canonical BaSTI grids described in Sect. 2.5, but see Pietrinferni et al. (2004) and Pietrinferni et al. (2006) for extensive descriptions. The grids have been downloaded from the BaSTI website and are available in a format compatible with SPInS. As an option, priors on the parameters to be inferred by SPInS may be provided. The inputs that need to be provided to SPInS consist of a set of observational constraints chosen by the user and satisfied by a star or a group of stars sharing common properties, such as the age. The constraints can be of any kind provided they are available as outputs of the stellar model grid or can be directly derived from them. As output, SPInS provides any unknown stellar property available in the grid, including the age, mass, radius, or seismic indices. Any quantity can be provided either as an input if observed or inferred independently, or as an output if unknown.
In order to present the different outputs of SPInS, such as histograms of the PDF of stellar parameters or the estimators of a given quantity, we have run the tool on a set of fictitious stars spanning a wide area in the H-R diagram. We then validated the SPInS program by comparing its inferences with results from the literature. We first showed that SPInS is able to reproduce satisfactorily the ages and masses of more than 10^4 stars of the GCS11 survey as derived from their absolute magnitudes, effective temperatures, and metallicities by Casagrande et al. (2011). We then re-visited the properties of different categories of single stars for which we have access to an extended set of observational constraints, such as radii from interferometric measurements or seismic indices. Overall, we obtained results in excellent agreement with what has been published before. Finally, we applied SPInS to the study of coeval stars. As case studies, we took the Hyades open cluster stars and the components of the eclipsing SB2 binary system AI Phe, once more showing an excellent agreement with previous results. We therefore release SPInS 15 as a public tool in the hopes that it will prove to be useful in deciphering the large quantities of exquisite data currently available thanks to current (Gaia, CoRoT, Kepler, TESS) and future (PLATO) space missions and surveys.
"Physics",
"Geology"
] |
Neudesin is involved in anxiety behavior: structural and neurochemical correlates.
Neudesin (also known as neuron derived neurotrophic factor, Nenf) is a scarcely studied putative non-canonical neurotrophic factor. In order to understand its function in the brain, we performed an extensive behavioral characterization (motor, emotional, and cognitive dimensions) of neudesin-null mice. The absence of neudesin leads to an anxious-like behavior as assessed in the elevated plus maze (EPM), light/dark box (LDB) and novelty suppressed feeding (NSF) tests, but not in the acoustic startle (AS) test. This anxious phenotype is associated with reduced dopaminergic input and impoverished dendritic arborizations in the dentate gyrus granule neurons of the ventral hippocampus. Interestingly, shorter dendrites are also observed in the bed nucleus of the stria terminalis (BNST) of neudesin-null mice. These findings lead us to suggest that neudesin is a novel relevant player in the maintenance of the anxiety circuitry.
INTRODUCTION
Neudesin (also known as neuron-derived neurotrophic factor, Nenf) is a 21 kDa secreted protein of 171 amino acids (Kimura et al., 2005). Neudesin was classified as a member of the membrane-associated progesterone receptor family since its primary structure contains a cytochrome b5-heme/steroid binding domain (Kimura et al., 2012). Studies in mice showed that neudesin is most abundantly expressed in the brain and spinal cord (Kimura et al., 2005). While in the developing mouse brain neudesin is predominantly expressed in neurons, with a scattered presence in other cell types, in the adult brain its expression seems to be restricted to neurons (Kimura et al., 2005). Neudesin expression starts at approximately embryonic day 12.5 (E12.5), as evaluated by real-time PCR (RT-PCR) in neural precursor cells; its expression increases during the rest of the embryonic period, in inverse correlation with the expression of markers of dividing neural precursor cells (nestin) and in direct correlation with that of microtubule-associated protein 2 (MAP-2), a marker of mature neurons (Kimura et al., 2005). Of note, neudesin expression is higher in the cortical preplate, an area that participates in the formation of the cerebral cortex. During embryonic and postnatal development, the main functions of neurotrophic factors are to promote the survival and differentiation of nervous cells through activation of the p75 and tyrosine kinase (Trk) receptors and of downstream pathways such as the MAP and PI-3 kinases (Russell and Duman, 2002). In adulthood, these downstream cascades activate different functional responses such as axon growth, dendrite pruning, and cell fate decisions (Gray et al., 2013), as well as the modulation of neurotransmitters, thus regulating synaptic plasticity (McAllister, 2001). Importantly, studies in primary cultures of neurons revealed that neudesin displays neurotrophic activity. Specifically, neudesin was shown to induce the proliferation of cortical neural precursor cells early in development, and their subsequent differentiation into neurons (Kimura et al., 2005, 2006). Nevertheless, the identification of a receptor for neudesin remains elusive. Even less information is available on the exact function of neudesin in the adult brain, but in vitro experiments indicate that it may promote the maintenance of neurons in an autocrine/paracrine mode (Kimura et al., 2005). This neurotrophic activity has been shown to depend on the binding of hemin to its cytochrome b5-heme/steroid domain, whereas a comparable role for steroid binding could not be demonstrated (Kimura et al., 2008).
The importance of neurotrophic factors in brain maturation and function has been extensively demonstrated (Snider, 1994). Furthermore, it is well known that neurotrophic factors such as brain-derived neurotrophic factor (BDNF) and fibroblast growth factor (FGF) play an important role in the etiology of mood disorders, such as depression, and in modulating emotional responses, including anxiety (Masi and Brovedani, 2011; Turner et al., 2012). Accordingly, the limbic brain regions known to be involved in the modulation of emotional responses, which include the ventral hippocampus, the amygdala and the bed nucleus of the stria terminalis (BNST), were shown to present altered neurotrophin levels after exposure to harmful stimuli, such as chronic stress, culminating in neurotransmission imbalance and impaired synaptic plasticity (Taylor et al., 2011; Jung et al., 2012). Noteworthy, altered monoaminergic neurotransmission was found in BDNF-null mice (Ren-Patterson et al., 2005).
Given the available in vitro evidence on the potential neurotrophic properties of neudesin, in this study we addressed the role of Nenf in modulating behavior (emotional and cognitive), brain cytoarchitecture, and neurotransmission (namely monoaminergic), for which we used neudesin-null mice.
ANIMALS
A mouse strain with targeted deletion of the Nenf gene, provided by Merck-Serono under a material transfer agreement, was used. The neudesin-null (Nenf −/−) mouse strain was generated using a 129/SvEv genomic library from a BAC clone; the targeting construct was made by deleting the entire coding sequence of the neudesin gene (exons 1-4, ∼12 kb) and replacing it with a LacZ-neomycin cassette. The BAC targeting vector was then inserted into embryonic stem cells (FiH4 ES cells), where the homologous pieces of DNA were recombined. The cells identified as homologous recombinant clones were microinjected into C57BL/6F1 blastocysts to generate chimeric mice. Initial genotyping was performed using a loss-of-native-allele assay. Animals used in this study were backcrossed into a C57BL/6F background.
Nenf −/− and Nenf +/+ mice were obtained by crossing heterozygous animals. Mouse genotype was confirmed by PCR using two independent sets of primers: one for the LacZ cassette, specific for the targeted allele: LacZ-forward 5′-GGTAAACTGGCTCGGATTAGGG-3′ and LacZ-reverse 5′-TTGACTGTAGCGGCTGATGTTG-3′; and another for the Nenf gene, specific for Nenf +/+ animals: Nenf-intron3 5′-CTTGGAGTTTGGGGCTGATA-3′ and Nenf-exon4 5′-TGGCTTTGTACACCTTGCTG-3′. The amplified fragments were of 210 and 176 bp, respectively, and distinguishable by electrophoresis through a 1.5% agarose gel. Confirmation of the loss of neudesin synthesis in Nenf −/− mice was also obtained by performing immunohistochemistry with a neudesin-specific antibody (Sigma, St. Louis, USA) in brain samples of both control and neudesin-null mice; neuronal expression of neudesin was not detected in neudesin-null mice.
Animals were maintained under 12 h light/dark cycles at 22 ± 1 °C and 55% humidity, and fed with regular rodent chow and tap water ad libitum. This study was approved by the Portuguese national authority for animal experimentation, Direcção Geral de Veterinária (permission ID: DGV9457). All experiments were performed in accordance with the guidelines for the care and handling of laboratory animals, as described in Directive 2010/63/EU of the European Parliament and of the Council.
ADULT BEHAVIOR
Adult behavior was assessed in 3-month-old male mice, in 3 independent sets of animals, of which one is presented here. Eight neudesin-null and 10 littermate control animals were analyzed in the open field (OF), elevated plus maze (EPM), forced swim test (FST) and Morris water maze (MWM) tests, performed in this sequential order, as an initial behavioral characterization. A 24 h time interval was used between the OF, EPM, and FST tests; a 96 h time interval was used between the FST and the MWM. After the first behavioral characterization, the acoustic startle (AS), light/dark box (LDB) and novelty suppressed feeding (NSF) tests were performed, in this sequence, in order to further evaluate the anxious-like behavior (see results section below); for this additional evaluation, another 3 independent sets of animals were analyzed, and one representative set of 10 neudesin-null males and 10 control littermate mice is presented here.
Adult behavior tests were performed in all animals during the light phase of the light/dark cycle at the same period of the day to avoid physiological differences related to the circadian cycle. The tests were performed as described next:
Open field
Animals were placed in a room adjacent to the experimental room 1 h before the test. Locomotor activity was assessed in a brightly illuminated square arena with 43.2 × 43.2 cm size surrounded by walls to prevent escape. Animals were placed in the center of the arena and allowed to explore it for 5 min. Data collected through the infrared system (MedAssociates Inc., St Albans, VT) contained total distance travelled, and distance and time spent in the center vs. the periphery of the arena.
Forced swim test
Learned helplessness behavior was analyzed using the FST on 2 consecutive days. Mice were placed for 5 min in a glass cylinder filled with water (24 °C) to a depth of 30 cm. Twenty-four hours later mice repeated the test under the same conditions (Porsolt et al., 1977). Trials were video recorded and manually analyzed using the Etholog V2.2 software (Ottoni, 2000). The 5 min of the second day were analyzed. Data collected consisted of the duration of swimming and of immobility time.
Morris water maze
The MWM was used to evaluate spatial reference memory. In this test, animals were placed in a circular white pool (170 cm in diameter and 50 cm in height) filled with tap water (24 ± 1 °C) in a poorly lit room. The pool was divided into 4 imaginary quadrants and a transparent plexiglas platform (14 cm in diameter) was hidden 0.5 cm below the surface in the center of one of the quadrants. For each quadrant, external cues were placed on the walls of the room. The test consisted of 4 trials per day for 4 consecutive days. In each trial, animals were randomly placed in one of the quadrants and were allowed to swim for 120 s. Mice that failed to reach the platform within this time period were gently guided to the platform. The distance and time animals took to find the platform were recorded using a video camera connected to a video-tracking system (Videotrack, Viewpoint, Champagne au Mont d'Or, France).
On the fifth day animals performed the probe and reversal tests. The probe test consisted of a single trial without the platform in which the animals were allowed to swim for 120 s. The time and distance swum in each quadrant were collected and analyzed for the first 60 s. To test memory flexibility we performed the reversal test, which consisted of changing the initial position of the platform to the opposite quadrant of the pool. Animals were given 3 trials of 120 s each to learn the new position. The percentage of distance swum in each quadrant is represented.
Elevated plus maze
Anxious behavior was assessed using an apparatus composed of two opposite brightly illuminated open arms (51 × 10 cm) and two opposite dark closed arms (51 × 10 × 40 cm) and a central platform, 74 cm above the floor (NIR plus maze, MedAssociates Inc.). Animals were placed in the center of the maze and allowed to explore it for 5 min. Data collected consisted of the number of entries (four paws) in each arm as well as the time spent in each arm (MedPCIV, MedAssociates software).
Acoustic startle
Startle reflex was measured in a startle response apparatus (SR-LAB, San Diego Instruments, San Diego, CA, USA), consisting of a non-restrictive plexiglas cylinder (inner diameter 2.8 cm, length 8.9 cm), mounted on a plexiglas platform and placed in a ventilated sound-attenuated chamber. Animals were habituated to the apparatus (5 min) 1 day before actual testing. Cylinder movements were detected and measured by a piezoelectric element mounted under each cylinder. A dynamic calibration system (San Diego Instruments, San Diego, CA, USA) was used to ensure comparable startle magnitudes. Startle stimuli were presented through a high-frequency speaker located 33 cm above the startle chamber. Startle magnitudes were sampled every ms over a period of 200 ms, beginning with the onset of the startle stimulus. A startle response is defined as the peak response during the 200 ms recording period. A higher startle reflex reflects an increased anxious state of the animal.
Light/dark box
For this test the OF arena was divided in half. One part was open and the other consisted of a black plexiglas compartment with an entrance at the center of the arena facing the bright side. Each animal was placed alone at the center of the arena, facing the lateral wall, and allowed to explore it for 10 min. An infrared system (MedAssociates Inc.) registered the time spent in each compartment.
Novelty suppressed feeding
Animals were food deprived for 24 h before being placed in the corner of the OF arena (MedAssociates Inc.) and left to explore it for 10 min. A single pellet of food was placed in the center of the novel environment and the latency for the animal to leave the corner and feed was recorded. Upon reaching and starting to eat the pellet, the animal was placed back in its home cage, where it was allowed to eat pre-weighed food. Food intake was recorded after 5, 15, and 30 min, as a measure of appetite drive.
DENDRITIC TREE ANALYSIS
Basolateral amygdala (BLa) pyramidal neurons, anterior medial bed nucleus of the stria terminalis (amBNST) and lateral dorsal bed nucleus of the stria terminalis (ldBNST) bipolar neurons, ventral and dorsal dentate gyrus (DG) granular neurons, and ventral CA1 pyramidal neurons of the hippocampus were chosen randomly, 3 per section; regional boundaries were defined as previously outlined (Dong et al., 2001; Paxinos and Franklin, 2001). The criteria used to choose perfectly Golgi-impregnated neurons were the same as described previously (Uylings et al., 1986): (1) dendritic branches were not incomplete, broken or non-impregnated; (2) dendrites did not show overlap with other branches; (3) neurons were visually well isolated. Twenty-five to 30 neurons per experimental group were studied, i.e., 5-6 neurons per animal were analyzed for each of the 5 animals in the experimental groups. For each selected neuron, all branches of the dendritic tree were reconstructed at 600× magnification using a motorized microscope (BX51, Olympus) with oil objectives, attached to a camera (MicroBrightField Bioscience, Magdeburg, Germany), and using the Neurolucida software (MicroBrightField). The dendritic parameters analyzed were the total dendritic length and a 3D version of the Sholl analysis (Sholl, 1956), in which the number of dendritic intersections with concentric spheres positioned at 20 µm intervals from the neuron's soma was counted using the NeuroExplorer software (MicroBrightField).
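As an illustration of the 3D Sholl procedure described above, the following sketch counts, for spheres spaced 20 µm apart and centred on the soma, how many reconstructed dendritic segments cross each sphere; the data format and function name are hypothetical, and the NeuroExplorer implementation differs in detail.

```python
import numpy as np

def sholl_intersections(segments, soma, radii):
    """3-D Sholl analysis: for each sphere radius, count how many dendritic
    segments cross the sphere centred on the soma.  `segments` is an
    (n, 2, 3) array of segment endpoints; a segment is counted when its two
    endpoints lie on opposite sides of the sphere (curved branches that
    re-cross the same sphere within one segment are ignored in this sketch)."""
    segments = np.asarray(segments, dtype=float)
    d0 = np.linalg.norm(segments[:, 0] - soma, axis=1)
    d1 = np.linalg.norm(segments[:, 1] - soma, axis=1)
    return [int(np.sum((np.minimum(d0, d1) < r) & (np.maximum(d0, d1) >= r)))
            for r in radii]

# Example: spheres every 20 micrometres, out to 200 micrometres from the soma
# counts = sholl_intersections(segments, soma=np.zeros(3),
#                              radii=np.arange(20, 201, 20))
```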
NEUROCHEMICAL ANALYSIS
Monoamine levels were measured using high-performance liquid chromatography with electrochemical detection (HPLC-ED). Naive 3-month-old male mice were killed by decapitation (10 Nenf +/+ and 8 Nenf −/−). Skulls were snap frozen in liquid nitrogen to avoid degradation during macrodissection. Brains were carefully dissected for the ventral hippocampus, amygdala, and BNST. Dissection was performed on ice with the help of a stereomicroscope.
Dissected tissues were weighed and then homogenized and deproteinized in 100 µL of 0.2 N perchloric acid solution (Applichem, Darmstadt, Germany) containing 7.9 mM Na2S2O5 and 1.3 mM Na2EDTA (Riedel-de Haën AG, Seelze, Germany), centrifuged at 20 000 g for 45 min at 4 °C, and the supernatant was stored at −80 °C until analysis.
The analysis was performed using a GBC LC1150 HPLC pump (GBC Scientific Equipment, Braeside, Victoria, Australia) coupled with a BAS-LC4C (Bioanalytical Systems Inc., USA) electrochemical detector, as previously described (Kokras et al., 2009). The working electrode of the electrochemical detector was set at +800 mV. In all samples, reverse-phase ion-pairing chromatography was used to assay dopamine (DA) and its metabolites 3,4-dihydroxyphenylacetate (DOPAC) and homovanillic acid (HVA), serotonin (5-HT) and its metabolite 5-hydroxyindoleacetic acid (5-HIAA), and norepinephrine (NE). The mobile phase consisted of a 50 mM phosphate buffer regulated at pH 3.0, containing 5-octylsulfate sodium salt at a concentration of 300 mg/L as the ion-pairing reagent and Na2EDTA at a concentration of 20 mg/L (Riedel-de Haën AG); acetonitrile (Merck, Darmstadt, Germany) was added at a 9% concentration. The reference standards were prepared in 0.2 N perchloric acid solution containing 7.9 mM Na2S2O5 and 1.3 mM Na2EDTA. The column used was an Aquasil C18 HPLC column, 100 × 1 mm, 5 µm particle size (Thermo Electron, UK). Samples were quantified by comparison of the area under the curve against known external reference standards using a PC-compatible HPLC software package (Chromatography Station for Windows ver. 17, Data Apex Ltd). The limit of detection was 1 pg/20 µL (of injection volume). In addition to the assay of 5-HT and 5-HIAA tissue levels, the 5-HT turnover rate was also calculated, separately for each chromatogram, as the ratio 5-HIAA/5-HT. Similarly, the ratios DOPAC/DA and HVA/DA were calculated as indices of the DA turnover rate. Turnover rates estimate the serotonergic and dopaminergic activities better than individual neurotransmitter and metabolite tissue levels, as they reflect 5-HT and DA release and/or metabolic activity, as described elsewhere (Dalla et al., 2008; Kokras et al., 2009; Mikail et al., 2012).
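For clarity, the quantification and turnover calculations described above amount to simple ratios; the snippet below sketches a single-point external-standard quantification (an assumption, since the calibration details are handled by the chromatography software) and the turnover indices.

```python
def concentration(area_sample, area_standard, conc_standard):
    """Single-point external-standard quantification: the analyte
    concentration is assumed proportional to the chromatographic peak area."""
    return area_sample / area_standard * conc_standard

def turnover_rates(da, dopac, hva, sht, shiaa):
    """Turnover ratios used as indices of dopaminergic and serotonergic
    activity (all inputs are tissue levels in the same units)."""
    return {"DOPAC/DA": dopac / da,
            "HVA/DA": hva / da,
            "5-HIAA/5-HT": shiaa / sht}
```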
STATISTICAL ANALYSIS
All values presented are expressed as the mean ± SEM, and significance was assessed using the Mann-Whitney test for independent samples for all behavioral, dendritic length, Sholl analysis, and neurochemical data. Differences were considered significant when p < 0.05.
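A minimal example of the statistical comparison described above, using SciPy's two-sided Mann-Whitney U test on placeholder values (not data from this study), is given below.

```python
from scipy import stats

# Two-sided Mann-Whitney U test between control (Nenf+/+) and neudesin-null
# (Nenf-/-) groups; the values shown are placeholders for illustration only.
control = [296.1, 310.0, 280.5, 305.2, 290.7]
knockout = [396.6, 380.2, 410.9, 372.4, 401.3]
u_stat, p_value = stats.mannwhitneyu(control, knockout, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")   # significant when p < 0.05
```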
ABLATION OF NEUDESIN INDUCES AN ANXIOUS-LIKE PHENOTYPE IN ADULT MICE
Despite the described neuronal expression profile of neudesin in the adult brain, no information is available on the relevance of this protein for the central nervous system (CNS) functioning. To tackle this gap we performed a wide behavioral characterization of neudesin-null mice.
From the behavioral analyses performed we observed that the ablation of neudesin induces a striking anxious-like phenotype, as revealed by a series of tests that specifically assess anxiety-like behavior. We first used the EPM to analyze anxiety-like behavior and found that the percentage of time neudesin-null mice spent in the open arms (22%) was significantly lower than that of controls (37%) (Figure 1A) and, conversely, more time was spent in the closed arms (67%) when compared to controls (50%) (p < 0.05). Of relevance, the number of entries in the closed arms (Nenf +/+ = 10.8 ± 1.5 and Nenf −/− = 8.5 ± 1.8) did not significantly differ between the groups, indicating that exploratory activity is preserved in neudesin-null mice. This anxious phenotype was further confirmed in other contextual behavioral paradigms, namely the LDB and the NSF. In the LDB test neudesin-null male mice spent more time in the dark zone (396.6 ± 27.5 s) when compared to controls (296.1 ± 25.0 s) (p < 0.05) and significantly less time in the light zone (Nenf +/+ = 303.4 ± 24.9 vs. Nenf −/− = 202.8 ± 27.5 s) (p < 0.05) (Figure 1B); importantly, there were no differences regarding the total distance travelled (data not shown). In the NSF test animals have an extra motivational cue, as they are fasted for 24 h, so when they are placed in a novel environment, the latency to feed is a measure of anxiety. In this test we observed that neudesin-null mice displayed an increased latency to eat in the OF arena (416.8 ± 63.6 s) when compared to controls (253.8 ± 35.9 s) (p < 0.05) (Figure 1C). The previously described tests are based on conflicts between the animal's innate exploratory/feeding behavior and its aversion to open, brightly lit spaces. These are often considered tests for state anxiety and are dependent on cortical processing. On the other hand, the AS test is based on a reflex response to aversive stimuli that tests for a different dimension of anxiety, a more generalized and innate response, namely trait anxiety. Therefore, a more anxious animal will respond with a bigger startle at higher intensities of sound. Interestingly, the AS test failed to show an anxious-like behavior in neudesin-null mice, even at the higher intensities of sound (120 dB) (Nenf +/+ = 22.37 ± 3.26 vs. Nenf −/− = 23.45 ± 3.29 ms) (Figure 1D). Regarding other behavioral traits, neudesin-null mice did not reveal any phenotype. The OF arena was used to assess locomotor activity. The analysis revealed that neudesin-null mice did not display motor impairments when compared to control animals (Figure 1E). We used the FST to measure helplessness behavior, a behavioral dimension relevant for depression. In the FST (Figure 1F) both controls and neudesin-null animals spent a similar amount of time immobile (Nenf +/+ = 229 ± 5 vs. Nenf −/− = 225 ± 8 s), thus indicating that neudesin-null mice do not display a depressive-like behavioral phenotype. Cognition, specifically spatial reference memory, was analyzed in the MWM. Both neudesin-null and control mice learned the position of the hidden platform, as they similarly decreased the latency required to perform the task, indicating an absence of cognitive impairment (Figure 1G); this was further confirmed in the probe test (data not shown).
Regarding performance in the reverse learning task of the MWM, the percentage of distance swum in the new quadrant by neudesin-null mice was similar to that of control mice (Figure 1H).
NEUDESIN-NULL ADULT MALE MICE HAVE ALTERED NEURONAL MORPHOLOGY
Neuronal morphology, dendrite formation, and the establishment of synaptic contacts are dependent on trophic support. Given the potential role of neudesin as a neurotrophic factor, we studied the 3D morphology of neurons in neudesin-null mice. In light of the anxiety-like phenotype described above, we focused the analysis on brain regions implicated in the modulation of this behavioral trait, namely the ventral hippocampus, the amygdala (BLa nuclei), and the BNST. When compared to control mice, neudesin-null mice presented shorter dendritic length of ventral hippocampal DG granular neurons (Nenf +/+ = 741.8 ± 32.8 vs. Nenf −/− = 463.9 ± 83.3 µm) (p < 0.05) (Figure 2A, left panel). Sholl analysis also revealed fewer intersections of the dendritic tree (with the imaginary spheres) at distances between 60 and 120 µm from the soma in neudesin-null mice (p < 0.001) (Figure 2A, right panel). Nevertheless, in ventral hippocampal CA1 pyramidal-like neurons no difference was found between control and neudesin-null mice in dendritic length or dendritic arborization, either for the basal or for the apical dendrites (Figure 2B). Of note, the differences found for ventral hippocampal DG neurons were not observed in dorsal DG granular neurons (Nenf +/+ = 644.7 ± 29.3 vs. Nenf −/− = 606.3 ± 39.9 µm). Regarding the BNST, two divisions were analyzed, the anterior medial BNST (amBNST) and the lateral dorsal BNST (ldBNST); neudesin-null mice had a statistically significant reduction in dendritic length in the anterior medial division (Nenf +/+ = 584.8 ± 36.7 vs. Nenf −/− = 412.6 ± 14.7 µm) (Figure 2C, left panel), but no differences in the lateral dorsal division (Nenf +/+ = 628.7 ± 64.5 vs. Nenf −/− = 485.2 ± 34.1 µm), and no differences in the dendritic arborization of neurons in either BNST division. Finally, the morphology of pyramidal-like neurons of the BLa was analyzed, and no differences were observed between control and neudesin-null mice (Figure 2D).
NEUROTRANSMITTER ACTIVITY PROFILE IN ANXIETY RELATED REGIONS
Given the behavioral and structural alterations found in neudesin-null mice, we next characterized the monoaminergic profile (major findings presented below; see Table 1 for complete data) of several anxiety-related brain regions (amygdala, BNST, ventral hippocampus). We observed a reduction of DA levels in the ventral hippocampus of neudesin-null animals (p < 0.05) (Table 1), which was accompanied by an elevated HVA/DA dopaminergic turnover ratio (p < 0.05) (Table 1). Regarding the amygdala, there was a trend toward a reduction in DA, DOPAC, and NE levels (Table 1); in the BNST, no alterations were observed in monoamine levels or turnover (Table 1).
Regarding serotonin, a significant reduction of the serotonin metabolite 5HIAA (p < 0.05) (Table 1) was seen in the ventral hippocampus, but this difference did not translate into any alteration in serotonin turnover. A reduction of roughly 40-50% in the levels of both 5HT and 5HIAA was also observed in the amygdala of neudesin-null mice, but these differences did not reach statistical significance for either metabolite. In the BNST, no differences were observed in 5HT metabolites or the respective turnover rates.
DISCUSSION
This work provides the first in vivo demonstration of the relevance of the neurotrophic factor neudesin for normal CNS function. In the absence of neudesin, mice display a contextual anxiety-like phenotype that is accompanied by impaired dopaminergic activity in the ventral hippocampus, where the dendritic arborization of granule neurons is impoverished; in addition, they also present dendritic atrophy in the amBNST nucleus.
We first addressed the role of neudesin by exploring the behavioral consequences of its ablation. Of the several behavioral dimensions assessed, only anxiety-related phenotypes were clearly affected by the absence of neudesin, as demonstrated by the shorter time spent in the open arms of the EPM and in the light zone of the LDB, and by the higher latency to eat in the NSF test. Interestingly, no deficits were found in the AS, which indicates that the increased anxiety occurs specifically in contextual conflict paradigms, described to involve modulation by different neuronal circuits (Koch and Schnitzler, 1997). Anxiety in the presence of contextual anxiogenic environments, such as those studied here, has been shown to be mediated by the BNST (Ventura-Silva et al., 2012) under the modulation of different cortical regions, namely the hippocampus and the prefrontal cortex (Ventura-Silva et al., 2013). The BNST is closely involved in stress and in the HPA axis-dependent modulation of emotional behaviors (Herman and Cullinan, 1997). It has been reported that anxiety triggered by different stress-inducing paradigms is related to hypertrophy of amBNST neurons (Pego et al., 2008; Oliveira et al., 2012). Surprisingly, however, neudesin-null mice display shorter dendrites in the amBNST, which suggests that such hypertrophy might be a stress-specific effect and that factors beyond the pure structure of these neurons might underlie the behavioral changes displayed by neudesin-null mice. Interestingly, the same holds true for the ventral hippocampus, where we also found dendritic atrophy, specifically in granule cells. The role of the ventral hippocampus in anxiety has been a matter of debate, with some studies showing that increased activity in this brain region is related to increased emotionality (McHugh et al., 2004, 2011), while other studies do not confirm this association (Marrocco et al., 2012). Recently, however, the activity of ventral granule cells has been specifically implicated in anxiety behavior. Using optogenetic tools to activate and inhibit DG granule cells in the ventral hippocampus, Fournier and Duman observed that elevating activity in this area suppresses anxiety in the EPM (Fournier and Duman, 2013). The present results seem to support the latter study, inasmuch as we show that neudesin-null mice present an impoverished arborization of granule cells in the ventral hippocampus. Of note, this effect is specific to the ventral hippocampus, since we found no differences between control and neudesin-null mice in the arborization of dorsal DG granule cells. Given the recently demonstrated role of dorsal DG cells in the modulation of learning (Kheirbek et al., 2013), this result is in accordance with the absence of a cognitive phenotype in neudesin-null mice. This specific pattern of DG granule cell atrophy along the dorsal-ventral axis in the absence of neudesin is of relevance and deserves further investigation. Moving to the molecular level, we next studied the monoaminergic profile of anxiety-related brain regions. Interestingly, the absence of neudesin is associated with low levels of DA and an increased HVA/DA turnover ratio in the ventral hippocampus. Importantly, DA levels are described to be higher in the ventral than in the dorsal hippocampus (Eisenhofer et al., 2004), and DA is known to play a crucial role in modulating plasticity in the ventral portion of the hippocampus (Belujon and Grace, 2011).
In addition, the dopamine 1 receptor (D1) is highly expressed in the dendrites of the granule cells of the ventral DG (Mansour et al., 1992), and dopamine release from projecting ventral tegmental area neurons modulates synaptic plasticity and synapse strength in this brain region (Hamilton et al., 2010). Whether decreased DA release in the ventral hippocampus, as suggested by the reduced DA levels observed in neudesin-null mice, contributes to the dendritic atrophy of ventral DG granule neurons is unknown. Nevertheless, the role of dopamine in anxiety is also not without controversy: whereas pioneering studies showed that systemic administration of apomorphine (a D1/D2 agonist) decreases anxiety (Hjorth et al., 1986), more recent work reported that administration of this drug specifically into the ventral hippocampus increases anxiety in the EPM (Zarrindast et al., 2010). Regarding 5HT levels, it is noteworthy that the ventral hippocampus of neudesin-null mice displays a decrease (although not significant) of 40% in 5HT and a sharp reduction in the levels of its derived metabolite 5HIAA. This is of relevance since selective knock-down of 5HT-1A autoreceptors in the raphe nuclei of mice results in a direct increase of anxiety levels in contextual paradigms, and in a concomitant decrease in the extracellular levels of 5HT in the ventral hippocampus (Richardson-Jones et al., 2011). Thus, the alteration in serotonergic metabolites observed in the ventral hippocampus may contribute to the anxiety state observed in neudesin-null mice, in accordance with previous studies in stressed animals (Dalla et al., 2008). Also interesting, although not significant, is the reduction in 5HT levels in the amygdala of neudesin-null mice. This may be of relevance considering the fundamental role of 5HT in the crosstalk between the amygdala and the ventral hippocampus in the modulation of anxiety behavior (Asan et al., 2013). Overall, these results further highlight a possible role for neudesin in the establishment and/or maintenance of hippocampal circuitry and in the modulation of contextual anxiety behavior.
During both embryonic and postnatal brain development, several external signals, including neurotrophic factors, neurotransmitters, and hormones, are involved in the genesis and maturation of new neurons and in their integration into functional circuitries (Abrous et al., 2005). One of the unique roles previously ascribed to neudesin, as shown in vitro, is its neurotrophic activity (Kimura et al., 2005, 2006). Thus, it is plausible that the functional alterations in neuronal arborization described here result from the absence of the neurotrophic support conveyed by neudesin. On the other hand, the neudesin structure displays a heme- and/or steroid-binding site (Kimura et al., 2005), which seems necessary for its function (Kimura et al., 2008). Free heme is a powerful oxidative stressor (Jeney et al., 2002; Craven et al., 2007; Abraham and Kappas, 2008), and under physiological conditions it is bound to extracellular heme-binding proteins such as Nenf. Thus, we cannot rule out the possibility that the deleterious effects of free heme, left unbound in the absence of neudesin, have implications for the maintenance of neurons, as previously suggested (Burmester and Hankeln, 2004).
The data presented in this study suggest that neudesin modulates anxiety behavior mainly through the ventral hippocampal DG and altered dopaminergic activity. Thus, the neurotrophic action described previously for neudesin per se, or in association with its ligands, might be of therapeutic and/or pharmacological potential in anxiety-related disorders. The modulatory role of neudesin in anxiety might be associated with its putative neurotrophic role, but why this effect is specific to anxiety circuits remains to be determined. Nevertheless, since the mouse model used in this study is a constitutive knock-out of the Nenf gene, we cannot exclude potential developmental determinants (Stevens et al., 2010) resulting from the absence of neudesin in adult anxiety circuits, which should next be investigated. | 7,214.2 | 2013-09-09T00:00:00.000 | [
"Biology",
"Psychology"
] |
Investigating the Temporal Effect of User Preferences with Application in Movie Recommendation
With the rapid development of the mobile Internet and smart devices, more and more online content providers have begun to collect the preferences of their customers through various apps on mobile devices. These preferences are largely reflected by ratings on online items with explicit scores. Both positive and negative ratings help recommender systems provide relevant items to a target user. Based on an empirical analysis of three real-world movie-rating data sets, we observe that users' rating criteria change over time, and that past positive and negative ratings have different influences on users' future preferences. Given this, we propose a recommendation model on a session-based temporal graph that considers the difference between long- and short-term preferences and the different temporal effects of positive and negative ratings. Extensive experimental results validate the significant accuracy improvement of our proposed model over state-of-the-art methods.
Introduction
Nowadays, a huge ecosystem of independent content providers (such as Facebook, Netflix, Google Maps, and Snapchat) and consumers (web users) is emerging on the mobile Internet. Confronted with the problem of finding a needle in a haystack, many web users resort to information filtering technology to find more relevant content. Recommender systems have been deployed on the websites of many industries [1] to make web services more relevant and engaging for their users and to promote the scale and profitability of such businesses [2]. In recent decades, recommender systems have received considerable research attention in the literature, and many effective recommendation approaches have been proposed, such as social network-based recommendation models [3], graph-based recommendation models [4, 5], and context-aware recommendation models [6, 7]; a recent and up-to-date review can be found in the work of Lu et al. [8].
Many of these works focus on movie recommendation or are based on movie-rating data sets [9, 10]. Typically, in online video-watching websites with recommender systems, users are asked to rate movies with discrete scores to express their individual opinions, where a high score usually indicates that the user likes the movie. Take https://www.netflix.com as an example: users are invited to rate movies and TV shows (items in general) on a scale from 1 star to 5 stars, where one star means "Hate It" and five stars mean "Love It." This kind of explicit feedback can largely reflect user preferences. Even if a user dislikes a movie after watching it, he might have been attracted by its title, cast, director, genres, or other attributes; otherwise he would never have watched it. Hence, negative ratings carry much useful information and should not be neglected or simply treated as purely negative signals. Many works have shown that both positive and negative opinions are useful for making effective recommendations.
First, given a rating scale where the highest score denotes the most positive opinion and the lowest score indicates the most negative opinion, users' rating scores do not distribute evenly along the whole rating scale [11]. Second, different users may have different rating criteria: some good-tempered users are willing to give high scores, whereas more critical people seldom give full marks to any item they have watched [12]. Last but not least, negative ratings indicate dislike and, simultaneously, relevance, and they may play either a negative or a positive role depending on the sparsity of the training set and the popularity of the corresponding items [13].
As mobile platforms become more and more user friendly, computationally powerful, and readily available, online content providers have begun to develop mobile apps that offer more personalized content. People can watch their favorite movies and TV shows wherever and whenever they have a break. This mobile feature poses a new challenge to recommender systems. Most previous works do not consider the temporal variation in users' rating criteria. According to the memory effect of movie-watching behavior [14] and the anchoring bias phenomenon of movie-rating behavior [15], the current rating of a user is influenced by his previous watching and rating history. Therefore, individual rating criteria may vary across different periods, depending on the items he has previously watched. Besides, a user's negative ratings may have a temporal influence on his future preferences that differs from that of his positive ratings.
In this paper, we empirically analyze three typical data sets created by popular online video services (MovieLens, Netflix, and MovieTweetings) with a focus on the temporal effects of the rating behavior of each individual user. We concentrate on the time-varying rating criterion and the different temporal effects of positive and negative ratings on future behavior. We propose a session-based recommendation model that takes these temporal characteristics of user ratings into account. Compared with five state-of-the-art methods on the aforementioned movie-rating data sets, our proposed model is shown to give more accurate predictions of user preference.
Empirical Analysis
In this section, we empirically analyze the temporal variation in users' rating criteria and the temporal effects of positive and negative ratings, with the aim of understanding the temporal characteristics of users' rating behavior and verifying the following two assertions.
Assertion I. The rating criterion of a user varies over time.
Assertion II. The positive and negative ratings of a user have different temporal influences on his future preferences. For the convenience of readers, we list all the notations used in this paper in "Notations."
The Rating Criterion.
In this paper, we investigate users' rating criteria in two respects: average rating score and rating scale. Specifically, we consider the monthly average rating score and the monthly rating scale of each user as two independent random variables and estimate their standard deviations across months. To obtain a reliable estimation, we consider only users who are active in more than 2 months of the whole period. Figures 1(a), 1(b), and 1(c) show the distributions of the standard deviation of average rating scores for the MovieLens, Netflix, and MovieTweetings data sets, respectively. The mean values of the deviations for the three data sets (0.36427, 0.56429, and 0.79849) are all significantly greater than 0 (p-value ∼ 0, obtained by a one-sided t-test). Similarly, Figures 1(d), 1(e), and 1(f) show the distributions of the standard deviation of users' rating scales. The mean values for MovieLens, Netflix, and MovieTweetings are 0.93737, 0.93495, and 1.05155, respectively (p-value ∼ 0, obtained by a one-sided t-test). These observations indicate that every user's rating criterion changes significantly over time, providing empirical evidence for Assertion I.
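As a minimal illustration of this per-user variability analysis, the following sketch assumes a pandas DataFrame with hypothetical columns user, score, and timestamp; these names and the 2-month activity filter mirror the description above but are otherwise illustrative:

```python
import pandas as pd

def rating_criterion_variability(ratings: pd.DataFrame) -> pd.DataFrame:
    """Per-user standard deviation of the monthly average rating score and of
    the monthly rating scale (max - min), as used to check Assertion I."""
    df = ratings.copy()
    df["month"] = df["timestamp"].dt.to_period("M")
    monthly = df.groupby(["user", "month"])["score"].agg(["mean", "max", "min"])
    monthly["scale"] = monthly["max"] - monthly["min"]
    # keep only users who are active in more than 2 months of the whole period
    months_per_user = monthly.groupby("user").size()
    active = months_per_user[months_per_user > 2].index
    monthly = monthly[monthly.index.get_level_values("user").isin(active)]
    return monthly.groupby("user").agg(avg_score_std=("mean", "std"),
                                       scale_std=("scale", "std"))
```

A one-sided significance test (e.g., scipy.stats.ttest_1samp on the returned columns) would then reproduce the kind of check reported above.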
The Positive and Negative Ratings.
Note that the rating criterion varies from person to person; we therefore take the personal median score m_u = (r_max(u) + r_min(u))/2 of each individual user u, instead of the median of the systematic rating scale, to distinguish his positive ratings (rating score no less than m_u) from his negative ratings (rating score less than m_u).
We use a session to represent a continuous period of user activity; thus the records of user u can be divided into several sequential sessions S(u) = {s(u,1), s(u,2), ..., s(u,n)}. In this paper, the sessions are divided by month; that is, two ratings of the same user are in the same session if and only if they occur in the same month. For a user u, the items rated positively by u in session s(u,j) constitute his positive item set P(u,j), and the negative item set N(u,j) is defined analogously. For a target user u, we take his latest positive item set P(u,n) as his future preference, and all the previous positive and negative item sets P(u,j) and N(u,j), j = 1, 2, ..., n − 1, are treated as previous interests.
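The session construction and the per-user positive/negative split described above can be sketched as follows (a simplified illustration; the tuple layout of the input records is hypothetical):

```python
from collections import defaultdict

def split_sessions(ratings):
    """ratings: iterable of (user, item, score, month) tuples.
    Returns {user: {month: (positive_items, negative_items)}} using each
    user's personal median m_u = (r_max(u) + r_min(u)) / 2 as the threshold."""
    by_user = defaultdict(list)
    for user, item, score, month in ratings:
        by_user[user].append((item, score, month))
    sessions = {}
    for user, records in by_user.items():
        scores = [s for _, s, _ in records]
        m_u = (max(scores) + min(scores)) / 2.0  # personal median score
        per_month = defaultdict(lambda: (set(), set()))
        for item, score, month in records:
            pos, neg = per_month[month]
            (pos if score >= m_u else neg).add(item)
        sessions[user] = dict(per_month)
    return sessions
```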
The correlation Cor(A, B) between two item sets A and B is defined as the cosine similarity averaged over all item pairs with one item drawn from A and the other from B. Figure 2 plots Cor(P(u,n), P(u,j)) (the black line) and Cor(P(u,n), N(u,j)) (the red line) against the time gap |n − j|, averaged over all users, for the MovieLens, Netflix, and MovieTweetings data sets. We can see that the future preference of a user is clearly more influenced by his past positive ratings than by his past negative ratings. From the temporal point of view, the bigger the time gap, the less the future preference is influenced by the previous positive/negative ratings. However, the decay rates of the influences of positive and negative opinions differ across data sets. For MovieLens, the influence of positive ratings on future preference is more stable than that of negative ratings (decay rate 0.00584 for positive vs. 0.02325 for negative), while for Netflix the decay rates of the influences of positive and negative ratings are very similar to each other (0.00747 for positive vs. 0.00912 for negative).
Since the first and the last sessions contain data from only 1 day and 2 days, respectively, for the MovieTweetings data set, we ignore the last points of the curves, corresponding to a time gap of 6 months. In contrast to the observations above, we find that in MovieTweetings the influence of negative ratings is more stable than that of positive ratings (decay rate 0.01174 for positive vs. 0.00319 for negative). Therefore, a user's positive and negative ratings have different temporal influences on his future preferences, providing empirical evidence for Assertion II.
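A sketch of the set-to-set correlation used in Figure 2; the item-level cosine similarity function item_cos is assumed to be supplied by the caller (e.g., computed from the items' rating vectors) and is not specified in detail here:

```python
import itertools

def set_correlation(A, B, item_cos):
    """Cor(A, B): cosine similarity averaged over all item pairs (a, b)
    with a drawn from A and b drawn from B."""
    if not A or not B:
        return 0.0
    sims = [item_cos(a, b) for a, b in itertools.product(A, B)]
    return sum(sims) / len(sims)
```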
Recommendation Model
Based on the session-based temporal graph (STG) introduced by Xiang et al. [4], we propose a session-based recommendation model with the temporal effect of user preferences (STeuP), which is an enhanced version of the Injected Preference Fusion (IPF) model associated with the STG. Users and items are represented by user nodes and movie nodes, respectively. To represent users' ratings in different periods, we associate a session node s(u,j) with the movies rated by user u in that session. These three types of nodes are connected by weighted directed edges between user and item nodes and between session and item nodes (in both directions). The edges attached to session nodes reflect the short-term rating criteria of users, while the edges attached to user nodes reflect users' long-term preferences. Figure 3 gives an example of a session-based temporal graph.
To eliminate the effect of the different rating criteria of different individuals, the rating score of a user is normalized according to his own rating scale, which reflects the user's long-term rating criterion: r'(u,i) = (r(u,i) − r_min(u)) / (r_max(u) − r_min(u)). In this way, the rating scores of all users are strictly mapped to [0, 1], where the maximum rating score of each user becomes 1 and the minimum rating score is fixed at 0. Since the short-term rating criterion of a user varies across periods, his rating score within a particular session s(u,j) is normalized analogously, using the highest and lowest scores he gave within that session.

Recall that our recommendation task is to recommend movies for a target user to watch in the future. Naturally, a rating whose occurrence time is closer to the target time t_0 is more useful to the recommendation task. Since the temporal influences of positive and negative ratings may differ, following previous works [19, 20] we use two exponential decay functions of the elapsed time t_0 − t(u,i), with separate decay factors alpha_p and alpha_n for positive and negative ratings, to model the relevance of a rating made at time t(u,i) to the user's preference at the target time. The weights of the edges between user nodes and item nodes are obtained by damping the long-term normalized rating scores with these decay functions. Similarly, within a given session, a rating closer to the target time is more important; we use the same exponential decay functions to model the temporal influences of positive and negative ratings within a session, where the median rating value of that session is used to distinguish positive from negative ratings, and the weights of the edges between session nodes and item nodes are computed accordingly from the session-normalized scores. After setting the initial edge weights of the STG, these edge weights are normalized using a balance parameter; a larger value of this parameter indicates that users' long-term preferences play a more important role in preference propagation.

Given a target user u, the basic idea of preference propagation is to first inject an initial preference on both the user node and his latest session node, and then propagate the preference to candidate movie nodes through various paths in the graph. As defined in [4], the preference propagated along a path is the product of the initial preference Phi(v_0) assigned to the source node v_0 (the target user node or his latest session node) and the weights of all edges on the path. Phi(v_0) depends on the node type and is controlled by a parameter eta: eta = 0 means no preference is injected into the user node, while eta = 1 means no preference is injected into the session node. As in previous work [4], we consider only the shortest paths (of length 3) from the source nodes to unknown movie nodes, which can be obtained efficiently by breadth-first search. Consequently, the estimated preference of user u on a movie is measured as the sum, over the set of shortest paths from the source nodes to that movie node, of the preference propagated along each path. The top-ranked movies sorted by this preference value are then recommended.
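A minimal sketch of the injected-preference propagation described above, assuming the STG is stored as a weighted adjacency dictionary with already-normalized edge weights. For simplicity it accumulates all walks of length three rather than strictly shortest paths, and all names (including the injection ratio eta) are illustrative rather than the original implementation:

```python
def propagate_preference(graph, user_node, session_node, eta, path_len=3):
    """graph: {node: {neighbour: edge_weight}}.
    Injects preference eta on the target user node and (1 - eta) on his
    latest session node, then sums the products of edge weights along
    paths of length path_len, as in the IPF/STeuP scheme."""
    scores = {}
    for source, injected in ((user_node, eta), (session_node, 1.0 - eta)):
        frontier = [(source, injected)]
        for _ in range(path_len):
            frontier = [(nbr, value * w)
                        for node, value in frontier
                        for nbr, w in graph.get(node, {}).items()]
        for node, value in frontier:  # nodes reached in exactly path_len steps
            scores[node] = scores.get(node, 0.0) + value
    return scores  # rank the movie nodes among these scores, excluding seen items
```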
Evaluation Metrics.
In order to predict users' future preferences based on past interaction records, all records are listed in ascending order of rating time. For all data sets, we take the records that occurred in the latest 30 days as the probe set and the remaining records as the training set. The training set is treated as known information, while no information from the probe set is allowed to be used for recommendation. Moreover, we denote the latest time stamp in the training set as the target time t_0. In this paper, four typical metrics are employed to evaluate the accuracy, diversity, novelty, and coverage of the recommendation results.
Accuracy.
Accuracy is one of the most important evaluation criteria for a recommender system. Both Precision and Recall can be used to measure the accuracy of recommendation. Precision is the fraction of recommended items that are relevant, while Recall is the ratio of the number of relevant items in the recommendation list to the number of preferred items in the probe set. However, Precision and Recall behave like two sides of a seesaw: for a fixed length of the recommendation list, when one rises, the other falls. The F1 measure is used to find a suitable trade-off between Precision and Recall and is defined as F1 = 2PR / (P + R), where P = (1/|U|) Σ_u h_u / L and R = (1/|U|) Σ_u h_u / n_u; here |U| is the number of users, L is the length of the recommendation list, h_u is the number of relevant items in the recommendation list of user u, and n_u is the number of preferred items in the probe set of user u. Generally speaking, for a given length of the recommendation list, the method with the higher F1 value is the better one.
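A small sketch of Precision, Recall, and F1 at a fixed list length L, matching the per-user averaged definitions above (variable names are illustrative):

```python
def f1_at_L(recommendations, probe, L):
    """recommendations: {user: ranked item list}; probe: {user: set of preferred
    items}. Returns (precision, recall, f1) averaged over users with a non-empty
    probe set."""
    users = [u for u in recommendations if probe.get(u)]
    if not users:
        return 0.0, 0.0, 0.0
    precision = recall = 0.0
    for u in users:
        hits = len(set(recommendations[u][:L]) & probe[u])
        precision += hits / L
        recall += hits / len(probe[u])
    precision /= len(users)
    recall /= len(users)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```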
Diversity.
Diversity measures the difference between the recommendation lists of different users. A good algorithm should recommend items that are as widely distributed as possible, because people appreciate personalized suggestions. We use the Hamming distance to measure the diversity of recommendation lists: H(u, v) = 1 − C(u, v)/L, where C(u, v) is the number of common items in the recommendation lists of users u and v; H(u, v) = 0 if u and v receive identical recommendation lists of L items. Diversity is defined as the mean value of the Hamming distance over all user pairs.

Novelty.

Novelty quantifies the capacity of a method to generate novel and unexpected recommendations, which are largely contributed by less popular items (i.e., items of low degree) that are unlikely to be known already.
It can be measured simply as the average degree of the recommended items. Specifically, for a target user u whose top-L recommendation list is denoted by O(u), the novelty is defined as Novelty(u) = (1/L) Σ_{i in O(u)} k_i, where k_i is the degree of item i [21]. Averaging over the novelty of all users, we obtain the novelty of the system.
Coverage.
Coverage measures the percentage of items that an algorithm is able to recommend to users. It is calculated as the ratio of the number of distinct items appearing in users' recommendation lists to the total number of items in the system: Coverage = (1/|I|) Σ_{i in I} δ_i, where |I| is the number of items in the item set I, δ_i = 1 only if item i is recommended to at least one user (i.e., i is in at least one user's list), and δ_i = 0 otherwise. Undoubtedly, recommending more popular items will result in lower coverage.
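The three remaining metrics follow directly from the definitions above; in this sketch item_degree maps each item to its degree (the number of users who collected it) and all names are illustrative:

```python
from itertools import combinations

def diversity(rec_lists, L):
    """Mean Hamming distance H(u, v) = 1 - C(u, v) / L over all user pairs."""
    pairs = list(combinations(rec_lists.values(), 2))
    if not pairs:
        return 0.0
    return sum(1.0 - len(set(a) & set(b)) / L for a, b in pairs) / len(pairs)

def novelty(rec_lists, item_degree):
    """Average degree of recommended items, averaged over users."""
    per_user = [sum(item_degree.get(i, 0) for i in items) / len(items)
                for items in rec_lists.values() if items]
    return sum(per_user) / len(per_user) if per_user else 0.0

def coverage(rec_lists, n_items):
    """Fraction of all items that appear in at least one user's list."""
    recommended = set().union(*rec_lists.values()) if rec_lists else set()
    return len(recommended) / n_items
```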
Parameter Adjustment.
Before comparing the proposed model with the baseline methods, we investigate the impact of the parameters alpha_p, alpha_n, eta, and beta on the performance of the STeuP model. As seen in Section 2.3, the temporal effect of positive and negative opinions may differ across online websites. Thus, we first examine the effect of the parameters alpha_p and alpha_n, which govern the decay rates of the temporal influence of positive and negative opinions on users' future preferences. The larger alpha_p and alpha_n are, the less future behavior is affected by users' past positive and negative opinions. Without loss of generality, we fix the remaining two parameters (one at 1 and the other at 0.5) while tuning alpha_p and alpha_n.
Figures 4(a), 4(b), and 4(c) plot the heat map of F1 against the parameters alpha_p and alpha_n for MovieLens, Netflix, and MovieTweetings, respectively; the axes show the logarithms of the two decay factors, and the F1 values are indicated by different colors. Firstly, we observe that F1 is considerably more sensitive to one of the two decay factors than to the other; that is, with one parameter fixed, F1 varies over a much wider range when the other parameter is traversed. Secondly, the results on all data sets show an obvious "ridge" parallel to one axis, along which the optimal F1 value is found. Hence, we can first fix one of the decay factors to a small value (10^-2 to 10^-3) and tune the other to find a local optimum, and then fix the tuned value and adjust the first factor to reach the globally optimal accuracy.
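The two-stage search just described can be sketched as a simple coordinate search on a logarithmic grid; eval_f1 is a hypothetical callback that trains and evaluates the model for a given pair of decay factors and returns its F1:

```python
import numpy as np

def tune_decay_factors(eval_f1, grid=None, fixed_small=1e-3):
    """Stage 1: hold one decay factor at a small value and scan the other;
    stage 2: hold the stage-1 optimum and scan the remaining factor."""
    grid = np.logspace(-4, 1, 20) if grid is None else grid
    best_first = max(grid, key=lambda a: eval_f1(a, fixed_small))
    best_second = max(grid, key=lambda b: eval_f1(best_first, b))
    return best_first, best_second, eval_f1(best_first, best_second)
```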
By setting both alpha_p and alpha_n to 0, we obtain the recommendation results without temporal influence, which are presented in Table 2. We can see that taking temporal influence into account, by weighting users' positive and negative opinions with different temporal decay rates, leads to performance improvements. From the values of alpha_p and alpha_n at which the optimal F1 is obtained, we find that the decay rate of the temporal effect of positive opinions is much smaller than that of negative opinions on MovieLens, but larger than that of negative opinions on MovieTweetings. For Netflix, the decay rates of positive and negative opinions are almost the same. This again validates our inference in Section 2.3 about the different temporal influences of positive and negative opinions on the three data sets.
In our STeuP model, the parameter eta controls the ratio of preference injected into the user node relative to the session node. If eta equals 0, no preference is injected into the user node; if eta equals 1, no preference is injected into the session node. Thus, eta balances the effect of long-term and short-term interests in the initial phase, where a larger eta corresponds to a stronger influence of long-term preferences. The results of how accuracy changes with eta for the three data sets are shown in Figure 5. Firstly, the results show that ignoring long-term preferences (eta = 0) does not produce good results. Secondly, on a sparser data set, the eta value yielding the optimal F1 is larger. Generally speaking, optimal results are obtained by combining long-term and short-term interests. In the following discussion, we fix eta to 0.5, 0.9, and 1.0 for the MovieLens, Netflix, and MovieTweetings data sets, respectively.
The parameter beta is used to balance the influence of long-term and short-term preferences in the process of preference propagation: beta = 0 means that item nodes are connected only to user nodes, so that item-item similarity depends only on users' long-term preferences, and vice versa. Figures 6(a), 6(b), and 6(c) plot the change of F1 with beta on the three data sets. As the x-axis shows the logarithm of beta, we can see that for the MovieLens and Netflix data sets a value of beta close to 1 corresponds to the optimal F1, while the optimal value of beta for MovieTweetings is close to 10^2. This observation verifies that both users' long-term and short-term opinions are important for measuring item similarity. Furthermore, users' long-term opinions are more important than short-term opinions on sparse data sets.
Comparison of Methods.
We compare our proposed model with the five other models listed in Table 3. The mark √ in the table indicates whether a model distinguishes user ratings as positive and negative opinions and/or considers temporal influence. UCF is a classical collaborative filtering method that calculates the similarity between users based on their rating information. NBI is a network-based inference method. In addition, we also checked the performance of a well-known matrix factorization recommendation algorithm [22], which is very successful in rating prediction with the help of time information. However, its F1 values on the three aforementioned data sets are all below 0.001, perhaps because it is not suited to the setting of binary preference prediction. Hence, we do not include the matrix factorization method in the comparisons reported in this paper.
Given a recommendation list length of L = 10, the recommendation performance of these six methods on MovieLens, Netflix, and MovieTweetings is reported in Tables 4, 5, and 6, respectively.
Notations

r_max(u), r_min(u): the highest and lowest rating scores made by user u.
m_u: the middle value of user u's ratings, m_u = (r_max(u) + r_min(u))/2.
P(u,j), N(u,j): the positive and negative item sets of user u in session s(u,j).
r_max(s), r_min(s): the highest and lowest scores in session s(u,j).
E(X,Y): the edge set from node set X to node set Y.
h_u: the number of relevant items (namely, the items collected by the user in the probe set) in the recommendation list of user u.
n_u: the number of selected items in user u's probe set.
t(u,i): the time stamp at which user u rates movie i.
alpha_p, alpha_n: the decay factors controlling the extent of the temporal influences of positive and negative ratings.
Neighbor sets of an item node v: its full neighbor set, its neighbor session set, and its neighbor user set.
beta: the parameter used to adjust the preference propagation of an item node to its user neighbors or session neighbors.
v, p: a node and a path on the session-based graph.
w(v, v'): the weight of edge (v, v') in path p.
Phi(v_0): the value of the injected preference on the source node v_0.
eta: the parameter used to tune the ratio of injected preference on the user node against the session node.

Figure 1: The distributions of the standard deviation of average rating scores and of users' rating scales for the MovieLens, Netflix, and MovieTweetings data sets, respectively.
Figure 2: The temporal influence of users' positive and negative opinions for MovieLens, Netflix, and MovieTweetings; the black and red curves represent positive and negative opinions, respectively.
Figure 3: An example of the session-based temporal graph.
Figure 4: The heat map of F1 against the parameters alpha_p and alpha_n on MovieLens, Netflix, and MovieTweetings.
Table 1: Basic statistics of the three data sets. The sparsity is defined as |R|/(m * n), where |R| is the number of ratings and m and n denote the numbers of users and items.
Table 2: Performance comparison of the STeuP model when distinguishing the temporal influence of positive and negative ratings on MovieLens, Netflix, and MovieTweetings.
Table 3: The six recommendation models compared in this paper. The mark √ under PN or T indicates whether the model distinguishes opinions as positive and negative and/or considers temporal influence. SNBI is an enhanced version of NBI that assigns two different weights to positive and negative opinions. UOS and SNBI distinguish users' ratings as positive and negative opinions but do not utilize temporal influence. IPF is the recommendation model proposed together with the session-based temporal graph, which is based on binary data and distinguishes users' long-term and short-term preferences. STeuP is our proposed model based on the two assertions in Section 2, which takes into account both the temporal variation of users' rating criteria and the different temporal effects of users' positive and negative ratings. | 5,468.4 | 2017-03-12T00:00:00.000 | [
"Computer Science"
] |
Spontaneous symbiotic reprogramming of plant roots triggered by receptor-like kinases
Symbiosis Receptor-like Kinase (SYMRK) is indispensable for the development of phosphate-acquiring arbuscular mycorrhiza (AM) as well as nitrogen-fixing root nodule symbiosis, but the mechanisms that discriminate between the two distinct symbiotic developmental fates have been enigmatic. In this study, we show that upon ectopic expression, the receptor-like kinase genes Nod Factor Receptor 1 (NFR1), NFR5, and SYMRK initiate spontaneous nodule organogenesis and nodulation-related gene expression in the absence of rhizobia. Furthermore, overexpressed NFR1 or NFR5 associated with endogenous SYMRK in roots of the legume Lotus japonicus. Epistasis tests revealed that the dominant active SYMRK allele initiates signalling independently of either the NFR1 or NFR5 gene and upstream of a set of genes required for the generation or decoding of calcium-spiking in both symbioses. Only SYMRK but not NFR overexpression triggered the expression of AM-related genes, indicating that the receptors play a key role in the decision between AM- or root nodule symbiosis-development. DOI: http://dx.doi.org/10.7554/eLife.03891.001
Introduction
Plants circumvent nutrient deficiencies by establishing mutualistic symbioses with arbuscular mycorrhiza (AM) fungi or with nitrogen-fixing rhizobia and Frankia bacteria (Gutjahr and Parniske, 2013; Oldroyd, 2013). One of the first steps in the reciprocal recognition between rhizobia and the legume Lotus japonicus is the perception of bacterial lipo-chitooligosaccharides, so-called nodulation factors, by the two lysin motif (LysM) receptor-like kinases (RLKs) Nod Factor Receptor 1 (NFR1) and NFR5 (Radutoiu et al., 2003, 2007; Broghammer et al., 2012). Nodulation factor application induces two genetically separable calcium signatures in root hair cells: an early transient influx into the cytoplasm and, within minutes, calcium-spiking, periodic calcium oscillations in and around plant cell nuclei (Ehrhardt et al., 1996; Miwa et al., 2006; Oldroyd, 2013). (Lipo)chitooligosaccharides have also been isolated from AM fungi (Maillet et al., 2011; Genre et al., 2013), and an NFR5-related LysM-RLK from Parasponia has been pinpointed as a likely candidate for their perception (Op Den Camp et al., 2011). The common symbiosis genes of legumes are required for both AM and root nodule symbiosis. A subset of these genes is essential for either the generation or the decoding of calcium-spiking. In L. japonicus, the former group encodes the RLK Symbiosis Receptor-like Kinase (SYMRK; Stracke et al., 2002; Antolín-Llovera et al., 2014), the two cation-permeable ion channels CASTOR and POLLUX (Imaizumi-Anraku et al., 2005; Charpentier et al., 2008; Venkateshwaran et al., 2012), as well as the nucleoporins NUP85, NUP133, and NENA (Kanamori et al., 2006; Saito et al., 2007; Groth et al., 2010). The latter group encodes Calcium- and Calmodulin-dependent Protein Kinase (CCaMK; Tirichine et al., 2006; Miller et al., 2013) and CYCLOPS (Yano et al., 2008), which form a complex that has been implicated in the deciphering of calcium-spiking (Kosuta et al., 2008). Phosphorylation by CCaMK activates CYCLOPS, a DNA-binding transcriptional activator of the NODULE INCEPTION gene (NIN; Schauser et al., 1999; Singh et al., 2014). NIN itself is a legume-specific, root nodule symbiosis-related transcription factor and regulates the Nuclear Factor-Y subunit genes NF-YA1 and NF-YB1 that control the cell division cycle (Soyano et al., 2013; Yoro et al., 2014). The paradigm of a common signalling pathway for both symbioses raises important open questions about the molecular mechanisms that ensure the appropriate cellular response to AM fungi on the one hand and to rhizobia on the other.
SYMRK carries an ectodomain composed of a malectin-like domain (MLD) and a leucine-rich repeat (LRR) region that experienced structural diversification during evolution and is cleaved to release the MLD (Antolín-Llovera et al., 2014). Although SYMRK was cloned several years ago (Stracke et al., 2002), its precise function in symbiosis is still enigmatic. While nfr mutants lack most cellular and physiological responses to rhizobia, including nodulation factor-induced calcium influx and calcium-spiking, root hairs of symrk mutants respond to nodulation factor with calcium influx but not with calcium-spiking, and do not develop infection threads with rhizobia (Stracke et al., 2002; Miwa et al., 2006). Based on these phenotypic observations, SYMRK was positioned downstream of the NFRs (Miwa et al., 2006). Importantly, it has not been conclusively resolved whether SYMRK plays an active signalling role in symbiosis or, alternatively, is involved in mechanical stress desensitisation (Esseling et al., 2004). To approach this issue, we built on the observation that specific mutations in, or over-abundance of, mammalian receptor tyrosine kinases on the cell surface are linked with the development of some cancers caused by spontaneous receptor complex formation and inappropriate initiation of signalling (Schlessinger, 2002; Wei et al., 2005; Shan et al., 2012). We hypothesized that similar behaviour could be triggered by overexpression of symbiosis-related plant RLKs, providing a tool to further dissect the specific signalling pathways they address.

eLife digest

Like all plants, crop plants need nutrients such as nitrogen and phosphate to grow. Often these essential elements are in short supply, and so millions of tons of fertiliser are applied to agricultural land each year to maintain crop yields. Another way for plants to gain access to scarce nutrients is to form symbiotic relationships with microorganisms that live in the soil. Plants pass on carbon-containing compounds, such as sugars, to the microbes and, in return, certain fungi provide minerals, such as phosphates, to the plants. Some plants called legumes (such as peas, beans, and clovers) can also form relationships with bacteria that convert nitrogen from the air into ammonia, which the plants then use to make molecules such as DNA and proteins.
To establish these symbiotic relationships with plants, nitrogen-fixing bacteria release chemical signals that are recognized via receptor proteins, called NFR1 and NFR5, found on the surface of the plant root cells. These signals trigger a cascade of events that ultimately lead to the plant forming an organ called 'root nodule' to house and nourish the nitrogen-fixing bacteria. A similar signalling mechanism is thought to take place during the establishment of symbiotic relationships between plants and certain soil fungi.
A plant protein called Symbiosis Receptor-like Kinase (or SYMRK for short) that is also located on the root cell surface is required for both bacteria-plant and fungi-plant associations to occur. However, the exact role of this protein in these processes was unclear. Ried et al. have now investigated this by taking advantage of a property of cell surface receptor proteins: if some of these proteins are made in excessive amounts they activate their signalling cascades even when the initial signal is not present.
Ried et al. engineered plants called Lotus japonicus to produce high levels of SYMRK, NFR1, or NFR5. Each of these changes was sufficient to trigger the plants to develop root nodules in the absence of microbes. Genes associated with the activation of the signalling cascade involved the formation of root nodules were also switched on when each of the three proteins was produced in large amounts. In contrast, only an excess of SYMRK could activate genes related to fungi-plant associations. Ried et al. also found that, while SYMRK can function in the absence of the NFRs, NFR1 and NFR5 need each other to function. These data suggest that the receptor proteins play a key role in the decision between the establishment of an association with a bacterium or a fungus.
As an excess of symbiotic receptors caused plants to form symbiotic structures, Ried et al. propose that this strategy could be used to persuade plants that usually do not form symbioses with nitrogen-fixing bacteria to do so. If this is possible, it might lead us to engineer crop plants to form symbiotic interactions with nitrogen-fixing bacteria; this would help increase crop yields and enable crops to be grown in nitrogen-poor environments without the addition of extra fertiliser. DOI: 10.7554/eLife.03891.002
Results
Symbiotic RLKs trigger spontaneous formation of root nodules

To achieve overexpression, we generated constructs expressing functional SYMRK (Antolín-Llovera et al., 2014), NFR5, or NFR1 under the control of the strong L. japonicus Ubiquitin promoter and added C-terminal mOrange fluorescent tags for detection purposes (pUB:SYMRK-mOrange, pUB:NFR5-mOrange, pUB:NFR1-mOrange). The functionality of the NFR constructs was confirmed by their ability to restore nodulation in the corresponding, otherwise nodulation-deficient, nfr mutant roots to the level of L. japonicus wild-type roots transformed with the empty vector (Figure 1-figure supplement 1). Intriguingly, transgenic expression of any of the three symbiotic RLK versions in L. japonicus roots was sufficient to spontaneously activate the entire nodule organogenesis pathway, as evidenced by the formation of nodule-like structures in the absence of rhizobia (Figure 1; Figure 1-figure supplement 2). The presence of peripheral vascular bundles instead of a central root vasculature unambiguously identified these lateral organs as spontaneous nodules (Figure 1C). Spontaneous nodule primordia or nodules were present on 90% (116 out of 129), 23% (30 out of 133), 11% (16 out of 182), and 0% (0 out of 164) of L. japonicus root systems at 60 days post transformation (dpt) with, respectively, pUB:SYMRK-mOrange, pUB:NFR5-mOrange, pUB:NFR1-mOrange, or the empty vector (Figure 1A; Figure 1-figure supplement 2). A total of 810 empty vector roots generated throughout the course of this study did not develop spontaneous nodules in any of the genetic backgrounds and time points tested. Roots expressing functional SYMRK-RFP from its native promoter (pSYMRK:SYMRK-RFP; Kosuta et al., 2011) and grown in the absence of rhizobia did not develop spontaneous nodules, indicating that spontaneous nodulation was triggered by SYMRK expression from the Ubiquitin promoter and not by the addition of a C-terminal tag alone (Figure 1-figure supplement 3). Moreover, the expression of non-tagged SYMRK under the control of the Ubiquitin promoter triggered the formation of spontaneous nodules. In comparison to roots transformed with the tagged SYMRK version, a lower number of roots transformed with non-tagged SYMRK contained spontaneous nodules (Figure 1-figure supplement 4). One explanation for this observation is that the C-terminal mOrange tag might alter the relative amount of signalling-active SYMRK. Another possibility is that the presence of the tag improves homo- and/or hetero-dimerization, which subsequently leads to downstream signalling. Our results demonstrate that overexpression of NFR1-mOrange, NFR5-mOrange, or SYMRK results in the activation and execution of the nodule organogenesis pathway in the absence of external symbiotic stimulation.
Symbiotic RLKs trigger spontaneous nodulation-related signal transduction
To establish whether the development of nodule-like structures was associated with nodulation-related gene activation, we analysed the expression behaviour of marker genes induced during root nodule symbiosis (NIN and SbtS; Kistner et al., 2005) via quantitative real-time PCR (qRT-PCR; Figure 2A). The SbtS gene is also induced during AM symbiosis (Kistner et al., 2005). In comparison to control roots transformed with the empty vector, the SYMRK construct resulted in a highly significant increase in NIN and SbtS transcript levels (mean fold increase of 137 and 24, respectively). A smaller but statistically significant increase in transcript levels was observed in roots overexpressing either NFR1-mOrange (NIN, mean fold increase 3; SbtS, mean fold increase 7) or NFR5-mOrange (NIN, mean fold increase 8; SbtS, mean fold increase 15) (Figure 2A).
To monitor the spontaneous activation of NIN and SbtS by an independent, histochemical method, we made use of stable transgenic L. japonicus reporter lines carrying either a NIN promoter:β-glucuronidase (GUS) fusion (pNIN:GUS; Radutoiu et al., 2003) or a SbtS promoter:GUS fusion (pSbtS:GUS; Takeda et al., 2009) (Figure 2B). In addition, we employed the symbiosis-reporter line T90, which was isolated in a screen for symbiosis-specific GUS expression from a promoter-tagging population (Webb et al., 2000) (Figure 2B). The T90 reporter is activated in roots treated with nodulation factor or inoculated with Mesorhizobium loti and, similar to pSbtS:GUS, also shows GUS expression during AM (Kistner et al., 2005). GUS activity was determined in roots by histochemical staining with 5-bromo-4-chloro-3-indolyl glucuronide (X-Gluc; Figure 2B). Any of the three symbiotic RLKs, but not the empty vector, activated the pNIN:GUS, the pSbtS:GUS, as well as the T90 reporter in the absence of M. loti or AM fungi (Figure 2B). This histochemical analysis of GUS activity, in combination with the qRT-PCR results, provides strong evidence that overexpression of symbiotic RLKs leads to the activation of nodulation-related genes in the absence of external symbiotic stimulation (Figure 2). However, the three RLK genes were not equally effective in inducing the symbiotic program: NFR5 or NFR1 overexpression resulted in a lower percentage of root systems showing promoter activation and formation of spontaneous nodules when compared to SYMRK overexpression (Figure 1; Figure 1-figure supplement 2; Figure 2B). Interestingly, SYMRK- as well as NFR5-mediated T90 or NIN promoter activation was first observed in the root and retracted to nodule primordia and nodules over time, while NFR1-mediated T90 or NIN promoter activation could only be detected in nodule primordia or in nodules (Figure 2B). The Ubiquitin promoter drives expression of the receptors in all cells of the root (Maekawa et al., 2008), which is in marked contrast to the highly specific and developmentally controlled expression patterns of the marker genes observed. These incongruences thus reveal the presence of additional layers of regulation, operating downstream of the receptors, which dictate the precise expression patterns of the reporters.
SYMRK triggers spontaneous AM-related signal transduction
Since SYMRK is required not only for nodulation but also for AM symbiosis, we investigated the potential of dominant active RLK alleles to spontaneously activate AM-related marker genes or a promoter:GUS reporter (Figure 3). Blue copper-binding protein 1 (Bcp1) and the subtilisin-like serine protease gene SbtM1 are induced during AM symbiosis (Liu et al., 2003; Kistner et al., 2005; Takeda et al., 2009), and both genes are predominantly expressed in arbuscule-containing and adjacent cortical cells (Hohnjec et al., 2005; Takeda et al., 2009, 2012). Furthermore, in L. japonicus, SbtM1 expression marks root cells that contain an AM fungi-induced prepenetration apparatus (Takeda et al., 2012), an intracellular structure that forms prior to invasion by fungal hyphae (Genre et al., 2005). Transcript levels of SbtM1 and Bcp1 were determined via qRT-PCR, and both were significantly increased in roots transformed with pUB:SYMRK-mOrange compared to the empty vector control (Figure 3A). To determine SbtM1 activation by an independent, histochemical approach, we employed a stable transgenic L. japonicus line harbouring a SbtM1 promoter:GUS fusion (pSbtM1:GUS; Takeda et al., 2009). In line with the results from the qRT-PCR experiments, overexpression of SYMRK-mOrange in roots of the pSbtM1:GUS reporter line resulted in activation of the SbtM1 promoter at 40 and 60 dpt (Figure 3B). In contrast, no SbtM1 promoter activation or AM-related gene induction could be detected upon overexpression of either of the NFRs (Figure 3). The absence of AM-related gene expression in NFR5-expressing roots is not a consequence of the overall lower induction power of the NFR5 construct. In SYMRK- vs NFR5-expressing roots, the relative ratio of transcripts was 1.6:1 for SbtS and 17:1 for NIN (Figure 2A). In contrast, SbtM1 was undetectable in NFR5-overexpressing roots but more than 1100-fold above the detection limit in SYMRK-overexpressing roots (Figure 3A). These data clearly demonstrate a strong difference in the gene repertoire activated by SYMRK vs NFR5. Together with the spontaneous nodulation, these results demonstrate that overexpression of NFR1-mOrange, NFR5-mOrange, or SYMRK-mOrange activates the nodulation pathway, as evidenced by spontaneous organogenesis and gene expression results at the level of endogenous transcripts as well as promoter:GUS expression. In contrast, only the SYMRK construct, but neither of the NFR constructs, induced AM-related gene expression. This suggests that signalling specificity towards the two different symbiotic programs is achieved at the level of the receptors.
SYMRK associates with NFR1 and NFR5 in Lotus japonicus roots
Spontaneous receptor complex formation caused by overexpression offers itself as a likely explanation for the observed activation of symbiosis signalling in the absence of an external trigger or ligand. This is a scenario described in the context of cancer formation, where receptor tyrosine kinase overexpression or specific mutations in the receptor lead to receptor dimerization in the absence of a ligand, which results in ectopic cell proliferation (Schlessinger, 2002; Wei et al., 2005; Shan et al., 2012). Upon expression in Nicotiana benthamiana leaves in the absence of symbiotic stimulation, we previously observed weak association between full-length SYMRK and NFR1 as well as NFR5, but not between SYMRK and the functionally unrelated RLK Brassinosteroid Insensitive 1 (BRI1; Li and Chory, 1997; Antolín-Llovera et al., 2014; Figure 4-figure supplement 1). To test whether overexpression is associated with receptor complex formation in L. japonicus roots, we employed the overexpression constructs of NFR1, NFR5, or the unrelated EF-Tu receptor kinase (EFR; Zipfel et al., 2006) for co-immuno-enrichment experiments. The EFR construct did not interfere with nodulation in wild-type plants (Figure 1-figure supplement 1). Endogenous full-length SYMRK was co-enriched with NFR1 and NFR5, but not with EFR, demonstrating association of SYMRK with both NFRs (Figure 4). However, it should be noted that the expression strength of EFR was lower than that of NFR1 and NFR5. SYMRK-NFR association was detected in the absence of nodulation factor. We did not observe an effect of M. loti on this association at 10 days post inoculation (Figure 4).
Epistatic relationships between SYMRK and other common symbiosis genes
The availability of dominant active receptor gene alleles offers an attractive tool for their positioning in the genetic pathway required for nodule organogenesis and symbiosis-related gene expression. We asked whether the pUB:SYMRK-mOrange construct induced spontaneous nodules or the symbiosis-specific T90 reporter in mutants of common symbiosis genes (Figure 5; Figure 5-figure supplement 1 and 4). SYMRK-induced spontaneous nodules were absent from pollux-2, castor-12, nup133-1, or ccamk-13 mutant roots. Likewise, T90 reporter (GUS) activation was not detectable in the castor-2 x T90 (Kistner et al., 2005) or ccamk-2 x T90 (Gossmann et al., 2012) lines (Figure 5-figure supplement 4). This epistasis revealed that the ion channel genes CASTOR and POLLUX, the nucleoporin gene NUP133, and the calcium- and calmodulin-dependent protein kinase gene CCaMK operate downstream of SYMRK in a pathway leading to spontaneous nodulation and activation of T90. In contrast, spontaneous nodules did form upon SYMRK overexpression in the cyclops mutant, which lacks the transcriptional activator CYCLOPS (Yano et al., 2008). While bacterial infection is strongly impaired in L. japonicus cyclops or M. truncatula ipd3 mutants, nodule primordia or nodules, respectively, develop upon rhizobia inoculation (Yano et al., 2008; Horvath et al., 2011; Ovchinnikova et al., 2011). Furthermore, an autoactive version of CCaMK is able to induce the formation of mature spontaneous nodules in cyclops mutants (Yano et al., 2008). The ability of SYMRK to mediate spontaneous nodule organogenesis in the cyclops mutant is consistent with these results and points towards the existence of redundancies in the genetic pathway leading to organogenesis at the level of CYCLOPS (Singh et al., 2014).
Epistatic relationships between symbiotic RLK genes
We used the dominant active alleles to determine the hierarchy of the symbiotic RLK genes in the spontaneous nodulation and T90 activation pathways. Control roots of mutant lines transformed with the empty vector (218 root systems) or with SYMRK driven by its own promoter (33 root systems) did not carry spontaneous nodules or nodule primordia (Figure 5; Figure 5-figure supplement 1-3). Expression of pUB:SYMRK-mOrange spontaneously activated the nodulation program in nfr1-1, nfr5-2, and symrk-3 mutant backgrounds. In contrast, overexpression of NFR5-mOrange did not trigger the nodulation program in the nfr1-1 mutant. This dependence of NFR5 on NFR1 is further supported by the observation that overexpression of NFR1-mOrange and SYMRK-mOrange, but not of NFR5-mOrange, activated the T90 reporter in the nfr1-1 mutant background (Figure 5-figure supplement 5). These results position SYMRK downstream of or at the same hierarchical level as the NFRs. Moreover, while SYMRK-mediated spontaneous signalling does not require the simultaneous presence of NFR1 and NFR5, NFR5-mediated spontaneous signalling is dependent on the presence of NFR1.
Spontaneous signalling induced by receptor overexpression
A hallmark of the nitrogen-fixing symbiosis of legumes is the accommodation of rhizobia inside plant root cells in specialised organs, the nodules, which provide a favourable environment for nitrogen fixation. The overexpression and dominant active receptor versions described here trigger the nodule organogenesis pathway uncoupled from bacterial infection. Furthermore, dominant active RLK versions could be useful for probing and dissecting the symbiotic signalling pathway, also in those plant lineages that are presently unable to develop nitrogen-fixing root nodule symbiosis.
SYMRK has an active and direct role in symbiosis signalling
It has been observed that cytoplasmic streaming in root hairs of a symrk-3 mutant did not resume after mechanical stimulation, which raised the possibility that the absence of calcium-spiking upon injection of calcium-sensitive dyes into mutant root hair cells was a pleiotropic effect of this increased touch sensitivity (Esseling et al., 2004; Miwa et al., 2006). If touch desensitisation were the only function of SYMRK, its overexpression would not lead to spontaneous nodule formation. Since SYMRK overexpression did induce spontaneous nodules, we have unambiguously demonstrated a direct role of SYMRK in symbiosis signalling and eliminated the possibility that the symbiosis defects of symrk mutants are due to pleiotropic effects only.
SYMRK is positioned upstream of genes involved in calcium-spiking
Mutants defective for either of the common symbiosis genes SYMRK, CASTOR, POLLUX, NENA, NUP85, or NUP133 produce very similar phenotypes in symbiosis, in that they abort infection at the epidermis and are impaired in calcium-spiking (Kistner et al., 2005;Miwa et al., 2006;Groth et al., 2010), which placed them at the same hierarchical level. Consequently, a genetic resolution of the relative position of the common symbiosis genes upstream of calcium-spiking was missing. Epistasis tests revealed that SYMRK initiates signalling upstream of other common symbiosis genes implicated in the generation and interpretation of nuclear calcium signatures ( Figure 5, Figure 5- figure supplement 1 and 4). These findings support the conceptual framework in which SYMRK activates the calcium-spiking machinery and consequently the CCaMK/CYCLOPS complex, a central regulator of symbiosis-related gene expression and nodule organogenesis (Gleason et al., 2006;Tirichine et al., 2006;Singh and Parniske, 2012;Singh et al., 2014). This is in line with the observation that dominant active variants of CCaMK were able to restore nodulation and infection in symrk mutant backgrounds, indicating that a main function of SYMRK in symbiosis is the activation of CCaMK (Hayashi et al., 2010;Madsen et al., 2010).
Interaction between SYMRK and the NFRs
We observed association between SYMRK and either NFR1 or NFR5 upon NFR overexpression in L. japonicus roots (Figure 4). Interestingly, under these conditions, the SYMRK-NFR association was detected in the absence of nodulation factor (Figure 4). In mammalian receptor tyrosine kinases as well as plant RLKs, ligand-induced receptor dimerization is the single most critical step in signal initiation (Nam and Li, 2002; Schlessinger, 2002; Chinchilla et al., 2007; Schulze et al., 2010; Liu et al., 2012; Sun et al., 2013a, 2013b). However, ligand-independent dimerization of receptor tyrosine kinases mediated by specific mutations in the kinase domain (Shan et al., 2012) or by overabundance of receptor tyrosine kinases (Wei et al., 2005) results in signalling activation and is a scenario well described in the context of cancer formation (Schlessinger, 2002). Similarly, overexpression of symbiotic RLKs might trigger ligand-independent receptor complex formation and activation of downstream signalling, thus providing an explanation why the interaction was also detected in the absence of external symbiotic stimulation. Unfortunately, we could not address the question whether SYMRK-NFR interaction is ligand-induced at endogenous levels of NFR expression since NFR1 and NFR5 were difficult to detect under these conditions.
The relationship between NFR1, NFR5 and SYMRK
We observed that NFR5 requires NFR1 as well as SYMRK for the spontaneous initiation of symbiosis signalling. This provides support for a model first put forward by Radutoiu et al. (2003), in which NFR1 and NFR5 engage in a nodulation factor perception complex. This model has received additional support through their synergistic effect on promoting cell death in N. benthamiana (Madsen et al., 2011; Pietraszewska-Bogiel et al., 2013). The finding that NFR1 and NFR5 interact with SYMRK upon overexpression suggests that the three RLKs engage in a receptor complex (Antolín-Llovera et al., 2014; Figure 4, Figure 4-figure supplement 1), and that this interaction might activate SYMRK for signal transduction. The observation that SYMRK operates independently of NFR1 or NFR5 brings a new twist into current models of the signalling pathway (Downie, 2014) (Figure 5; Figure 5-figure supplement 1 and 4). NFR1 and NFR5 are only essential in the epidermis (Hayashi et al., 2014), and it is likely that, at least partially, other members of the LysM-RLK gene family of L. japonicus (Lohmann et al., 2010) take over their role in the root cortex. NFR1 or NFR5 dispensability may be explained by other LysM-RLKs that might engage in alternative receptor complexes with SYMRK. Alternatively, spontaneous SYMRK-mediated signalling might be independent of any LysM-RLK; however, given the large number of LysM-RLKs in legumes (17 in L. japonicus; Lohmann et al., 2010), it is difficult to test the latter hypothesis conclusively.
SYMRK undergoes cleavage of its ectodomain, resulting in a truncated RLK molecule called SYMRK-ΔMLD (Antolín-Llovera et al., 2014). In competition experiments in N. benthamiana leaves, NFR5 binds preferentially to SYMRK-ΔMLD, which experiences rapid turnover in N. benthamiana and in L. japonicus (Antolín-Llovera et al., 2014). As our SYMRK antibody does not recognise endogenous SYMRK-ΔMLD, we were not able to assess whether overexpressed NFR1 or NFR5 also associates with this truncated SYMRK variant in L. japonicus roots. In a hypothetical scenario, the SYMRK-ΔMLD complex with NFR5 forms constitutively to prevent inappropriate signalling, for example in the absence of rhizobia. The recruitment of NFR1, a hypothetical signal initiation event, would be promoted by the presence of nodulation factor. Our observation that upon overexpression in L. japonicus both NFR1 and NFR5 seem to interact with full-length SYMRK (Figure 4) suggests the formation of a ternary complex. This hypothetical complex has dual functionality: it signals through SYMRK on one hand to activate CCaMK and through the NFR1-NFR5 complex on the other hand to trigger the infection-related parallel pathways discovered by Madsen et al. (2010) andHayashi et al. (2010). It is possible that SYMRK has a dual-positive and negative-regulatory role: on the one hand SYMRK promotes signalling but on the other hand SYMRK-ΔMLD may be involved in preventing inappropriate signalling. A negative regulatory role would explain the exaggerated root hair response of symrk mutants to rhizobia (Stracke et al., 2002), since NFR1-NFR5 interaction is no longer under governance by SYMRK-ΔMLD. It has been demonstrated recently that expression of the intracellular kinase domain of SYMRK (SYMRK-KD) from Medicago truncatula or Arachis hypogaea in M. truncatula roots from the CaMV 35S promoter induces nodule organogenesis in the absence of rhizobia (Saha et al., 2014). However, in the presence of Sinorhizobium meliloti, nodules on plants overexpressing AhSYMRK-KD were poorly colonized and bacteria were rarely released from infection threads (Saha et al., 2014).
Heterocomplexes between SYMRK and alternative LysM-RLKs may govern nodulation-vs mycorrhiza signalling
The origin of AM dates back to the earliest land plants (∼400 mya) and recent angiosperms maintained a conserved genetic program for the intracellular accommodation of AM fungi (Gutjahr and Parniske, 2013). During the evolution of the nitrogen-fixing root nodule symbiosis, this ancient genetic programme has been co-opted, as evidenced by the common symbiosis genes (Kistner et al., 2005). The discovery that the ancient SYMRK might act as a docking site for the recently evolved nodulation factor perception system (Antolín-Llovera et al., 2014;Figure 4, Figure 4-figure supplement 1), highlights the role of this putative interface during the recruitment of the ancestral AM signalling pathway for root nodule symbiosis. Since a LysM-RLK closely related to NFR5 has been implicated in AM signalling (Op Den Camp et al., 2011), this finding also provides a conceptual mechanism for the integration of signals from the rhizobial and fungal microsymbiont through alternative complex formation between SYMRK and NFRs or AM factor receptors.
Specificity originates from the receptors
One question that has puzzled the community since the postulate of a common symbiosis pathway is how the decision between the developmental pathways of AM or root nodule symbiosis is made when the signalling employs identical signalling components. Proposed models involved different calcium-spiking signatures with symbiosis-specific information content (Kosuta et al., 2008) or additional, yet unidentified, pathways that operate in parallel to the common symbiosis pathway to mediate exclusive and appropriate signalling (Takeda et al., 2011). Our observation of differential gene activation triggered by NFRs and SYMRK provides evidence that an important decision point lies directly at the level of the receptors (Figures 2 and 3). Moreover, the observation that the dominant active SYMRK allele activates both pathways, an outcome not observed upon stimulation with AM fungi or rhizobia, implies the existence of negative regulatory mechanisms that prevent the activation of the inappropriate pathway upon contact with either the bacterial or the fungal microsymbiont. The SYMRK-mediated loss of signalling specificity may be explained by simultaneous complex formation of SYMRK with NFR1 and NFR5, and related LysM-RLKs that mediate recognition of signals from the AM fungus (Maillet et al., 2011; Op Den Camp et al., 2011), which results in the release of both negative regulatory mechanisms, or by an unbalanced stoichiometry of SYMRK and putative specific negative regulators of AM- and root nodule symbiosis signalling. Candidates for such regulators include the identified interactors of the kinase domains of NFR1, NFR5, and SYMRK (Kevei et al., 2007; Zhu et al., 2008; Lefebvre et al., 2010; Mbengue et al., 2010; Chen et al., 2012; Den Herder et al., 2012; Ke et al., 2012; Toth et al., 2012; Yuan et al., 2012). The loss of signalling specificity upon SYMRK overexpression is reminiscent of expression of the deregulated CCaMK 314 deletion mutant that also induces spontaneous nodules and AM-related gene activation (Takeda et al., 2012). It is therefore possible that SYMRK overexpression imposes a deregulated state on CCaMK that is otherwise attainable artificially through the deletion of its regulatory domain.
DNA constructs and primers
For a detailed description of the constructs and primers used in this study, please see Supplementary file 1.
Plant growth, hairy root transformation and inoculation
L. japonicus seed germination (Groth et al., 2010) and hairy root transformation (Charpentier et al., 2008) were performed as described previously. Plants with emerging hairy root systems were transferred to Fahraeus medium (FP) plates containing 0.1 µM of the ethylene biosynthesis inhibitor L-α-(2-aminoethoxyvinyl)-glycine at 2.5 weeks after transformation. For spontaneous nodulation experiments, promoter activation assays, or qRT-PCR experiments, plants were transferred to sterile Weck jars containing 300 ml dried sand/vermiculite and 25 ml FP medium at 23 dpt. For co-enrichment experiments, plants were transferred to sterile Weck jars containing 300 ml dried sand/vermiculite at 23 dpt, mock treated with 20 ml FP medium or inoculated with 20 ml of a M. loti MAFF303099 DsRED suspension in FP medium set to an OD 600 of 0.05, and incubated for 10 days. Plants for the SYMRK- and CCaMK T265D -mediated T90 activation in the nfr1-1, cyclops-2, and ccamk-2 mutants were transferred to FP plates containing 0.1 µM of the ethylene biosynthesis inhibitor L-α-(2-aminoethoxyvinyl)-glycine at 21 dpt and kept on FP plates for 17 days. Transformants of the pSbtM1:GUS line were directly transferred to Weck jars containing 300 ml dried sand/vermiculite and approximately 25 ml ddH 2 O at 2.5 weeks after transformation. It is important to avoid free water at the bottom of the Weck jar. Plants were grown in Weck jars in a growth chamber (16 hr light/8 hr dark; 24°C) for 1.5-6 weeks. For complementation experiments, plants were transferred from FP plates to open pots containing 300 ml dried sand/vermiculite and 75 ml FP medium at 23 dpt. After 1 week, plants were inoculated with 25 ml per pot of a M. loti MAFF303099 DsRED suspension in FP medium set to an OD 600 of 0.05. Roots were phenotyped 15 days after inoculation.
Non-denaturing protein extraction from Nicotiana benthamiana leaves and immunoprecipitation experiments
Protein extraction and immunoprecipitation was performed as described previously (Antolín-Llovera et al., 2014).
Non-denaturing protein extraction from Lotus japonicus hairy roots and immuno-enrichment experiments
Plant tissue was ground to a fine powder in liquid nitrogen with mortar and pestle. Proteins were extracted by adding 200 µl extraction buffer per 100 mg root tissue (50 mM Hepes, pH 7.5, 10 mM EDTA, 150 mM NaCl, 10% sucrose, 2 mM DTT, 0.5 mg/ml Pefabloc, 1% Triton X-100, PhosSTOP [Roche, Germany], Plant Protease Inhibitor [P9599; Sigma-Aldrich, Germany], 1% polyvinylpolypyrrolidone). Samples were incubated for 10 min at 4°C with 20 rpm end-over-end mixing, and subsequently centrifuged for 15 min at 4°C and 16,000 RCF. 30 µl of each protein extract was mixed with 10 µl 4× SDS-PAGE sample buffer (input; 25% (vol/vol) 0.5 M Tris-HCl (pH 6.8), 35% (vol/vol) 20% SDS, 40% (vol/vol) 100% glycerol, 0.03 g/ml DTT, dash of bromophenol blue). For immuno-enrichment procedures, 30 µl RFP binder coupled to magnetic particles (rtm-20; Chromotek, Germany) were washed in wash buffer (WB; 50 mM Hepes, pH 7.5, 10 mM EDTA, 150 mM NaCl, 1% Triton X-100). Between 500 and 1000 µl of the protein extract was added to the beads and immuno-enrichment was performed for 4 hr at 4°C with 20 rpm end-over-end mixing, followed by 15 min magnetic separation at 4°C. The supernatant was removed and the beads were washed twice with WB. 40 µl 2× SDS-PAGE sample buffer was added to the beads and both beads and input were incubated for 10 min at 56°C. After heating, beads were magnetically collected at the tube wall for 5 min and 40 µl of the supernatant (eluate) was taken. For SDS-PAGE, 20 µl of the input or eluate were loaded on each gel.
T90, NIN, SbtM1, and SbtS promoter analysis in Lotus japonicus
GUS activity originating from the activation of promoter:GUS reporters was visualized by X-Gluc staining as described previously (Groth et al., 2010).
Expression analysis
Transgenic root systems of L. japonicus plants were harvested 40 dpt. 80 mg root fresh weight per sample was applied for total RNA extraction using the Spectrum Plant Total RNA kit (Sigma-Aldrich, Germany). For removal of genomic DNA, RNA was treated with DNase I (amplification grade DNase I, Invitrogen, Germany). RNA integrity was verified on an agarose gel and the absence of genomic DNA was confirmed by PCR. First-strand cDNA synthesis was performed in 20 µl reactions with 600 ng total RNA using the SuperScript III First-Strand Synthesis SuperMix (Invitrogen, Germany) with oligo(dT) primers. qRT-PCR was performed in 20 µl reactions containing 1× SYBR Green I (Invitrogen, Germany) in a CFX96 Real-time PCR detection system (Bio-Rad, Germany). PCR program: 95°C for 2 min; 45 cycles of (95°C for 30 s; 60°C for 30 s; 72°C for 20 s; plate read); 95°C for 10 s; melt curve 60°C-95°C, increment 0.5°C per 5 s. Expression was normalized to the reference genes EF-1alpha and Ubiquitin, and EF-1alpha was used as a reference to calculate the relative expression of the target genes. The empty vector samples were used as negative control. Three biological replicates were analysed in technical duplicates per treatment. A primer list can be found in the supplementary files (Supplementary file 1B).
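For orientation, the normalisation step can be written out as a short script. The 2^-ΔΔCt calculation shown here is an assumption for illustration; the text above only states that EF-1alpha served as the reference gene and that empty-vector samples were the negative control, and the Ct values in the example are hypothetical.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression of a target gene normalised to a reference gene.

    Uses the common 2^-ddCt approach (an assumption); the original analysis
    only states that EF-1alpha was the reference and that empty-vector samples
    served as the negative control.
    """
    d_ct_sample = ct_target - ct_reference            # normalise to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl  # same for the control sample
    dd_ct = d_ct_sample - d_ct_control                 # compare to empty-vector control
    return 2.0 ** (-dd_ct)

# hypothetical Ct values for one biological replicate
print(relative_expression(ct_target=24.1, ct_reference=18.3,
                          ct_target_ctrl=27.6, ct_reference_ctrl=18.5))
```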
Statistics and data visualisation
All statistical analyses and data plots were performed and generated with R version 3.0.2 (2013-09-25) 'Frisbee Sailing' (R Development Core Team, 2008) and the packages 'Hmisc' (Harrell, 2014), 'agricolae' (Mendiburu de, 2014), 'car' (Fox and Weisberg, 2011), 'multcompView' (Graves et al., 2012) and 'multcomp' (Hothorn et al., 2008). For statistical analysis of the numbers of nodules, nodule primordia, or total organogenesis events, a Kruskal-Wallis test was applied followed by false discovery rate correction. Quantitative real-time PCR data were power transformed with the Box-Cox transformation and a one-way ANOVA followed by a Dunnett's test was performed, in which every treatment was compared to the empty vector samples.
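A rough Python analogue of this R workflow is sketched below, for illustration only. The pairwise Mann-Whitney comparisons feeding the FDR correction, the use of scipy.stats.dunnett (available from SciPy 1.11 onwards), and the toy counts are assumptions, not the original analysis.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# hypothetical nodule counts per construct (the real data are per root system)
groups = {
    "empty_vector": np.array([0, 0, 0, 1, 0]),
    "pUB_SYMRK":    np.array([3, 5, 2, 4, 6]),
    "pUB_NFR5":     np.array([1, 2, 0, 3, 1]),
}

# Kruskal-Wallis across constructs, then FDR correction of pairwise comparisons
h, p_global = stats.kruskal(*groups.values())
pairwise_p = [stats.mannwhitneyu(groups["empty_vector"], g).pvalue
              for name, g in groups.items() if name != "empty_vector"]
rejected, p_adj, _, _ = multipletests(pairwise_p, method="fdr_bh")

# qRT-PCR-style data: Box-Cox transform, one-way ANOVA, Dunnett vs. empty vector
expr = {k: v.astype(float) + 1.0 for k, v in groups.items()}   # placeholder positive data
transformed = {k: stats.boxcox(v)[0] for k, v in expr.items()}
f, p_anova = stats.f_oneway(*transformed.values())
dunnett = stats.dunnett(transformed["pUB_SYMRK"], transformed["pUB_NFR5"],
                        control=transformed["empty_vector"])   # requires SciPy >= 1.11
print(p_global, p_adj, p_anova, dunnett.pvalue)
```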
"Biology",
"Environmental Science"
] |
Open-Circuit Fault Diagnosis of T-Type Three-Level Inverter Based on Knowledge Reduction
Compared with traditional two-level inverters, multilevel inverters have many solid-state switches and complex composition methods. Therefore, diagnosing and treating inverter faults is a prerequisite for the reliable and efficient operation of the inverter. Based on the idea of intelligent complementary fusion, this paper combines the genetic algorithm–binary granulation matrix knowledge-reduction method with the extreme learning machine network to propose a fault-diagnosis method for multi-tube open-circuit faults in T-type three-level inverters. First, the fault characteristics of power devices at different locations of T-type three-level inverters are analyzed, and the inverter output power and its harmonic components are extracted as the basis for power device fault diagnosis. Second, the genetic algorithm–binary granularity matrix knowledge-reduction method is used for optimization to obtain the minimum attribute set required to distinguish the state transitions in various fault cases. Finally, the kernel attribute set is utilized to construct extreme learning machine subclassifiers with corresponding granularity. The experimental results show that the classification accuracy after attribute reduction is higher than that of all subclassifiers under different attribute sets, reflecting the advantages of attribute reduction and the complementarity of different intelligent diagnosis methods, which have stronger fault-diagnosis accuracy and generalization ability compared with the existing methods and provides a new way for hybrid intelligent diagnosis.
Introduction
The multilevel inverter is a power electronic device that generates output voltage waveforms and current waveforms using a variety of direct current (DC) voltage sources and power switches [1].Multilevel inverters are widely used in low-voltage situations and mid-frequency switching frequency scenarios due to their advantages of low switching transient voltage change rate, small harmonic distortion, and high power conversion efficiency [2], and have become indispensable electronic devices in power systems as modern industry develops in the direction of scale, accuracy, systematization, automation, and intelligence.The complexity of the T-type topology and the abundance of power semiconductor devices have recently raised questions about the ability of the system to operate reliably in comparison to the traditional two-level inverter [3].
Because semiconductor power devices are relatively fragile, open-circuit (OC) faults and short-circuit (SC) faults of insulated gate bipolar transistors (IGBTs) can be distinguished based on their external behavior [4].SC faults cause short circuits and abnormal overcurrent states, causing other components to be damaged.It is necessary to isolate the problematic component or to immediately shut down the whole system, for instance, using desaturation detection in the door driver or fast fuses [5].On the contrary, OC fault may not immediately cause system failure but may cause current distortion and secondary damage to other components due to increased noise and voltage stress.As a result, effective OC fault diagnosis is critical for improving power system reliability.
At present, more methods based on signal processing are being developed to decompose, convert, or lower the dimension of system detection data according to the signal analysis strategy and extract feature information from it.Fault diagnosis and identification are achieved by comparing the changing pattern of feature information before and after the fault.The authors in [6] proposed a non-invasive diagnostic strategy to detect the near-field voltage signal of the inverter DC bus through the antenna and extract the spectral features of the collected signal using fast Fourier transform (FFT) as a basis for fault classification.However, due to the limitation of the amount of diagnostic information, this method can only achieve the diagnosis of the clamping diode open-circuit fault.A discrete wavelet transform-based fault feature extraction strategy for microgrid inverters was proposed by the authors in [7].The authors in [8] propose a fault-diagnosis method based on the average modulation voltage model for multi-current sensor disordered grid-connected inverters, which establishes the average modulation voltage model of three-phase steadystate coordinates and estimates the difference between the measured value and the actual value of the current by the model.Then, the fault is found.The experimental results show that the method can accurately locate the fault and perform fault-tolerant control when multiple sensors have offset faults at the same time.By symmetrically reconstructing the phase current signal, the effect of load variation is eliminated while retaining the main features of the fault.Then, multi-scale feature extraction is performed on the signal, and the energy coefficients of each group of current signals at different frequencies obtained are used as diagnostic classification information.However, the selection of wavelet bases will directly affect the extraction effect of fault features, which increases the difficulty of applying this method.To accurately detect IGBT switching faults, a new method based on an enhanced version of the variable mode decomposition algorithm (EVMD) combined with wavelet packet analysis (WPA) and scalar indicators is proposed by the authors in [9] to detect OC faults, which also shows how effective the suggested method is at diagnosing OC faults.For three-level active neutral-point-clamped (3LANPC) inverters, the authors in [10] established a predictive current model and seamlessly integrated the residuals of the predicted current vectors between the measured and predicted currents into the backward optimization of the MPC to diagnose inverter faults, which reduces the complexity of inverter fault identification, while the authors defined the counting function within each current cycle, which enhances the robustness of the algorithm.However, the method proposed in the article was based on generalized current residuals and a fault hypothesis prediction model.In [11], a method based on an average voltage vector was proposed, in which the threshold value is established by vector trajectory prediction, and the diagnostic variables include neutral point potential, eigenvector angle, and eigenvector modulus.These methods, however, all rely on signals provided by the system controller, resulting in a lengthy diagnosis time.To address this issue, the authors in [12] proposed using simple logic circuits to process the voltage and switching signals of the upper bridge transistors, as well as adding hardware to the inverter to provide transient fault information, 
but this introduced additional costs and complexity.Following that, a model simulation that infers system operation is proposed.For example, consider the hybrid logic dynamic diagnosis model, which is made up of a two-level inverter and an NPC inverter [13,14].The diagnostic signal is defined as the difference between the sampling and estimated currents, and the fault location is determined by the residual change rate.In [15], branch level and equipment level faults are identified hierarchically using the DC-link model, and parameter errors like inductance error and sampling error are processed to ensure accurate diagnosis results while also enhancing diagnosis speed and robustness.However, OC faults in different inverter transistors can produce similar fault characteristics [16].
Some artificial intelligence methods are used for state feature classification and are becoming a prominent research area as machine-learning (ML) technology and computing capacity grow.The authors in [17] made improvements to convolutional neural networks using a global average pooling layer instead of a fully connected layer, and the improved method reduces the number of model parameters of traditional neural networks greatly, which is beneficial to achieving fast fault diagnosis of inverters.To further optimize the diagnostic performance and improve diagnostic accuracy, its integrated processing and collaborative analysis are used for inverter fault diagnosis, which is a common information fusion process.The combination of different algorithms is used to enhance the extraction of fault feature information and to improve the classification and discrimination of fault features at the same time.The authors in [18] carried out data processing and model construction for inverter open-circuit faults.In order to increase the number of samples, the authors used a Conditional Variational Auto-Encoder for data enhancement of the fault samples and Wavelet Packet Decomposition to eliminate the noise in the samples; then, the authors constructed an improved residual network with a channel attention module as a fault-diagnosis model, and the simulation results show that targeting the inverter has higher diagnostic accuracy, faster convergence speed and shorter iteration period in fault diagnosis.However, the methodology used by the authors is more stringent on the accuracy of the dataset.Any deviation or error present in these initial fault datasets may affect the accuracy of the final fault diagnosis.The authors in [19] proposed a neural network diagnosis strategy based on a circuit resolution model.This method combines the advantages of both circuit analysis and data-driven diagnostic strategies, derives diagnostic signals that directly reflect circuit fault patterns through the parsing model, and then uses Artificial Neural Network (ANN) to identify and classify the feature information in the diagnostic signals, avoiding complex fault analysis, rule specification, and threshold selection problems.Abdo, Ali, and colleagues proposed improving fault classification accuracy by optimizing the data itself [20].In [21], a long short-term memory (LSTM) neural network and a clustering algorithm were used to create a neural network model for fault detection.To locate faults, the authors in [22] combined a deep convolutional network with network topology.Zhou et al. 
then used a granular Markov model to detect anomalous behavior after being inspired by the thought of information granularity [23].For neutral-point-clamped inverters, the authors in [24] proposed a data-driven inverter faultdiagnosis method based on the design of labels to simplify the traditional labeling method and one-dimensional depth-separable convolution (1D-DSC) and global maximum pooling (GMP) methods to process the data.Then, the TensorRT framework is used for model compression and optimization.Simulation results show that the proposed method can reduce the number of model parameters by more than 90% and has better online application potential for fault diagnosis.The authors in [25] combined two methods of chaotic adaptive gravity search algorithm (GSA) and back propagation neural network (BPNN) optimized by particle swarm optimization(PSO) algorithm to establish a fault-diagnosis model based on chaotic adaptive GSA-PSO-BPNN, which improved the fault classification performance, and the feasibility and effectiveness of the algorithm was demonstrated.Although the above literature does not require the analysis of the circuit operation mechanism or the creation of an accurate circuit model, the diagnosis time for faults is generally long.This is because complex calculations are generally performed on many diagnostic signals to accurately identify fault characteristics, a process that requires long data acquisition time and signal processing time.
Power inverters, on the other hand, are more complex systems, making it challenging to collect complete experimental data for fault diagnosis.Rough set theory is a new mathematical tool that can be used to deal with fuzzy and uncertain knowledge and has strong qualitative analysis capabilities.Rough sets can be directly analyzed and reasoned from experimental data to discover a large amount of implicit information knowledge and reveal the inherent law.Rough set attribute reduction has been used to help diagnose power transformer faults in recent years, with some success [26,27].
To summarize, artificial intelligence-based approaches can learn the nonlinear relationship between faults and fault features from data and have better diagnostic detection capabilities. However, when neural network methods are applied in the field of fault diagnosis, training samples are not easy to obtain and it is difficult to integrate all expert experience and knowledge, which makes the diagnosis inaccurate and imprecise.
Integrated artificial intelligence and signal processing combine the advantages of different strategies.The required data are fewer, and the diagnostic model structure is relatively simple, but it still takes more than half the fundamental wave period to locate the fault.This speed is difficult to meet on some occasions that require high real-time performance of fault protection isolation or fault tolerance.In contrast, the most important feature of the rough set method is that it can objectively describe and process uncertain events without requiring subjective a priori information outside the data-set, and the core element of the method is attribute parsimony.In inverter fault diagnosis, rough set attribute simplification is usually used to reduce the dimensionality of feature quantities and reduce the size and complexity of the system.Therefore, the use of rough set theory in combination with neural networks can reduce the size of the system and decrease the time of fault diagnosis, which will achieve more desirable results in fault diagnosis.
As a result, this paper builds an ELM network model for comprehensive inverter fault diagnosis using a knowledge-reduction method based on rough set (RS) theory. This paper's main contributions are as follows: (1) Each internal IGBT operational state is determined by examining the power value changes associated with the positive and negative half-waves of each phase current of the three-level inverter. (2) A genetic algorithm (GA)-based binary granular matrix knowledge-reduction method is proposed. The decision attributes of the problem are derived through knowledge reduction under the assumption that the classification ability of the information system remains unchanged. (3) A fault-detection model is created that combines GA-GrC and ELM neural networks; all attributes are replaced with the reduction results, which improves the classification performance of the ELM neural network and thereby improves the diagnosis speed, accuracy, and real-time performance of the detection system.
T-3L Inverter OC Fault Analysis
2.1. PWM Modulation and Operation Mode
Figure 1 depicts the circuit topology of a T-3L inverter. The A-phase bridge arm is used as an example, with four IGBTs and four freewheeling diodes in reverse parallel with the IGBTs to provide a reverse current conduction loop. The output voltage of the inverter is routed to the load via the LC filter. The three-phase bridge arm of the inverter contains 12 IGBTs, whose switching state is controlled by the gate signal. When the gate signal is 1, the IGBT is turned on; when the gate signal is 0, the IGBT is turned off. The T-3L inverter has three states when it is operating normally, as shown in Table 1. In this paper, pulse width modulation (PWM) is used to control the inverter switching state. Command signal 1 signifies switch-on, while 0 indicates switch-off. Activating switches S a1 and S a2 creates switching state P, resulting in a corresponding pole voltage V AO = +V dc /2. Switching state O is formed by turning on either S a2 or S a3 , depending on the current direction, with the corresponding pole voltage V AO = 0. Activation of switches S a3 and S a4 produces switching state N, with a corresponding pole voltage V AO = −V dc /2. When the T-type inverter is operational, the capacitor voltages V C1 and V C2 oscillate at a low frequency around V dc /2, with an oscillation period of one third of the current cycle.
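As a small illustration of the switching states of Table 1, the mapping from gate commands to the A-phase pole voltage can be written as a helper function; the gate patterns and the 400 V DC-link value in the example are assumptions for demonstration.

```python
def pole_voltage(gates, v_dc):
    """A-phase pole voltage V_AO of a T-type three-level leg.

    `gates` is a tuple (S_a1, S_a2, S_a3, S_a4) of 0/1 gate commands.
    The mapping follows the three normal switching states described above:
    P: S_a1 and S_a2 on -> +V_dc/2, N: S_a3 and S_a4 on -> -V_dc/2,
    O: midpoint path on -> 0.
    """
    s1, s2, s3, s4 = gates
    if s1 and s2:
        return +v_dc / 2      # state P
    if s3 and s4:
        return -v_dc / 2      # state N
    if s2 or s3:
        return 0.0            # state O (clamped to the neutral point)
    raise ValueError("gate combination outside the normal P/O/N states")

print(pole_voltage((1, 1, 0, 0), v_dc=400.0))   # hypothetical 400 V DC link -> +200 V
```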
T-3L Inverter OC Fault Characteristics
To elucidate the operation of the various fault-diagnostic approaches, this section discusses the operation of a single-phase three-level T-type inverter topology. It is assumed that current flowing from any one of the DC-link terminals to the inverter pole is taken as the positive current direction; the red arrows in Figure 2 mark this direction for each switching state. Due to the symmetry of the circuit, only phase A is analyzed. The current direction from the inverter to the grid is considered positive (i > 0). The figures below depict the current circuit of each switching device in the T-3L inverter in the event of an OC fault. The solid line represents the actual current flow path, while the dashed line illustrates the current flow path of S a1 assuming normal conduction.
An OC Fault Occurs on a Single IGBT
As shown in Figure 3a, when I a > 0, the open circuit of S a1 causes the A-phase output state of the inverter to change from P to O and the discharge capacitance to change from C 1 to C 2 , establishing V C1 > V C2 . At this point, the current can only be output after passing through S a2 and S a3 to the neutral point, where it quickly attenuates to zero, and the output power approaches 0 W. When I a < 0, Figure 3b shows that the A-phase output state of the inverter will not change due to the open circuit of S a1 . The A-phase output current of the inverter will flow out from the negative terminal via the reverse shunt diode of the lower-side IGBT, creating a charging situation on the direct current (DC) side. At this point, the power associated with the positive half-wave current will be negative.
When S a2 fails, as shown in Figure 3c, and I a > 0, the open circuit of S a2 has no effect on the output state of the phase. When the current is negative, it is routed back to the DC-side power supply via the inverse shunt diode of S a4 . At this point, only C 2 will discharge, and the output state will change from O to P, resulting in a halving of the output power amplitude.
An OC Fault Occurs on Two IGBTs in the Same Phase
Consider the open circuit of S a1 and S a2 as an illustration. When the load current direction is positive, as in Figure 4a, the current enters the negative terminal N of the DC bus and exits through the switch tube S a4 . The voltage between the two points is then −V dc /2, and the output terminal is connected to the negative terminal N of the DC bus. According to Figure 4b, when the load current direction is negative, the current enters the output end A of the inverter, passes through the inverse shunt diode of S a4 , and then enters the negative polar end N of the DC bus. The A-phase current only has the negative half-wave part, and the voltage is −V dc /2. At the moment of the fault, the power of this phase drops suddenly, accompanied by a significant increase in harmonics in the B- and C-phase currents, leading to varying degrees of power decline.
Experiments reveal that changes in current and power differ significantly from the fault characteristics mentioned above when two internal switches or external switches fail simultaneously.As an illustration, consider the OC fault between S a1 and S a4 .According to Figure 4c,d, when the load current is flowing in the opposite direction, the current input end is different and exits through the reverse parallel diode of S a2 and S a3 , respectively.As a result, the output end A is connected to the midpoint O of the DC bus, the voltage is 0V, and the power of this phase is 0W correspondingly.
Similarly, when the two internal switches S a2 and S a3 are disconnected, the gate-emitter bias voltage of S a2 and S a3 does not cause large oscillations of V ce , resulting in reverse recovery characteristics similar to diodes for S a1 and S a4 . The reverse recovery current and voltage peak fall, and the loss is extremely low. Therefore, the A-phase power drops to 0 W in an instant and then returns to a stable state. As a result, the power magnitude is only second to that of the B and C phases.
An OC Fault Occurs on Two IGBTs in Different Phases
If S a1 and S b1 were to be disconnected, for instance, the current could only flow from the middle point O to the output end of phase A via the anti-parallel diodes of S a2 and S a3 . The open circuit of S a1 and S b1 also affects the phase B and phase C currents, preventing the current from reaching S c1 from the P end. As a result, the output power is 0 W, and the current quickly decays to 0 A. When the current direction is negative, the current flows into the output end, and S c1 is switched on normally, whereas the output state of the other two phases does not change due to the open circuit, so that only negative half-wave current can occur and the output amplitude decreases. Therefore, the output power only fluctuates around 0 W.
From the above fault characterization, it is clear that a fault in a power device will force a change in the current path of the faulty phase of the inverter, resulting in a change in its operating state, and this transfer between operating states can be represented and tracked using a finite state machine. The current direction and the power switch fault are defined as state transfer rules, represented by the logical variables δ(+/−) and F SXk , respectively: δ(+) = 1 when the current direction is positive and δ(−) = 1 when it is negative; F Sa1 = 1 means that S a1 has an open-circuit fault, while F Sa1 = 0 means that S a1 is working normally. Taking phase A of the inverter as an example, the transfer of the circuit operation status in each operating mode is summarized in Table 2.
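Since Table 2 itself is not reproduced here, the following sketch only illustrates the finite-state-machine idea with a few transitions reconstructed from the fault analysis above (open S a1 with positive current forces P to O, open S a1 and S a2 force P to N, open S a2 with negative current replaces O with P); the function and its rule set are illustrative, not the complete transfer table.

```python
def next_state(current, direction, faults):
    """Return the A-phase operating state after applying the transfer rules.

    current  : present state, one of "P", "O", "N"
    direction: "+" if delta(+) = 1 (positive current), "-" if delta(-) = 1
    faults   : set of switches with F_Sxk = 1, e.g. {"Sa1"}

    Only a few example rules from the fault analysis are encoded; the full
    transfer table (Table 2) is not reproduced in this sketch.
    """
    if direction == "+" and {"Sa1", "Sa2"} <= faults and current == "P":
        return "N"   # forward current freewheels through D_a4
    if direction == "+" and "Sa1" in faults and current == "P":
        return "O"   # current forced through S_a2/S_a3 to the neutral point
    if direction == "-" and "Sa2" in faults and current == "O":
        return "P"   # only C2 discharges, output amplitude halves
    return current   # no rule fired: operating state unchanged

print(next_state("P", "+", {"Sa1"}))            # -> "O"
print(next_state("P", "+", {"Sa1", "Sa2"}))     # -> "N"
```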
Based on the analysis above, the characteristics of single and double-tube faults can be used to diagnose other fault types and locate the fault phase and fault tubes.
Fault Characteristic Selection
The factors influencing the occurrence of faults are complex, and the fault sample data have many attributes and large dimensions, which leads to long data-processing time and makes fault classification difficult.Furthermore, there are a significant number of similar attributes in the fault data, and these similar attributes have an approximate influence on the fault classification results, with little difference.As a result, this paper proposes a method for reducing fault attributes, in which one main attribute replaces the approximate attribute for subsequent data classification.
Knowledge-Reduction Method Based on Granular Matrix
Professor Zadeh developed the concept of granular computing (GrC) [28].Granular computing is a new computing paradigm that addresses difficult challenges.It uses organized thinking, structured problem-solving methodologies, and structured information processing models as research subjects.The primary idea is to use hierarchical degrees of granularity to abstract and refine complicated problems, resulting in many simpler problems to solve.Three theoretical models are highlighted: rough set, quotient space, and computing words.RS theory can analyze and express fuzzy knowledge, as well as extract hidden rules from large amounts of data for analysis and solution.Additionally, RS theory and other machine-learning algorithms are very complementary, and their combined advantages can be very beneficial.
The research object in the framework of RS theory is an information system composed of an object set and an attribute set. The information system is defined as T = (U, M ∪ N, V, f), where U is the collection of objects, also known as the universe, M and N are the sets of conditional and decision attributes, respectively, and V and f represent the value range and the information function, respectively. A piece of knowledge includes all its subsets, and this "attribute-value" relationship results in a collection of decision tables. When redundant or unimportant knowledge is removed from the decision information system, the information system is said to be simplified.
Granulation and granular computing are the most fundamental problems in granular computing.Granulation is the division of a problem space into several subspaces or the classification of individuals in the problem space based on useful information and knowledge.Granules are the name given to these classes.The key to granular computing is to understand how to build a reasonable granular world and solve practical problems.However, representing the concept of rough sets with binary particles is a convenient and feasible algorithm model.
Let K = (U, R) represent a data set and let P ⊆ R induce an equivalence relation on U, denoted IND(P), with quotient set U/P = {z 1 , z 2 , ..., z n }. The granularity of P, denoted GD(P), is calculated as GD(P) = (1/|U|²) ∑_{i=1}^{n} |z_i|², and it characterizes the resolution of P. For any u, v ∈ U, if (u, v) ∈ IND(P), then u and v are indistinguishable under P; otherwise they belong to different P-equivalence classes. It follows that GD(P) represents the probability that two randomly selected objects of U are P-indiscernible, and the higher this value, the lower the resolution ability. The resolution Dis(P) of knowledge P is defined as Dis(P) = 1 − GD(P). Because of the diversity of each piece of knowledge and the complexity of its contents, this paper uses binary granules to represent each piece of knowledge. Let U = {u 1 , u 2 , ..., u n } be the universe and R the equivalence relation. Each equivalence class in U/R can be expressed by an n-bit binary string: if the ith bit is 0, u i does not belong to this granule; if it is 1, u i belongs to this granule.
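The binary-granule encoding and the granularity measure can be illustrated as follows; the closed forms GD(P) = ∑|z_i|²/|U|² and Dis(P) = 1 − GD(P) used in the sketch are reconstructions consistent with the verbal definitions above, and the four-object universe is a toy example.

```python
from collections import defaultdict

def granules(objects, attribute_values):
    """Partition U into equivalence classes of one attribute and encode each
    class as an n-bit binary string (bit i = 1 iff u_i belongs to the granule)."""
    classes = defaultdict(list)
    for i, v in enumerate(attribute_values):
        classes[v].append(i)
    n = len(objects)
    return [sum(1 << i for i in idx) for idx in classes.values()], n

def granularity(granule_bits, n):
    """GD(P) = sum_i |z_i|^2 / |U|^2 : probability that two randomly drawn
    objects are P-indiscernible (reconstructed from the verbal definition)."""
    return sum(bin(g).count("1") ** 2 for g in granule_bits) / n ** 2

U = ["u1", "u2", "u3", "u4"]
P_values = ["a", "a", "b", "b"]          # hypothetical attribute values
bits, n = granules(U, P_values)
gd = granularity(bits, n)
print(gd, 1 - gd)                        # GD(P) and resolution Dis(P) = 1 - GD(P)
```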
Characteristic Selection
Based on the preceding essential ideas of granular computing, this part describes the relevant operations on granules and develops the operational basis of the binary granular-matrix knowledge-reduction methodology based on genetic algorithms. The binary granule matrix is defined as {X n×t , Y m×t , C n×m }, where X n×t and Y m×t collect the t-bit binary strings a and b of the condition and decision equivalence classes, respectively, and C n×m is the relation matrix between the attribute sets M and N, with entries c ji = |a j ∧ b i | / |a j |, where ∧ denotes the bitwise AND.
The relation matrix C n×m , whose entries correspond to the proportion of the elements of X j that also belong to Y i , expresses a subordinate relation between all equivalence classes X j and Y i . In this manner, the chromosomes of the genetic algorithm can be mapped sequentially to the attributes: each binary string represents a chromosome, and each binary granule, whose value lies in the range [0, 1], represents a gene.
The attribute dependency β describes the compatibility of the decision information system and is given by β = card(POS M (N)) / card(U), where POS M (N) is the M-positive region of N, card(POS M (N)) is its number of elements, and card(U) is the total number of elements in the universe U. When β = 1, the system is a compatible decision information system; otherwise, it is called an incompatible decision information system.
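A compact sketch of the dependency computation on bit-encoded granules is given below; the positive-region criterion used (a condition class counted only when it lies entirely within one decision class) is the standard rough-set definition and is assumed here, and the six-object decision table is hypothetical. When several condition attributes are selected, the condition values would be tuples rather than single symbols.

```python
def partition_bits(values):
    """Bit-encode the equivalence classes induced by a list of attribute values."""
    classes = {}
    for i, v in enumerate(values):
        classes.setdefault(v, 0)
        classes[v] |= 1 << i
    return list(classes.values())

def dependency(cond_values, dec_values):
    """beta = card(POS_M(N)) / card(U): fraction of objects whose condition
    class is contained in a single decision class (standard rough-set form)."""
    X = partition_bits(cond_values)            # condition granules X_j
    Y = partition_bits(dec_values)             # decision granules Y_i
    pos = 0
    for xj in X:
        if any(xj & yi == xj for yi in Y):     # X_j fully inside some Y_i
            pos |= xj
    return bin(pos).count("1") / len(cond_values)

# hypothetical discretized decision table: one condition attribute vs. fault class
cond = [1, 1, 2, 2, 3, 3]
dec  = [1, 1, 2, 2, 3, 4]
print(dependency(cond, dec))                   # beta < 1: system not fully consistent
```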
The set obtained after reduction is denoted as Q. The set of irreducible relations contained in every reduced attribute set, i.e., the intersection of all reducts red(Q), is called the core and denoted core(Q); it collects the indispensable attributes of the knowledge. Starting from the obtained core attributes, the compatibility of the remaining knowledge with the kernel attributes is verified and the dependency is computed to determine the minimal attribute set. The specific algorithm is shown in Algorithm 1.
Algorithm 1 Attribute Reduction Algorithm
otherwise, proceed to step 5. Step 5: for each attribute x ∈ A − RED(A), compute its significance, recorded as Sig RED(Q) (x), and take the attribute x 1 with the largest significance. Finally, a fitness function is used to guide the search. For a chromosome corresponding to a conditional attribute subset M, its fitness is defined as follows [29].
Here, l M denotes the number of genes with value 1 in the chromosome of conditional attribute subset M, and β is the extent to which the decision attribute N depends on the attribute subset corresponding to M. The fitness function therefore takes both the number of subset elements and the attribute dependency into account, so that the conditional attributes can evolve towards the minimum attribute reduction set. When the kernel attributes alone cannot accurately describe all information, two chromosomes p and q are randomly selected from the population for crossover, generating two new individuals: at gene j, each new individual is formed as a weighted combination of the genes of p and q with a random weight a in [0, 1]. For mutation, a chromosome X is randomly selected and its gene j is perturbed: depending on whether the random number b ≤ 0.5 or b > 0.5, the gene is shifted towards 0 or towards 1 by an amount governed by the random number c and scaled by the factor (1 − t/t m ) (Equation (11)), where b and c are random numbers in [0, 1], t is the current iteration number, and t m is the maximum number of iterations.
In each iteration, the best conditional attribute is preserved to prevent it from undergoing crossover and mutation again, ensuring the maximum inheritance of the conditional attribute.During each iteration, the worst attribute in the current population is replaced with the best attribute.
Continue the repetition until GA training reaches the maximum number of iterations, then determine the final kernel attribute set based on the fitness value.
Traditional genetic algorithms frequently use constant probabilities for crossover and mutation.As a result, the direct genetic operation of the traditional genetic algorithm on the population significantly slows convergence and fails to identify individual traits.To address this problem, this study adapts the probability values of crossover and mutation based on the population's fitness value.This approach can improve the genetic evolutionary algorithm's convergence speed and accuracy, as well as its global search capabilities while avoiding slipping into the local optimal solution.Figure 5 below depicts the flow chart for the binary granulation matrix knowledge-reduction approach based on genetic algorithm optimization.
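The overall search can be pictured with the following sketch, which reuses the dependency() helper from the earlier sketch. The fitness weighting, the one-point crossover and bit-flip mutation standing in for the paper's arithmetic operators, and the particular adaptive-probability schedule are all assumptions made for illustration; only the general structure (binary chromosomes over attributes, elitism, probabilities adapted to the population's fitness) follows the description above.

```python
import random

def fitness(mask, cond_table, dec_values, w=0.5):
    """Illustrative fitness: dependency of the selected attribute subset plus a
    bonus for small subsets (the exact weighting of the paper is not reproduced)."""
    selected = [j for j, bit in enumerate(mask) if bit]
    if not selected:
        return 0.0
    cond = [tuple(row[j] for j in selected) for row in cond_table]
    beta = dependency(cond, dec_values)            # helper from the sketch above
    return w * beta + (1 - w) * (1 - len(selected) / len(mask))

def adaptive_prob(base, f_ind, f_avg, f_max):
    """Lower the crossover/mutation probability for above-average individuals,
    keep the base probability otherwise (one common adaptive scheme, assumed)."""
    if f_ind <= f_avg or f_max == f_avg:
        return base
    return base * (f_max - f_ind) / (f_max - f_avg)

def ga_reduce(cond_table, dec_values, pop=20, gens=50, pc=0.8, pm=0.1):
    n_attr = len(cond_table[0])
    population = [[random.randint(0, 1) for _ in range(n_attr)] for _ in range(pop)]
    for _ in range(gens):
        scores = [fitness(m, cond_table, dec_values) for m in population]
        best = population[scores.index(max(scores))][:]        # elitism
        f_avg, f_max = sum(scores) / pop, max(scores)
        nxt = [best]
        while len(nxt) < pop:
            p, q = random.sample(population, 2)
            child = p[:]
            if random.random() < adaptive_prob(pc, fitness(child, cond_table, dec_values), f_avg, f_max):
                cut = random.randrange(1, n_attr)
                child = p[:cut] + q[cut:]                       # one-point crossover
            if random.random() < adaptive_prob(pm, fitness(child, cond_table, dec_values), f_avg, f_max):
                j = random.randrange(n_attr)
                child[j] ^= 1                                   # bit-flip mutation
            nxt.append(child)
        population = nxt
    scores = [fitness(m, cond_table, dec_values) for m in population]
    return population[scores.index(max(scores))]

# hypothetical discretized table: rows = samples, columns = 5 spectral attributes
table = [[1, 2, 1, 3, 2], [1, 2, 1, 3, 2], [2, 1, 3, 1, 1], [2, 1, 3, 1, 1], [3, 3, 2, 2, 3]]
labels = [1, 1, 2, 2, 3]
print(ga_reduce(table, labels, pop=10, gens=20))
```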
Fault Detection Model Based on GA-GrC-ELM
The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network. Unlike back-propagation networks, which use the error between the output and the real result to estimate the error of the layer preceding the output layer and repeat this estimate layer by layer, ELM sets the hidden-layer parameters randomly (or uses a kernel function as the hidden layer) and only needs to determine the output weights by computing the generalized inverse of the hidden-layer output matrix H. The training process is as follows: the network output for sample j is O j = ∑ i=1 l T i g(V i x j + W i ), where V i represents the weights from the input layer to the ith hidden node, W i represents the bias, T i represents the weight from the ith hidden node to the output layer, g represents the activation function, n represents the size of the training set, and O j represents the output value, i.e., the classification result. To approximate the real result of the training data as closely as possible, the classification result should be consistent with the real result P, i.e., ∑ j=1 n ||O j − P j || = 0, which can be written in matrix form as HT = P, where H is the n × l output matrix of the hidden layer with entries g(V i x j + W i ). Here n represents the size of the training set, l represents the number of hidden-layer nodes, g(x) represents the activation function, and g(x) is required to be infinitely differentiable.
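A minimal NumPy sketch of ELM training via the Moore-Penrose pseudo-inverse, consistent with HT = P, might look as follows; the sigmoid activation, the hidden-layer size, and the random data standing in for the five spectral features are assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine sketch: random hidden layer, output
    weights solved in one step as T = pinv(H) @ P (least-squares solution of HT = P)."""
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(size=(n_inputs, n_hidden))   # input-to-hidden weights (random, fixed)
        self.W = rng.normal(size=n_hidden)                # hidden-layer biases (random, fixed)
        self.T = None                                     # hidden-to-output weights (learned)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.V + self.W)))   # sigmoid activation g

    def fit(self, X, P):
        H = self._hidden(X)
        self.T = np.linalg.pinv(H) @ P                    # single linear solve, no iteration
        return self

    def predict(self, X):
        return self._hidden(X) @ self.T

# hypothetical use: 5 spectral features (A_DC, A1, phi1, A2, phi2) -> one-hot fault class
X = np.random.default_rng(1).normal(size=(264, 5))
P = np.eye(4)[np.random.default_rng(2).integers(0, 4, size=264)]
model = ELM(n_inputs=5, n_hidden=40).fit(X, P)
pred_class = model.predict(X).argmax(axis=1)
```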
The goal of training the ELM model is to find the best T with the lowest training errors.The mathematical expression of the ELM model is as follows: Among them, ε j is the error between the category to which the jth sample belongs and the category determined by the model.By optimizing the parameters of the model, ε j 2 obtains the minimum value.Figure 6 depicts the proposed GA-GrC-ELM-based T-3L inverter fault-diagnosis structure diagram.The model primarily consists of a GA-GrC attribute reduction component and a neural network diagnosis component.First, using granular computing as the frontend information processor of the neural network, GrC can use its strong attribute reduction capabilities to eliminate duplication and create the smallest possible attribute set.To obtain the decision table for attribute reduction, the fault data of the T-3L inverter is then discretized and quantized using the clustering discretization method, and the repeated data are removed.The final step is to obtain the minimum attribute set for the final input fault sample using the grain matrix knowledge-reduction method, which is based on a genetic algorithm, adaptively changes the probability values of crossover and mutation according to the fitness value of the population to enhance its global search capability.Second, a GA-GrC-ELM neural network model needs to be constructed.The essential thing is to establish the hidden layer parameters for two neural networks to train the GA-GrC-ELM neural network using the decision attributes as the output and conditional attributes from the reduced training fault data as the input.Based on the minimum attribute set, the test sample verifies the GA-GrC-ELM network.
Experiment and Simulation
To validate the accuracy and real-time performance of the T-type three-level inverter fault-diagnosis solution based on output power, an offline simulation model was constructed using MATLAB/Simulink. The simulation parameters of the system are shown in Table 3. Figure 7 illustrates the output power of the T-type three-level inverter during normal operation. Due to the periodic averaging method, the output power undergoes a transition before entering a new steady-state process. Figure 8 shows the simulation results of the OC fault of S a1 in phase A. When the output power enters a new steady-state process, it can be seen that, compared with the normal state, after the OC fault of S a1 occurs the output power drops to 0 and the other phases are accompanied by harmonics. This is because, normally, the A phase works in the P state: the forward current flows through the DC bus and is transmitted to the load through the bridge-arm switch, and the bridge arm outputs a positive level. However, when an open-circuit fault occurs in S a1 , the A-phase bridge-arm output cannot be connected to the DC bus, and the forward current freewheels through the midpoint switch S a3 and the diode D a2 . At this time, the A phase cannot work in the P state, and the P state is replaced by the O state.
Figure 9 shows the simulation results of the OC fault of phase A S a1 and S a2 .Since the forward current can only continue to flow through the lower arm diode D a4 after the fault, the operating state changes from P to N. It can be seen that when multiple IGBTs have OC faults at the same time, the power change is obviously different from the fault characteristics of a single IGBT OC fault.The simulation results are consistent with the theoretical analysis.
Dataset Selection
Since analyzing the power waveform and obtaining the fault characteristic signal in the time domain requires much calculation, the MATLAB library function FFT is used for the spectrum analysis of the three-phase output power waveform, and the amplitude and phase angle of each harmonic are obtained, with 50 Hz as the base frequency. By integrating and combining the simulation results, it is found that the DC component, fundamental harmonic, and second harmonic of the three-phase power signal contain most of the information about the various faults. As a result, the DC component, fundamental amplitude A 1 , fundamental phase angle φ 1 , second harmonic amplitude A 2 , and second harmonic phase angle φ 2 are chosen as the input characteristic signals of the neural network.
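In Python, the same five features could be extracted from a phase-power waveform roughly as follows; the sampling rate, record length, and synthetic test signal are placeholders rather than the simulation settings of Table 3.

```python
import numpy as np

def power_features(p, fs, f0=50.0):
    """Extract (A_DC, A1, phi1, A2, phi2) from one phase-power waveform.

    p  : samples of the instantaneous output power over an integer number of
         fundamental periods (assumed), fs : sampling frequency in Hz.
    """
    n = len(p)
    spectrum = np.fft.rfft(p) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    def bin_at(f):
        return np.argmin(np.abs(freqs - f))
    a_dc = spectrum[0].real                       # DC component (mean power)
    c1, c2 = spectrum[bin_at(f0)], spectrum[bin_at(2 * f0)]
    return a_dc, 2 * np.abs(c1), np.angle(c1), 2 * np.abs(c2), np.angle(c2)

# hypothetical test signal: DC offset + 50 Hz fundamental + 100 Hz second harmonic
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
p = 500 + 120 * np.cos(2 * np.pi * 50 * t + 0.3) + 40 * np.cos(2 * np.pi * 100 * t - 1.1)
print(power_features(p, fs))
```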
Data Discretization Processing
The experimental part of this paper randomly selects 330 sets of fault-type data and divides them into training, validation, and test sets according to the ratio 8:1:1. The final training set has 264 sets of fault-type data, and the validation set and test set contain 33 sets of fault-type data each. Since the granular-computing attribute reduction is based on discrete data, 30 sets of training sample data are randomly selected for discretization using the cluster discretization method adopted in this paper. The decision table is obtained by further quantifying the discretized data and removing duplicate samples, as listed in Table 4.
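A possible sketch of the cluster discretization and the 8:1:1 split is shown below; the choice of k-means as the clustering method, the number of discrete levels, and the scikit-learn helpers are assumptions, since the text only specifies a cluster discretization method and the split ratio.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def discretize_column(values, n_levels=3, seed=0):
    """Replace a continuous feature by the index of its k-means cluster
    (clusters re-labelled in ascending order of their centres)."""
    km = KMeans(n_clusters=n_levels, n_init=10, random_state=seed)
    labels = km.fit_predict(np.asarray(values).reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())
    remap = {old: new + 1 for new, old in enumerate(order)}
    return np.array([remap[l] for l in labels])

# hypothetical feature matrix (330 samples x 5 spectral features) and fault labels
rng = np.random.default_rng(0)
X = rng.normal(size=(330, 5))
y = rng.integers(1, 5, size=330)
X_disc = np.column_stack([discretize_column(X[:, j]) for j in range(X.shape[1])])

# 8:1:1 split into training, validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X_disc, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)
print(len(X_train), len(X_val), len(X_test))   # 264, 33, 33
```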
In the decision table, the spectral features A DC , A 1 , φ 1 , A 2 , and φ 2 form the conditional attribute set M, and N = {1, 2, 3, 4} is the decision attribute, whose value represents the number of faulty devices.
First, by removing a single attribute, such as A DC , and recomputing the attribute dependency, the resulting dependency falls below that of the full attribute set, which indicates that A DC is indispensable with respect to N and is therefore irreducible.
Similarly, other attributes were removed one at a time to test whether the attribute and its related attribute set could represent complete information based on attribute dependency.
Building on the binary granular representation of rough sets, three attribute-reduction approaches are compared: a mutual-information-based neighbourhood rough-set attribute reduction algorithm, an entropy-based rough-set attribute reduction algorithm, and the proposed genetic algorithm (GA)-based rough-set attribute reduction algorithm. The comparison of attribute-reduction results is presented in Table 5. Additionally, Figure 10 illustrates the testing capabilities of the three algorithms in assessing the attribute-reduction performance of rough sets under various discrimination conditions. It is evident that, on this dataset, the GA algorithm yields a markedly smaller optimal attribute set than the other two algorithms, demonstrating the stronger attribute-reduction ability of the GA-GrC algorithm. To verify the performance of the proposed genetic-algorithm-based binary granular matrix reduction, the data sets before and after reduction were used as input data of the ELM network, and the network was trained with the maximum number of training epochs set to 1000, the learning rate set to 0.01, and the minimum training target error set to 0.01%. Figure 12a-c show the fault-diagnosis results of the original dataset with the different neural networks. Since the input weights of ELM are random and fixed, no iterative solution is needed and only the weights from the hidden layer to the output layer have to be solved; therefore, compared with the BP and SVM algorithms, the ELM neural network requires fewer training steps within the specified range and reaches a higher accuracy. After the rough-set reduction based on granular computing, as shown in Figure 12d-f, the accuracy of every neural network improves significantly. This shows that the granular-computing front end can effectively achieve data reduction, removing redundant information and markedly improving the training accuracy of the neural networks, and demonstrates the effectiveness of the proposed GrC algorithm. Figure 12g-i show the classification performance of binary granulation matrices under different optimization conditions. The results prove that the binary granulation matrix reduction based on a genetic algorithm expands the search space and avoids falling into local optima, and that its reduction performance is the best among the compared settings. Table 6 compares the performance of the different algorithms in terms of accuracy, mean square error, and running time. In terms of accuracy, the GA-GrC-ELM algorithm proposed in this paper achieves 98%, which is significantly better than the BP, SVM, and ELM algorithms. Although the accuracy of the other algorithms also improves after the addition of GrC, it remains inferior to that of the proposed algorithm. In addition, the proposed algorithm also performs well in terms of mean square error and running time. Considered together, the GA-GrC-ELM-based algorithm can diagnose T-type three-level inverter faults faster and more effectively.
Result Analysis
In summary, the T-type three-level inverter fault-diagnosis method based on GA-GrC-ELM proposed in this paper fully exploits the ability of granular computing theory to remove redundant information and uses the genetic algorithm to adapt the crossover and mutation probabilities automatically according to the fitness values of the population. This adaptation effectively enhances the global search capability and resolves the loss of diagnostic accuracy caused by the complex, high-dimensional training samples of neural networks.
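The adaptive adjustment of the crossover and mutation probabilities can be illustrated with the classic fitness-based scheme below (Python; the constants and the exact formulas are illustrative assumptions and are not the paper's expressions). Above-average individuals are perturbed less, preserving good solutions, while below-average individuals keep large probabilities, preserving the global search.

```python
def adaptive_rates(f, f_max, f_avg, k1=0.9, k2=0.1, k3=0.6, k4=0.05):
    """Return (crossover probability, mutation probability) for an individual with fitness f."""
    if f >= f_avg and f_max > f_avg:
        scale = (f_max - f) / (f_max - f_avg)   # shrink towards 0 for the fittest individuals
        return k1 * scale, k2 * scale
    return k3, k4                                # below-average individuals explore aggressively
```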
Conclusions
In this paper, a GA-GrC-ELM-based fault-diagnosis method for T-3L inverters is presented. By measuring the power corresponding to the positive and negative half-waves of each phase current, inverter OC faults are correctly identified and classified. The classification outcomes are then discretized and normalized to create a decision table that serves as the input to the neural network model. The fault-diagnosis decision-making system is reduced using the granular matrix knowledge-reduction algorithm, which takes into account the different influence of each granularity, and the reduction performance is further optimized through adaptive functions that delete redundant attributes. The experimental results demonstrate that the GA-GrC-ELM algorithm resolves issues of the conventional single neural network model, such as its slow running speed, extensive training dataset, and challenging convergence. It offers more advantages in terms of fault-diagnosis precision and can better emulate diagnostic judgment. Since the field environment is complex and changeable, a given fault can have many unexpected causes, and there can be coupling relationships between faults. Future work can therefore consider a quantitative and directional analysis of the mechanisms of fault occurrence and evolution in inverters, from the perspectives of both mathematical derivation and actual operating conditions.
Figure 2 .
Figure 2. Switching state circuits based on the current direction for phase A: (a) Switching states P for i > 0. (b) Switching states O for i > 0. (c) Switching states N for i > 0. (d) Switching states P for i < 0. (e) Switching states O for i < 0. (f) Switching states N for i < 0.
Figure 4 .
Figure 4. Fault in two IGBTs of the same phase: (a) I a > 0, S a1 and S a2 fail. (b) I a < 0, S a1 and S a2 fail. (c) I a > 0, S a1 and S a4 fail. (d) I a < 0, S a1 and S a4 fail.

2.2.3. An OC Fault Occurs on Two IGBTs in Different Phases

If S a1 and S b1 were disconnected, for instance, the current could only flow from the middle point O to the output end of phase A via the anti-parallel diodes of S a2 and S a3. The open circuit of S a1 and S b1 also affects the phase B and phase C currents, preventing the current from reaching S c1 from the P end. As a result, the output power is 0 W, and the current quickly decays to 0 A. When the current direction is negative, the current flows into the output end and S c1 switches on normally, whereas the output state of the other two phases does not change because of the open circuit, so only the negative half-wave current can occur and the output amplitude decreases. Therefore, the output power only fluctuates around 0 W. From the above fault characterization, it is clear that a fault in a power device forces a change in the current path of the faulty phase of the inverter, resulting in a change in its operating state, and this transfer between operating states can be represented and tracked using a finite state machine. We define the current direction and the power switch fault as state-transfer rules, represented by the logical variables δ (+/−) and F SXk , respectively. When the current direction is positive, δ (+) = 1, and vice versa, δ (−) = 1. F Sa1 = 1 means S a1 has an open-circuit fault, and F Sa1 = 0 means S a1 is working normally. Taking phase A of the inverter as an example, the transfer of the circuit operating status in each operating mode is summarized in Table 2.
Figure 5 .
Figure 5. Flowchart of binary granular matrix knowledge-reduction algorithm based on genetic algorithm optimization.
Figure 6 .
Figure 6. The proposed model structure.
Figure 8 .
Figure 8. MATLAB simulation results under the condition of an open-circuit fault in S a1 .
Figure 9 .
Figure 9. MATLAB simulation results under the condition of an open-circuit fault in S a1 and S a2 .
Figure 11.
Figure 11 shows the training error curves of the binary granular matrix attribute reduction performance model under different optimization conditions on the training, validation, and test sets, where the system automatically selects the stopping iteration under the adaptive condition. Train is the mean square error on the training set, Validation the mean square error on the validation set, and Test the mean square error on the test set. As the number of epochs increases, the mean square error gradually stabilizes and oscillates within a small range. Although the ELM neural network based on Neighbor-GrC optimization converges quickly, its accuracy is far from the actual results. The ELM neural network based on Entropy-GrC optimization reaches its minimum convergence accuracy at the 10th iteration, whereas the ELM based on the GA-GrC optimization proposed in this article converges after 5 iterations. Clearly, the GA-GrC-ELM neural network converges faster and with a smaller error because the crossover and mutation operators of the GA introduced during training expand the search space of the algorithm. The complementary advantages of the genetic algorithm and binary granulation matrix knowledge reduction improve the search ability and convergence speed, avoid falling into local optima, and improve the accuracy of the optimal solution.
Table 1 .
The connection between output level and switch-on/off.
Table 2 .
Summary of T-inverter A-phase operation status under normal and fault conditions.
Table 3 .
Parameters of simulation model.
Figure 7. MATLAB simulation results under normal operation.
Table 4 .
The discretized decision table.
Table 4 ,
U is the universe, M
Table 5 .
Performance evaluation of different rough set reduction techniques. A A1 , A φ 1 , B DC , B φ 1 , C DC , C φ 1 , C φ 2 . 1 Mutual information attribute reduction based on the neighborhood rough set. 2 Attribute reduction of the rough set based on entropy. 3 Attribute reduction of the rough set based on the genetic algorithm.
Table 6 .
Training results of different algorithms. | 10,204.6 | 2024-02-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Adaptive Network Model for Assisting People with Disabilities through Crowd Monitoring and Control
Here, we present an effective application of adaptive cooperative networks, namely assisting people with disabilities in navigating a crowd during a pandemic or emergency situation. To achieve this, we model crowd movement and introduce a cooperative learning approach to enable cooperation and self-organization of the crowd members with impaired health or in wheelchairs, ensuring their safe movement within the crowd. Here, it is assumed that each agent can estimate the movement path and the varying locations of the other crowd members. Therefore, the network nodes (agents) should continuously reorganize themselves by varying their speeds and their distances from each other, from the surrounding walls, and from obstacles within a predefined limit. It is also demonstrated how available wireless trackers such as AirTags can be used for this purpose. The effectiveness of the model is examined with respect to real-time changes in environmental parameters, and its efficacy is verified.
Introduction
Most people require a kind of assistive technology at some point in their lives, especially as they age or face disability.While some may require assistive technology temporarily, such as after an accident or illness, others may require it for a longer period or throughout their lifespan.This technology is most needed by older people, children and adults with disabilities, and people with long-term health conditions such as diabetes, stroke, and dementia.Assistive products can range from physical products such as wheelchairs, prosthetic limbs, and hearing aids to digital solutions such as speech recognition or automated safeguarding.Improving access to assistive technology can contribute to the achievement of the sustainable development goals and ensure that no one is left unattended.Such a technology is developed and deployed in many ways.In this paper, advanced adaptive signal processing is used to develop a wireless decentralized multi-agent communication network to assist and protect people with disabilities in a crowd during an emergency situation, such as a pandemic.
COVID-19 deeply affected the world in the past five years.During the pandemic, concern about how a crowd of people moves and how the people interact increased [1].Therefore, work on crowd monitoring and analysis, and how to intervene in the crowd structure, attracted the attention of more researchers.Although some new advances in crowd monitoring have been made to tackle this problem and to maintain a safe social distance, to the best of our knowledge, none of these methods has focused on a unified and inclusive adaptive network analysis approach which also caters for people with disabilities, such as blind or visually impaired people as well as wheelchair users.
Most recently proposed crowd monitoring techniques are fusion-based (centrally controlled) and rely on the use of surveillance cameras powered by image processing and computer vision algorithms [2].These methods are passive, have enormous technical and privacy limitations, and do not allow prediction and easy intervention of the movement and behavior of the crowd.Even so, some of the latest crowd analysis methods are now starting to use spatial-temporal data and apply some advanced feature learning and classification methods such as recurrent neural networks (RNNs) and deep neural networks (DNNs).These approaches enable the prediction of crowd behavior and its direction of movement [3].Yet, the crowd elements (people) do not benefit from interactions with their neighborhoods and local interventions.Therefore, these methods are not useful enough for assisting people with disabilities navigate in a crowd.
Most advances in visual assistive technologies [4] rely on computer vision techniques and the use of a global positioning system (GPS) [5].The centralized vision-based systems or those using GPS share limitations and problems with the previously mentioned crowd monitoring systems.The challenges and limitations are related to low signal strength, sharing personal information, problems with both indoor and outdoor operations, and also low location accuracy [6,7].By indoors, we refer to places with weak or no internet connection.Some of these technologies also require the users to wear some intrusive devices, making users more reliant on such wearables.On the other hand, for wheelchair users, although some advances have been made in autonomous wheelchair design [8,9], these technologies often rely on the use of computer vision and other technologies which are subject to some ethical and privacy regulations.These systems often need to learn the environmental map before they can be used with confidence.This makes the use of such systems difficult for a constantly varying environment.
Facing these challenges, we develop a comfortable-to-carry and easy-to-use decentralized system equipped with low-range communication tools for assisting people with disabilities in moving through a crowd conveniently while maintaining a safe distance from other people or obstacles.The preliminary results of this work have been presented in [10].The proposed system models crowd movement in a dynamic environment and a distributed manner depending on the information the agents receive from each other and the changes in the environment.This allows for tracking the agents (people) including people with disabilities who share a connected network.The environment may include well-defined constraints, such as walls and fences, ticket control barriers, objects, or people moving in unpredictable directions.To perform this analysis, we use the concept of adaptive cooperative networks by means of the diffusion adaptation mechanism [11,12] to model the crowd motion while passing through geometrically varying areas.The diffusion adaptation over networks strategy was chosen for this application due to its successful results in network modeling and swarming, including biological networks, such as bird flight formation [13] or fish schools [14], and its promising results in modeling and monitoring a crowd of people [15].Therefore, in our proposed method, this adaptive technique replaces traditional distributed systems which require exhaustive programming or solving cumbersome differential equations.
In this work, the agents (or nodes representing people) share their position coordinates, and each agent communicates with the other agents within its one-hop neighborhood. As long as an agent can detect the positions of the other nodes (agents) in its neighborhood, it adjusts its speed and its distances to the others and to the barriers while moving towards its destination (e.g., the exit gate in a metro station). This helps in more accurately calculating and maintaining safe distances between the general public and people with disabilities, as well as safe social distances in pandemic situations. To achieve this, the movement speed, the distance between the agents, and possibly their movement directions must change (within allowed limits) with respect to the variations in the pathway geometry (e.g., width) and any obstacle preventing them from reaching their target. This can be achieved simply by being aware of the nodes within a neighborhood and of the geometrical constraints. In this scenario, people can make a compromise between their distances and speeds to keep themselves safe.
The proposed system must also work with considerably small variations in distances between the agents in the range of centimeters, to be able to operate both indoors and outdoors, including underground (e.g., metro corridor).A number of positioning technologies can be used to obtain the agents' location in real-time, some of which are more appropriate than others for the proposed scenario.For instance, a geolocation technology such as GPS is widely used and easily available.However, it usually has an accuracy of 4 to 5 m and cannot operate underground, which makes it inappropriate for this application.The latest Bluetooth technologies have a high location accuracy in the centimeter range and can work quite well indoors and outdoors [16][17][18].However, it may have a low signal strength for larger neighborhoods, which can cause loss or delays.The above limitations make such systems inadequate for decentralized crowd monitoring on their own, especially underground or places with no internet access.Therefore, we propose the use of tracking devices with embedded short-range wireless communication systems, such as ultra-wideband (UWB) [19], which can provide a high-precision positioning within the centimeter range.An example of these tracking devices is the AirTag from Apple Inc., compatible with Apple devices, or AirFinders from Link Labs, compatible with most modern smartphone operating systems (OSs).These devices, however, need to be linked to a compatible mobile device, making them insufficiently practical.To avoid this problem, we could instead use the Precision Finding feature available in the most recently released Apple devices, which makes use of the embedded UWB technology to precisely locate other compatible Apple devices in its neighborhood.
To use this system for crowd monitoring, each agent of the network, representing a member of the crowd including a person with disabilities, needs to carry one of these smart devices. Hence, the location of the device and the subject's movement direction can be accessed in real-time and fed into the diffusion adaptation model as the agent's position coordinates.
The main contributions of this paper are as follows: (1) development of a diffusion adaptation model which incorporates the environmental parameters into its formulation; (2) development of an assistive technology based on cooperative networking that can help people with disabilities navigate a crowd in a challenging environment; and (3) using state-of-the-art commercially available smart devices for decentralized crowd monitoring purposes.
Diffusion Adaptation Modeling
The concept of diffusion adaptation cooperative networks has opened a new direction in adaptive and distributed signal processing and analysis of multi-agent communication networks [11,12].These networks have the capability of modeling groups of nodes or agents which can transfer information to each other and try to achieve common target(s) in a cooperative manner.For more than one target or objective, multi-task scenarios [20] have also been attempted and used for biological network modeling [13,14,21] and in social networks [22] as very common applications.The same adaptive systems can be used to check the reliability of the received information in a multi-agent computer network.This is useful in maintaining the security of such networks [23].A wider application of the technique can be seen in medicine where the medical images are classified through a cooperative dictionary learning approach [24]; or, considering the electrodes of an electroencephalography system as the nodes of a cooperative network, such a system can find applications in brain-computer interfacing [25,26].
Here, to model a crowd moving through a geometrically varying environment over time, we introduce a mobility model of people, represented as nodes or agents of a connected network, using diffusion adaptation.
Diffusion Adaptation
In multi-agent distributed networks, the agents collaborate with each other to solve a global optimization problem. In particular, for crowd modeling, each agent k is interested in estimating an unknown target or objective τ while sharing information with the agents in its neighborhood N k . As one of the distributed learning strategies, diffusion adaptation is a symmetric and stable consensus strategy defined in [11] and further developed by many researchers, some of whom are referred to in Section 2.1. The algorithm has two steps, adaptation and combination, which can be applied in either order: Adapt-Then-Combine (ATC) or Combine-Then-Adapt (CTA). The convergence of multi-agent diffusion networks has been proved in the literature [12].
Consider the crowd as a collection of people distributed over a space ℜ 2 with a defined geometry.The collection of people, with the ability to communicate to each other and share information, forms an adaptive network.They adapt their movement to those of agents in the neighborhood as well as geometrical/spatial constraints while moving towards their target which, in this example, is at the end of the predefined path (e.g., an exit door).This also helps the crowd members to self-organize themselves based on the information exchanged within their one-hop neighbors.Figure 1 illustrates a group of agents, their neighborhood, and an exemplar of the surrounding environment.The general objective of such a network is for each node k to reach the location of the target in a fully distributed manner.One option to achieve this objective is to use the ATC diffusion algorithm [14,27,28].
The model presented here is therefore a variation of the models studied in [10,14,15]. This variation allows the network to move towards a target smoothly through a predefined path while avoiding possible obstacles. Following the diffusion adaptation strategy given in [12], consider a connected network of N nodes where each node k wants to estimate an unknown parameter τ from the collected local measurements {d k,i , u k,i , w k,i } at each time instant i. This is achieved through the estimation of the global parameter τ that minimizes a global least-squares cost function (1), where E is the expectation operator, u k,i represents a unit direction regression vector pointing in the direction of the target, w k,i is the location vector of node k relative to a global coordinate system at each time instant i (discussed in Section 2.1.2), and d k,i represents the scalar distance between the location of the target and w k,i , given by the inner product of u k,i with the displacement from w k,i to the target (2). To solve the optimization problem in (1), we use the ATC diffusion strategy, which yields two distributed adaptive equations: an adaptation step (3), in which each node forms an intermediate estimate ψ k,i from its local measurement, and a combination step (4), in which the intermediate estimates of the neighborhood are averaged. Here u T k,i represents the transpose of u k,i , N k is the number of agents in the neighborhood of node k, and µ k is a positive step size used by node k. The combination weight a l,k , representing the information received from each node l, belongs to a set of non-negative real weights assigned to node k that sum to one over the neighborhood (5). In the implementation of Equations (3) and (4), the nodes in the neighborhood of node k share their intermediate estimates {ψ l,i , d k,i , u k,i } after each iteration. Since our model is geometrically bearing, Equation (3) can be simplified under reasonable approximations, following [28], to the form in (6); hence, the update can finally be described by the single combined equation (7).
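For readers who prefer code, a minimal sketch of the ATC idea is given below (Python/NumPy; the variable names and the exact form of the error term are assumptions consistent with the description above, not a reproduction of Equations (3)-(7)). Each node first adapts its own estimate of the target using its local range measurement and then combines the intermediate estimates shared by its neighborhood, so information diffuses through the network without a central coordinator.

```python
import numpy as np

def adapt(tau_est, w_k, u_k, d_k, mu=0.05):
    """Adaptation step: correct the target estimate along u_k by the residual range error."""
    err = d_k - u_k @ (tau_est - w_k)   # measured distance minus distance implied by the estimate
    return tau_est + mu * err * u_k     # intermediate estimate psi_k

def combine(psi_list, a_weights):
    """Combination step: convex combination of the neighborhood's intermediate estimates."""
    return np.average(np.vstack(psi_list), axis=0, weights=a_weights)
```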
Motion Model
Similar to mobile networks, in our crowd movement scenario, the relationship between the movement speed and two consecutive agent locations is defined as w k,i+1 = w k,i + ∆i v k,i+1 [14,28], where ∆i is the time step (the time difference between two consecutive states) and v k,i+1 is the velocity vector of node k at the next time instant i + 1. Therefore, from now on, we focus on estimating the velocity vector and the current position of each agent k at each time instant i, denoted by w k,i . In our model, there are two factors that influence the velocity vector of the nodes. The first factor is the spatial constraint involved in identifying the location of node k at each time instant i. In our model, we want the crowd to navigate through a predefined path from a start point to an end point. In Figure 1, we see how the distance between the two surrounding walls of the pathway can change. The moving direction for the crowd (from left to right) is denoted by an arrow. In such a scenario, while safe social distancing is followed, in the wider areas the people can walk normally and keep moderate to large distances between them. Nevertheless, in the narrower regions, the subjects should move faster while allowing a smaller (yet permitted) minimum social distance to avoid a traffic jam in the narrow areas.
The objective of our model is to estimate the position of agent w k,i , while the agent moves between the two walls and keeps its permissible distance limit from other agents.
The second factor that influences the velocity vector of the nodes is the desire of the agents to move in synchrony and avoid collisions by maintaining a safe distance r between the nodes. As described in [28], this can be achieved by updating the velocity vector as in (9), where γ is a non-negative scalar and δ k,i , given by (10), adjusts the velocity so that node k keeps the safe distance r from its neighbors. v g k,i refers to a local estimate of the velocity of the center of gravity of the network and is obtained by the ATC diffusion strategy described in (1)-(7), as in (11) and (12), where µ v k is a positive step size and a v l,k are the combination weights, which satisfy conditions similar to those in (5).
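One plausible realization of the distance-keeping term is sketched below (Python/NumPy; this is an illustration in the spirit of the mechanism described above, not the paper's Equations (9)-(12)): neighbors closer than the safe distance r push node k away, and farther neighbors pull it back, so pairwise spacing settles near r.

```python
import numpy as np

def distance_keeping_term(w_k, neighbor_locations, r):
    """Average signed correction nudging node k towards a spacing of r from each neighbor."""
    delta = np.zeros_like(w_k, dtype=float)
    for w_l in neighbor_locations:
        diff = w_l - w_k
        dist = np.linalg.norm(diff) + 1e-12
        delta += (dist - r) * diff / dist   # attractive if too far, repulsive if too close
    return delta / max(len(neighbor_locations), 1)
```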
Motion Model with Variable Speed and Distance between Nodes
To model a more realistic crowd motion, Equations (8)-(12) are modified so that the speed of each node k and the distance between the nodes can be scaled depending on the width of the crowd pathway where node k lands at time instant i, and on the distances between the individual nodes and the target (effectively in the narrow regions).
In order to predict (or estimate) the new speed and location for agent k, we need to re-calculate the above parameters based on the closeness of the two surrounding walls. In our simulation, we assume that the agents move inside a region restricted by two walls, where the dimensions are approximated by the chords of circles tangent to both walls at time instant i. The chord links the two tangent points. The position of node k is considered to be on the corresponding chord (i.e., the chord on which k falls); Figure 2 clearly shows the concept. The agents also maintain a minimum predefined safe distance from the walls. To enable a more realistic scenario, we assume that people walk as slowly as one step per second and as fast as three steps per second, with strides of approximately 0.6 to 1.2 m, respectively. This assumption is essential for setting the initial and baseline crowd speed. It means that the speed of node k at time instant i (v k,i ) can vary between v min = 0.6 m/s and v max = 3.6 m/s. This gives an average speed of v avg = 2.1 m/s, which can be assumed fixed for all the agents representing the general public. Agents representing individuals with disabilities are assumed to move at half of the normal speed. Given their physical condition, it is reasonable to assume that they always move more slowly than individuals without disabilities. The same assumption could be made for toddlers and older people if we were to include them in the simulated crowd.
On the other hand, the minimum social distance r can also vary inversely with the speed (or according to the closeness of the walls) between a lowest (e.g., r min = 1 m) and a highest (e.g., r max = 2 m) value. For people with disabilities, the minimum social distance is higher than for the general public at the same speed. Now, the objective of the new model is to allow the nodes to have a higher speed and a reasonably lower distance between the nodes in the narrower regions (closer walls), and vice versa. Based on this assumption, the effective social distance in the neighborhood of node k at time instant i can be defined as a decreasing function of the node's speed and can be numerically approximated using the previously defined v max , v min , r max , and r min , where t k,i represents the target (or end point) location vector at each time instant i. This shows a linear (but negative) dependency between the social distancing of agent k and its speed at time instant i. Therefore, as long as the agent knows its speed, the social distance can be estimated instantly. For individuals with disabilities, the social distance is expected to be correspondingly larger. To estimate the speed, we refer to Figure 2. At each time instant i, agent k falls on the chord of a circle linking the two tangent points between the walls and the circle. This is a unique chord for each agent. The agent's speed is inversely proportional to the corresponding chord length at that time.
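This speed-distance trade-off can be written down directly (Python; the linear form follows the stated negative linear dependency, and the enlargement factor for users with disabilities is an assumption, since the paper's exact expression is not reproduced here):

```python
def social_distance(v, v_min=0.6, v_max=3.6, r_min=1.0, r_max=2.0, disability=False):
    """Effective social distance shrinks linearly from r_max to r_min as the speed grows."""
    v = min(max(v, v_min), v_max)
    r = r_max - (r_max - r_min) * (v - v_min) / (v_max - v_min)
    return 1.5 * r if disability else r   # larger margin for users with disabilities (factor assumed)
```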
To calculate such a chord, the two functions that represent the walls and the coordinates of node k at time instant i relative to the same global coordinate system must be known; as shown in Figure 2, the centers of all the circles fall on the center dashed line. The chord is the segment joining (x 1 k , y 1 k ) and (x 2 k , y 2 k ), the tangent points of the circle with the two walls, which are also two points of the desired circle centered at O k . The circle itself is defined by requiring both tangent points to lie at distance r O k from its center, where x O k and y O k represent the coordinates (x, y) of the circle center O k , and r O k is the radius of the circle tangent to the two walls, which is also the distance between the walls and the center line. In (16) and (17), we drop the time index i for simplicity.
To obtain all the necessary variables in the above equations and make the chord length measurable, assume f 1 (x) and f 2 (x) are the known equations of the two walls. The conditions that the two tangent points lie on their respective walls then provide another two equations of the system. In this case, the chord length of node k at time instant i, which is the main parameter for geometrical adaptation, is calculated as the Euclidean distance between the two tangent points. Finally, we utilize the maximum and minimum widths of the pathway, which represent the maximum and minimum chord lengths, respectively, to adjust the agent speed and, accordingly, the social distance. Given these two values L min and L max , and associating them, respectively, with v max and v min , the speed factor v c k,i can be approximated as a function of the measured chord length that decreases from v max at L min to v min at L max (20). Replacing the parameters in (20) with the realistic values v min = 0.6 m/s and v max = 3.6 m/s gives the approximation used in the experiments. In practice, for large crowds, we may assume that no pathway (chord) is narrower than 1 m and that social distancing is no longer a concern for pathways wider than 10 m (chord length). Hence, by measuring the chord L k,i of agent k at time instant i, we are able to find all the other parameters for a cooperative movement of the agents towards their common destination.
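A simplified numerical stand-in for this construction is sketched below (Python/NumPy). Approximating the chord by the sum of the node's distances to the two sampled walls is our simplification of the tangent-circle geometry, not the paper's system of equations; the width-to-speed mapping follows the association of L min with v max and L max with v min described above.

```python
import numpy as np

def nearest_distance(point, wall_pts):
    """Distance from a point to a wall given as an (M, 2) array of sampled vertices."""
    return np.min(np.linalg.norm(wall_pts - point, axis=1))

def local_width(w_k, wall1_pts, wall2_pts):
    """Approximate chord length L_{k,i}: distance to wall 1 plus distance to wall 2."""
    return nearest_distance(w_k, wall1_pts) + nearest_distance(w_k, wall2_pts)

def speed_from_width(L, L_min=1.0, L_max=10.0, v_min=0.6, v_max=3.6):
    """Narrower corridor -> faster walking; wider corridor -> slower, relaxed walking."""
    L = np.clip(L, L_min, L_max)
    return v_max - (v_max - v_min) * (L - L_min) / (L_max - L_min)
```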
The new estimated speed is applied to the overall velocity vector of node k in order to scale and adjust the speed depending on the cross-section of the area where the node is located at time instant i. To ensure that the system still works when the crowd interacts with an obstacle, the new velocity vector is set by the piecewise rule in (23), where R is the radius of the node k neighborhood, p k,i represents the (x, y) location vector of the obstacle that node k wants to avoid at time instant i, and α is an adjustable coefficient that allows the node to maintain a safe distance from such an obstacle. C k,i is a coefficient that regulates the speed of each node k at each time instant i depending on the path width. Therefore, when node k does not face any static obstacle such as a barrier or walls, v a k,i+1 follows (23a) and moves towards the target t k,i . On the other hand, when node k detects an obstacle close to its location, it avoids the obstacle and moves in the direction opposite to the obstacle (23b).
According to (9)-(23), we propose the following mechanism by which v k,i+1 can be set for node k. This mechanism is a modification and extension of the one proposed in [14] and combines the above terms through adjustable non-negative weighting coefficients {λ, β, γ}. Algorithm 1 summarizes the methodology.
Algorithm 1 Adaptive Cooperative Crowd Modeling using ATC
From the velocity update, we obtain the next location vector of node k. During each step of adaptation, people with disabilities can be advised on the direction and speed of their movement based on an estimate of the angle θ k,i between the movement direction and the direction towards the destination. This can be defined mathematically through the inner product '•' of the two direction vectors and their Euclidean norms ∥ • ∥, where t k,i is the target (or desired end point) of each node k, w k,i−1 is the previous position of node k, and w k,i is its current position. θ k,i can be delivered to the subjects with disabilities (or to their wheelchairs) to correct their direction where necessary. The major advantage of diffusion adaptation compared with traditional consensus networks is its convexity, leading to the stability of the multi-agent network [29].
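The combined velocity rule summarized by Algorithm 1 can be illustrated as follows (Python/NumPy; the way the terms are blended and rescaled is an assumption for illustration, not the paper's exact equation or Algorithm 1):

```python
import numpy as np

def next_velocity(w_k, t_k, v_g_k, delta_k, v_c_k, obstacle=None, R=2.0,
                  lam=0.5, beta=1.0, gamma=1.0):
    """Blend the group velocity, the pull towards the target, and the spacing correction;
    steer away from any obstacle inside the neighborhood of radius R; then rescale the
    result to the corridor-width-based speed v_c_k."""
    to_target = t_k - w_k
    to_target = to_target / (np.linalg.norm(to_target) + 1e-12)
    v = lam * v_g_k + beta * to_target + gamma * delta_k        # cooperative motion term
    if obstacle is not None and np.linalg.norm(obstacle - w_k) < R:
        v = w_k - obstacle                                       # move opposite to the obstacle
    return v_c_k * v / (np.linalg.norm(v) + 1e-12)               # enforce the width-based speed
```

The guidance angle delivered to the user can likewise be computed as sketched below (whether the target direction is measured from the current or the previous position is an implementation choice):

```python
def heading_correction(w_prev, w_curr, t_k):
    """Angle (degrees) between the current movement direction and the direction to the target,
    delivered to the user (or the wheelchair controller) as a steering correction."""
    move = w_curr - w_prev
    to_target = t_k - w_curr
    cos_theta = (move @ to_target) / (np.linalg.norm(move) * np.linalg.norm(to_target) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```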
Crowd Monitoring System
The proposed system can estimate and monitor the crowd movement and provide the necessary guidelines and warnings for any upcoming danger to the users, represented as agents of the same network, so they can navigate towards their destination while maintaining a safe social distance.For individuals with disabilities, the cooperative system can also assist them in moving through a crowd by providing them with the desired direction of movement and speed necessary to reach their desired destination (considered as the end point or target of the diffusion adaptation strategy).
In the proposed application, each agent, including the people with disabilities, carries a tracking device that can interact with its nearby tracking devices.Each tracking device receives the location information from all the nearby tracking devices.This information is fed to the diffusion adaptation model to be used as the nodes' position at each time instant.Therefore, each tracking device acts as an intelligent node of the cooperative network.
In the scenario studied and presented here, for simulation purposes, all the tracking devices are AirTags from Apple Inc. Thanks to their Precision Finding feature, it is possible to locate other AirTag devices in close proximity with high precision, within the centimeter range. Although other tracking devices are compatible with a wider range of OSs, such as the previously mentioned AirFinders or the latest Google or Samsung smartphones, these tracking devices are not always reliable, secure, and easily compatible with each other. On the contrary, Apple devices that come with embedded UWB high-precision location technology, such as the iPhone 15, AirTag, or iWatch Series 9, can easily and securely connect to each other and provide a high-precision location of the devices.
In our experiment, the path geometry as well as the target location or end point (t k,i ) for our diffusion adaptation model are presumed known. The agents are assumed to know the target beforehand, to learn it through repetition, to follow the signs, or to follow those who know it.
Each AirTag can be tracked by the nearby mobile devices and a mobile device can access the location of all the other tracking devices in the user's neighborhood.This information is used as the location of each node k at each time instant i (w k,i ) for the model.A schematic diagram of the overall setup for a network of AirTag tracking devices is depicted in Figure 3.For visually impaired and blind users, the application is run in each individual's smartphone and the recommendations and warnings are provided to the user through the speakers.In the case of wheelchair users, the application can be embedded within the wheelchair's navigation control system, allowing the application to control the wheelchair movement based on the recommendations given by the diffusion adaptation algorithm.In our experiment, each user needs to carry an AirTag and a mobile device, as it can be appreciated in Figure 3.Each AirTag is used as the transmitter to provide the agent's location while each mobile device is used as the receiver to obtain the location of all the other agents of the network, represented in Figure 3 as all the AirTags connected to the agent's mobile device.
To ensure the network is independent of the global network, the algorithm must be built-in and supported by a local communication protocol.However, here, we utilize the protocol provided by Apple to share the AirTag's location with other Apple devices for simulation purposes.Therefore, each Apple device proximate to an AirTag sends its GPS location together with an encrypted message generated by the device, which can be considered as an identification message sent to the Apple Cloud.Then, the UWB technology of the device allows the devices to pin down the exact location of the other nearby tracking devices.The AirTags emit a beacon message constantly, which is picked up by the nearby iPhones, Macs, or other Apple devices.This allows the Apple Cloud to obtain the AirTag's exact location in the centimeter range [30,31].
This cooperation between the iOS devices and the AirTags through UWB, as well as the other tracking devices, allows the creation of a reliable network able to locate the nearby devices with high precision even indoors.On the other hand, the available encryption system provides sufficient communication privacy.This is the main advantage of using Apple tracking devices compared to other devices.
Results
In this section, the crowd motion through a predefined path from a start to an end point is simulated. For a better evaluation of the proposed method, we simulate the crowd motion under a highly constrained situation. Therefore, the pathway chosen for the simulation presents a bottleneck, such as what the crowd encounters when passing through a narrow corridor as shown in Figure 4a,b, at the underground entrance to a castle as in Figure 4c, or in a metro station corridor as in Figure 4d. In the simulation, we consider the same fixed target t for all the nodes over time as the end point, where t represents the approximate target location, given that the agents keep their social distances even close to the target and, therefore, do not converge exactly to a single target point. The simulation parameters are set as follows. Consider a crowd of 40 people, each representing a node of the network, where one node is a person with disabilities and the others are the general public. The step sizes µ k and µ v k are set to 0.05. For a safe distance from the walls, the coefficient α is set to 0.5. For velocity control, the coefficients {λ, β, γ} are, respectively, equal to {0.5, 1, 1}. The other parameters are set as defined in Section 2.1. For a more realistic representation, some random noise was added to the speeds of the nodes as well as to the distances between them.
Figure 5 illustrates the movement of the crowd (mobile network) described above in ℜ 2 .The green symbol '*' on the right represents the end point (the target destination), the red dots represent the positions of the nodes considered as the general public, and the black dot represents the position of the people with disabilities over time.Finally, the blue lines define the walls of the path.At the start of the simulation, in Figure 5a, the nodes are located at some random positions on the left side of the path.This represents the initial locations (start point) of each node showing the stage we start running the algorithm.These initial locations are generated randomly for simulation purposes.Later, all the nodes move towards their desired destination within the defined path.In Figure 5b,c, the nodes adapt their speeds and distances depending on the width of the path.Finally, in Figure 5d, which shows the end of the simulation time, the nodes gradually approach the desired destination.
In addition, Figure 6 provides evidence that the nodes have effectively reached, or will reach, the target over time. This figure represents the Euclidean distance ∥w k,i − t∥, calculated for each node k at each time instant i. Figure 6 shows that the average distance between all the nodes and the target decreases over time until it reaches a value close to 0, which means that the nodes have reached the target. At the end of the simulation, some nodes have already reached their desired destination, while others, including those representing people with disabilities, are still approaching it. As explained in Section 2.1.3, the people with disabilities maintain a lower speed, so it always takes them longer to reach their destination than the other nodes. This is also in line with the simulation presented in Figure 5, where it can be appreciated how people with disabilities fall behind the general public. Table 1 shows how the chord length changes along the path and how the speed of the nodes changes accordingly: the larger the chord, the lower the speed of the node, and vice versa. The changes to the speed of the people with disabilities can also be appreciated in this table. For a similar chord length, the speed of node k = 7, from the general public, is considerably higher than the speed of node k = 5, the person with disabilities. In Table 2, it is evident that for a node representing the general public the speed is higher and the distance lower than for a node with a disability. This confirms the differences in the performance of the system for agents with different conditions. These results show that the proposed method achieves a considerable improvement in accurately modeling the movement of the crowd through a constrained path, compared with the results presented in [10,15]. In contrast to [15], the agents of the proposed method are able to keep a safe distance from the walls even in a bottleneck situation, and the special characteristics of people with disabilities are taken into account. Moreover, in the proposed model, the distance and speed of all the nodes, especially the people with disabilities, are better regulated depending on the width of the path compared with the results presented in [10].
Conclusions
This paper explores the use of cooperative communication networks through diffusion adaptation together with the use of a suitable short-range communication technology such as UWB to model and monitor in real-time a moving crowd that can detect and avoid walls and other obstacles while moving towards a predefined target.The network nodes modify their speeds and distances based on the environmental geometrical properties and limitations.The simulation results show that the crowd successfully stays within the defined path, and the speed of each node as well as its distance from the nodes in its neighborhood are adapted to the new path profile and the predefined constraints.The proposed method is very impactful as it applies a high-end algorithm for decentralized cooperating networks to a real-life and very demanding problem such as monitoring and assisting people with disabilities in moving through a crowd.The proposed system also uses the state-of-the-art communication and positioning technology available in commercial devices to best safeguard the individuals, especially people with disabilities, in a highly challenging environment.
As mentioned in Section 1, previously proposed navigation assistive technology for people with disabilities, as well as recently proposed crowd monitoring systems, rely on image processing and computer vision [32,33].These require an on-device high computational power and face high privacy concerns.Although our proposed system has some privacy concerns due to sharing the devices' locations, this information is encrypted and stays within the network formed by the close-proximity tracking devices.On the other hand, existing crowd monitoring systems are centrally controlled and can only monitor a predefined area since they rely on the use of pre-installed surveillance cameras [34] or WiFi Beacons [35].Finally, other proposed crowd modeling systems [36][37][38] can accurately simulate a crowd behavior under certain situations.However, compared to the proposed model, they cannot update the model in real-time based on the actual location of the agents or communicate the updated recommended direction or the speed of movement of the people with disabilities in underground situations or places with no internet connection.
Even so, the fully functional implementation of the proposed system in a real-world scenario still presents some challenges related to the use of Apple's Precision Finding feature for the location of the nodes.Although, currently, iOS leads the mobile operating system market share in some countries, such as in the USA or Australia, Android remains the predominant mobile operating system worldwide [39].This creates certain limitations on the correct implementation of the proposed system in certain countries or areas where a reliable network of Apple devices is not available.Although this could be alleviated with the use of more general UWB-embedded devices that do not need to be connected to a compatible mobile device, this approach also presents certain challenges.Although some standard protocols for UWB communications have been established, most UWB systems still present a high incompatibility with devices with different UWB radio chips [40].
Therefore, a major improvement in the application of the proposed cooperative system will be by integrating and embedding the system in each individual mobile phone that uses the same UWB standard protocol.By allowing the easy secured communication between UWB-embedded commercially available devices, we can obtain the relative coordinate of each node of the network relying only on short-range communication, which can eliminate the need for tracking devices connected to compatible mobile phones.The aim is to have the system be independent of long-range communication or tracking systems such as GPS.The devices can be empowered by the necessary embedded software enabling shortrange communication networks using Bluetooth, UWB, or any other suitable low-range secured network.On the other hand, future work and improvements on other positioning systems for indoor and outdoor locations, such as in the feasibility of using iBeacon-based positioning systems [41] in open areas, can improve the system performance.
Figure 1 .
Figure 1. The network agents confined by two walls move in a geometrically varying environment whereby their speeds and their proximities can change accordingly. The neighborhood of agent k, which represents the agent with disabilities in the network, is denoted by N k and shown by the dashed line. The crowd movement direction is represented by the arrow at the right of the path.
Figure 2 .
Figure 2. Using tangent circles to estimate the varying width of the space in ℜ 2 between the start and end points of the path for each node k in the coordinates (x k , y k ).The circle chord length (between the two tangent points) that contains node k best represents the width of the pathway.
Figure 3 .
Figure 3. Illustration of the setup for using AirTag tracking devices as the agents of a cooperative network.Each AirTag is with an agent and can be tracked by the nearby iPhones.Each colored line represents the interaction and share of location for an agent, represented by a pair of iPhones and its AirTag, and other tracking devices within a close proximity, which forms the neighborhood N k .
Figure 4 .
Figure 4. Illustration of pathways with possible bottleneck represented by the simulated pathway.(a) Narrowing of a passageway, (b) a pathway with a narrow corridor, (c) the underground entrance to a castle, and (d) the corridor of a metro station.
Figure 5 .
Figure 5. Simulation of the movement of a crowd over time.The average speed (v k,i ) of all the nodes is given for i = 0, i = 70, i = 130, and i = 200.The path is represented by blue dots, the general public by red dots, the people with disabilities by a black dot, and the end point of the path by a green "*".
Figure 6 .
Figure 6. Representation of the average Euclidean distance between node k (w k,i ) and the target (t) at each time instant i. The blue bars represent the average distance of all the general public nodes, while the red bars represent the people with disabilities.
Table 1 .
Length of the chord (L k,i ) that goes through nodes k = 7 and k = 5 at several time instants i and speed (v c k,i ) of nodes k = 7 and k = 5, respectively.Node k = 5 represents the individual with disabilities.
Table 2 .
Average distances between nodes k = 7 and k = 5 and their respective neighbors (r k,i ), and their speeds (v k,i ) at several time instants i. Node k = 5 represents a node with a disability. | 9,109.8 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Computer Science",
"Medicine"
] |
Policy Decisions and Use of Information Technology to Fight COVID-19, Taiwan
Because of its proximity to and frequent travelers to and from China, Taiwan faces complex challenges in preventing coronavirus disease (COVID- 19). As soon as China reported the unidentified outbreak to the World Health Organization on December 31, 2019, Taiwan assembled a taskforce and began health checks onboard flights from Wuhan. Taiwan's rapid implementation of disease prevention measures helped detect and isolate the country's first COVID-19 case on January 20, 2020. Laboratories in Taiwan developed 4-hour test kits and isolated 2 strains of the coronavirus before February. Taiwan effectively delayed and contained community transmission by leveraging experience from the 2003 severe acute respiratory syndrome outbreak, prevalent public awareness, a robust public health network, support from healthcare industries, cross-departmental collaborations, and advanced information technology capacity. We analyze use of the National Health Insurance database and critical policy decisions made by Taiwan's government during the first 50 days of the COVID-19 outbreak.
producing test kits adapted from existing diagnostic modalities for pneumonia of unknown etiology.
As Taiwan CDC took the lead, public and private healthcare providers, local governments, and health departments looked to the central government for guidance regarding preparedness and response. The country quickly updated infection control practices and strategies established during the 2003 SARS epidemic, such as installation of infrared temperature checkpoints and border quarantine at airports and seaports. Following Taiwan CDC's outbreak prevention guidelines, hospitals swiftly instituted screening booths to monitor the temperature of persons entering the facility, offer hand sanitizer, and separate persons with fever or related ailments. In addition, Taiwan increased stockpiles of personal protective equipment (PPE) for healthcare workers, predesignated potential isolation wings and hospitals, and created a daily nationwide inventory of available intensive care and negative-pressure isolation rooms, including the number that could be refitted when needed.
On January 15, 2020, Taiwan CDC classified the novel coronavirus as a class-V communicable disease, which institutes legal measures, including mandated reporting and quarantine. For instance, under class-V, healthcare providers are required by law to report suspected cases to Taiwan CDC within 24 hours, and the government can isolate or quarantine persons confirmed or suspected to be infected at designated sites. The Wuhan travel advisory was elevated to level II-alert the next day and later to level III-warning ( Figure; Appendix Table, https:// wwwnc.cdc.gov/EID/article/26/7/20-0574-App1. pdf). Reporting criteria were broadened to include persons showing symptoms who had not traveled to China recently but had close contact with persons who had confirmed or suspected cases. In addition, specimen testing parameters were expanded. On January 20, Taiwan activated its Central Epidemic Command Center (CECC), which is equivalent to an Emergency Operations Center in the United States.
Border Quarantine
Border quarantine procedures are managed by staff from regional offices of Taiwan CDC stationed at airports and seaports. Staff screen all incoming passengers by using no-touch, video-recordable infrared thermometers, which were installed during the 2003 SARS outbreak. Staff also monitor passengers for specific symptoms, provide timely health education, and conduct health evaluations, including sample collection or testing, as needed. In addition, staff report suspected cases to the centralized database of Taiwan CDC and to local health departments for follow-up monitoring or care and refer or transport symptomatic persons to hospitals according to infectious disease regulations, when needed.
Beginning December 31, 2019, Taiwan CDC implemented enhanced border quarantine measures, which included temporary onboard health checks on persons arriving on flights from Wuhan. As the outbreak spread internationally, in late January 2020, Taiwan began requiring passengers to manually or electronically complete a health declaration card detailing any symptoms or diseases, and travel and contact histories for case investigation or contact tracing, if necessary. In addition, Taiwan CDC staff determined the need and gave instructions for self-monitoring or home quarantine, depending on current policies and any special situations.
Case Detection
The enhanced border quarantine procedures led to early detection of a suspected case of COVID-19. On January 20, a 55-year-old woman reported fever, cough, and shortness of breath at her airport health screening upon arrival from Wuhan. She was transported directly to the hospital, averting local exposure. She reported that she wore a mask and remained in her seat for the duration of the flight. The crew and other passengers, who had no prolonged direct interaction with her, passed the health evaluation at the airport and were directed to complete a 14-day self-monitoring regimen at home. During self-monitoring, passengers and crew were required to record their temperature twice daily, stay home, or wear a mask if they had to go out; as an extra measure, they had to respond to daily telephone checks by infectious disease staff.
On January 21, the passenger with symptoms was confirmed to have COVID-19, the first known imported case in Taiwan. The same day, the United States announced its first case in a 35-year-old man who had returned from Wuhan on January 15 and was later admitted to a hospital in Washington State on January 19 (4,5).
With confirmed cases reaching 1,400 globally, including cases in Europe (6), Taiwan's disease investigation teams worked through the week-long Lunar New Year holiday. Beginning on January 24, Lunar New Year's Eve, all passengers traveling from China, Hong Kong, and Macau were required to complete a health declaration card and travel history upon arrival in Taiwan. Arriving passengers were given instructions for self-monitoring and a phone number for inquiries or concerns; this procedure was later expanded to cover arrivals from all destinations. Passengers from Wuhan and Hubei Province and persons who had close contact with confirmed cases were mandated to a 14-day home quarantine. Quarantine involved self-isolation without going out or having visitors, recording temperature and symptoms twice daily, and if living with others, wearing a mask at all times and taking precautions with household members.
To support Taiwan CDC's surveillance, local civil offices were given the contact information of all home-quarantined persons in their jurisdiction. Local health department personnel or district administrators familiar with the communities conducted daily telephone checks on these home-quarantined persons in their areas. Persons who were not compliant with home quarantine orders were turned over to law enforcement and tracked by police officers. Repeat offenders could be fined or confined to designated facilities.
As the number of persons on home isolation in Taiwan grew to tens of thousands, GPS functionality and cameras on personal or government-dispatched smartphones were used for monitoring and case identification. Recognizing the challenges of the need for seemingly healthy persons to stay home for 2 weeks, miss work and school, and avoid outside contacts, local governments set up quarantine-care centers to provide support and counseling, which strengthened the barrier against potential community transmission. Staff in PPE could conduct home visits, arrange meal deliveries, and bring essential supplies to persons living alone to help them comply with the quarantine order. A 24-hour public epidemic hot line was opened for questions or reporting. Taiwan CDC upgraded its interactive mobile phone application, Disease-Prevention Butler, and supplemented it with an artificial intelligence chatbot to provide accurate, timely information and gather concerns for analysis and response.
Group tours from Taiwan to China were suspended, and tours from China and residents of Hubei were banned. All citizens from China were later banned from entry into Taiwan, with few exceptions (Appendix Table). For groups already in Taiwan at the time of border quarantine, tour leaders were required to conduct and report daily health checks of their members. Students enrolled in Taiwan colleges or universities who had gone home to China for the winter break and holiday were asked to postpone their return to Taiwan for 2 weeks; those who arrived early were self-quarantined in separate dormitories.
On January 30, WHO declared a public health emergency of international concern and urged international coordination to investigate and control the spread of COVID-19 (7). Confirmed cases climbed to >7,800 globally; Taiwan had 9, including 1 case of local transmission in a man infected by his wife who returned from Wuhan (8).
Information Technology and Cross-Departmental Cooperation
Other government agencies in Taiwan also contributed expertise and increased capacity during the crisis. Taiwan CECC partnered with civil and law enforcement departments for quarantine monitoring, as described. In addition, the CECC asked the NHI to integrate recent history of travel to China from the database of Customs and Immigration to supplement the NHI's centralized cloud-based health records. After Customs and Immigration data were integrated, the NHI system flagged records so medical providers would be aware of patients' travel history when they made an appointment or came in. Later, all confirmed and suspected case contacts reported to Taiwan CDC also were added to the NHI database.
Because all providers are required to submit claims to the single-payer platform within 24 hours, the comprehensive NHI database had near-real-time information that let clinicians and Taiwan CDC track or trace back all doctor visits. The NHI patient records included complete health history, underlying health conditions, and recent progression of symptoms, treatments, and hospitalization related to respiratory syndrome. These data helped pinpoint high-risk patients and persons likely to have had contact with infected cases. In addition, the NHI database gave Taiwan CDC the ability to quickly identify new patterns of symptoms or clustered cases and the source or path of infection. The high security and privacy policy of the NHI information technology system permitted data sharing only for purposes of combatting the epidemic and was restricted to 1-way transmission of specific information from other departments to the NHI database.
No health records or other personal information were available to anyone outside of the health system.
The Customs and Immigration database also displayed warnings about travel history to Wuhan and China within the previous 3 months so border control staff could identify persons who had been to the COVID-19 epicenter for additional health screening. The Ministry of Foreign Affairs negotiated and coordinated the evacuation of Taiwan citizens stranded in Wuhan after the city went into lockdown on January 23 (9) and, later, those who were passengers onboard the Diamond Princess cruise ship docked in quarantine off the coast of Japan (10). The Ministry of Transportation managed charter flight arrangements, and the special biohazard cadets from the Ministry of Defense were called to help disinfect the planes and affected airport areas afterwards. Repatriated citizens and cruise ship passengers went through health screenings before boarding airplanes and were immediately tested for COVID-19 upon arrival in Taiwan.
One person evacuated from Wuhan tested positive for the coronavirus and was directly transported to a hospital. All others passed a double-negative criterion, having 2 negative test results 24 hours apart, and went to a government-managed quarantine facility for 14 days, where they received check-ups 3 times a day. No subsequent cases manifested.
Social Norms and Mask Shortages
After the 2003 SARS outbreak, persons in Taiwan, Japan, and several other countries in Asia began wearing medical face masks during influenza season or in crowded public spaces, such as on subways (11). Wearing a mask also is considered good practice for persons with a cold, and persons with allergies or a weakened immune system are expected to wear a mask (12). Therefore, many citizens had supplies at home or rushed to acquire masks once the epidemic was announced, despite Taiwan CDC advising that healthy persons did not need a mask, except when visiting hospitals or crowded, enclosed places.
Anticipating a surge in demand, Taiwan's prime minister suspended mask exportation at the end of January. News of shortages soon emerged in different parts of the world, partially attributed to the delayed and reduced exports from China, the largest mask-producing country in the world, because dozens of cities in China were on lockdown and demand increased in the country (13,14). The Taiwan government requisitioned domestically made medical and surgical masks and invested to quickly expand production. To accomplish better distribution across the population, Taiwan introduced a temporary rationing system. Every resident's NHI card, which is already linked to thousands of pharmacies and hundreds of local health centers nationwide, became their identification to obtain masks in their neighborhood. In addition, a government-funded, mobile phone application (Mask Finder, https://mask.pdis.nat.gov.tw), developed through a public-private partnership, helped citizens locate supply distribution points and showed updates on availability. Health promotion messages on indications for wearing a mask and handwashing routine were widely disseminated in all media.
Clinical and Pharmaceutical Research Capacity and Case Investigation
Starting in early January 2020, the Taiwan CDC laboratory began developing real-time reverse transcription PCR (RT-PCR) diagnostic protocols by leveraging previous experience sequencing SARS and Middle East respiratory syndrome coronaviruses. China released the full genomic sequence of the novel coronavirus on January 11, and by January 12, the Taiwan laboratory team introduced an upgraded, 4-hour test kit, shortened from the initial 24-hour test. The upgraded test had a high sensitivity of 10-100 copies/reaction, which is comparable to the standard assays recommended by WHO. The laboratory staff continued to accelerate testing speed and capacity, developing the ability to test >1,100 samples/day. By the end of February, Taiwan was able to test 2,450 samples/day by using public and select contracted private laboratories.
In late January, 2 strains of the coronavirus were successfully isolated by a university and a government-funded research institute in Taiwan. Research and development of drugs, vaccines, and a rapid testing kit continued, some through public-private or international partnerships.
As it became known that persons could have COVID-19 and have mild or no symptoms, no travel history, or no definitive case contact (15), Taiwan CDC further widened its testing and reporting criteria to minimize local transmission. At the time, only 3 cases of local transmission had been identified, all contracted from family members with recent travel history. To improve case detection, on February 12, Taiwan CDC conceived a retrospective COVID-19 screening scheme. The screening encompassed persons who had tested negative for influenza in the previous 14 days but who reported having severe influenza complications, were under surveillance for upper respiratory symptoms, were part of a cluster of influenza cases, or received a diagnosis of pneumonia but did not respond well to treatment. Using the NHI database, the team pinpointed 113 suspected patients, 1 of whom, case 19 in Taiwan, tested positive for COVID-19 on February 15 and died that evening. This discovery triggered the required confirmed-case contact investigation, which located and tested dozens of the patient's family members and close contacts. The patient's asymptomatic brother tested positive on the same day, and 2 more family members with minor symptoms tested positive in the next 2 days. Other close contacts tested negative but were placed under a 14-day home quarantine, and hundreds more possible contacts were put on self-monitoring for 2 weeks. The source of infection for case 19 later was identified by using collaborative triangulation of multiple departments' databases and disease investigation and traced to a passenger who returned from China. Without retrospective screening and access to the comprehensive NHI database, such cases would have gone undetected.
On day 50 of the global epidemic, February 18, WHO reported >75,000 cases and >2,000 deaths worldwide (16). Among the 22 cases confirmed in Taiwan, local transmissions were limited to 5, primarily between family members. Despite a credible international report that modeled outbreak dynamics and predicted Taiwan would have the second highest case importation outside of China (17), early prevention measures, stringent border control, and aggressive efforts to combat community spread have continued to be effective as of March 2020.
Policy Implications
With the outlook of COVID-19 still unclear, health authorities around the world continue to be on high alert. Since February 2020, the Taiwan government and CECC have focused more on detecting and isolating local cases to contain potential local spread, while maintaining and updating travel restrictions to limit foreign entry from highly affected areas. The experience of SARS generated instrumental lessons in disease control measures and policy planning for government agencies and hospitals in Taiwan. It also improved the public's health behavior and hygiene practices, such as increased uptake of influenza and other vaccinations, frequent handwashing, and use of hand sanitizers and masks (12,18-20). In addition, the 2003 SARS outbreak had heightened infection transmission awareness and provided better mental preparedness for the new pandemic. Timely, clear communication with the public also has fostered trust and built community capacity for the public to partner with the government in containment and mitigation.
During any health crisis, a robust health system is crucial to support the surge of medical care and testing needed (21). Taiwan has a solid public health, medical, and insurance infrastructure distributed throughout the country. This infrastructure consists of local health departments and centers staffed by healthcare professionals trusted by local residents, particularly in the rural areas where private practices are scarce; hospitals, medical centers, and clinics that strongly support a well-coordinated infectious disease network for preparedness and response; and a comprehensive NHI that covers >99% of the population with high-quality providers and low out-of-pocket cost. The interconnected health system reduces barriers to doctor appointments and follow-up visits, which helped capture suspected cases with minor symptoms. Furthermore, the single-payer NHI model affords centralized health records of population-level longitudinal data and the capability of merging information from other government databases. This connectivity proved a valuable tool for analysis and case investigation during disease outbreaks, including dengue, influenza, SARS, and the current COVID-19 pandemic.
Interagency collaboration, data sharing, and timely mobilization of human capital and resources are equally vital to a response (22). Taiwan followed WHO standards on testing and case definition and shared updated disease information and virus sequences on International Health Regulations (https://www.who.int/ihr/en) and other global health platforms. With CECC's authority to coordinate work across departments and enlist additional personnel during an emergency, Taiwan CDC has been capable of handling the growing volume of regular and new tasks.
In addition, the legislature approved emergency funding to ensure disease control efforts did not fall short and to mitigate the economic effects of the outbreak. The funding included compensating lost wages for persons working part-time or without paid sick leave during the quarantine. Compensation also permitted time off for persons with children or elderly family members who were sick or had contact with confirmed cases. These incentives, modeled after actions taken during the 2003 SARS outbreak, aided in isolation compliance.
Unlike SARS, in which patients were only infectious when febrile (23), persons with COVID-19 could have no or minimal symptoms, remain undiagnosed but contagious, and pose a greater threat of local transmissions (24). As the pandemic evolves, global cases likely will increase because of community spread, expanded laboratory capacity, and wider testing criteria. The timing, locations, and policies of travel advisories and entry restrictions, in addition to testing and reporting criteria, are critical to epidemic control but vary across countries. From a public health perspective, recognizing the ideal time to institute or terminate these policies and measuring their effectiveness can be challenging.
Conclusions
Taiwan's robust public health and healthcare systems, combined with public acceptance of protective policies influenced by the 2003 SARS outbreak, likely bolstered efficient implementation of policies in the first 50 days of the COVID-19 outbreak. At the same time, Taiwan's response to COVID-19 might have overshadowed other health threats, such as seasonal influenza and chronic diseases. Strategic prioritization of other public health functions and resources and broader government operations will be necessary. As the outbreak continues, Taiwan will need to evaluate associated policy decisions to sustain the system.
Taiwan built on lessons learned from SARS, and some of the successful strategies during the current pandemic could inform policy approaches by other governments. In countries that rely heavily on state and local actions, intergovernmental and interjurisdictional coordination and adequate funding are needed to assure emergency preparedness and response capacity. An integrated approach that incorporates public health, human services, and healthcare systems can increase resilience and better prepare nations for future events.
"Political Science",
"Computer Science",
"Medicine"
] |
Biomarkers related to fatty acid oxidative capacity are predictive for continued weight loss in cachectic cancer patients
Abstract Background Cachexia is characterized by a negative protein and energy balance leading to loss of adipose tissue and muscle mass. Cancer cachexia negatively impacts treatment tolerability and prognosis. Supportive interventions should be initiated as early as possible. Biomarkers for early prediction of continuing weight loss during the course of disease are currently lacking. Methods In this pilot, observational, cross-sectional, case-control study, cachectic cancer patients undergoing systemic first-line cancer treatment were matched 1:2 with healthy controls according to age, gender and body mass index. Alterations in amino acid and energy metabolism, as indicated by acylcarnitine levels, were analysed using mass spectrometry in plasma samples (PS) and dried blood specimens (DBS). Welch's two-sample t-test was used for comparative analysis of metabolites between cancer patients and healthy matched controls and to identify the metabolomic profiles related to weight loss across different time points. A linear regression model was applied to correlate weight loss and single metabolites as predictor variables. Finally, metabolite pathway enrichment analyses were performed. Results Eighteen cases (14 male and 4 female) and 36 paired controls were enrolled. There was a good correlation between baseline PS and DBS of healthy controls for the levels of most amino acids but not for acylcarnitine. Amino acid levels related to cancer metabolism were significantly altered in cancer patients compared with controls in both DBS and PS for arginine, citrulline, histidine and ornithine and in DBS only for asparagine, glutamine, methylhistidine, methionine, ornithine, serine, threonine and leucine/isoleucine. Metabolite enrichment analysis in PS of cancer patients revealed histidine metabolism activation (P = 0.0025). Baseline acylcarnitine analysis in DBS was indicative for alterations of the mitochondrial carnitine shuttle, related to β-oxidation: The ratio palmitoylcarnitine/acetylcarnitine (Q2) and the ratio palmitoylcarnitine + octadecenoylcarnitine/acetylcarnitine (Q3) were predictive for early weight loss (P < 0.0001) and weight loss during follow-up. Activation of tryptophan metabolism (P = 0.035) in DBS and PS and activation of serine/glycine metabolism (P = 0.017) in PS were also related to early weight loss and across successive time points. Conclusions We found alterations in amino acid levels most likely attributable to cancer metabolism itself in cancer patients compared with controls. Baseline DBS represent a valuable analyte to study energy metabolism related to cancer cachexia. Acylcarnitine patterns (Q2, Q3) predicted further weight loss in cachectic cancer patients undergoing systemic therapy, and pathway analyses indicated involvement of the serine/glycine and the tryptophan pathway in this condition. Validation in larger cohorts is warranted.
Introduction
Cachexia is a multifactorial metabolic syndrome characterized by involuntary weight loss due to wasting of skeletal muscle mass and/or adipose tissue degradation, affecting about 30-90% of cancer patients, preferentially at advanced tumour stages. Cachexia and particularly sarcopenia (i.e. decreased muscle mass and/or quality) negatively impact efficacy and tolerability of different systemic cancer treatments (i.e. chemotherapy, molecular targeted therapy and immunotherapy) and cause functional impairments, further decreasing quality of life. 1 Despite the fact that 10-20% of cancer-related deaths are assumed to be associated with cachexia/sarcopenia, it still remains an underestimated and undertreated medical condition. 2 Although international consensus has been achieved in defining clinical criteria as well as different stages of cachexia (i.e. pre-cachexia, cachexia and refractory cachexia), 3 robust biomarkers predicting progressive weight loss in cancer patients during the course of the disease are lacking. Such biomarkers, however, are urgently needed, because they could trigger early intensification of supportive measures (i.e. nutritional support) in patients at high risk for continuing weight loss. Moreover, such biomarkers could contribute to the identification of novel treatment targets.
Inflammatory cytokines like interleukin 6 (IL-6) and tumour necrosis factor-α (TNF-α) released from tumour cells or the tumour microenvironment play a central pathophysiological role. They trigger the acute phase response in the liver and alterations in metabolic processes related to amino acid and energy metabolism, leading to degradation of adipose tissue and muscle as well as neuroendocrine activation. Amino acids represent central intermediates in protein metabolism. Acylcarnitines, on the other hand, are transport forms of long-chain fatty acyl-CoA thioesters. They are formed in a reaction of L-carnitine and acyl-CoAs by carnitine acyltransferases located at the inner mitochondrial membrane to facilitate mitochondrial entry for β-oxidation. Thus, they represent central intermediates of energy metabolism that, in addition, are indirectly involved in protein/amino acid metabolism.
Consequently, we aimed at analysing amino acids and acylcarnitine profiles in the blood as a promising approach for the identification of novel cancer cachexia-related biomarkers.
Mass spectrometry (MS) is a well-established method to simultaneously analyse multiple metabolic parameters from small blood samples. Importantly, metabolome analyses in the context of cancer cachexia have usually been performed in serum, plasma or urine. 4,5 For example, Cala et al. 6 used plasma samples from cachectic and non-cachectic cancer patients to identify amino acids and their derivatives indicative for cachexia as well as a number of metabolite pathway alterations. Also, Yang et al. 4 performed comparative metabolomics analyses in serum and urine samples to finally build a diagnostic model of cachexia based on three metabolites (i.e. carnosine, leucine and phenyl acetate). DBS, on the other hand, reflect actual cellular metabolic processes and are currently used for the diagnosis of a number of inborn metabolic disorders for routine screening of newborns. 7 DBS have already been used to identify novel metabolomic biomarkers to improve cancer diagnosis. 8 DBS are simple to collect and can easily be stored and shipped, and MS-based analysis in DBS has been extensively standardized during recent years. 9 In addition, PS and DBS can in part be considered complementary, as alterations in energy metabolism, such as acylcarnitine profiles, can be detected in DBS with higher sensitivity.
Against this background, in our pilot cross-sectional, case-control study, we chose to perform MS-based analysis of blood amino acid and acylcarnitine profiles both in DBS and PS from cachectic patients with advanced gastrointestinal cancers. In PS, we found alterations of plasma amino acid levels related to the urea cycle and histidine pathway activation in cachectic cancer patients compared with matched healthy controls. Pathway analyses in PS indicated involvement of the serine/glycine and the tryptophan pathway in patients with continuous weight loss. More importantly, in DBS, we were able to identify a characteristic acylcarnitine pattern at baseline, which significantly predicted further weight loss.
Patient selection and sample collection
From October 2014 to January 2016, 19 cancer patients (patient No. 13 was excluded later because of the absence of neoplastic disease) were consecutively enrolled in a pilot, observational, cross-sectional, case-control study at the University Cancer Centre Leipzig (UCCL) central outpatient unit according to the following inclusion criteria: age ≥18 years, newly diagnosed, histologically confirmed gastrointestinal malignancy, planned to undergo systemic chemotherapy treatment, and meeting the definition of cachexia according to the Consensus Conference definition. 3 Patients were excluded in case of active infections, uncontrolled diabetes, kidney or liver failure or immunodeficiency syndromes. Cancer patients consecutively enrolled in our study were 1:2 matched by age, sex and body mass index with healthy controls (n = 36) from the population-based LIFE-Adult study (https://life.uni-leipzig.de/en/life_health_study.html; accessed 20 July 2020). 10 Only healthy subjects were selected, based on an extensive questionnaire covering a broad range of diseases (i.e. prior or present cancer diagnosis, cardiovascular diseases, lung diseases, gastrointestinal/liver diseases, renal diseases, autoimmune diseases, metabolic/endocrine diseases, musculoskeletal diseases, neurological diseases, eye diseases, dermatological diseases, allergies, infectious diseases and psychiatric conditions including depression). Moreover, a panel of laboratory parameters was available for every participant in the LIFE study, including the following parameters related to metabolism and inflammation: C-reactive protein (CRP) ≤ 5 mg/L; interleukin 6 (IL-6) < 7 pg/mL; thyroid-stimulating hormone (TSH) 0.4-2.5 mU; glucose 3.9-6.1 mmol/L (70-110 mg/dL); haemoglobin A1c (HbA1c) 20-42 mmol/mol or 3-6%; triglyceride < 1.7 mmol/L; cholesterol: women 5.1-7.2 mmol/L, men 4.5-6.2 mmol/L; high-density lipoprotein (HDL) cholesterol > 1.03 mmol/L; low-density lipoprotein (LDL) cholesterol < 4.2 mmol/L. In the controls selected for our study, all parameters were in the normal range.
Blood samples were collected from all patients/controls in the morning after a fasting period of around 8 to 14 h. Thereafter, blood samples of patients were collected every 4 weeks (±7 days) for up to six consecutive time periods: Time Point 1 (TP1) represents the baseline visit in the study, and TP2 to TP7 represent the respective follow-up visits (Figure S1). EDTA-plasma samples were immediately centrifuged for 10 min at 2750 g at 15°C and stored within 2 h of collection at −80°C until analysis. Weight developments were closely monitored. This study was conducted under the approval of the Ethics Committee of the Medical Faculty of the University Leipzig (AZ:137/14-ff) and in accordance with the principles of the Declaration of Helsinki. Before enrolment, all patients provided written informed consent.
Metabolomics measurements
Mass spectrometric analysis of amino acids and acylcarnitines was performed on EDTA-plasma (PS) and dried EDTA-whole blood samples (DBS) using an API 2000 tandem mass spectrometer (Applied Biosystems, Germany) and a Turbo Ion Spray Source (TIS) in combination with a HTC Pal autosampler and a PE 200 microgradient pump for flow injection analysis (FIA). The methodology has been described in detail elsewhere. 9 Data related to the cancer patient cohort are provided on Zenodo at: https://doi.org/10.5281/zenodo.5122502.
Statistical analysis
Metabolite data were filtered for outliers using a cut-off of mean +5 × SD of the logarithmized data (no measurements had to be removed as outliers). Afterwards, data were inverse-normal-transformed to ensure normal distribution of the measurements while retaining measurements at zero. Thus, metabolites were analysed as standardized measurements with a mean μ = 0 and a standard deviation SD = 1. Correlation between the baseline PS and DBS metabolome of cases plus controls and of controls only was calculated as Spearman's rho using the cor.test() function of the 'stats' R package Version 3.6.0, including multiple testing correction at a false discovery rate (FDR) of 5% using the Benjamini-Hochberg procedure. In addition, correlations of baseline IL-6 PS levels and baseline metabolite levels both in DBS and PS were analysed accordingly.
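A minimal R sketch of this preprocessing and correlation step is given below. The rank-based inverse-normal transform shown is one common variant (the study's exact transform, which retains zero measurements, may differ), and the data frames dbs and ps, their column names and the simulated values are illustrative placeholders rather than study data.

```r
# Rank-based inverse-normal transform (one common variant; the study's exact
# transform, which retains zero measurements, may differ).
inv_normal <- function(x) {
  qnorm((rank(x, ties.method = "average") - 0.5) / sum(!is.na(x)))
}

set.seed(1)
# Invented paired measurements: one column per metabolite, one row per control subject.
dbs <- data.frame(alanine = rnorm(36, 350, 40), proline = rnorm(36, 180, 25))
ps  <- data.frame(alanine = rnorm(36, 340, 45), proline = rnorm(36, 175, 30))

dbs_t <- as.data.frame(lapply(dbs, inv_normal))
ps_t  <- as.data.frame(lapply(ps,  inv_normal))

# Spearman correlation between DBS and PS per metabolite,
# with Benjamini-Hochberg adjustment (FDR = 5%).
res <- t(sapply(names(dbs_t), function(m) {
  ct <- cor.test(dbs_t[[m]], ps_t[[m]], method = "spearman")
  c(rho = unname(ct$estimate), p = ct$p.value)
}))
cbind(res, q = p.adjust(res[, "p"], method = "BH"))
```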
Differences in metabolite levels both in PS and DBS between cancer patients and healthy matched controls were analysed using Welch's two-sample t-test as implemented in the t.test() function of the 'stats' R package, including correction for multiple comparisons as indicated by q-values, computed using the p.adjust() function with the parameter method = 'BH'.
For testing metabolite associations with weight loss within the cases, metabolite data for controls were averaged for each pair and subtracted from the corresponding case at each respective time point. The weight loss phenotype was defined as the difference in weight between a given time point and the successive time point. The following weight loss definitions are used: 'initial weight loss', calculated as the difference between the weight at baseline (TP1) and the long-term historical weight based on anamnestic information from the patients, and 'early weight loss', indicating weight loss between baseline (TP1) and the subsequent TP2.
The binary variable (weight loss: 0 = no/1 = yes) was defined as a weight loss of more than 2% compared with the previous time point. Association of single metabolites with the binary weight loss phenotype from TP1 to TP2 was tested using Welch's two-sample t-test. Metabolites nominally (at P < 0.05) associated with the binary weight loss phenotype from TP1 to TP2 were subsequently also tested for association with binary weight loss separately at the remaining available time points in a pairwise comparison of successive time points (up to TP7). Associations of single metabolites with weight loss in kg from TP1 to TP2 were tested using linear regression models as implemented in the lm() function of the 'stats' R package, with weight loss as the response variable and each metabolite as a singular predictor variable. All P values were adjusted for multiple testing, controlling the FDR at 5%. Due to the limited sample size, multivariable analysis of correlated analytes was omitted. Finally, pathway enrichment was tested using MetaboAnalyst 4.0's MetPa tool. 11,12 For significantly associating metabolites (at FDR = 5%), we performed a pathway analysis of KEGG metabolic pathways using all representable metabolites (M = 32) as background. 13 All statistical analyses were performed using the open-source statistical software package R 3.6.0 14 (https://www.R-project.org).
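The univariate regression step can be sketched in R as follows; the data frame d, its column names and the simulated values are purely illustrative assumptions, and the commented binary definition only restates the >2% rule given above rather than the study's actual code.

```r
set.seed(2)
# Invented case-level data: control-adjusted metabolite levels and
# weight change (kg) between TP1 and TP2 for 14 patients.
d <- data.frame(weight_change = rnorm(14, -1, 2),
                C20_1 = rnorm(14), Q2 = rnorm(14), Q3 = rnorm(14))

# Binary phenotype as defined above (>2% loss vs. the previous time point),
# assuming hypothetical weight columns weight_tp1 and weight_tp2:
# weight_loss <- (weight_tp1 - weight_tp2) / weight_tp1 > 0.02

# Univariate linear regression: weight change as response, one metabolite at a time.
mets <- setdiff(names(d), "weight_change")
p_vals <- sapply(mets, function(m) {
  fit <- lm(reformulate(m, response = "weight_change"), data = d)
  summary(fit)$coefficients[m, "Pr(>|t|)"]
})
q_vals <- p.adjust(p_vals, method = "BH")   # FDR control at 5%
round(cbind(p = p_vals, q = q_vals), 4)
```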
Patient characteristics
Characteristics of the 18 cachectic cancer patients enrolled are given in Table S1. Of these patients, 78% were male. All but one patient, who received chemotherapy in an adjuvant setting, were treated for advanced or metastatic gastrointestinal cancers (mostly gastro-oesophageal cancer, 67% of cases; Table S1). Median age was 61 years (range 51-79). All patients were classified as cachectic at study inclusion, with a weight loss >5% in the past 6 months in comparison with the historical weight, or a weight loss >2% in the past 6 months and a BMI < 20 kg/m2, according to the consensus definitions; 3 nevertheless, n = 8 of them presented with a normal-weight BMI category, and none of them were underweight.
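For illustration, the two inclusion criteria quoted above can be written as a small R helper; the function name and the example values are hypothetical and only restate the thresholds from the consensus definition cited in the text.

```r
# Cachexia per the two criteria cited above (hypothetical helper, thresholds from the text):
# weight loss >5% over the past 6 months, OR weight loss >2% with BMI < 20 kg/m2.
is_cachectic <- function(weight_loss_pct_6m, bmi) {
  weight_loss_pct_6m > 5 | (weight_loss_pct_6m > 2 & bmi < 20)
}

is_cachectic(6.5, 24)  # TRUE via the >5% criterion
is_cachectic(3.0, 19)  # TRUE via the >2% plus low-BMI criterion
is_cachectic(3.0, 23)  # FALSE
```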
Weight trends were assessable in 14 of the 18 cancer patients enrolled according to the definition (see Methods section) and five patients presented with weight loss from baseline (TP1) to TP2, whereas seven patients showed weight loss >2% within at least one interval of consecutive time points during the entire time course ( Figure S1).
Comparison of metabolomics analyses in PS vs. DBS in healthy controls
Because we performed our metabolomics analysis in parallel in PS and DBS, we were interested in the correlation of metabolite levels between these two specimens. For this purpose, we analysed the data set of healthy controls. Distributions of the transformed data are shown in Figure S2A, and significant correlations are given in Table 1 (see Table S2 for the rest of the data set). The following metabolites or metabolite ratios showed very strong or strong correlations (ϱ ≥ 0.6): proline, alanine, phenylalanine, threonine, glycine and citrulline; free carnitine and decanoylcarnitine; the ratio Q11 (i.e. alanine/acetylcarnitine); and finally aminobutyric acid.
Comparison of metabolomics analyses in PS and DBS in healthy controls vs. cancer patients
Next, we compared metabolite levels in PS and DBS between cancer patients and healthy controls. Values of arginine, citrulline and histidine were significantly reduced in both DBS and PS in cachectic cancer patients compared with healthy controls (Table 2). Moreover, values of asparagine, glutamine, methylhistidine, methionine, ornithine, serine and threonine as well as the ratio of the branched-chain amino acids (BCAA) leucine/isoleucine were significantly reduced in DBS but not in PS of cancer patients compared with controls (Table 2). In contrast, ornithine levels were significantly increased in PS of cancer patients compared with controls. Finally, no clear pattern could be observed in this comparison with respect to the acylcarnitine measurements: Whereas hexanoylcarnitine (C6) was significantly decreased in DBS and hexadecenoylcarnitine (C16:1) and octenoylcarnitine (C8:1) in PS of the cancer patients compared with controls, free carnitine was significantly increased in the plasma (Table 2; see Table S3 for the remaining set of data and Figure S2 for the entire data set).
Because systemic inflammation represents a key pathophysiological mechanism in propagating development of cancer-related cachexia, we performed an additional analysis to identify potential correlations between IL-6 levels and the metabolomic parameters. We found no statistically significant correlation between baseline IL-6 levels and any of the metabolomic parameters studied, neither in PS nor in DBS (data not shown).
Next, pathway enrichment analysis (based on data from PS) indicated that cancer cachexia triggers histidine metabolism reprogramming (P = 0.0025) according to MetaboAnalyst's MetPa tool, which is related to alanine, aspartate and glutamate metabolism, the purine metabolism and the pentose phosphate pathway.
Table 1 note (see reference 15 and Table S5): According to the Spearman rho (ϱ) value, the strength of correlation is defined as follows: very strong if ϱ ≥ 0.8; strong, ϱ 0.6-0.79; moderate, ϱ 0.40-0.59; low, ϱ 0.20-0.39; ϱ < 0.20, no correlation. Only correlations with a q-value ≤ 0.05 following correction for multiple comparisons are included in the table (full data set, see Table S2).
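The strength categories quoted in the Table 1 note can be expressed compactly in R; the helper name and the test values below are illustrative only.

```r
# Map |rho| to the strength categories listed in the Table 1 note (illustrative helper).
rho_strength <- function(rho) {
  cut(abs(rho), breaks = c(0, 0.2, 0.4, 0.6, 0.8, 1), right = FALSE,
      include.lowest = TRUE,
      labels = c("no correlation", "low", "moderate", "strong", "very strong"))
}
rho_strength(c(0.85, 0.65, 0.45, 0.25, 0.10))
```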
Predictive metabolomics model of weight loss in cachectic cancer patients
To identify potential biomarkers related to weight loss, we performed linear regression analysis with weight loss in kg as response and metabolite levels as predictor variables. First, we focused on initial weight loss (i.e. loss from the historical weight to the weight at baseline [TP1]). Whereas some parameters both from baseline amino acid and acylcarnitine analyses (i.e. hydroxyproline, ornithine, tryptophan and leucine/isoleucine as well as C14 and C20:2) reached statistical significance in the primary analysis (P < 0.05), none of them retained statistical significance after correction for multiple testing (n = 74 analytes) (Table S4A).
Next, based on univariate linear regression analysis with weight loss in kg as response and metabolite levels as predictor variables, we searched for biomarkers related to early weight loss (i.e. between baseline [TP1] and TP2) and identified significantly (P < 0.05) decreased baseline levels of alanine (in DBS and PS) as well as baseline plasma ornithine and sarcosine levels. Regarding acylcarnitine, C8:1 (in PS), C10 and C18 (in DBS), C16OH in PS as well as C20:1 and the ratios Q2 (palmitoylcarnitine/acetylcarnitine) and Q3 (palmitoylcarnitine + octadecenoylcarnitine/acetylcarnitine) in DBS were altered with nominal significance in these patients. Following correction for multiple testing (n = 74 analytes), significance was no longer reached; however, a trend remained for C20:1 and the ratios Q2 (C16/C2) and Q3 (C16 + C18:1/C2), P = 0.1499, respectively (Table S4B).
Next, we compared baseline amino acid and acylcarnitine levels between cachectic patients with stable weight and those with weight loss (i.e. change of ≥2% between each time point and the subsequent one using the categories weight loss: yes/no) both in DBS and in PS.
Using Welch's two-sample t-test and these binary weight loss categories, according to our definition, Q2 and Q3 (in DBS) turned out to be predictive (P = 0.00013 and P < 0.0001, respectively) for further weight loss between baseline (TP1) and the first control time point (TP2), thus indicating early weight loss. These parameters maintained statistical significance after correction for multiple testing (including all n = 74 analytes), P = 0.017 for both (Table 3).
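A minimal R sketch of this group comparison is shown below; the grouping vector and the simulated Q2/Q3 values are invented placeholders, and the adjustment here covers only the two ratios for illustration, whereas the study adjusted over all 74 analytes.

```r
set.seed(3)
# Invented example: 5 patients with early weight loss (>2% from TP1 to TP2), 9 without.
weight_loss <- c(rep(TRUE, 5), rep(FALSE, 9))
Q2 <- c(rnorm(5, 0.08, 0.01), rnorm(9, 0.05, 0.01))   # C16 / C2 (illustrative values)
Q3 <- c(rnorm(5, 0.12, 0.02), rnorm(9, 0.08, 0.02))   # (C16 + C18:1) / C2

# Welch's two-sample t-test (unequal variances are the default in t.test()).
p <- c(Q2 = t.test(Q2[weight_loss], Q2[!weight_loss])$p.value,
       Q3 = t.test(Q3[weight_loss], Q3[!weight_loss])$p.value)

# The study adjusted over all 74 analytes; only the two ratios are adjusted here.
p.adjust(p, method = "BH")
```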
Moreover, we tested for weight loss between all successive time points (i.e. pairwise comparison between the remaining available successive time points) in metabolites reaching at least nominal significance for weight loss between TP1 and TP2. Again, Q2 and Q3 (in DBS) were shown to be the only remaining predictive parameters following correction for multiple testing, P = 0.0459 for both (Table 4).
Pathway metabolite enrichment analysis indicated alterations in serine/glycine metabolism in PS of cachectic patients with continued weight loss (P = 0.017, according to MetaboAnalyst's MetPa tool). In addition, pathway plasma metabolite enrichment analysis indicated an altered tryptophan metabolism network (P = 0.0347, according to MetaboAnalyst's MetPa tool) in both PS and DBS in cachectic cancer patients with early weight loss between baseline (TP1) and the first control time point (TP2).
Table 2 note: Negative values correspond to an overall lower mean in the patient group compared with the control group, and positive values to a higher mean. Only significant differences in either DBS or PS (with a q-value ≤ 0.05 following correction for multiple comparisons) are reported. The remaining set of data is given in Table S3. 95% CI, 95% confidence interval; DBS, dried blood sample.
Discussion
Early multimodal interventions (i.e. nutritional support, exercise training and anti-inflammatory interventions) have been suggested for the treatment of cancer-related cachexia. Consequently, clinical recognition of malnutrition and cachexia is important. 2 Thus, there is an urgent need to identify biomarkers predicting continued weight loss in cachectic cancer patients. Such markers could both trigger early intensification of supportive interventions and provide insights into the pathophysiology of this phenomenon, which could help to identify potential novel treatment targets. This prospective case-control study relies on tandem MS-based analysis of DBS and PS in a cohort of cachectic gastrointestinal cancer patients and matched healthy controls to identify markers predictive for further weight loss during first-line cancer treatment. Loss of muscle mass (i.e. sarcopenia) is a well-defined negative prognostic marker in cancer patients. Pathophysiologically, fatty acid and muscle metabolism are interrelated in triggering muscle loss. 16 Consequently, we chose to analyse both amino acid (i.e. related to protein/muscle metabolism) and acylcarnitine (i.e. related to fatty acid/energy metabolism) metabolism profiles. Such analyses have traditionally been performed in PS. Here, we introduced analysis in DBS and performed analyses in parallel in both analytical compartments. Specifically, whereas plasma has been used for amino acid analysis for decades, acylcarnitine can be measured by MS in DBS with high accuracy and sensitivity, which led to the widespread use of MS-based analysis of DBS for routine newborn screening to identify rare inborn metabolic defects covering a wide spectrum of urea cycle, amino acid, organic acid and fatty acid metabolic disorders including CPT 1 and CPT 2 deficiencies. 17 We found high correlations for a number of amino acids between PS and DBS, although, specifically, long-chain acylcarnitine levels did not correlate (Tables 1 and S2). As acylcarnitines play a central role in energy metabolism and are generated at the inner mitochondrial membrane of cells, the cellular content in DBS (mostly leucocytes) vs. PS may represent a possible explanation for these findings, which were made exclusively in DBS. Several studies indeed indicate that amino acid levels in plasma and DBS are highly correlated. For certain amino acids, however, lower concentrations have been found in DBS compared with plasma, and although the reasons for this bias are not fully understood, lower extraction efficiency may play a role. 18,19 For acylcarnitines, differences have also been described between plasma and DBS, and this mostly accounts for long-chain acylcarnitines that are endogenously present in normal erythrocytes 20 or can be absorbed in red blood cells. 21 Comparing our cohort of cachectic cancer patients with matched controls, the following alterations in the levels of different amino acids in cancer patients were identified: Arginine, citrulline and histidine were significantly reduced in both DBS and PS of cachectic cancer patients. Moreover, asparagine, glutamine, leucine/isoleucine, methylhistidine, methionine, ornithine, serine and threonine were reduced in DBS (for a full overview of differences, see Tables 2 and S3). Interestingly, ornithine levels were significantly decreased in DBS but significantly increased in PS. This may again reflect the contribution of cellular components in DBS.
In this respect, increased levels of ornithine-decarboxylase activity have been described in peripheral blood leucocytes from patients with chronic lymphocytic leukaemia. 22 On the other hand, and in line with our findings, arginine levels were found to be decreased in the plasma of cancer patients, and conversion of arginine to ornithine by arginase was proposed as a mechanism. 23 Interestingly, we also found a decreased ratio of arginine/ornithine in PS, possibly related to an increase in arginase activity in the tumour microenvironment. 24 With respect to arginine, decreased levels are often found in malignant tumours, and conversion of arginine to ornithine by arginase produces polyamines (putrescine, spermine and spermidine) that promote tumour proliferation and aggressiveness through modulation of the global chromatin structure. 25 Moreover, depletion of arginine contributes to suppression of cytotoxic T-cell proliferation. 26 In line with our findings, in a cohort of cachectic cancer patients (breast, colorectal and pancreatic cancer) with different levels of weight loss, decreased plasma free arginine concentrations were found to be tumour related, independently from weight loss. 23 The ratio of the essential BCAA leucine/isoleucine was reduced in our cohort of cancer patients. BCAA constitute a high-level source of acetyl-CoA sustaining the Krebs cycle or lipogenesis. Moreover, acetyl-CoA is required for histone and protein acetylation, providing a link to epigenetic modifications and tumour growth. 27 In addition, enzymes catalysing the first step of BCAA catabolism are overexpressed in many cancers, 28 again underscoring their role in cancer cell metabolism. Finally, histidine and methylhistidine were significantly decreased in both DBS and PS of our cachectic cancer patients. It has recently been demonstrated that histidine levels correlate to some extent with the total amount of pro-inflammatory cytokines. 29 In our cohort, however, there was no correlation with IL-6 levels. This discrepancy might be explained by the marked differences in the clinical characteristics of the cohort analysed by Sirnio et al. (i.e. mostly colorectal cancer patients, 13.1% Stage IV only) compared with our cohort. We also observed a global activation of the histidine metabolic pathway, according to PS measurements, in cachectic cancer patients compared with healthy controls (P = 0.0025). According to the metabolite enrichment analysis, this pathway is associated with the pentose phosphate pathway, alanine, aspartate and glutamine metabolism and thus with purine metabolism, including all amino acids found to be significantly altered in our cancer cohort. In line with our findings, Cala et al. 6 recently reported a decrease of histidine derivatives in cachectic cancer patients. The cancer patients included in our study all met the definition of cachexia at baseline on the basis of a weight loss >5%, which was the predominant defining criterion according to the consensus criteria. 3 None of them had a BMI < 20 kg/m2. Accordingly, none of the controls were underweight (i.e. BMI category < 18.5 kg/m2). Due to this fact and due to the lack of a group of cancer patients without cachexia according to the consensus criteria, it is not possible to delineate a specific role of amino acid biomarker alterations with respect to cachexia independently from cancer-related alterations in a strict sense.
In addition, our limited sample size does not allow for a multivariable analysis of correlated metabolites, which would be warranted when analysing closely related analytes. Overall, however, there is a broad body of evidence from the literature that amino acid metabolomic profiles in PS or DBS are dominantly influenced by cancer cell metabolism itself, thus restricting the value of single amino acids as specific cancer cachexia markers.
In line with this hypothesis, alterations in single amino acid levels were not predictive for further weight loss in our cohort. However, pathway analysis based on data derived from PS indicated activation of the glycine/serine pathway related to weight loss for successive time points (P = 0.017) as well as tryptophan pathway activation (P = 0.035) related to early (i.e. between TP1 and TP2) weight loss. Serine and glycine are related to the one-carbon metabolism in cancer cells 30 and were shown to be involved in cell transformation and malignancy. 31 Moreover, it has been demonstrated recently that glycine administration attenuated muscle wasting in a mouse model of cancer cachexia. 32 Tryptophan plays a pivotal role during T-cell activation, and indoleamine-(2,3)-dioxygenase (IDO), which is induced by interferon-γ (IFN-γ), mediates degradation of tryptophan. In addition, a relation of tryptophan metabolism with inflammation is well established. 33 In cachectic patients suffering from haematological malignancies, weight loss was associated with immune activation (i.e. IFN-γ activity) as well as decreased serum tryptophan levels. 34 Similarly, Iwagaki et al. 35 demonstrated decreased serum tryptophan levels in cachectic gastrointestinal cancer patients compared with controls, and reduced nutritional state and further weight loss were associated with increased levels of neopterin and consumption of tryptophan. Against this background, IDO inhibitors, which are currently being developed as cancer drugs for immunotherapy, 36 might also be interesting for their potential effects in cancer cachexia. Interestingly, in a CT-26 murine tumour model, inhibition of IDO decreased tumour growth kinetics and efficiently prevented loss of body weight. 37 With respect to energy metabolism-related metabolomic markers, we could demonstrate that both the Q2 and Q3 ratios in DBS of cachectic cancer patients predicted further weight loss according to binary categories (yes/no) between baseline evaluation (TP1) and TP2 during treatment. Q2 and Q3 are ratios with long-chain acylcarnitines in the numerator, palmitoylcarnitine (C16) for Q2 and palmitoylcarnitine plus octadecenoylcarnitine (C16 + C18:1) for Q3, and acetylcarnitine (C2) in the denominator for both. They reflect the activity of mitochondrial carnitine palmitoyltransferases 1 and 2, corresponding to carnitine shuttle (CS) activity. The CS plays a pivotal role in fatty acid metabolism, allowing the transport of long-chain fatty acids across the impermeable mitochondrial membranes. These are usually esterified in the cytosol by coenzyme A (CoA) to form acyl-CoA thioesters before beta-oxidation in mitochondria. The CS comprises four enzymes, namely, carnitine palmitoyltransferases I (CPT I) and II (CPT II), the carnitine acetyltransferase (CrAT) and the carnitine-acylcarnitine translocase (CACT), which are involved in the bidirectional transport of acyl-CoA and carnitine in exchange with CoA and acylcarnitine from the cytosol to mitochondria. 38 In body fluids (i.e. PS and DBS), acylcarnitine profiles are a diagnostic test not only for inherited disorders of fatty acid metabolism but also for defects in BCAA catabolism.
Specifically, elevation of Q2 or Q3, as seen in the patients in our cachectic cancer cohort who lost further weight compared with cachectic cancer patients maintaining or even gaining weight, represents an established marker for defective CPT II function 39 in the clinical setting of newborn screening, which results in impaired β-oxidation and/or liver ketogenesis. 21 Interestingly, back in 1998, Seelaender et al., 40 using a model of Walker 256 carcinosarcoma-bearing rats affected by severe cachexia, found a marked reduction (i.e. −56%) of activity of the isolated mitochondrial inner-membrane CPT II in comparison with non-tumour-bearing, non-cachectic control animals and identified a lower-weight, catalytically less active isoform of CPT II, possibly induced by tumour-derived factors (i.e. TNF-α). Later on, the same group showed reduction of CPT II activity in hepatic sinusoids, possibly contributing to the progression of cachexia. 41 In line with these findings, in a cachexia mouse model of colon-26 adenocarcinoma, Liu et al. reported decreased mRNA expression levels and enzymatic activity of CPT I and CPT II and significantly reduced amounts of free carnitine and acetyl-carnitine compared with normal controls. 42 Oral administration of high doses of L-carnitine in this murine model 42 or in a rat model of cachexia 43 improved the condition by reducing serum levels of IL-6 and TNF-α and by restoring CPT expression and activity, thereby enhancing beta-oxidation and decreasing adipose tissue breakdown. 42 As the predictive alterations in Q2 and Q3 were only found in DBS, it might be speculated that the metabolomic changes related to the CS described so far in the liver in experimental models 40,42 are reflected similarly in the cellular compartment (i.e. predominantly leucocytes) of DBS.
In summary, our results indicate that changes in acylcarnitine profiles measured in DBS and related to the CS are predictive for further weight loss in cachectic cancer patients. Moreover, pathway analyses indicated an involvement of the serine/glycine and the tryptophan pathway. Due to the small sample size, the findings are hypothesis generating and should be validated in larger cohorts.
Online supplementary material
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Figure S1. Single patient's weight variations over all time points. All patients were consecutively enrolled. Pat. 13 was excluded after study enrolment because of absence of tumor tissue in the biopsy samples. Numbers on the x-axis represent time points of measurements ranging from 1 (baseline) up to 7.
Figure S2. A) Comparison of control standardized metabolite measurements in DBS and PS; B) Comparison of case-control differences among pre-processed metabolite measurements in plasma and dried blood.
Table S1. Baseline clinical characteristics and weight variations. Abbreviations: UICC, Union internationale contre le cancer; BMI, body mass index.
Table S2. Spearman correlation between metabolites in DBS and PS determined in the control cohort.
Table S3. Comparison of estimated mean differences of standardized measurements between patients and controls of amino acids, carnitine, free carnitine, acyl-carnitines and free fatty acids in PS and corresponding DBS. Negative values correspond to an overall lower mean in the patient group compared to the control group, positive values to a higher mean. A q value < 0.05 indicates statistical significance following correction for multiple testing.
Table S4. A) Linear regression analysis of metabolome profile predictive for initial weight loss as a continuous variable (according to numeric weight loss from the historical weight to the weight at TP1); B) Linear regression analysis of metabolome profile (according to numeric weight loss) from baseline to the second scheduled evaluation (i.e., early weight loss). Only results with p-value < 0.05 are reported; q-values represent results following correction for multiple testing.
Table S5. Complete overview of metabolites and metabolite ratios analyzed and mapped in the reference metabolite database as published by Burkhardt et al. [15]. HMDB and PubChem identifiers are indicated.
"Biology"
] |
Intelligent Gamification Mechanics Using Fuzzy-AHP and K-Means to Provide Matched Partner Reference
Players in the Small and Medium Enterprise (SME) collaboration gamification system need suitable partner references to support the goals of their activities. This study aims to build an intelligent system gamification mechanics model that provides proper partner references for players. The following steps are carried out sequentially in this research. First, analyze the needs for a recommendation model that supports partner references. Second, design an intelligent system formula using the Fuzzy-Analytical Hierarchy Process (Fuzzy-AHP) and K-Means algorithms to obtain partner reference recommendation patterns and segmentation of similarity of interests between partners. Third, compile the scenario of the recommendation model mechanics, which covers the actors and activities involved in the model. Fourth, design use cases and activity diagrams to translate the scenarios into program flow. Fifth, code the programs related to the use cases and activity diagrams. Sixth, conduct experiments with the resulting prototype to test all the functions of the proposed model. Fuzzy-AHP produces a weight for each tested data record, which can be treated as a ranking, with the highest weight value being 9,980. K-Means produces 3 clusters in which, based on this experimental data, the third cluster has the most members. Both models are realized in the dashboard, and referring to experiments with 63 respondents, the model shows its performance by displaying SME rankings and clusters according to the data and criteria being tested. Intelligent system algorithms are used to develop models of gamification mechanics, primarily to support player decisions in determining more effective game steps. This model can work well only if sufficient data support it. Therefore, the proposed mechanics depend on game activities; the more data are available to be extracted, the more precise the recommendations become.
Introduction
Small and Medium Enterprise (SME) is one of the essential components of the country's economy because its existence contributes to the absorption of labour and an increase in per capita income. However, there are many challenges faced [1][2][3]. Some of the challenges include weak information exchange, low activity and retention, and low motivation to collaborate [2,[4][5][6][7]. Several studies reported that SMEs are reluctant to collaborate because of the lack of information regarding appropriate partner references in collaborating [1,2] and the lack of effective exchange of information and good knowledge extraction between SMEs [4,8].
Meanwhile, partner reference recommendations are an important part of collaboration activities [1,8].
The accuracy of partner references determines the success of collaboration between SMEs [1,8]. Appropriate partner references arise from similarity of interest, mutual need, and interdependence [1,2,4,8]. The results of the review study found nine (9) studies that proposed matters relating to extracting data on SMEs [1,2,4,[8][9][10][11]. Most of these studies are at the conceptual level (8 studies), and their topics mostly concern proposed concepts of data extraction models for information exchange. This indicates that research on SME data extraction is still mostly at the conceptual level, leaving broad room for development, and subsequent research needs to create more concrete and specific models. Meanwhile, one study proposes extracting data related to collaboration partner references and has produced a partner reference model. However, it is limited to a single criterion and is generated from data that are difficult to quantify, so it is prone to bias [11]. In summary, the literature review shows that there are still few appropriate approaches for presenting references for suitable SME partners in collaborating, and there are not many appropriate mechanisms for collecting and extracting information into new knowledge within the SME network [1,2,4,8].
Because collaboration retention is low, the collaboration framework approach should be chosen based on how attractive or acceptable the characteristics of the approach are. Gamification is one such approach; it is currently being developed, has become part of the lifestyle of today's society, and aims to increase user participation and motivation and to influence user behavior [12][13][14]. Gamification is the process of imitating a fun and even addictive gameplay atmosphere while players complete nongame tasks [14][15][16]. Gamification seeks to bring together functionality and engagement to increase productivity and satisfaction, create richer experiences, drive behavior, and generate positive business impact [17]. Following the principles of the MDE framework model, the essential components of gamification consist of mechanics, dynamics, and emotions; these components cannot be separated, because the mechanical element (M) creates the dynamics of the game and, in turn, its emotional atmosphere for the players [17][18][19]. The success of a gamified system lies in the application of game mechanics according to the characteristics of the players [19]. Given these characteristics, gamification is suitable as an SME collaboration framework platform; we expect it to make collaboration more exciting and to increase retention. Therefore, addressing the two problems above requires building an intelligent system within a collaborative gamification mechanics model that can extract knowledge to produce suitable partner references for SME actors.
Several intelligent system approaches can be applied in this research. Among them is the fuzzy-analytical hierarchy process (Fuzzy-AHP), which has the potential to provide more precise weight values when weighting data against several criteria [20,21]. Fuzzy-AHP has the advantage of weighting more precisely when the criteria have subjective and uncertain characteristics, so that the resulting reference is claimed to be more precise [21][22][23]. The K-Means clustering approach, in turn, has the potential to group objects based on their characteristics [24], so that objects with the same characteristics are grouped in the same cluster and objects with different characteristics are grouped into other clusters [25,26]. This condition fits the need to map the position of each SME, making it easier for them to identify partners who have the same interests.
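As a rough illustration of the segmentation step, the base R kmeans() function can cluster SMEs on criterion scores; the four column names mirror the criteria named later in this paper, while the scores, the 1-5 scale and the random seed are invented for illustration and are not the study data.

```r
set.seed(4)
# Mock criterion scores (1-5) for 63 SMEs on the four criteria used in this study.
sme <- data.frame(scope       = sample(1:5, 63, replace = TRUE),
                  market      = sample(1:5, 63, replace = TRUE),
                  product     = sample(1:5, 63, replace = TRUE),
                  marketplace = sample(1:5, 63, replace = TRUE))

# Partition the SMEs into 3 clusters, as in the reported experiment.
km <- kmeans(scale(sme), centers = 3, nstart = 25)

table(km$cluster)   # cluster sizes: SMEs sharing a cluster are candidate partners
km$centers          # standardized cluster centroids for interpreting each segment
```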
This study introduces the proposed "Intelligent Gamification Mechanics (IGM)" model. The model embodies gamification mechanics built on an intelligent system that provides knowledgeable recommendations to players. The IGM formula uses the fuzzy-AHP [24] and K-Means [25] algorithms to provide two mutually supporting knowledge recommendations for producing a suitable partner reference.
The fuzzy-AHP formula produces recommendations in the form of a ranking of suitable SME partners [27]. The K-Means formula produces an SME segmentation map that shows each player's position in a group with the same interests and potential to collaborate. The model is built on a gamification platform to make it more attractive and interactive, and it can be embedded in a collaboration framework to recommend suitable partner references to collaborators.
This study reports the results of our research on the performance of the IGM model. The experiment used data from 63 respondents matching the criteria used in the algorithms. The prototype demonstrates the model's ability to present a suitable ranking of SME partners while at the same time presenting the mapping/positioning of SMEs in groups that have the same needs and great potential for collaboration. This research resulted in three contributions: first, an intelligent system formula in gamification mechanics; second, a leader board prototype that displays a suitable partner ranking and SME segmentation mapping. Future research can apply or develop this model to improve collaboration partner references in various fields; the model can be extended by adding criteria and clusters as needed, and the results can be compared and analyzed.
Materials and Methods
There are six method steps (Figure 1). First, analyze the need for a recommendation model that supports the provision of appropriate partner reference information. Second, design an intelligent system formula using the fuzzy-AHP and K-Means algorithms to obtain partner reference recommendation patterns and a segmentation of shared interests between partners.
Third, develop a scenario for the recommendation model mechanics involving the actors and activities in the model. Fourth, design use cases and activity diagrams to translate the scenarios into program flow. Fifth, code the programs following the use cases and activity diagrams. Sixth, experiment with the resulting prototype to test all the functions of the proposed model. The first stage is to build an intelligent system formula that produces two recommendation models.

Fuzzy-AHP Algorithm.

Fuzzy-AHP was first proposed by Chang and is a direct development of the AHP method in which the matrix elements are represented by fuzzy numbers [20,21]. Fuzzy-AHP combines the AHP method with the fuzzy concept approach and covers a weakness of AHP, namely, handling criteria with more subjective characteristics [20,21,26]. The uncertainty of the judgments is represented by an ordered scale. The fuzzy-AHP method uses a fuzzy ratio called the triangular fuzzy number (TFN) in the fuzzification process. The TFN consists of three membership values: the lowest value (l), the middle value (m), and the highest value (u) [21,26]. The steps of the FAHP method are as follows [20,21]:
(1) Arrange the problem in a hierarchical form
(2) Compile a comparison matrix between all elements/criteria
(3) Calculate the consistency ratio (CR) from the comparison matrix, with the condition that the CR value does not exceed 0.1
(4) Convert the weighted results into fuzzy numbers using the TFN scale
(5) Calculate the fuzzy geometric mean and fuzzy weights
(6) Determine the fuzzy priority for each alternative using linguistic variables
In this study, SME partner ranking recommendations apply the fuzzy-AHP algorithm with four criteria: scope, market, product, and marketplace. All fuzzy-AHP steps and formulas are compiled and tested with dummy data to ensure the calculations are correct. A sketch of the pairwise comparison and consistency check in steps (2) and (3) is given below.
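The following is a minimal, illustrative sketch of steps (2) and (3): building a reciprocal pairwise comparison matrix over the four criteria and checking Saaty's consistency ratio (CR ≤ 0.1). The judgment values in the matrix are placeholders, not the study's data.

```python
# Illustrative sketch of FAHP steps (2)-(3): a pairwise comparison matrix over the
# four criteria and a consistency-ratio check (CR <= 0.1). Judgments are placeholders.
import numpy as np

criteria = ["scope", "market", "product", "marketplace"]
# Reciprocal pairwise comparison matrix (hypothetical judgments).
A = np.array([
    [1.0, 2.0, 3.0, 2.0],
    [1/2, 1.0, 2.0, 1.0],
    [1/3, 1/2, 1.0, 1/2],
    [1/2, 1.0, 2.0, 1.0],
])

# Principal eigenvector gives the crisp AHP weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
CR = CI / RI

print(dict(zip(criteria, weights.round(3))), "CR =", round(CR, 3))
assert CR <= 0.1, "judgments are inconsistent; revise the comparison matrix"
```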
K-Means Algorithm.
The K-Means algorithm is an unsupervised machine learning algorithm. In data analysis, K-Means clustering is a method that groups data with a partitioning scheme [24][25][26]. K-Means clustering is also a nonhierarchical cluster analysis method that seeks to partition the existing objects into one or more clusters, grouping objects according to their characteristics: objects with the same characteristics are grouped in the same cluster, and objects with different characteristics are grouped into other clusters [26,28]. The K-Means steps are as follows [28][29][30] (a sketch of this clustering loop is given at the end of this section):
(1) Perform data preprocessing followed by data transformation; then, determine the number of clusters (the number K)
(2) Choose centroids at random, as many as the specified number K
(3) Calculate the distance from each object to each centroid and group objects by minimum distance
(4) If any object moves to a different cluster, the iteration process continues
(5) If no object moves, the last clusters are recorded as the result
In this study, the SME segmentation recommendation applies the K-Means clustering algorithm with three clusters and randomly chosen centroids. K-Means operates on data containing four criteria (scope, market, product, and marketplace). After the K-Means steps are defined, the formula is tested with dummy data to ensure the calculations are correct.

The second stage is to define the mechanics components, including the player and the leader board. The player is defined by definition, status, and access rights, while the leader board is defined by description, user interface, and access pattern. The third stage is to define the rules and requirements needed to reach the goal, namely, displaying player rankings and segmentation accurately on the leader board. Together, the second and third stages produce a mechanics movement module. The fourth stage is to compose a mechanics narrative according to the completed mechanics movement module; at this stage, the mechanics are documented with the detailed steps of each path from actor to system and vice versa, from system to system, and from actor to actor. The narrative also explains the origin of the data processed in the intelligent system. The fifth stage is to complete the mechanics with use cases and activity diagrams that translate the program flow and all activities in the mechanics. After this stage is complete, the model is ready to be implemented in code at the next stage. The sixth stage is coding the program in a web programming language, with the coding flow following the use cases and activity diagrams; its completion means the IGM prototype is ready to be tested. The last stage is to experiment by entering data from 63 respondents, on the condition that the data have been preprocessed and transformed. Respondent data include a general identity and the four criteria used in the intelligent system formula. Experiments were carried out to observe all prototype functions and the performance results of the IGM model.
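Below is a minimal sketch of the K-Means loop listed in the steps above (random centroids, nearest-centroid assignment by Euclidean distance, recomputed means, stopping when assignments no longer change). The sample records and random seed are placeholders, not the study's respondent data.

```python
# Minimal K-Means sketch following the listed steps: pick K centroids, assign each
# SME record to the nearest centroid (Euclidean distance), recompute centroids,
# and stop when assignments no longer change. Sample rows are placeholders.
import random
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(records, k=3, max_iter=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(records, k)          # step (2): random initial centroids
    assignment = [None] * len(records)
    for _ in range(max_iter):
        new_assignment = [min(range(k), key=lambda c: euclidean(r, centroids[c]))
                          for r in records]      # step (3): nearest-centroid grouping
        if new_assignment == assignment:         # step (5): stop when nothing moves
            break
        assignment = new_assignment
        for c in range(k):                       # recompute centroids as cluster means
            members = [r for r, a in zip(records, assignment) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assignment, centroids

# Each record: [scope, market, product, marketplace] on an ordinal scale (placeholders).
data = [[2, 1, 1, 1], [3, 3, 3, 3], [2, 2, 4, 2], [1, 1, 2, 1]]
print(kmeans(data, k=3))
```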
The Proposed Model
This section reports the details of the proposed model in relation to the method steps described in Figure 1. Figure 2 describes the general flow of the IGM model. The model is built on a gamified platform that adopts a leader board and dashboard to showcase the mechanics of the intelligent system.
From Figure 2, the model is detailed into the following steps, which describe the flow of the proposed model.
SME Reference Formula with Fuzzy-AHP.
This section describes the flow of the player reference formula with fuzzy-AHP using dummy data. The first step in the fuzzy-AHP process is to tabulate the data; dummy data (Table 1) are used as an experiment to ensure the model works correctly. Table 1 presents dummy data for 4 SME players to be ranked as reference partners, along with the four criteria possessed by each player, namely, scope, market, product, and marketplace. These four criteria were chosen by considering the analysis of the needs and availability of SME data, and they can of course change if the model is applied to data in different situations and fields.
The triangular fuzzy number (TFN) is used in the fuzzification process and consists of three membership values, namely, the lowest value (l), the middle value (m), and the highest value (u) [20,26]. Determination of the TFN is guided by the linguistic variables and triangular fuzzy numbers in Table 2.
Step 1: define a priority comparison of the criteria using the TFN scale (Table 2). The guidelines for determining the TFN scale from the weight of each criterion were drawn from expert opinion and a literature review [2,3] in the SME sector, adjusted to the TFN value guidelines (Table 3). Then, the priority value between criteria (Table 4) is determined by assigning the value 1 to two criteria with the same weight and the difference of the weights to two criteria that differ.
Step 2: determine the pairwise comparison matrix between criteria with the TFN scale expressed in decimal values (Table 5).
Step 3: determine the fuzzy synthesis (S_i) values following the FAHP calculation steps. First, calculate the total lower (l), middle (m), and upper (u) values in each column of Table 5, taking C1 as the example. For the lower values, the column totals of the other criteria are c2 = 2.20, c3 = 5.50, and c4 = 5.17, obtained in the same way from the data in Table 5. For the middle values, the totals are C2 = 2.62, C3 = 7, and C4 = 6.17, and for the upper values C2 = 3.73, C3 = 6.67, and C4 = 8, all computed in the same way from Table 5.

Step 4: calculate the fuzzy synthesis values at the lower, middle, and upper levels. Each S_i is the (l, m, u) row total of criterion i multiplied by the inverse of the grand totals (the lower component is divided by the upper grand total and the upper component by the lower grand total). S_1 is computed from the totals above; S_2, S_3, and S_4 are computed with the same formula using the data in Table 6.
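As an illustration of Steps 3 and 4, the sketch below computes the fuzzy synthesis extents S_i in the usual Chang formulation: row totals of the TFNs multiplied by the inverted grand totals. The TFN comparison matrix is a placeholder standing in for Table 5, so the printed values will not match Table 6.

```python
# Sketch of the fuzzy synthesis extent S_i (Chang's extent analysis): sum the TFNs
# in each criterion's row, then "divide" by the grand total with the (l, m, u)
# components inverted and reversed. The TFN matrix below is a placeholder for Table 5.
def fuzzy_synthesis(matrix):
    # matrix[i][j] is the TFN (l, m, u) comparing criterion i against criterion j
    row_sums = [tuple(map(sum, zip(*row))) for row in matrix]
    grand = tuple(map(sum, zip(*row_sums)))           # (sum of l, sum of m, sum of u)
    inv = (1 / grand[2], 1 / grand[1], 1 / grand[0])  # inverted and reversed
    return [(r[0] * inv[0], r[1] * inv[1], r[2] * inv[2]) for r in row_sums]

tfn_matrix = [
    [(1, 1, 1), (1, 3/2, 2), (2, 5/2, 3), (1, 3/2, 2)],
    [(1/2, 2/3, 1), (1, 1, 1), (1, 3/2, 2), (1/2, 1, 3/2)],
    [(1/3, 2/5, 1/2), (1/2, 2/3, 1), (1, 1, 1), (1/2, 2/3, 1)],
    [(1/2, 2/3, 1), (2/3, 1, 2), (1, 3/2, 2), (1, 1, 1)],
]
for i, s in enumerate(fuzzy_synthesis(tfn_matrix), start=1):
    print(f"S{i} = ({s[0]:.3f}, {s[1]:.3f}, {s[2]:.3f})")
```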
Step 5: determine the value of the fuzzy-AHP priority vector (V) following the FAHP calculation step, specifically equation (12), where m_i is the triangular fuzzy number of criterion Ci. For example, calculating the vector value for C1 with the data in Table 7, the comparison involving m_1 and m_3 (0.20 versus 0.35) yields the value 0.122. The vectors for the remaining cells are calculated with the same equation, and all the priority vector results are presented in Table 8.
Step 6: determine the value of the defuzzification ordinate d′ for k = 1, 2, . . ., n, k ≠ i; this process produces a vector weight. Then, applying this here, d′(C1) = min(C1, C2, C3, C4), which produces the data shown in Table 9.
Step 7: normalize the fuzzy vector weight (W) based on the FAHP calculation step in equation (6).
The normalization of the fuzzy vector weight value (W) is given by equation (6), where A_i (i = 1, 2, . . ., n) are the decision elements. After normalizing with the W′ equation, the normalized vector weight (see Table 10) takes the form of equation (7), where W is a nonfuzzy number and the sum of its components equals 1.
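The sketch below illustrates Steps 5 to 7 in the standard Chang formulation: the degree of possibility V(S_i ≥ S_k) between two TFNs, the ordinate d′(C_i) taken as the minimum over the other criteria, and normalization into the crisp weight vector W. The synthesis values S are placeholders, not the values in Table 8.

```python
# Sketch of Steps 5-7: degree of possibility V(S2 >= S1) between two TFNs,
# the ordinate d'(Ci) = min over k != i of V(Si >= Sk), and normalization of
# the resulting vector into crisp weights W (the usual Chang formulation).
def possibility(s2, s1):
    """V(S2 >= S1) for TFNs s = (l, m, u)."""
    l1, m1, u1 = s1
    l2, m2, u2 = s2
    if m2 >= m1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((m2 - u2) - (m1 - l1))

def crisp_weights(synthesis):
    d = []
    for i, si in enumerate(synthesis):
        d.append(min(possibility(si, sk) for k, sk in enumerate(synthesis) if k != i))
    total = sum(d)
    return [x / total for x in d]          # normalized so the weights sum to 1

# Placeholder fuzzy synthesis values, one TFN per criterion.
S = [(0.24, 0.36, 0.52), (0.14, 0.22, 0.36), (0.09, 0.14, 0.23), (0.15, 0.23, 0.39)]
print([round(w, 3) for w in crisp_weights(S)])
```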
Then, the weight vector of each criterion, which represents the weight of each alternative, is normalized so that the weight values total 1. The decision results are then ranked by calculating the total score with equation (9), where S_j is the score of alternative j, s_ij is the score of alternative j on criterion i, and w_i is the weight of criterion i. The outputs of these calculations determine which score is the highest; the alternative with the greatest score is the best recommendation. Table 11 contains the maximum and minimum values for each criterion.
Considering the vector weight of the criteria (W) using equation (9), the procedure is applied first to C1; for C2, C3, and C4, the same equation is used, and the overall weight vector values are shown in Table 12.
The score is determined by multiplying the vector weight (w) (Table 9) by the weight vector (w) of each criterion (Table 12), which represents the weight of each alternative, as in equation (9); the overall score of alternative 1 (A1) is calculated in this way. From the fuzzy-AHP ranking results, the SME ranking is generated according to the criteria set in the fuzzy-AHP calculation (Table 13). The ranking results in this model use dummy data to ensure that the input and output functions of the process run as intended. The weight score determines the ranking: the higher the value, the higher the ranking of an alternative.
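A small sketch of the ranking in equation (9): each alternative's total score is the sum of its per-criterion value weighted by the criterion weight, and alternatives are ranked by descending score. The criterion weights and per-criterion values below are placeholders loosely modeled on the dummy tables, not the study's computed weights.

```python
# Sketch of the ranking in equation (9): each alternative's total score is the sum of
# its per-criterion value s_ij weighted by the criterion weight w_i; alternatives are
# ranked by descending score. All numbers below are placeholders.
criteria_weights = {"scope": 0.40, "market": 0.25, "product": 0.15, "marketplace": 0.20}

alternatives = {
    "A1-SME 1": {"scope": 1.0, "market": 0.0, "product": 0.0, "marketplace": 0.0},
    "A2-SME 2": {"scope": 1.0, "market": 0.0, "product": 1.0, "marketplace": 1.0},
    "A3-SME 3": {"scope": 0.0, "market": 1.0, "product": 1.0, "marketplace": 1.0},
    "A4-SME 4": {"scope": 0.0, "market": 0.0, "product": 0.5, "marketplace": 1.0},
}

scores = {name: sum(criteria_weights[c] * s for c, s in vals.items())
          for name, vals in alternatives.items()}
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(ranking, start=1):
    print(rank, name, round(score, 3))
```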
This model is devoted to ranking SME players according to the criteria suitable for collaborating with a given player. The higher the weight score, the higher the ranking of the SME partner chosen as a suitable partner. From the test with dummy data, the ranking of the data is shown in Table 14.
SME Segmentation Formula with K-Means
Step 1: tabulate the data using dummy data. Then, determine the number of clusters for the first iteration; the number of clusters and the initial cluster centers (denoted K) are chosen at random in this iteration [29]. In this model design, 3 clusters (K = 3) are determined by choosing randomly from the data, with the centroid details in Table 15. They can then be notated as C1 (2,1,1,1), C2 (3,3,3,3), and C3 (2,2,4,2).
Step 2: calculate the distance from each data item to each centroid using the Euclidean distance formula (equation (10)), D(S_n, C_k) = sqrt(Σ_i (x_ni − c_ki)²). Using the data in Table 14, the distance of the data from the centroid is computed for each criterion; D(S_n, C_1) is obtained this way, as in the calculation results in Table 16, column Cr1, and the same formula gives D(S_n, C_2) and D(S_n, C_3), with the results described in Table 16.
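A short sketch of Step 2 follows: the Euclidean distance D(S_n, C_k) from each record to each centroid, using the three dummy centroids noted in Step 1. The fourth record is a placeholder added only to show an assignment that is not itself a centroid.

```python
# Sketch of Step 2: Euclidean distance from each SME record S_n to each centroid C_k,
# D(S_n, C_k) = sqrt(sum_i (x_ni - c_ki)^2), using the dummy centroids from Step 1.
import math

centroids = {"C1": (2, 1, 1, 1), "C2": (3, 3, 3, 3), "C3": (2, 2, 4, 2)}
records = {"SME 1": (2, 1, 1, 1), "SME 2": (3, 3, 3, 3),
           "SME 3": (2, 2, 4, 2), "SME 4": (1, 2, 2, 1)}   # SME 4 is a placeholder row

def distance(x, c):
    return math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, c)))

for name, x in records.items():
    dists = {k: round(distance(x, c), 3) for k, c in centroids.items()}
    nearest = min(dists, key=dists.get)        # Step 3 uses this minimum distance
    print(name, dists, "-> cluster", nearest)
```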
Step 3: group the data by centroid according to the shortest distance for each item. This is done by finding the smallest value among D(S_n, C_1), D(S_n, C_2), and D(S_n, C_3); each item is assigned to the cluster whose Euclidean distance is the smallest. The results of determining the clusters can be seen in Table 17.
Step 5: repeat the iteration as before with the updated centroids, namely, calculate the distance of the data to the centroids using the Euclidean distance formula (10). Based on the data in Table 13, the distance of the data from the centroid for each criterion is as follows.
D(S_n, C_1) is obtained with the same method as the calculation results in Table 19, column Cr1, and the same formula gives D(S_n, C_2) and D(S_n, C_3).
Step 6: group the data by centroid according to the shortest distance for each item. This is done by finding the smallest value among D(S_n, C_1), D(S_n, C_2), and D(S_n, C_3); each item is assigned to the cluster with the smallest Euclidean distance. The results of the cluster determination can be seen in Table 20.
From the results of the second iteration, there is no change in the cluster assignments, so the iteration process stops at the second iteration, and the resulting clusters are as presented in Table 21.
Experiment Results and Discussion
The experiment uses data from 63 SME respondents entered into the prototype. Figure 3 shows the recommendations generated by the fuzzy-AHP ranking of SME partners. The results display the name, email address, and fuzzy-AHP score, which provide the information players need to act on the system's recommendation. These results change continuously as player data change in the game. Rankings are displayed in a dashboard accessible to the recommended players and partners. The prototype shows its ability to present SME rankings according to the criteria data used as test material. Figure 4 shows the recommendations generated by the K-Means SME segmentation: cluster 1 contains four players, cluster 2 contains 41 players, and cluster 3 contains 18.
These results change continuously as player data change in the game. The SME segmentation is displayed on the leader board so that all the players involved can see their position in a cluster. They can then collaborate according to the cluster recommendations generated by the system, given that cluster members share many characteristics and interests.
Experiments show that the model can provide recommendations of SME knowledge for collaboration.

Figure 3: SME ranking using fuzzy-AHP. Figure 4: SME segmentation using K-Means.

However, this result depends on the adequacy of the data processing. Extensive and valid data affect the accuracy of the model's analysis. For this reason, the prototype needs safeguards to ensure that the data entered by players are correct and consistent.
Conclusion
The Intelligent Gamification Mechanics (IGM) model makes essential recommendations for SME actors, providing the proper references for SMEs to establish cooperation that is more useful and on target. SME ranking and SME segmentation work complementarily to support players' decisions to cooperate. The proposed intelligent system mechanics model has demonstrated its proper function in the experimental test with SME respondent data; at the same time, the dashboard and leader board function well and present the intelligent system mechanics in a gamification-based prototype. The availability of data determines the results of the IGM analysis. In line with that, the characteristics of the data and the expected solution to the problem raised also determine the weighting criteria in the fuzzy-AHP model and the number of clusters in K-Means. Therefore, further research needs to address and anticipate changes in respondent data so that the data remain up to date and sustainable and IGM performance can be optimal. This study can also initiate future research on the development of gamification mechanics based on intelligent systems. Gamification for presenting partner references is needed in other fields, and the performance of this model in solving those problems should be tested. For this reason, the implementation and development of this proposed model remain wide open.
Data Availability
The data are available from the corresponding author upon request. | 5,938.8 | 2022-05-31T00:00:00.000 | [ "Computer Science" ] |
PGEN: A Novel Approach to Sequential Circuit Test Generation
A novel approach, called PGEN, is proposed to generate test patterns for resettable or nonresettable synchronous sequential circuits. PGEN contains two major routines, Sequential PODEM (S-PODEM) and a differential fault simulator. Given a fault, S-PODEM uses the concept of multiple time compression supported by a pulsating model, and generates a test vector in a single (yet compressed) time frame. Logic simulation (included in S-PODEM) is invoked to expand the single test vector into a test sequence. The single test vector generation methodology and logic simulation are well coordinated and significantly facilitate sequential circuit test generation. A modified version of differential fault simulation is also implemented and included in PGEN to cover other faults detected by the expanded test sequence. Experiments using computer simulation have been conducted, and results are quite satisfactory.
INTRODUCTION
Sequential circuit testing has been recognized as the most difficult problem in the area of fault detection. The difficulty comes from the existence of memory elements. With memory elements, such as latches or flip-flops, the circuit output depends not only on the current inputs but also on the operation history (circuit states). Of course, it is possible to facilitate sequential circuit testing by adding some extra hardware, which enhances the controllability and observability of the circuit [22]. However, the test hardware increases hardware overhead and can degrade circuit performance. Thus, before using valuable chip space, test generation without adding extra hardware should be tried.
In this work, a novel approach called PGEN is proposed to generate test patterns for synchronous sequential circuits with or without a reset line. Instead of unfolding the sequential circuit into an iterative logic array, for a given fault, PGEN uses the concept of time compression and synthesizes in one time frame a single test vector representing the compressed form of multiple time frames. The single vector is then expanded into a test sequence by a logic simulator guided by dynamic or static cost analysis. In general, when a set of test patterns is applied to a sequential circuit, each signal line of the machine can be static or pulsating (changing logic values). The static line values can be represented by conventional logic and fault models such as logic 1, 0, D, D̄ [20], while the pulsating line values are represented with the model value P to reflect the circuit behavior in a single and yet compressed time frame. For this reason, the name of the proposed test generation method is PGEN (Pulsating Test Generation). Using the pulsating logic model, the sequential circuit behavior under a test sequence can be faithfully described in a single time frame.
The PODEM algorithm [10] used for combinational circuit testing is transplanted and upgraded to support the pulsating model for test pattern determination. PGEN contains two major parts. The first part, S-PODEM, is used to synthesize a single test vector based on the compressed time frame, and expand the single vector into a test sequence. The second portion of PGEN is a modified version of the differential fault simulator [8] and is implemented to cover all faults detected by the test sequence developed by S-PODEM. In fact, the PGEN approach is a compromise between simulation-based and iterative logic array test methods. It utilizes the benefits of deterministic test generation methods to ascertain the required input signals for sensitizing and propagating the faults; however, the search for test patterns is greatly simplified by the pulsating model as will be shown later. PGEN also needs simulation; but, unlike conventional simulation-based methods which determine test patterns entirely by simulation, PGEN uses simulation only for test pattern expansion. In summary, the philosophy of PGEN is to utilize the advantages of simulation-based and iterative logic array testing, and avoid the difficulties of both approaches.
The paper is organized as follows: Section 2 gives background on sequential circuit testing, and Section 3 explains the pulsating model. The PGEN algorithm and its special attributes are presented in Section 4, and Section 5 describes S-PODEM and its routines. Simulation and results are given in Section 6. Lastly, conclusions are given in Section 7.
BACKGROUND
To increase the efficiency of sequential test generators, proposed algorithms have utilized different techniques such as backward justification [4] [14-16]; concurrent fault simulation [1]; and use of previous state information [2] [14]. Three different approaches have been considered for sequential circuit test generation. They are: 1. the iterative test generation method; 2. the simulation-based method; and 3. the functional test generation method.
In the iterative test generation approach [2] [4-5] [9] [13-17] [19], the combinational model for a sequential circuit is constructed by regenerating the feedback signals from previous time copies of the circuit. Thus, the timing behavior of the circuit is approximated by iterative combinational levels. Topological analysis algorithms that activate faults and propagate the effect through these multiple copies of the combinational circuit are used to generate tests. This approach can be further divided into forward time, reverse time, or combined forward and reverse time test generation.
In the simulation-based approach [6] [21], algorithms start with a random vector and simulate the circuit. From the simulation result, a cost function is computed. This cost is defined to be below a threshold only if the simulated vector is a test. If the vector is not a test, i.e., the cost is high, then cost reduction by gradual changes in the vector leads to a test.
All methods described above perform test generation based on circuit structures. Sequential circuit testing based on functions, especially using state tables, can also be found in [7] [12] [18] and will not be further discussed since the scope of this paper is mainly restricted to test generation based on circuit structure.
MOTIVATION AND MODELING
The PGEN approach considers single stuck-at fault detection with a special 11-value model, denoted as the P-model. The values of the P-model reflect the logic values (or circuit behavior) of different lines for both the fault-free and faulty circuits when test patterns are applied. Section 3.1 justifies the motivation behind the P-model, and details of the P-model are given in Section 3.2.
Motivation
The basic idea of PGEN originated from the desire to compress the time frame by a reasonable method, rather than unfolding the sequential circuit. After careful study of a number of sequential circuits, it was observed and concluded that most of the primary inputs remain at some static values during test generation for a particular fault. Only a few primary inputs exhibit pulsating behavior. That is, a considerable number of lines will remain at some stationary value during the entire test generation process for a particular fault.
Pulsating line set (PLS)
When a line L experiences a change in its logic value (in either the fault-free or faulty circuit, or both) during the test experiment, L is known as a pulsating line. The set of all such pulsating lines is regarded as a pulsating line set, PLS.
Static line set (SLS): A line which remains at a static value (in both fault-free and faulty circuits) during the entire test operation is regarded as a static line. The set of all such static lines constitutes a static line set, SLS.

Due to the presence of a fault, several effects may take place. If L ∈ SLS then there are two possibilities:
1. L gets the same static value for both faulty and fault-free circuits. By the definition of SLS, this value does not change with time.
2. L has a different static value for faulty and fault-free circuits. In the fault-free circuit it may get 0 (1), whereas in the faulty circuit it may receive 1 (0).

Assume that L ∈ PLS; then L may experience pulsating behavior in either the faulty or fault-free circuit or both. There are three possibilities:
1. L experiences pulsating behavior in both the fault-free and faulty circuits.
2. L experiences pulsating behavior in the fault-free circuit, but due to the fault effect it remains at a constant value in the faulty circuit. Thus, in the fault-free circuit it has value P, but in the faulty circuit it may have 0 or 1.
3. L keeps a static value (logic 0 or 1) in the fault-free circuit, but it experiences changes in line value in the faulty circuit.

There will be other lines which will experience different logic values when test patterns are applied. To represent such behavior in a compressed time frame, a new symbol is required. In the proposed P-model, this symbol is represented by P, which indicates that a particular line will change its value during the test generation process. Thus, the standard fault model is enhanced by this addition.
Logic Model and Operations
The logic model of PGEN consists of 11 values: 0, 1, D (1/0), D̄ (0/1), X, P, P0 (P/0), P1 (P/1), 1P (1/P), 0P (0/P), and PP (P/P). Here, D and D̄ have the same meaning as in conventional methods [20], i.e., D denotes logic 1 in the good circuit and logic 0 in the faulty circuit. Similarly, D̄ indicates logic value 0 in the good circuit and 1 in the faulty circuit. The P value is used for any pulsating signal. P0 (P1) indicates a pulsating signal in the good circuit and static value 0 (1) in the faulty circuit, and 0P (1P) indicates 0 (1) in the good circuit and P in the faulty circuit. Furthermore, PP represents a signal pulsating in the good machine with another signal pulsating (assuming different pulsating behavior) in the faulty machine. As usual, X denotes an unknown value. Once the logic model is determined, different logic operations need to be redefined. Table I determines the relationship between inputs and outputs for a universal logic gate, a two-input NAND gate, from which all other combinational logic operations can be generated.
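The sketch below illustrates the composite-value idea behind the P-model rather than reproducing Table I: each model value is read as a (good-circuit, faulty-circuit) pair over {0, 1, P, X}, and a two-input NAND is evaluated componentwise. How P combines with the other values in nand1 is an assumed, plausible reading, not the paper's table.

```python
# Illustrative sketch of the composite P-model values (not a reproduction of Table I):
# each model value is a (good-circuit, faulty-circuit) pair over {0, 1, P, X}, e.g.
# D = (1, 0), P0 = (P, 0), 1P = (1, P). A two-input NAND is evaluated componentwise;
# how P combines with other values below is an assumed reading of the model.
GOOD_FAULTY = {
    "0": ("0", "0"), "1": ("1", "1"), "X": ("X", "X"), "P": ("P", "P"),
    "D": ("1", "0"), "Dbar": ("0", "1"),
    "P0": ("P", "0"), "P1": ("P", "1"), "0P": ("0", "P"), "1P": ("1", "P"), "PP": ("P", "P"),
}

def nand1(a, b):
    """Scalar NAND over {0, 1, P, X} (assumed behavior for P)."""
    if "0" in (a, b):
        return "1"                # a controlling 0 forces the output to 1
    if "X" in (a, b):
        return "X"
    if a == "1" and b == "1":
        return "0"
    return "P"                    # at least one pulsating input, no controlling 0

def nand(u, v):
    gu, fu = GOOD_FAULTY[u]
    gv, fv = GOOD_FAULTY[v]
    return (nand1(gu, gv), nand1(fu, fv))   # (good, faulty) output pair

print(nand("D", "1"))    # -> ('0', '1'), i.e. the value Dbar
print(nand("P", "1"))    # -> ('P', 'P'), i.e. the value P (or PP)
print(nand("P0", "1"))   # -> ('P', '1'), i.e. the value P1
```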
TEST GENERATION
4.1 The PGEN Algorithm

The main structure of PGEN is similar to any standard test generation algorithm, which starts with initialization of data structures. It reads the circuit description file, in ISCAS 1989 benchmark circuit format [3], and prepares the internal data model. From the fault list file, it prepares the fault list for the circuit under test (CUT). Next, controllability and observability analysis using a modified SCOAP [11] is performed. Afterwards, a fault is selected from the fault list and S-PODEM is invoked for test generation. If a single test vector is successfully synthesized, then logic simulation (also included in S-PODEM) driven by cost functions is immediately invoked to expand the test vector into a test sequence. For example, if S-PODEM generates a test vector 10PP0 for fault f, the logic simulator will expand 10PP0 into the test sequence 10000, 10110, 10100 which detects f. Finally, the fault simulator is invoked and all other faults detected by the generated test sequence are removed from the fault list. The S-PODEM algorithm is called repeatedly to synthesize single test vectors for faults remaining on the fault list. All undetected faults are marked as hard-to-detect. At the end of PGEN, all hard-to-detect faults are assigned the single test vector PP...PP to pursue possible test sequence expansions with maximal degree of freedom.

Determination of the Fault Site

According to the ISCAS format, each node N can have only one output and the output line of N is indicated by N as well. In the context of fanout lines, a fault can occur either on a stem line or on a (fanout) branch line. As shown in Fig. 4.1a, fault F occurs on stem line L, thus F can be represented by (N, v) where N is the input node of the faulty line L and v is the fault value. However, representation of a branch line fault needs the use of both input and output nodes. As shown in Fig. 4.1b, fault F is represented as ([N1, N2], v). It requires two nodes to uniquely specify a fanout branch. Accordingly, in_fault_node and out_fault_node can be defined to respectively indicate the input and output nodes of the faulty line for fault F. For example, in Fig. 4.1a the in_fault_node of fault F is N; while in Fig. 4.1b, the in_fault_node is N1 and the out_fault_node is N2.
Pseudo Input Node
In synchronous sequential circuits there is at least one flip-flop in every feedback loop, and all flip-flops are identified as pseudo input nodes. The rationale behind the name "pseudo input" is that implication always starts from primary input nodes; however, it is also necessary to consider logic values on the flip-flops to initiate the implication process for loops. Although a flip-flop is not a real primary input, it works as a primary input to start implication. Thus, it is called a pseudo input node. In summary, the reason for the pseudo input node is to break the feedback loop by assigning an initial value to the flip-flop, such that the implication process can be initiated.

Consider the example shown in Fig. 4.2; the implication process starts with the primary input node, A, assigned a logic value by the backtracing process and the flip-flop identified as a pseudo input node. At the beginning of the implication process, all pseudo input nodes are assigned value 0. This strategy is consistent with the assumption that all flip-flops are resettable, and hence initially all of them will provide value 0. Extension to the general case can be achieved by removing the reset constraint on the CUT, and will be discussed later.

During implication, it may be observed that input and output values of a pseudo input node do not match. In general, this mismatch can be resolved by representing the behavior using a pulsating signal which may or may not contain the fault effect. For instance, consider the circuit in Fig. 4.2. Assume that primary input A is assigned value 1, and node G1 has been recognized as a pseudo input node. When implication starts, line G1 will have value 0 and G2 will have value 1. Since both inputs of G3 have value 1, the output of G3 will be 1. Now, node G1 has different values on its input and output lines. It is possible to continue the implication process by passing the value from the pseudo input node, but in that case implication will continue forever and oscillate implication values between 0 and 1 in the feedback loop.

So, in the S-PODEM algorithm, whenever there is a mismatch between values at the input and output of a pseudo input node, the output will be assigned a value which contains some information regarding pulsating behavior. In this case, the output of G1 is assigned P. Implication continues, and nodes G2 and G3 will also be assigned value P. At this point, pseudo input node G1 will have the same value at the input and output, and the implication stabilizes. As shown in Fig. 4.2, the test behavior of multiple time frames has been compressed and faithfully represented by the pulsating model. Any faulty line assigned a pulsating value P after the implication process will be detected (Fig. 4.2). More details will be given in Section 5, which describes the implication process.
Fig. 4.2. Resolving input/output values for a pseudo input node.

4.4 Implication Types

It is important to distinguish implication types for different nodes. Consider the circuit in Fig. 4.3, and assume that primary input A is not assigned any logic value. At the time of implication, initially G1 will be assigned value 0, and a logic value will be assigned to node G2. Further implication is not possible. Thus, the implication algorithm stabilizes and concludes that the fault can not be sensitized. According to the standard PODEM algorithm, the implication fails and backtracking will be performed. However, this is not a desirable action. By assigning a value to input A, pseudo input node G1 will get value P and the fault will be sensitized. Consequently, to resolve this implication failure, backtracing and more primary input assignments are required instead of backtracking.
Careful observation reveals that the erroneous backtracking choice arose due to the temporary and local effect of the implication value generated by the pseudo input node. The decision in favor of backtracking was mainly based on the pseudo input node behavior at the first time frame, rather than the complete (and compressed) time-frame behavior. The difference between the first-time-frame-info and the all-time-frame-info arises only with pseudo input nodes.

For primary input nodes, implication values (0, 1, or P) are stabilized in the first time frame and the values remain the same for the entire test experiment. So, the first-time-frame-info is equivalent to the all-time-frame-info for the implication values of primary inputs. Based on this discussion, two different implication types can be introduced, INPUT-IMPLIED (IIMP) and PSEUDO-IMPLIED (PIMP). In short, an IIMP line (or node) is one which is stabilized over all time frames; however, a PIMP line is not stabilized. Note that flip-flops are the only source of the PIMP implication type.

Implication of a node can be represented by a pair (imp-value, imp-type) in which imp-value indicates the implication value, while imp-type denotes the implication type. The implication value can be easily generated using Table I (Section 3). The implication type of a given node N is determined according to the following rules.
1. All assigned primary inputs are IIMP.
2. When any controlling input of N is IIMP, the output of N is IIMP.
3. When all controlling inputs of N are PIMP, the output of N is PIMP.
4. When all inputs of N are noncontrolling inputs, the output has a non-X value, and at least one input is PIMP, then the output of N is PIMP.
5. When all inputs of N are noncontrolling IIMP and the output has a non-X value, then the output of N is IIMP.
6. For any pseudo input node N, if its input and output have the same value, then the output becomes IIMP.
7. For any pseudo input node N, if the output of N is the in_fault_node and the new and old implication values of N are the same, then the output of N is IIMP.
8. When the output of N is X, then N is NIMP (NON-IMPLIED).
Rules 1 to 5 are easy to understand, once the proper relation of IIMP and PIMP with first-time-frame-info and all-time-frame-info is clear. For a given fault, a primary input will exhibit the same kind of behavior (with logic value 0, 1, or P) for the entire test experiment; hence it is IIMP. Besides, when a controlling input is IIMP, it is going to control the behavior of the gate output, and hence the output is also IIMP. Similarly, when the behavior of a node is controlled only by PIMP node(s), the output is also PIMP. According to rules 2 and 5, a node can be IIMP only when at least one of the controlling inputs is IIMP or all (noncontrolling) inputs are IIMP. Since pseudo input nodes initially have PIMP type, at least one of the inputs of the nodes in the feedback loop will be PIMP; and according to rules 3 and 4, their output will always be PIMP except for a controlling IIMP input. So nodes in a feedback loop may never be INPUT-IMPLIED (Fig. 4.4) unless rule 6 is considered. As shown in Fig. 4.5, implication types of all nodes are changed to IIMP by applying rule 6, and this reflects that the circuit behavior is stable under the current input assignment.
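A small sketch of the local gate rules (rules 1-5 and 8) follows; rules 6 and 7 involve pseudo input node state and are not modeled here. The function signature and names are hypothetical, not taken from PGEN's implementation.

```python
# Sketch of the local implication-type rules (rules 1-5 and 8) for a single gate.
# 'ctrl' marks inputs carrying the gate's controlling value (e.g. 0 for AND/NAND).
# Rules 6-7 (pseudo input nodes) need flip-flop state and are not shown here.
IIMP, PIMP, NIMP = "IIMP", "PIMP", "NIMP"

def gate_imp_type(output_value, inputs):
    """inputs: list of (imp_type, is_controlling) pairs for the gate's assigned inputs."""
    if output_value == "X":
        return NIMP                                        # rule 8
    controlling = [t for t, ctrl in inputs if ctrl]
    if any(t == IIMP for t in controlling):
        return IIMP                                        # rule 2
    if controlling and all(t == PIMP for t in controlling):
        return PIMP                                        # rule 3
    noncontrolling = [t for t, ctrl in inputs if not ctrl]
    if noncontrolling and len(noncontrolling) == len(inputs):
        if any(t == PIMP for t in noncontrolling):
            return PIMP                                    # rule 4
        if all(t == IIMP for t in noncontrolling):
            return IIMP                                    # rule 5
    return NIMP

print(gate_imp_type("1", [(IIMP, True), (PIMP, False)]))   # -> IIMP (rule 2)
print(gate_imp_type("0", [(PIMP, False), (IIMP, False)]))  # -> PIMP (rule 4)
```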
If N is the in_fault_node, then the implication of pseudo input node N, using only rule 6, may not be stable. Assume that Din of N keeps receiving implication value P0, but Q of N is stuck-at-1. Then the resolved implication value (see Section 5.4.2) of N is determined as P1, and the implication values of Din and Q will never be the same. Consequently, the implication type of N would never be changed to IIMP. With the addition of rule 7, the implication type of node N is resolved to IIMP. The last rule deals with the NON-IMPLIED or NIMP type. The NIMP type is introduced for the sake of completeness only. Any line which has value X is not implied, and its implication type is NIMP.

THE S-PODEM ALGORITHM

S-PODEM is a test generation algorithm characterized by a direct search process, in which decisions consist only of primary input assignments. As discussed before, S-PODEM generates single test vectors based on the concept of time compression, which compacts the sequential circuit behavior using a pulsating model. Thus, instead of unfolding the sequential circuit into a combinational logic array, S-PODEM generates a single test vector in a single (yet compressed) time frame. Algorithm 5.1 briefly discusses the major routines (or algorithms) used by S-PODEM, and detailed illustrations of these routines are given in the following sections.
Algorithm Find_Objective
The find_objective algorithm determines the next objective used by the backtracing routine. First, the routine checks whether or not fault f has been sensitized. The objective is set to sensitize f if: (1) faulty line L has value X; or (2) the logic value of L equals the fault value and L has a PIMP implication type. Then, the objective node and value are passed to the backtracing process.
If f has already been sensitized (regardless of the implication type of L), then the objective is to propagate the fault effect to a primary output. As in the case of the standard PODEM algorithm, a D-frontier is prepared for fault propagation. In PGEN, the D-frontier is implemented as a priority queue sorted in ascending order of observability values. A lower observability value for node N indicates that N is easier to observe at a primary output, so the observability value serves as a good criterion of priority for the D-frontier.

As a member of the D-frontier, the basic requirements for node N are: (1) N has the fault effect on one of its inputs and X on its output; and (2) there exists an X-path from node N to a primary output. If no nodes can be included in the D-frontier using the above two criteria, it is still possible to find a potential D-frontier by loosening the requirements. Thus, node N can be a member of the potential D-frontier if: (1) N has the fault effect on one of its inputs and the implication type of N is PIMP; and (2) there exists an X-PIMP-path from N to a primary output. If no nodes can be found for the potential D-frontier, then the find_objective process fails. Note that an X-PIMP-path is a path where all nodes have either value X on their outputs or have been assigned implication type PIMP.

The D-frontier is passed to the backtracing process to perform fault propagation. If backtracing from N fails, then N is discarded and another node is dequeued and backtraced. If the D-frontier becomes empty during backtracing, then the backtracing process returns failure and backtracking is performed. If backtracing from N succeeds, then implication based on the newly assigned primary input value is initiated. For fault propagation, the objective value of N is determined in such a way that all unassigned inputs of N will obtain a noncontrolling value.
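A minimal sketch of the D-frontier as a priority queue keyed on ascending observability, so the node easiest to observe at a primary output is dequeued first. The node names and observability values are hypothetical.

```python
# Sketch of the D-frontier as a priority queue ordered by ascending observability:
# the node easiest to observe at a primary output is dequeued (and backtraced) first.
import heapq

class DFrontier:
    def __init__(self):
        self._heap = []
        self._count = 0                       # tie-breaker for equal observabilities

    def enqueue(self, node, observability):
        heapq.heappush(self._heap, (observability, self._count, node))
        self._count += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

df = DFrontier()
df.enqueue("G7", observability=12)
df.enqueue("G3", observability=4)
df.enqueue("G9", observability=30)
print(df.dequeue())   # -> 'G3', the easiest node to observe
```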
Algorithm Backtracing
Given an initial objective, the backtracing process is employed to find a primary input assignment such that the objective can be accomplished. The backtracing algorithm used by PGEN considers D flip-flops as combinational nodes using a time compression technique, and must reach a primary input to terminate successfully. When backtracing reaches a primary input node N, it enters N into the decision tree along with the objective value of N. The objective value at primary input node N is known as the preset value of N, because the implication process starts with preset values at the primary input nodes. If the backtraced path is a feedback loop, then some nodes in the feedback loop may be traversed more than once.

Each time a D flip-flop is traversed during a backtracing, the clock line value is checked first. If the value is X then the next objective is to obtain value P for the clock input. The backtracing process is guided by controllability values derived using a modified SCOAP method. If the backtraced path is a feedback loop, then backtracing directed only by controllability values may never leave the loop. In order to solve this problem, the concept of backtrace count is utilized. The backtrace count of a node N is an enumeration of the number of times that N has been backtraced during a particular backtracing process. The controllabilities of each node are then weighted by the node's backtrace count when a backtracing decision is made. If node N is overbacktraced then its controllabilities are weighted by a large number, and N will not be further backtraced. Therefore, infinite backtracing on a feedback loop can be avoided. When a D flip-flop node FF is backtraced more than once during a backtracing process, the backtracing can be further continued or restarted from the initial objective node. If the initial objective node N, which triggers the backtracing, has the objective value v recognized as the control value, then another backtracing path from N may be chosen to provide the control value. However, if v is not the control value of N, then the backtracing continues from FF.

The reason behind this strategy is not difficult to understand. It is possible that all rebacktracings from N ultimately stop on D flip-flops, which could result in infinite switching on rebacktracing from N. To avoid this situation, backtracing from FF continues regardless of the initial objective (N and v) if the backtrace count of FF exceeds a threshold value TH(bc) (TH(bc) is assumed 8 in this work).
Another major difference of the S-PODEM backtracing process from the standard one is the selection of the input node. In PGEN, the easiest input of a combinational node is always selected (guided by the controllability/observability values) regardless of the objective value. This approach helps avoid pseudo input nodes, and generally guides the backtracing process towards primary inputs. If a pseudo input node is eventually necessary, then backtracing will lead to a pseudo input node and finally to a primary input.
Given the pair (obnode, ob_newvalue), the input of obnode which is the easiest to control to the desired value (ob_newvalue) among all unassigned input nodes is selected. With testability analysis, controlling an input is quantified by the corresponding controllability value, and a node can have different 0 and 1 controllabilities. First chosen is an input node N with value X, when N has the lowest controllability value of being driven to ob_newvalue. If there is no input with the X value, then a PIMP input is selected. If backtracing fails, the initial objective (obnode, obvalue) is resumed and another backtracing is tried (at most 20 times in this experiment).
Finally, another threshold TH(bt) is added in the backtracing algorithm to assure termination of each backtracing process whenever the total number of node traversals exceeds TH(bt). Backtracing fails if the process exceeds the backtracing threshold TH(bt), or if it is unable to find any input with the X value or a PIMP type.
Algorithm Backtracking
The backtracking process is employed to explore the solution space and recover from incorrect decisions. If implication fails, or an objective can not be found, or backtracing is unsuccessful, then backtracking is invoked. This algorithm backtracks only on primary inputs, and pseudo input nodes are not involved. All implication values and types left by the last implication must be removed before another new value is assigned to the backtracked primary input.
The backtracking process used in S-PODEM has three values since it allows assignment of 0, 1, and P to any input. The logic value of primary input N is inverted at the first backtracking, and N is assigned value P at the second backtracking. One more backtracking will assign value X to N and remove N from the decision tree. Again, we emphasize that an implication on the event (N = X, NIMP) should be performed to remove the implication history left by the previous assignment on N. In addition, backtracking is a very time-consuming process, so a threshold is added to prevent backtracking from exploring the entire solution space. In PGEN, the backtracking threshold is set to 2w, where w is the number of primary inputs and k is 2 or 3.
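The sketch below illustrates the three-valued backtracking order on one primary input described above: invert the logic value first, try P next, and finally return the input to X and drop it from the decision tree. The decision-tree representation and the handling of an input already at P are assumptions for illustration.

```python
# Sketch of the three-valued backtracking on one primary input: invert the value on
# the first backtrack, assign P on the second, and on the third assign X and remove
# the input from the decision tree (the old implication history must be cleared first).
def backtrack_value(current, times_backtracked):
    if times_backtracked == 0:
        return {"0": "1", "1": "0"}.get(current, "0")   # invert (fallback assumed for P)
    if times_backtracked == 1:
        return "P"
    return "X"                     # exhausted: remove the input from the decision tree

decision_tree = [("A", "1", 0)]    # (input, assigned value, backtrack count) -- hypothetical
name, value, count = decision_tree.pop()
new_value = backtrack_value(value, count)
if new_value == "X":
    print(f"remove {name} from the decision tree")
else:
    decision_tree.append((name, new_value, count + 1))
    print(f"retry {name} = {new_value}")
```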
Algorithm Implication
The implication process determines the values at different nodes based on the primary input and pseudo input node values. It should be emphasized that there is a basic difference between implication and simulation, though the basic purpose of both is quite similar. In the case of simulation, the real-time behavior of the circuit is imitated and each line will have a value from the set {0, 1, X}. In implication, the complete fault model, which includes some variables to represent fault effects, is used. Thus, implication includes both faulty and fault-free behavior using model values such as D, 1P, P0, etc. Conventionally, implication is used only for combinational circuits. For sequential circuits, all traditional approaches use simulation because they did not use the concept of compressed timing behavior. This is one of the major features of PGEN.
Implication also uses an event list. The event list is implemented as a priority queue containing all the nodes which are already implied, but have not had their output nodes implied. Again, observability values are used for priority determination; however, unlike the D-frontier, the priority queue is maintained according to the descending order of observability values. The nodes which are not easily observed from the primary outputs are thus given a higher priority for implication. Furthermore, implication of other nodes depending upon the values of these hard-to-observe nodes is delayed. This guarantees that implication values of the nodes closer to the primary inputs are settled before these values are used for further implications.
As mentioned earlier, implication starts each time a new preset value is assigned to a primary input by backtracing or backtracking. The primary input node is placed in the implied_list by the setup_implied_list routine. In the first implication of a given fault, all D flip-flops are also inserted into the implied_list for implication. The implication algorithm is designed as an event-driven one. For a fault f, implication values of all nodes in the CUT are only cleared in the initialization process of the first implication (for f). We do not clear implication values for the other implications for f (unless backtracking occurs) because: 1. there is no reason to destroy the implication history; 2. the implication speed is much faster with implication values retained; 3. the implication results of sequential circuits are quite implication-order dependent, and if the implication values are erased, the implication order problem can further deteriorate.
The implication algorithm keeps iterating until the implied_list is empty. The counter imp_cycle is employed to keep track of the number of times that different nodes have been implied. When the value of imp_cycle exceeds the implication threshold value, the implication algorithm returns FAILURE. Since PGEN is not proved to be a complete algorithm, this mechanism helps avoid infinite loops. At present, the implication threshold value is set to 4 times the number of nodes in the CUT.

Successively, a node called implied_node is dequeued from the implied_list, and each node N immediately connected to the output of the implied_node is checked for implication by the imply_node routine (see Section 5.4.2). The imply_node routine also enqueues N into the implied_list, if N can be further implied and new events take place. A special flag, MORE_ASSIGN_REQ, signals an X-oscillation (see Section 5.4.2) and indicates a requirement for further backtracing. The objective node for backtracing is returned by the parameter sp_node, and the implication routine returns the same flag to the calling routine.

When all nodes are stabilized and no more events are left, a check is made to determine whether or not the faulty line receives a sensitizing value. If it does not have a sensitizing value and the in_fault_node is IIMP, failure is returned from the implication routine. Inhibition of fault effect propagation is also a cause of backtracking, but that is signaled by the find_objective routine.

Algorithm setup_implied_list

The setup_implied_list algorithm prepares an event list from the nodes in the decision tree. As discussed before, for the first implication of a fault f, the assigned primary input and all D flip-flops are the initial entries of the implied_list. All D flip-flops are initially set to value 0, and the primary input node is set to its preset value. If f is on a primary input line, the fault effect will be taken into account immediately. On subsequent implications of f, only the newly assigned primary input is placed into the implied_list. Note that the implied_list provides the event sources.
Algorithm imply_node
The imply_node algorithm is responsible for implying a node called imnode, and enqueuing the imnode into the implied_list if it is successfully implied.
First, pseudo input nodes are considered. The variable result in the imply_node routine represents the new implication value at the D flip-flop output in the next time frame, and depends upon the Din value. To account for the pulsating behavior, the new output implication value of flip-flop FF is resolved with its previous output implication value and a resolved value is created. Some simple examples of the resolved output include: a conversion from 0 to 1 changes into P, 0 to D changes into P0, and 0 to D̄ into 0P; a conversion from 1 to 0 changes into P, 1 to D changes into 1P, and 1 to D̄ into P1.
The reason for the conversions is very simple. For example, if there is a 0 to D implication change on the output of FF, i.e., in the fault-free circuit the signal has a 0 to 1 conversion, a pulsating effect (P) has occurred. However, if the circuit is faulty, the signal remains at 0. In summary, the new and old implication values on the output of FF are resolved and stored in the variable new_imvalue. For some signal conversions, the new implication value just overwrites the old implication value. For example, if the old implication value on the output of FF is P1 and the new value is 1P, then the new implication value is resolved as 1P. The implication value conversions are summarized in Table II. The first column (row) of Table II gives the old (new) implication values of FF, and resolved values can directly be obtained from the table entries. Note that if the output of FF is the fault site, then a special implication value conversion must be considered. Assume that the output of FF is stuck-at-0, and the new_imvalue of FF is P1; it is necessary to resolve the new_imvalue as P0. The implication type of FF is changed to IIMP if: (1) the resolved implication value of Q is the same as the implication value of Din; or (2) the output of FF is faulty, and the old imvalue and new imvalue of FF are the same.
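Since Table II is not reproduced here, the sketch below derives the resolved flip-flop output value from its (good, faulty) components with a simple per-component rule. It reproduces the examples in the text (0 to D gives P0, 1 to D̄ gives P1, an old P component is overwritten, and a stuck-at output forces the faulty component), but it is an illustration, not the paper's full table.

```python
# Illustrative resolution of a flip-flop's new output value against its previous one
# (Table II itself is not reproduced). Each value is a (good, faulty) pair; the
# per-component rule below reproduces the examples in the text.
PAIR = {"0": ("0", "0"), "1": ("1", "1"), "X": ("X", "X"), "P": ("P", "P"),
        "D": ("1", "0"), "Dbar": ("0", "1"),
        "P0": ("P", "0"), "P1": ("P", "1"), "0P": ("0", "P"), "1P": ("1", "P"), "PP": ("P", "P")}
NAME = {v: k for k, v in reversed(list(PAIR.items()))}   # prefer simple names (0, 1, X, P)

def resolve_component(old, new):
    if old == new or new == "X":
        return old
    if old in ("X", "P"):
        return new               # unknown or already-pulsating component is overwritten
    return "P"                   # any 0/1 change (or change to P) means the line pulsates

def resolve(old_value, new_value, stuck_at=None):
    g = resolve_component(PAIR[old_value][0], PAIR[new_value][0])
    f = resolve_component(PAIR[old_value][1], PAIR[new_value][1])
    if stuck_at is not None:     # a fault on the flip-flop output forces the faulty component
        f = stuck_at
    return NAME[(g, f)]

print(resolve("0", "D"))                   # -> P0
print(resolve("1", "Dbar"))                # -> P1
print(resolve("P1", "1P"))                 # -> 1P (the old value is overwritten)
print(resolve("1", "Dbar", stuck_at="0"))  # -> P0 (output stuck-at-0)
```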
The problem of X oscillation is very important.
Consider the circuit in Fig. 5.1(a); flip-flop G1 is recognized as a pseudo input node. Implication starts with value 0 on node G1 and continues with the output value of G2 implied to logic 1. At that moment, the input and output of pseudo input node G1 will be different and are resolved to value P. As can be observed from Fig. 5.1(b), it is no longer possible to propagate the value P through node G2, and the output of G2 will be set to X. Now, the input of node G1 is X and, according to the implication rules, X can not be propagated through a pseudo input node. Hence, the output will again be set to 0. This will repeat the situation of Fig. 5.1(a), and the output of node G1 will keep oscillating between values 0 and P. Hence, each pseudo input node keeps an individual counter which counts the number of changes from a non-X value to the X value. If there is an X-value oscillation occurring at pseudo input node N, then the X-oscillation counter of N will exceed the X-value change threshold. This indicates that there is one input (in the feedback loop) not assigned a value which provides the source of the X-value changes. So, this input should be found and backtracing continued until it is assigned an appropriate value. To indicate this, the imply_node routine returns the special value MORE_ASSIGN_REQ.
Implication on combinational nodes is much easier. If implication of the current node N is successful, then the implication value and type of N are updated.
In addition, N is enqueued into the implied_list to trigger more implication events.
Other Routines
The implication routine is actually invoked by the check_implication_and_justify routine. This routine simply invokes implication, and if implication returns the flag MORE_ASSIGN_REQ then it invokes the backtracing routine. Check_test_and_simulate is another simple algorithm. First, it determines whether or not the fault effect has reached a primary output by checking implication values on primary outputs. A fault is potentially detected if there exists at least one implication value of a primary output containing P1, P0, 0P, 1P, PP, D, or D̄, regardless of the implication types. If so, S-PODEM performs logic simulation to expand the single test vector into a test sequence. A successful result from the logic simulation indicates that the test sequence can detect the fault.
SIMULATION AND RESULTS
Logic simulation is an integral part of the S-PODEM algorithm. Given a fault f, the test vector generation phase of S-PODEM produces a single test vector which should have a high probability of being expanded as a test sequence for f. However, the compression of multiple time frames for test generation serves as both an advantage and a disadvantage. In general, only logic simulation can provide accurate information about circuit behavior. One cannot rely entirely on the information given by the single test vector, since the P-model contains vague signal representations, as discussed in previous sections.
Logic Simulation and Sequence Expansion
At the end of S-PODEM, there might be primary inputs assigned X, i.e., S-PODEM does not assign values to them. To achieve maximal freedom in sequence expansion, all primary inputs assigned X are assigned P. The logic simulator used in PGEN simulates both fault-free and faulty circuits simultaneously, and stops when different values are observed at the outputs of the faulty and fault-free circuits, or when the circuit has been simulated for the predefined number of clock cycles. To expand the P signals at primary inputs, the logic simulator uses dynamic cost analysis [1] to guide the sequence expansion.
Assume that S-PODEM generates a single test vector 10PP for fault f; dynamic cost analysis will determine the first test pattern as 1000, and only primary inputs assigned P can change logic value for sequence expansion. If f is not detected by 1000, further expansion is required. Based on the first P (of 10PP), two test patterns 1000 and 1010 are tried, and costs C1 and C2 are estimated respectively based on fault sensitization and propagation. If f is not detected by either test pattern and C2 is smaller than C1 (for example), then 1010 is a better choice than 1000. From 1010, two more test patterns 1010 and 1011 (based on the second P of 10PP) can be further tried.
If 1011 has a lower cost than 1010, then the second test pattern is determined as 1011. Thus, multiple bit changes are possible. Note that the cost analysis of [1] allows only a single bit change.
The sequence expansion process repeats until f is detected or a simulation threshold is exceeded. Once the single vector is successfully expanded as a test sequence by the logic simulation, a counter S-vec (number of successful vectors) is incremented by one.
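The sketch below illustrates a greedy, cost-guided expansion of the pulsating bits of a single test vector. The toy cost function stands in for the fault sensitization/propagation cost analysis, and the function is an illustration of the idea rather than the actual dynamic cost analysis used by PGEN.

```python
def expand_p_vector(vector, cost, max_patterns=8):
    """Greedily expand the P positions of a single test vector into test patterns.

    `vector` is a string over {'0', '1', 'P'}; `cost` is a user-supplied function
    estimating how far a fully specified pattern is from detecting the fault
    (a stand-in for the sensitization/propagation cost analysis).
    """
    patterns = []
    current = list(vector)
    for i, bit in enumerate(vector):
        if bit != "P":
            continue
        lo, hi = list(current), list(current)
        lo[i], hi[i] = "0", "1"
        # Remaining P bits are provisionally filled with 0 for cost estimation.
        cand0 = "".join(b if b != "P" else "0" for b in lo)
        cand1 = "".join(b if b != "P" else "0" for b in hi)
        current[i] = "0" if cost(cand0) <= cost(cand1) else "1"
        patterns.append("".join(b if b != "P" else "0" for b in current))
        if len(patterns) >= max_patterns:
            break
    return patterns

# Toy cost: prefer patterns with fewer 0s (purely illustrative).
print(expand_p_vector("10PP", cost=lambda p: p.count("0")))  # ['1010', '1011']
```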
Fault Simulation
When a test sequence T has been successfully expanded from the single test vector for fault f, using the dynamic cost analysis, fault simulation is performed to find all other faults covered by T. Differential fault simulation (Dsim) [8] has been recognized as very powerful in aspects of speed and memory requirement, and was implemented in PGEN with minor modification. If a sequence T is expanded and fails to detect the target fault f, it might be wasteful if T is abandoned immediately. Thus, fault simulation (called intermediate fault simulation) is performed to find other faults which are covered by T. If T covers other faults in the fault list, then T is incorporated into the test sequence; otherwise, T is abandoned. Note that if intermediate fault simulation (Isim) is successful, then the counter S-vec is incremented by one as well.
Circuits
The test generation phase (S-PODEM) of PGEN is strongly coupled with the fault simulation phase (Dsim), and their relationship is bidirectional. In PGEN, S-PODEM generates test patterns for Dsim, and Dsim provides S-PODEM with good and faulty circuit states from which the test generation phase can be resumed. Thus, starting from an unknown circuit state, S-PODEM generates a single test vector V by selecting a fault f1 which is closest to the primary outputs. Also, starting from an unknown state, test vector V is expanded into test sequence T using one of the three expanding methods. Test sequence T is then passed to Dsim to find all other faults covered by T. These faults are removed from the fault list. In addition, Dsim manipulates the faulty state information for all faults which cannot be detected by T. S-PODEM then selects another fault f2, and resumes single test vector generation using the faulty state of f2 provided by Dsim. The process is repeated until all faults are detected, or no new faults can be detected.
Note that the faulty state of f2 discussed above is the circuit state derived by applying test sequence T starting from the unknown state under fault f2.
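The alternation between test generation and fault simulation can be summarized by the skeleton below. The routines for vector generation, expansion, and fault simulation are passed in as callables because their internals are not reproduced here; this is a structural sketch of the outer loop rather than PGEN itself.

```python
def pgen_loop(fault_list, gen_vector, expand, fault_simulate, max_iter=1000):
    """Skeleton of the PGEN outer loop: test generation and fault simulation
    alternate until no new faults can be detected (all routines are injected)."""
    test_set, state = [], None                   # state: circuit state supplied by Dsim
    for _ in range(max_iter):
        if not fault_list:
            break
        target = fault_list[0]                   # faults assumed pre-sorted, closest to POs first
        vector = gen_vector(target, state)       # S-PODEM single test vector
        sequence = expand(vector, target, state) # cost-guided sequence expansion
        if sequence is None:
            fault_list.pop(0)                    # give up on this target for now
            continue
        detected, state = fault_simulate(sequence, fault_list)  # Dsim
        if not detected:
            fault_list.pop(0)
            continue
        test_set.append(sequence)
        fault_list[:] = [f for f in fault_list if f not in detected]
    return test_set

# Toy usage with trivial stand-ins for the three routines.
tests = pgen_loop(
    ["f1", "f2", "f3"],
    gen_vector=lambda f, s: "10PP",
    expand=lambda v, f, s: v.replace("P", "0"),
    fault_simulate=lambda seq, fl: ({fl[0]}, "state"),
)
print(tests)
```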
So far, the discussion of PGEN has mainly been restricted to the domain of resettable synchronous sequential circuits. The results can be further extended to a more general case: nonresettable synchronous sequential circuit testing. In the process of single test vector synthesis, S-PODEM assumes the unknown state (X) for all pseudo input nodes when the implication process starts. In addition, the fault list is sorted such that the faults closer to the outputs are processed earlier. This strategy avoids the use of an initialization sequence. Assume that fault f is closer to the primary outputs than other faults; in most cases, a sequence T can be generated to detect f from the unknown state. However, if f does not occur, then T will drive the CUT to a known (or partially known) state from which test generation for other faults can be done smoothly. It has been found that all ISCAS89 benchmark circuits can be tested using the aforementioned philosophy except s510. It is almost impossible to drive s510 to a specific state, since a synchronization sequence might not exist.
Simulation Results
The test generation algorithm described previously was implemented in the program PGEN, which consists of about 5000 lines of C code and runs in a SUN 3/260 environment. Results for several ISCAS89 sequential benchmark circuits are shown in Table III. For each circuit, the circuit name (Circuit), test length (Length), number of faults (Faults), number of faults detected (Detected), number of faults detected by Dsim (Dsim), number of faults detected by Isim (Isim), number of total single vectors synthesized (T-Vec), number of successful single vectors (S-Vec), fault coverage (Cov), ATPG time (Time), and logic simulation threshold (TH(sim)) are provided. The fault coverage presented in this table is sure fault coverage, and possible fault detections are excluded from the calculation of Cov. The ATPG time is given in hours on the SUN 3/260, except for s27 (in seconds). The simulation was conducted using the following thresholds: backtracking (PI22), backtracing TH(bt) (N22), X-oscillation (20), and implication (N22), where PI denotes the number of primary inputs of the CUT and N is the number of nodes. In all benchmark circuit simulations, no reset line is assumed and all CUTs are tested starting from the unknown state. As expected, s510 cannot be tested since no flip-flop can be initialized. The pulsating model used by S-PODEM is weak in pulsating representation, and there might be faults for which S-PODEM cannot synthesize a test vector even though test patterns really exist. PGEN avoids this situation by expanding test patterns for all hard-to-detect faults using the vector PPP...PP before the end of the test generation process. For example, PGEN detects 370 (368 + 2) faults for s400, as shown in Table III. Among these 370 faults, 2 are detected by using the vector PPP...PP.
Threshold values have a strong impact on the performance of PGEN. For example, PGEN can detect only 257 faults when the logic simulation threshold is set to 20. However, 353 faults can be detected if TH(sim) is increased to 50. It is worth noting that the summation of Dsim, Isim, and S-Vec is greater than the number of detected faults (Detected). For example, there are 262 faults detected in total in circuit s298. Among these 262 faults, 160 are detected by Dsim, 98 by Isim, and 4 (= 262 − 160 − 98) by the logic simulation that is used to expand the test sequence. However, the number of successful vectors (S-Vec) is 11, which hints that 7 out of the 11 successful vectors are contributed by Isim. Table III demonstrates that Isim is very powerful, and it detects a majority of faults in some circuits (such as s208 and s349).
CONCLUSIONS
In this paper, a novel sequential circuit testing method called PGEN has been introduced. Results of the proposed method might not be attractive when compared with existing solutions [5][6]. However, the motivation of this research is to unify the deterministic test generation method and the simulation-based method for sequential circuit testing and to seek the possibility of a better solution. Using a new logic model (P-model) and the concept of circuit time compression, the multiple time-frame circuit behavior can be efficiently represented by a single time frame. In addition, the concepts of pseudo input node and implication type support the implementation of PGEN. Thus, PODEM has been successfully extended to the domain of sequential circuit testing. The idea behind PGEN is to determine the input patterns which are fixed (and also can be easily generated) in the test process, then use logic simulation (guided by cost analysis) to expand the pulsating signals.
According to the benchmark circuit simulations, single test vectors generated by S-PODEM have a relatively low probability of being expanded into real test patterns. This comes from the inadequacy of the pulsating model, and can be remedied using a more powerful model. Computing time used by PGEN is generally high, and the reasons are: (1) PGEN can only predict the resettability of the circuits using SCOAP and wastes a lot of CPU time on synthesizing single vectors for untestable faults; (2) the backtracing and implication processes are easily trapped in feedback loops (until threshold values are exceeded), which wastes CPU time; (3) the PGEN program was not optimally coded; for example, a lot of events are unnecessarily activated in logic simulation and fault simulation. Test sequences are generally very compact, and fault coverage can be higher if larger threshold values are set (but CPU time will be increased).
FIGURE 5.1 X oscillations with a pseudo input node.
TABLE II Conversions of Implication Values
TABLE III Experimental Results on ISCAS89 Benchmark Sequential Circuits | 11,004 | 1996-01-01T00:00:00.000 | ["Computer Science", "Engineering"] |
Implications of the Harmonization of [18F]FDG-PET/CT Imaging for Response Assessment of Treatment in Radiotherapy Planning
The purpose of this work is to present useful recommendations for the use of [18F]FDG-PET/CT imaging in radiotherapy planning and monitoring under different versions of EARL accreditation for harmonization of PET devices. A proof-of-concept experiment designed on an anthropomorphic phantom was carried out to establish the most suitable interpolation methods of the PET images in the different steps of the planning procedure. Based on PET/CT images obtained by using these optimal interpolations for the old EARL accreditation (EARL1) and for the new one (EARL2), the treatment plannings of representative actual clinical cases were calculated, and the clinical implications of the resulting differences were analyzed. As expected, EARL2 provided smaller volumes with higher resolution than EARL1. The increase in the size of the reconstructed volumes with EARL1 accreditation caused high doses in the organs at risk and in the regions adjacent to the target volumes. EARL2 accreditation allowed an improvement in the accuracy of the PET imaging precision, allowing more personalized radiotherapy. This work provides recommendations for those centers that intend to benefit from the new accreditation, EARL2, and can help build confidence of those that must continue working under the EARL1 accreditation.
Introduction
A rising challenge in medicine is finding more accurate and personalized therapies. Advances in medical imaging are strongly linked to patient-tailored therapy planning, monitoring, and disease follow-up [1,2]. Since biological changes are expected during treatments, the functional information provided by nuclear medicine images plays an important role in clinical areas, such as oncology [3]. In the case of radiotherapy (RT), in which the prescribed dose is scheduled for multiple sessions, in addition to the diagnosis and staging of cancer, molecular imaging is also required to define the structures to be used in the planning process, which makes quantification and metrics a relevant process [4]. Significant advances in RT have allowed us to obtain an excellent balance between delivering a high dose to the tumor and a low dose to the healthy tissues surrounding the lesion [5,6]. Custom shielding blocks have been replaced with a versatile motorized multileaf collimator (MLC) that allows computer-controlled linear accelerators (LINACs) to be connected to the treatment planning system (TPS), in which the dose distribution is calculated beforehand based on the image of the patient. In most cases, treatment planning is solved by considering the structures defined in the patient image from the computed tomography (CT) study. The role of image support in the whole process is so important that if some change is observed throughout treatment with image-guided radiotherapy (IGRT) techniques, adaptive planning must be considered [7,8]. In this scenario, where the highest precision in RT is dependent on the image, the provided information must be as complete as possible about the lesion for treatment. In this way, molecular images must be fused with morphological information from CT or magnetic resonance imaging data to be implemented in TPS for dose planning [9]. Positron emission tomography (PET) images can provide the visualization and quantification of the effects of treatment under monitoring to adapt the RT planning and dose prescription to the new targets if the procedure is ready to accurately consider biological changes [10].
The dose-painting technique (DP) is a new approach in RT where the prescription to the target volume is a non-uniform dose distribution based on functional information [11,12], usually provided by a PET study. Unfortunately, the quantification variability inherent to molecular imaging, such as that provided by positron emission tomography (PET) with [ 18 F]fluorodeoxyglucose ([ 18 F]FDG), is not ready for direct use in RT treatment planning. Beyond visual evaluation, the definition of the therapeutic target and the prescription dose based on parameters such as the standardized uptake value (SUV) require numerical values and standardization of imaging procedures [2,13]. In this scenario, the EARL [ 18 F]FDG-PET/CT accreditation program ("resEARch for Life"-EARL) (EARL1) launched by the European Association of Nuclear Medicine (EANM) turned out to be essential for betting on personalized RT based on molecular imaging with guarantees. From then on, an increasing number of RT departments in many hospitals are being prompted to use the PET/CT image, as nuclear medicine departments are adopting the required guidelines and specifications to obtain the accreditation of their PET/CT devices and to be able to participate in multicenter studies. Today, specialists in nuclear medicine and radiation oncology work together to write procedure guidelines in which this accreditation program is specifically recommended to be followed for tumor imaging [14][15][16].
Unfortunately, but not unexpectedly, a normalization procedure shared by multiple PET/CT systems can cause the range of requirements to underuse some of the performance available in the latest generation of devices. This involves a loss of precision in spatial resolution for the target definition process, which could be essential in RT planning [17]. The standard in PET imaging for RT based on functional information should be based on the highest performance of the PET/CT devices [18], although harmonization conditions are required for multicenter studies. Kitajima et al. [19] stated how necessary harmonized quantitative volume-based values obtained with [ 18 F]FDG-PET are to provide essential information regarding prognosis for both recurrence and death in patients with operable invasive breast cancer. Ly et al. [20] observed significant differences in a recurrent parameter to classify the status of the disease and stratify patients with lymphoma, when the updated EARL recommendations were used compared with previous criteria. Some studies propose SUV harmonization strategies that can improve the detection of lesions by using reconstruction algorithms that comply with older systems to allow comparison with historical case cohorts [21]. Other solutions are based on the use of a specific software tool, such as Siemens's EQ.PET, capable of achieving both goals from a single data set with excellent results [22], although this software can be applied only to scanners and reconstruction algorithms developed by the same company and also works as a black box without the ability to check the image segmentation result by the specialist.
In this scenario, since the implementation of EARL1, it has been proposed to update this accreditation to a new standard (EARL2) [23] for the necessary harmonization process [24,25]. Although this new accreditation is recent, studies have already been carried out to evaluate the impact on clinical practice of the implementation of EARL2 [18,20]. Despite the previous works presented as comparison studies to validate software tools such as EQ.PET and assess the ability of several approaches to harmonize SUVs from different PET systems by using multiple reconstruction techniques, it would also be interesting to know in which clinical situations it is convenient to make use of one or the other accreditation, without sacrificing any of them.
The significant impact of PET imaging parameters on automatic tumor delineation for RT planning has been extensively shown. In this work, we tried to establish useful recommendations for the use of PET/CT imaging in RT planning and monitoring [26]. We performed a comparative study between the planning of actual cases in which the volume segmentation process was carried out based on image data obtained with different accreditation protocols. To simplify a rather complex and multivariable problem, a proof-ofconcept experiment based on a specific phantom was carried out. This previous experiment provided us with the relevant characteristics that the lesions must present in order to cover a wide spectrum of situations with as few cases as necessary for this comparative study.
New EARL Accreditation
The EARL [ 18 F]PET/CT accreditation consists of two procedures performed with phantoms. First, the 6 L cylinder with 70 MBq of [ 18 F]FDG is scanned in two or more beds with a typical duration per bed (5 min). The uniformity of scan is checked, as well as the correspondence of the calculated SUV with the reference and expected value of 1. An error of ±10% is accepted. This procedure verifies the well counter measurements for dose preparation with the accuracy of the SUV provided by the reconstruction software and the internal calibration with the daily quality control of PET/CT system. This procedure is the same for the EARL1 and EARL2 standards. The second procedure consists of the NEMA-2007 phantom, which simulates six tumors over the background. The 10 L phantom is filled with 20 MBq of [ 18 F]FDG. The six glass spheres with diameters from 10 to 37 mm are filled with [ 18 F]FDG at ten times the concentration of the background. The EARL1 standard checks the recovery coefficient (RC) using SUVmax and SUVmean on the volumes of interest delineated on the spheres. RC is the ratio between the concentration of the measured and the actual activity of each spherical volume. The EARL1 standard accepts RC in the range 0.27-0.43, EARL2 in the range 0.39-0.61, allowing and enforcing higher quality and correspondence with the activity concentration in the tumor. EARL2 introduced SUVpeak, which averages uptake in a 12 mm diameter VOI positioned such as to yield the highest value across all tumor voxels. SUVpeak replaced SUVmax, which is prone to noise, and SUVmean, which is affected by manual delineation of the tumor. Higher RC values in EARL2 require generally higher spatial resolution of the PET scanner and working with images with smaller pixel size.
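As an illustration of the two quantities involved, the sketch below computes an RC as a measured-to-true ratio and a brute-force SUVpeak over a small SUV array. The true sphere SUV of 10 is an assumption made only for this example (it follows from the 10:1 sphere-to-background filling when the background is SUV-normalised to 1), and the implementation is not the analysis software actually used for the accreditation.

```python
import numpy as np

def recovery_coefficient(measured_suv, true_suv=10.0):
    """RC = measured / actual activity concentration of a sphere.
    A true sphere SUV of 10 is assumed here for illustration."""
    return measured_suv / true_suv

def suv_peak(volume, voxel_size_mm, diameter_mm=12.0):
    """SUVpeak: mean SUV in a 12 mm diameter spherical VOI placed to maximise uptake.
    Brute-force search over voxel-centred spheres; adequate for small phantom arrays,
    not optimised for whole-body images."""
    radius = diameter_mm / 2.0
    zz, yy, xx = np.indices(volume.shape)
    best = -np.inf
    for centre in np.ndindex(volume.shape):
        dist = np.sqrt(((zz - centre[0]) * voxel_size_mm[0]) ** 2 +
                       ((yy - centre[1]) * voxel_size_mm[1]) ** 2 +
                       ((xx - centre[2]) * voxel_size_mm[2]) ** 2)
        best = max(best, float(volume[dist <= radius].mean()))
    return best

vol = np.zeros((8, 16, 16)); vol[4, 8, 8] = 10.0        # a single hot voxel
peak = suv_peak(vol, voxel_size_mm=(3.0, 3.0, 3.0))
print(round(peak, 3), round(recovery_coefficient(peak), 3))
```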
For this work, the PET/CT scanner used was the Siemens Biograph mCT 64 model. This device achieved EARL1 accreditation in 2017 [27]. The image reconstruction protocol that met accreditation criteria did not achieve the best resolution that the equipment could provide, underexploiting its capabilities. Therefore, in 2019, to achieve the best performance of this device, new parameters were followed, and recently, we renovated the annual EARL accreditation, which was called EARL2 [23].
The analysis of the images obtained for both studies was carried out with IDL Virtual Machine software [28] and required the calculation of RCs. In EARL1, these RCs were calculated from different values: SUVmean, which is the result of dividing the average SUV inside the volume by the total volume of the sphere, and SUVmax, which is obtained by dividing the maximum SUV inside the volume by the total volume. The new EARL2 accreditation also uses SUVpeak, which considers a spherical volume of 12 mm in diameter over the original volume to define the SUV corresponding to the highest uptake.
Previously, for the reconstruction of the EARL1 image, the iterative algorithm "ordered subset expectation maximization" (OSEM) was used, as well as a post-processing Gaussian filter with full width at half maximum (FWHM) equal to 6 mm. In the case of EARL2, the reconstruction algorithm applied was TrueX, based on the point spread function (PSF), and a Gaussian filter with FWHM = 5 mm. The time-of-flight (TOF) correction was applied for both accreditations. For EARL1, the PET image grid size was 3.1819 × 3.1819 × 5 mm³, and 1.5910 × 1.5910 × 1.5 mm³ for EARL2.
Reconstructions of the anthropomorphic phantom PET/CT image fit the requirements of EARL2 with several parameters, which were chosen so that the RCs appeared to be more centered between the limiting values established by the accreditation and therefore as close as possible to those of other accredited devices. The RCs obtained for EARL1 and EARL2 are shown in Figure 1. As expected, while in the EARL2 reconstruction most volumes showed very similar RCs (except the smallest), in the EARL1 reconstruction these RCs showed greater differences between them. As will be discussed later, these results could have some clinical influence when EARL1 reconstructions are used in clinical cases with lesions of various sizes.
Figure 1. SUV recovery coefficients for EARL1 (blue dots) and EARL2 (black dots) against the volume of each sphere for each chosen reconstruction. The lines show the accepted limits, in blue for EARL1 and in yellow for EARL2. SUVpeak was introduced in EARL2 and did not exist in EARL1.
Resampling Process
To calculate the dose in the clinical image, different interpolations in the PET and CT images are necessary. First, it is necessary to interpolate the size of the PET image with the CT grid size for the segmentation of the volumes of interest (VOI) in the resulting fused image. In the second step, the image must be interpolated to the dose calculation grid, which is selected as a compromise with the accuracy involved in each case, depending on the VOI sizes and expected dose gradient, fundamentally. For this work, we established the size of 256 × 256 pixels per axial slice. This resolution is higher than that usually considered in commercial treatment planning systems despite the higher computational time involved, but our planning system was developed to run on a multiprocessor platform to achieve solutions in a reasonable computational time while maintaining high precision, which was necessary for this work to make a fair comparison between the treatment plans.
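A minimal sketch of this resampling step using SciPy is shown below. The zoom call with order 0/1/3 stands in for the nearest-neighbour, linear, and spline interpolations assessed in this work; the array sizes and voxel spacings are illustrative examples rather than the actual clinical grids.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_grid(pet, pet_voxel_mm, target_voxel_mm, order=1):
    """Resample a PET volume onto a finer grid (e.g. the CT grid or the dose grid).
    order=0 is nearest-neighbour, order=1 trilinear, order=3 cubic spline;
    voxel sizes are (z, y, x) in mm."""
    factors = [p / t for p, t in zip(pet_voxel_mm, target_voxel_mm)]
    return zoom(pet, zoom=factors, order=order)

# Example: EARL1-like PET voxels (5 x 3.18 x 3.18 mm) resampled to EARL2-like spacing.
pet_earl1 = np.random.rand(40, 128, 128)
resampled = resample_to_grid(pet_earl1, (5.0, 3.1819, 3.1819), (1.5, 1.5910, 1.5910))
print(resampled.shape)
```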
For this work, a configuration of an experiment with an anthropomorphic phantom (CIRS 606 model) hosting known different volumes and SUV distributions was used to find the interpolation method that best provided actual volumes [10]. Two inserts with different volumes and filled with a radioactive solution of [ 18 F]FDG were used to simulate two tumors of different sizes. The first insert consisted of an Eppendorf tube of known volume V1 (0.3 mL, with an activity of 0.116 MBq of [ 18 F]FDG), and another identical tube called V2 (0.3 mL) was placed inside a cryovial tube V3 (2 mL, with an activity of 0.1 MBq). This configuration presented a scenario as generic as possible, to take into account different lesions ranging from very small volumes to larger ones, and with both homogeneous and heterogeneous activities. In this way, this proof-of-concept experiment allowed us to assess beforehand the relevant scenarios to be analyzed in order to reduce the actual cases needed for the comparison study.
The image acquisition of this set-up was carried out using both EARL protocols, and three typical 3D interpolation methods were studied: linear, nearest neighbors, and spline. Segmentation of the volumes of uptake was performed in each grid using an algorithm based on affine propagation [29]. The parameters associated with this algorithm were modified to generate all volumes to simultaneously achieve the set of values closest to the actual corresponding volumes. Once established, these parameters were kept constant in all subsequent reconstructions and interpolations. In the interpolation process from the PET grid to the CT grid, and from this CT grid to the dose calculation grid, a comparative analysis of the volumes generated by segmentation with the known volumes was performed to find the best method to represent the potential actual volumes in patients. In addition, the similarity in the shape and relative position of the volumes was evaluated. For this, the shape coefficient (SC) of each segmented volume was considered as the division between the intersection and the union of the obtained and actual volumes. Therefore, in the ideal case, SC = 1.
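The shape coefficient defined above is the intersection-over-union of the segmented and actual binary volumes; a short sketch with synthetic masks is given below.

```python
import numpy as np

def shape_coefficient(segmented_mask, actual_mask):
    """SC = |intersection| / |union| of the segmented and actual volumes (1 = ideal)."""
    inter = np.logical_and(segmented_mask, actual_mask).sum()
    union = np.logical_or(segmented_mask, actual_mask).sum()
    return inter / union if union else 0.0

a = np.zeros((10, 10, 10), bool); a[2:6, 2:6, 2:6] = True   # "actual" volume
b = np.zeros((10, 10, 10), bool); b[3:7, 3:7, 3:7] = True   # "segmented" volume
print(round(shape_coefficient(b, a), 3))                     # ~0.267
```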
Volume Segmentation Method and Radiotherapy Planning
Intensity-modulated radiation therapy (IMRT) plans were calculated for all cases from the segmentation and generation of planning target volumes (PTVs) and organs at risk (OARs). The whole planning process was carried out on the CARMEN platform [30], where image processing and segmentation were performed, and an accurate dose calculation was considered by MC simulation. For this work, a forward planning algorithm, such as BIOMAP [30] implemented on the CARMEN platform, was required, where a direct aperture of the MLC is based exclusively on segmented morphofunctional volumes in patient images [10]. Any change in volumes inherent to the followed accreditation along with the segmentation process had some distinguishable influence on the optimization process.
Therefore, the MLC apertures for the radiotherapy plans were generated considering only the data from the PET/CT images, regardless of the desired dose. The clinical impact on the treatments due to the different reconstructions of the PET image could be directly assessed on the dose calculated through this type of optimization. Treatment planning was conducted independently for the volumes obtained from both reconstructions.
The general procedure for tumor imaging acquisition was taken from EANM guidelines [24]. Since the main objective of our study was to observe the greater clinical impact associated with the establishment of one accreditation protocol or the other, certain conditions were forced into a routine clinical scenario, such as a stress test for both accreditations. In this sense, the differences found could be considered the largest expected in any scenario. Semi-automatic segmentation of metabolic active tumor volumes (MATVs) is recommended for each case by setting a threshold of 41% of the SUV maximum in the region of interest around each lesion. We adopted this single value for all lesions in each evaluated case, since the clinical routine could lead to not repeating the same process on the same image, which is usual in RT when the prescription is based exclusively on the morphological CT image. However, all cases presented higher tumor-to-background values and homogeneous backgrounds to consider 41% of the maximum SUV as defined in the EANM guidelines [31]. Subsequently, an affine propagation-based segmentation algorithm [29] was applied to differentiate heterogeneous areas within the same lesion. Next, the transition from the CT grid size (512 × 512 voxels per slice) to the dose calculation grid (256 × 256 voxels per slice) was carried out using the interpolation methods chosen after the previous study with the anthropomorphic phantom.
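A minimal sketch of the 41% SUVmax thresholding inside a region of interest is shown below, on synthetic data; the subsequent affine propagation-based refinement and the grid interpolations are not included here.

```python
import numpy as np

def matv_41_percent(suv, roi_mask):
    """Metabolically active tumour volume: voxels inside the region of interest
    whose SUV is at least 41% of the maximum SUV found in that region."""
    suv_max = suv[roi_mask].max()
    return roi_mask & (suv >= 0.41 * suv_max)

suv = np.random.rand(20, 64, 64) * 2.0                      # synthetic SUV volume
roi = np.zeros_like(suv, bool); roi[8:12, 30:40, 30:40] = True
matv = matv_41_percent(suv, roi)
print(int(matv.sum()), "voxels in the MATV")
```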
Clinical Cases
The study was performed in accordance with the ethical standards of our institutional research committee and with the Declaration of Helsinki of 1964 and its later amendments. Informed consent was obtained from all patients involved in the study.
All patients prior to [ 18 F]FDG-PET fasted overnight. The prescribed dose was in a range of 3-4 MBq/kg (resulting in 230 MBq per 70 kg patient). The patients were weighed prior to the study. CT was performed as necessary for respective EARL accreditation. For EARL1, the CT image had a size of 256 × 256 pixels at an 80 × 80 cm field of view with an axial slice thickness of 5 mm, and for EARL2, a size of 512 × 512 pixels at the same field of view and axial slice thickness. PET images were acquired following the protocols described above for EARL1 and EARL2 accreditations.
The clinical cases presented below were selected according to the differences between them regarding morphology, size, location, and number of lesions with the idea of covering a wide range of possible scenarios for the DP approach. In fact, the range of prescriptions for DP was previously considered in the study conducted with the anthropomorphic phantom [10]. Just like in the phantom, the different sizes considered in all clinical cases were relatively small, since, as it appears in Figure 1, no potential differences between accreditation protocols were expected for volumes larger than 5 mL.
Case 1
A case of head-and-neck cancer was evaluated with a single morphological lesion but with two regions in the PET image for different dose prescriptions. This was a cervical paraganglioma located next to the right parotid gland where two different size targets were considered according to the heterogeneity of [ 18 F]FDG uptake in the PET study with lesions ( Figure 2). The OARs considered in treatment planning were the spinal cord, the left and right parotid glands, the larynx, and the jaw. The planning objectives were that at least 90% of the internal target (PTV2) must receive 56 Gy, and at least 90% of the external target (PTV1) must receive 50 Gy.
Case 2
A case of lymphoma with a single lesion located in the right lung close to the sternum was selected ( Figure 3). As in the previous case, the lesion presented some heterogeneity in [ 18 F]FDG uptake, so, although this case presented only one morphological volume, two different regions with different dose prescriptions were distinguished thanks to functional information from PET study. Additionally, this case involved uncertainty in breathing movement for further discussion. The OARs considered in treatment planning were the left and right lung, the heart, the spinal cord, and the esophagus. The planning objectives were that at least 90% of the internal target (PTV2) must receive 36 Gy, and at least 90% of the external target (PTV1) must receive 30 Gy. Furthermore, a sub-volume was generated in the lung that contained the lesion to quantify the different doses delivered to the whole lung and the region closest to the tumor to assess the influence of breathing movement, since it could be an important aspect to consider when one or the other accreditation protocol should be chosen.
Case 3
Another case of lung lymphoma was studied. Unlike the previous cases, there were two disconnected lesions of different sizes, one in the right lung and the other in the left ( Figure 4). The OARs involved in this study were the same as in the previous case, and similarly, two auxiliary structures were generated in both lungs. For this case, the planning objectives were at least 90% of each target (PTV1 and PTV2) receiving 36 Gy.
Segmentation of PET Images from the EARL1 and EARL2 Reconstructions
The results obtained for the study of the interpolation procedure for each reconstruction method are presented in Tables 1 and 2. The volume values obtained after the two involved interpolations (from PET grid to CT grid, and from CT grid to dose calculation grid) are shown for the three assessed 3D interpolation methods, i.e., linear, nearest neighbors (NN), and spline, by considering all the possible combinations over both interpolation processes. Tables 3 and 4 show the SC of the volumes V1, V2, and V3 hosted in the phantom obtained for the different interpolation methods applied to each of the reconstructions involved in the accreditations EARL1 and EARL2, respectively. To consider the overall effect of each interpolation method, the coefficients obtained for all volumes in Tables 3 and 4 were multiplied. Table 5 shows these cumulative coefficients for each interpolation method in both accreditations throughout the resampling process.
Table 5. Shape coefficients obtained after the two consecutive interpolation processes (from the PET grid to the CT grid, and from the CT grid to the dose calculation grid) for each image reconstruction followed in EARL1 and EARL2 accreditations.
Planning of Clinical Cases under EARL1 and EARL2 Accreditations
An assessment of each volume of interest was carried out for both reconstructions following the optimal interpolation methods for each stage, as was presented in the previous section. The corresponding volume quantifications are presented in Table 6. The dose distributions and dose-volume histograms of the planning solutions for the three cases are shown in Figures 5-10. In Figures 5, 7 and 9, the isodose lines of the three cases under evaluation were calculated for the volumes generated with EARL1 accreditation (left) and with EARL2 (right) on an axial slice of the EARL2 reconstruction, considering this as the protocol capable of generating structures more similar to actual lesions. The two targets corresponding to the EARL1 and EARL2 accreditations appear in blue and light blue regions, respectively. In Figures 6, 8 and 10, the dose-volume histograms of the calculated dose for EARL1 (dashed lines) and EARL2 (solid lines) solutions are shown for the volumes corresponding to EARL2, in the three cases. In general, isodose lines showed a slightly better conformation for EARL2 and, therefore, significant differences were not present in the DVHs for targets in all cases, except for Case 1, as will be discussed later. More important differences were found for OARs, which are also discussed next.
Discussion
Although a limited number of cases were evaluated, sufficiently representative situations for assessing EARL accreditation impact were considered based on a proof-of-concept experiment designed on an anthropomorphic phantom. Otherwise, this study would have been difficult to carry out, since many factors are involved in the whole process of radiotherapy planning when PET/CT images are used for segmentation of target volumes.
In this proof-of-concept experiment, the volume deviations obtained (Tables 1 and 2) with respect to the actual known volumes V1, V2, and V3 hosted in the phantom showed that the EARL1 accreditation generated volumes deviating further from the actual values and always larger than those obtained with the EARL2 accreditation. The optimal interpolation procedure for each reconstruction was established as a compromise between the evaluation of the shape coefficients in Table 5 obtained after the interpolation process and the evaluation of the segmented volumes shown in Tables 1 and 2, obtained with each interpolation method. The methods chosen for EARL1 were spline for the 3D interpolation from the PET grid to the CT grid and linear from the CT grid to the dose calculation grid, and linear for EARL2 in both interpolation processes. These interpolation methods did not always show the best results considered individually, but their combination throughout the whole process provided more uniform values for the different volume sizes and an adequate and robust form factor. It is remarkable that, contrary to our expectations, the spline method was not the best selection. The diffuse character of the [ 18 F]FDG signal in the PET image does not sufficiently contribute to generating morphological irregularities to make the spline method more appropriate than others.
For the clinical cases, as expected, EARL1 always provided larger volumes than EARL2, and greater differences in size were found between both accreditations for smaller structures, consistent with the observations reported by Kaalep et al. [18]. Furthermore, cases involving lesions or targets of different sizes, where some of them were smaller than 5 mL, led to the worst result when the EARL1 accreditation was followed. On the other hand, the increase in the volume of the structures observed for the EARL1 accreditation caused an increase in the level of undesirable doses in the OARs and in the adjacent regions. In the case of moving targets such as the lungs, this increase may become more relevant, since the healthy tissue surrounds the target.
The plan calculated for the EARL1 accreditation and evaluated on the volumes generated with images from the EARL2 accreditation delivered a higher dose on the structures, as can be seen in Figures 6, 8 and 10. Planning for EARL1 covered the prescribed dose of both PTV1 and PTV2 from EARL2 reconstruction because EARL1 volume sizes were larger. The toxicity limits for OARs were not compromised with this overdose region around PTVs under EARL1 accreditation.
Unlike in Case 1, the therapeutic volumes of Cases 2 and 3 were surrounded by large OARs, the lungs, so the high dose administered in the vicinity of the lesion could be clinically relevant due to the risk of damaging healthy tissue. For this reason, although the DVHs did not show important differences in the doses to the PTVs generated with both accreditations, an overall increase in the dose to the OARs was caused with EARL1 planning (Figures 8 and 10). It is important here to consider the inherent uncertainty associated with breathing in lung cases to evaluate the real clinical impact. For this reason, the auxiliary structures were generated for planning to evaluate both the movement of the lesion due to respiration and the undesirable dose in the region surrounding the PTVs. The greatest differences were observed in these auxiliary volumes. It can be concluded that with the EARL2 reconstruction, it is possible to achieve the corresponding dose prescription with greater spatial accuracy than with the EARL1 reconstruction, and, therefore, to avoid delivering an additional low dose to the lung around the lesion. The use of EARL1 accreditation could be considered acceptable in these cases, as this spatial uncertainty is within the uncertainty associated with respiratory movement. However, in those radiotherapy techniques, such as stereotactic ablative radiation therapy (SABR), where an important escalation of the prescription dose is planned, we would recommend considering the breathing movement of the lesion and identifying all phases of the respiratory cycle by a segmentation procedure based on the EARL2 accreditation.
In the EARL2 reconstruction, the RCs were always higher than the corresponding ones in the EARL1 reconstruction ( Figure 1) and closer to 1. If these coefficients are less than 1, this implies that the maximum value of the SUV obtained after the reconstruction will be less than the value of the actual maximum SUV. This will cause a greater extension and diffusion and, therefore, an increase in volumes after reconstruction. On the contrary, in EARL2 accreditation, the RC values were higher than those in EARL1, and therefore, they generated smaller volumes, closely corresponding to the reconstructed structures. This is especially relevant for volumes less than 2.5 mL, as can be seen in Figure 1. Only Case 1 presented a PTV with this small size, and this case showed the most important differences in the DVHs of the corresponding PTVs for the two accreditations.
The differences between the EARL1 and EARL2 reconstruction settings were even more remarkable in Case 3 where multiple, unconnected lesions with different sizes and the same prescription were presented. Again, our results were consistent with observations of Kaalep et al. [18]. This could be related to the level of demand for the different voxel sizes in each accreditation. While in EARL1, the adjustment is approximately logarithmic, for EARL2 accreditation, this adjustment is practically linear, except for volumes smaller than 0.5 mL. Therefore, we could say that in the scenario of multiple lesions of different sizes for the same prescription dose, the EARL2 accreditation becomes more necessary to avoid a clinically relevant inaccuracy in the definition of target volumes from PET/CT images under EARL1 accreditation.
Conclusions
This work was aimed at answering the question of how necessary a new PET accreditation is when reconstructing therapeutic targets for radiotherapy planning. Generic analysis of sizes, shapes, and locations was established by using known volumes with [ 18 F]FDG inside a specific phantom to find out which interpolation method for image fusion and grids for dose calculation throughout the planning process was able to achieve the highest precision for defining different uptake volumes. This analysis concluded that the EARL2 accreditation generates smaller volumes and is more similar to the actual sizes than the EARL1 PET accreditation, providing an increase in accuracy that could be significant in the planning process and dose calculation, depending on the relative sizes of the targets in each case. To illustrate this, several clinical cases were chosen representing the most sensible scenario due to the sizes of the therapeutic target and were evaluated following both EARL accreditations, in order to observe the clinical impact.
In summary, we think the new EARL accreditation represents an advance in the reconstruction of the PET image for its implementation in the treatment planning process and subsequent monitoring. The volumes generated maintain greater precision when defining tumor lesions and their physiological extension, resulting in a clear advantage over the previous EARL reconstruction. For this reason, this work supports the application and propagation of the new EARL accreditation to achieve better accuracy when more personalized radiotherapy treatment and monitoring purposes are considered. However, it was verified that, under the previous EARL accreditation conditions, the targets that would be generated by the new EARL accreditation would receive the prescribed doses. Therefore, this work also helps reduce concerns in those centers that, having somewhat less advanced devices, need to continue working under previous accreditations and do not plan to implement ambitious techniques, such as SABR or the monitoring of cases with breathing movement.
Funding: Project P20_01053 funded by the European Union and the Junta de Andalucía through the European Regional Development Fund (FEDER).
Institutional Review Board Statement:
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the Declaration of Helsinki of 1964 and its later amendments or comparable ethical standards. This study was approved by the Ethics Committee of the Hospital Universitario Virgen Macarena de Sevilla for the study entitled: "Integración de la imagen PET/CT en una aplicación radioterápica de precisión y adaptative" with reference C.P. DP-PET/CT-C.I. 1958-N-20.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available within the article. Further data are available on request from the corresponding author. | 9,130.8 | 2022-04-01T00:00:00.000 | ["Physics", "Medicine"] |
Kinetic evaluation of a partially packed upflow anaerobic fixed film reactor treating low-strength synthetic rubber wastewater
A bench-scale model of a partially packed upflow anaerobic fixed film (UAF) reactor was set up and operated at five different hydraulic retention times (HRTs) of 17, 14, 10, 8, and 5 days. The reactor was fed with synthetic rubber wastewater with a chemical oxygen demand (COD) concentration of 6355–6735 mg/L. The results were analyzed using the Monod model, the Modified Stover-Kincannon model, and the Grau Second-Order model. The Grau Second-Order model was found to best fit the experimental data. The biokinetic constant values, namely the growth yield coefficient (Y) and the endogenous coefficient (Kd), were 0.027 g VSS/g COD and 0.1705 d−1, respectively. The half-saturation constant (Ks) and maximum substrate utilization rate (K) returned values of 84.1 mg/L and 0.371 d−1, respectively, whereas the maximum specific growth rate of the microorganism (μmax) was 0.011 d−1. The constants, Umax and KB, of the Stover-Kincannon model produced values of 6.57 g/L/d and 6.31 g/L/d, respectively. Meanwhile, the average second-order substrate removal rate, ks(2), was 105 d−1. These models gave high correlation coefficients (R2 = 80–99%), indicating that they can be used in designing UAF reactors and in predicting the behaviour of the reactor.
Introduction
Anaerobic digestion has been used for decades as a method for treating industrial and agricultural waste. Anaerobic digestion has many advantages, the most important of which is that it can achieve both pollution control and energy recovery. The anaerobic digester must be designed to perform effectively so that it does not encounter problems such as process instability or low methane yield.
Previous studies have improved the design of biological wastewater treatment reactors mainly by focusing on retaining the biomass within the reactor (Tay et al., 2006). A high-rate anaerobic reactor such as the upflow anaerobic filter (UAF) is one of the earlier designs with well-defined characteristics and operational parameters (Saravanan and Sreekri, 2006). At high loading rates, the continuous operation of packed upflow anaerobic filters may cause clogging (Escudié et al., 2005). Therefore, low-density floating media were introduced as a novel solution to overcome this problem. This solution includes employing a kinetic model to model the design, operation, and optimization of a full-scale plant (Rajagopal et al., 2013).
A better understanding of the microbiology of an anaerobic digester and the process modifications, particularly fixed-film processes, has allowed anaerobic digesters to be used for dilute wastewaters and a large variety of industrial wastes. The development of the fixed-film filter is a significant achievement in anaerobic technology. The filter provides a relatively long solid retention time (SRT). Increased retention time makes it possible to treat moderate to low strength soluble organic industrial waste with a COD concentration of 2000-20,000 mg/L.
With the development of a mathematical model, the dynamic behavior of a process can be better understood. Furthermore, a kinetic model serves as a useful tool for understanding the underlying biological and transport mechanisms within a reactor (Acharyaa et al., 2011). Knowledge concerning the kinetic microbial growth rate, the substrate utilization rate, the limiting substrates or nutrients that affect the growth of cells, and the endogenous decay or death rate of microorganisms in the system is essential to ensuring the effective growth control and the proper balance of biomass in the system (Contreras et al., 2001).
The constants that are determined from the kinetic equation are called bio-kinetic coefficients or growth constants. These kinetic constants describe and predict the performance of the system. The biokinetic constants depend on the type of microbial species and the environmental conditions such as pH, temperature, dissolved oxygen, nutrients, inhibitory substances, and the degradability of the organic substrates in wastewater.
To date, kinetic modeling has been applied in a simplified form such that only a few parameters are involved, to make the model easier to monitor and apply for industrial purposes and to determine the kinetic coefficients (Rajagopal et al., 2013). However, limited information is available on the process kinetics of substrate removal for low-strength synthetic rubber wastewater treated in an upflow anaerobic fixed film (UAF) reactor.
In this study, a partially packed upflow anaerobic fixed film (UAF) reactor was operated at different COD loading rates under ambient temperature conditions (28 °C–32 °C) in order to determine the kinetic constants involved in the process using kinetic models such as the Monod model, the Stover-Kincannon model, and the Grau Second-Order model. The last part of this study compares the bio-kinetic coefficients with those of previous studies.
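As a small illustration of how such constants can be used once fitted, the sketch below predicts effluent COD with the Stover-Kincannon model using the Umax and KB values reported for this reactor (6.57 and 6.31 g/L/d). The influent COD of 6.5 g/L is an assumed round value within the feed range, and the calculation is illustrative rather than part of the original analysis.

```python
def stover_kincannon_effluent(S0_g_per_L, HRT_d, Umax=6.57, KB=6.31):
    """Predict effluent substrate concentration with the Stover-Kincannon model:
        (S0 - Se)/HRT = Umax * (S0/HRT) / (KB + S0/HRT)
    Umax and KB (both g/L/d) are the constants reported for this reactor."""
    loading = S0_g_per_L / HRT_d               # organic loading rate, g COD/L/d
    removed = Umax * loading / (KB + loading)  # substrate removal rate, g/L/d
    return S0_g_per_L - removed * HRT_d        # effluent concentration, g/L

for hrt in (17, 14, 10, 8, 5):
    Se = stover_kincannon_effluent(6.5, hrt)
    print(f"HRT = {hrt:2d} d -> predicted effluent COD ~ {Se * 1000:.0f} mg/L")
```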
Experimental setup
The UAF reactor used in this study is shown in Figure 1. The reactor has five main pipes: the feed inlet pipe, the effluent outlet pipe, the recycle pipe, the gas pipe, and the sludge outlet pipe. The column was made of Plexiglas with an effective volume of 7.0 L, an internal diameter of 15 cm, and an effective height of 50 cm. All experiments were performed at ambient temperature (28 °C–32 °C) with no temperature control. The packing consisted of tubular polyvinylchloride (PVC) media, 10 mm in height and 10 mm in diameter, with a density of 0.96 g/cm³ and a specific surface area of 850 m²/m³. The UAF reactor was packed with 3116 pieces of media, corresponding to about 40% of the active volume of the reactor. The packing media floated against a fixed screen (weir coil) at a height of 39.5 cm, placed 6.5 cm from the bottom of the anaerobic filter. To distribute the feed uniformly, an influent liquid distributor was mounted at the base of the column. The substrate was then fed continuously to the reactor through the base using a peristaltic pump (Cole Parmer, Masterflex L/S).
Biogas production was monitored daily until gas production became negligible. A 3 L Tedlar bag was used for daily collection of biogas through a valve mounted at the upper part of the digester. Biogas production was measured by the water displacement method, recording the difference between the initial and final readings of the water level in a measuring cylinder after feeding the digester. The reactor was fed from the bottom and the effluent was collected from an outlet at the top of the reactor.
Feed solution and digested sludge
The experiment was started by pumping about 0.5 L of effluent daily at an initial loading rate of 0.1 g COD/L/d and a COD of 1.3 g/L. The loading was then gradually increased up to 0.4 g COD/L/d. Start-up of the reactor took about 30 days, during which the food-to-microorganism ratio and biomass content were monitored. After the reactor reached more than 80% COD removal, the HRT was progressively reduced. The HRT was changed only once the reactor had hydraulically reached an approximately steady-state condition, which was assumed when fairly constant biomass growth and permeate COD were attained. In practice, steady state was taken as an effluent COD concentration with a standard deviation of less than ±10% over the last five consecutive operating days, as considered by Kapdan and Erten (2007). The average values obtained from the bench-scale reactor under different hydraulic retention times and organic loading rates are presented in Table 1, and the feed solution characteristics are presented in Table 2. The wastewater had an average COD/N/P ratio of about 275/10/1, indicating a sufficient amount of nutrients. A mixture of digested sludge obtained from an anaerobic pond of the Malaysian Rubber Development Corporation (MARDEC) Berhad, Mentakab, Pahang, was used for seeding. The digested sludge contained 633,545 mg/L total solids (TS) and 83,245 mg/L volatile solids (VS), with a pH ranging from 6.62 to 6.92. Before 0.85 L of the mixture was loaded into the reactor, it was passed through a screen to remove any debris. The reactor was then left for one week to allow the sludge to stabilize.
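As a small illustration of the steady-state criterion just described, the following Python sketch checks whether the last five effluent COD readings fall within ±10% of their mean; the values used here are hypothetical placeholders rather than measurements from this study.

```python
import numpy as np

# Hypothetical effluent COD readings (mg/L) for the last five consecutive
# operating days; the real values would come from the reactor operating log.
last_five_cod = np.array([1480.0, 1510.0, 1495.0, 1525.0, 1470.0])

# Steady state is assumed when the spread of the last five effluent COD values
# stays within +/-10 % of their mean (criterion of Kapdan and Erten, 2007).
relative_spread = last_five_cod.std(ddof=1) / last_five_cod.mean()
steady_state = relative_spread < 0.10
print(f"relative spread = {relative_spread:.1%}, steady state: {steady_state}")
```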
Analytical procedure
The pH, chemical oxygen demand (COD), total suspended solids (TSS), volatile suspended solids (VSS), nitrogen, phosphorus, and alkalinity were analysed according to the methods described in Standard Methods for the Examination of Water and Wastewater (APHA, 1992). The COD was measured using a Hach DR 2010 spectrometer and a Hach COD reactor following the instructions provided for the Hach higher-range test. The biogas composition was measured using a GA 5000 Geotech gas analyzer. All tests were performed in duplicate to obtain a consistent average. All analyses were undertaken at an ambient room temperature of 28 ± 2 °C.
Kinetic model application
The bio-kinetic coefficient was determined using a laboratory-scale study of the UAF reactor. The efficiency of the model reactor was evaluated based on its COD removal efficiency. In this study, the Monod, modified Stover-Kincannon, and Grau Second-Order models were applied using data obtained from the reactor operation.
Monod model
In a biological treatment system, the rate of increase in biomass is directly proportional to the biomass concentration in the reactor; the proportionality factor is known as the specific growth rate constant. The linearised Monod relation used here can be written as

θX/(Si − Se) = (Ks/K)(1/Se) + 1/K, (1)

where θ is the hydraulic retention time (d); X the biomass concentration in the reactor (g VSS/L); Si the influent substrate concentration (g/L); Se the effluent substrate concentration (g/L); Ks the half-velocity constant (g/L); and K the maximum substrate utilization rate (d−1). The yield coefficient, Y, is used to estimate the total amount of sludge produced as a result of wastewater treatment (Enitan and Adeyemo, 2014). It is defined as the mass of new cells produced per unit of substrate utilized or removed by the microorganisms in the treatment system, and the corresponding mass-balance relation is

1/θ = Y (Si − Se)/(θX) − Kd, (2)

where Y is the yield coefficient (g VSS/g COD) and Kd is the endogenous decay (death) rate constant (d−1).
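A minimal Python sketch of how Eqs. (1) and (2) are used in practice is shown below: both are fitted by ordinary linear regression, with K and Ks recovered from the first fit and Y and Kd from the second. The data arrays are hypothetical placeholders standing in for the steady-state values of Table 1, and the sign convention assumes Kd is reported as a positive decay rate.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical steady-state data for each HRT run (placeholders for Table 1)
theta = np.array([4.0, 3.0, 2.0, 1.5, 1.0])       # HRT, d
X     = np.array([2.1, 2.3, 2.4, 2.6, 2.8])       # biomass, g VSS/L
Si    = np.array([6.2, 6.4, 6.0, 6.5, 6.3])       # influent COD, g/L
Se    = np.array([0.55, 0.70, 0.90, 1.20, 1.60])  # effluent COD, g/L

# Eq. (1): theta*X/(Si - Se) = (Ks/K)*(1/Se) + 1/K
fit1 = linregress(1.0 / Se, theta * X / (Si - Se))
K  = 1.0 / fit1.intercept          # maximum substrate utilization rate, 1/d
Ks = fit1.slope * K                # half-velocity constant, g/L

# Eq. (2): 1/theta = Y*(Si - Se)/(theta*X) - Kd
fit2 = linregress((Si - Se) / (theta * X), 1.0 / theta)
Y  = fit2.slope                    # yield coefficient, g VSS/g COD
Kd = -fit2.intercept               # endogenous decay coefficient, 1/d

mu_max = K * Y                     # maximum specific growth rate, 1/d
print(K, Ks, Y, Kd, mu_max)
```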
The maximum specific growth rate of the bacteria, μmax, is related to the maximum specific substrate utilization rate; this growth occurs when the maximum substrate utilization equals the maximum rate of bacterial growth. The constant μmax indicates the maximum growth rate of the microorganisms when the substrate is being used at its maximum rate (Bhunia and Ghangrekar, 2008). The relation linking substrate removal with the specific growth rate of the bacteria is

μmax = K Y,

where μmax is the maximum specific growth rate of the bacteria (d−1), K is the maximum substrate utilization rate (d−1), and Y is the yield coefficient (g VSS/g COD).
Modified Stover-Kincannon Model
The modified Stover-Kincannon model has been successfully applied to rotating biological contactor systems and biofilm reactors since the original study of Stover and Kincannon (1982). Its special feature is that the substrate utilization rate is expressed as a function of the organic loading rate at steady state, so the removal of organic substrate in the anaerobic filter can be determined from the substrate removal rate as a function of substrate concentration. At steady state, the Stover-Kincannon model takes the form

dS/dt = Umax (Q Si / V) / (KB + Q Si / V),

which in linear form simplifies to

V / (Q (Si − Se)) = (KB / Umax) (V / (Q Si)) + 1 / Umax, (4)

where dS/dt is the substrate removal rate (g/L/d); Q the inflow rate (L/d); V the reactor volume (L); Si the influent substrate concentration (g/L); Se the effluent substrate concentration (g/L); Umax the maximum utilization rate constant (g/L/d); and KB the saturation value constant (g/L/d).
When written in terms of θ and its relationship with the OLR, the equation above becomes

θ / (Si − Se) = (KB / Umax) (θ / Si) + 1 / Umax,

where θ is the hydraulic retention time (d); Si the influent substrate concentration (g/L); Se the effluent substrate concentration (g/L); Umax the maximum utilization rate constant (g/L/d); and KB the saturation value constant (g/L/d).
By plotting V/(Q(Si − Se)), the inverse of the removal rate, against V/(Q Si), the inverse of the total loading rate, a straight line is produced with intercept 1/Umax and slope KB/Umax.

Grau Second-Order model

Grau et al. (1975) derived the general form of the second-order kinetic model,

−dS/dt = k2(S) X (Se/Si)²,

where −dS/dt is the substrate removal rate (g/L/d); k2(S) is the second-order substrate removal rate constant (d−1); Si is the influent substrate concentration (g/L); Se is the effluent substrate concentration (g/L); and X is the biomass concentration in the reactor (g VSS/L). This equation can be simplified and linearized: (Si − Se)/Si is the substrate removal efficiency (E), and the second term on the right-hand side of the integrated equation is a constant, so the model can be written in the linear form

θ/E = a + b θ, (8)

where a = Si/(k2(S) X) and b is a constant greater than unity. The kinetic constants 'a' and 'b' can be determined by plotting HRT/E versus HRT.
Results and discussion
The reactor was operated at five hydraulic retention times (HRTs) for about 350 days of operation. The feasibility results of UAF in treating synthetic rubber wastewater are presented with the organic loading rate varied from 0.5-1.3 g COD/L/day to assess the performance of the UAF reactor (Ismail and Suja, 2019). From the experimental results, the bio-kinetic coefficients obtained using the Monod, Stover-Kincannon and Grau Second-Order models were evaluated.
Kinetic analysis using the Monod model
The Monod equation mathematically describes the relationship between the growth rate and the substrate concentration using the maximum possible growth rate. Based on Eq. (1), the kinetic coefficients Ks and K can be determined from the experimental results by plotting θX/(Si − Se) versus 1/Se. Figure 2a shows the straight line obtained from curve-fitting the graphical data for the kinetic analysis.
Eq. (2) can be used to estimate the Kd and Y values through a linear regression of 1/θ against (Si − Se)/(θX). The intercept of this line gives Kd, whereas Y is the slope of the straight line passing through the plotted points, as shown in Figure 2b.
Using this model, the bio-kinetic coefficients obtained are as follows: the maximum substrate utilization rate constant K = 0.371 d−1; the half-velocity (saturation) constant Ks = 0.0841 g/L; the endogenous decay coefficient Kd = 0.1705 d−1; the yield coefficient Y = 0.0297 mg VSS/mg COD; and the maximum specific growth rate of bacteria μmax = 0.011 d−1 (consistent with μmax = K Y = 0.371 × 0.0297 ≈ 0.011 d−1). The coefficients of determination obtained with this model were fairly high, R² = 0.8–0.9.
The value of Ks estimated by the model (84.1 mg/L) was far from the K value (0.371 d−1). This condition is favorable, as the process efficiency will not be reduced when the OLR increases, as pointed out by Ahn and Foster (2000). Previous studies have proposed that a higher Ks value results in higher biodegradability of the substrates (Ahmadi et al., 2015). The value of K is an indicator of the ability of microorganisms to degrade the substrate present in the waste and to produce methane (Enitan and Adeyemo, 2014). A high K value indicates that it is significantly difficult to convert organic matter to methane inside the reactor (Fdez-Güelfo et al., 2012). In addition, the K value can be used to estimate the biomass concentration in the UAF, since it is very difficult to measure the biomass concentration on the support media of an anaerobic reactor directly (Bhunia and Ghangrekar, 2008).
Meanwhile, a large K d value was obtained from the graph, indicating that the net sludge volume produced or to be handled was high.
Kinetic analysis using the Stover-Kincannon Model
The Stover-Kincannon model expresses the substrate utilization rate as a function of the organic loading rate in a biofilm reactor (Sentürk et al., 2010). In the modified version of the model, the reactor volume is used instead of the reactor surface area (Ahn and Foster, 2000). This model gives a high correlation compared with other models and has been widely used to determine the bio-kinetic coefficients of contact-growth systems.
Using the data in Table 3, a graph was plotted as shown in Figure 3. The experimental data were plotted according to the linearized steady-state equation, Eq. (4), and a high correlation (R² = 0.9989) was obtained. The intercept 1/Umax and slope KB/Umax were 0.1521 and 0.9597, respectively, giving a maximum removal rate constant Umax of 6.57 g/L/d and a saturation value constant KB of 6.31 g/L/d. The value of KB was low, indicating that the UAF has a low potential for coping with high-strength wastewater (Sentürk et al., 2010). The closeness of Umax and KB indicates that the process efficiency will decrease as the organic loading rate increases, as reported by Ahn and Foster (2000).
By substituting the values of KB and Umax into the model, the effluent COD concentration, Se, can be predicted.
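The sketch below illustrates both steps of the Stover-Kincannon analysis: recovering Umax and KB from the intercept (1/Umax) and slope (KB/Umax) of the linear fit, and then predicting Se using the usual rearrangement of the model with the organic loading rate written as Si/θ. The rearranged expression and the example operating point are assumptions for illustration, not necessarily the paper's working equation.

```python
def stover_kincannon_constants(intercept=0.1521, slope=0.9597):
    """Recover U_max and K_B from the linearised fit
    (intercept = 1/U_max, slope = K_B/U_max)."""
    U_max = 1.0 / intercept          # ~6.57 g/L/d
    K_B = slope * U_max              # ~6.31 g/L/d
    return U_max, K_B

def predict_effluent_cod(Si, theta, U_max, K_B):
    """Predict effluent COD (g/L) with the usual rearrangement
    Se = Si - U_max*Si/(K_B + Si/theta), where Si/theta is the organic
    loading rate (g/L/d) and theta is the HRT in days (a sketch of the
    standard form, not the authors' exact equation)."""
    return Si - (U_max * Si) / (K_B + Si / theta)

U_max, K_B = stover_kincannon_constants()
# Hypothetical operating point: 6.3 g/L influent COD at a 2-day HRT
print(round(U_max, 2), round(K_B, 2),
      round(predict_effluent_cod(6.3, 2.0, U_max, K_B), 2))
```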
Kinetic analysis using the Grau Second-Order kinetic model
By plotting HRT/E versus HRT, as shown in Figure 4, a straight line with an R² value of 0.9994 is produced. The line yields the kinetic constants 'b' and 'a' with values of 0.918 and 0.9619, respectively. The second-order substrate removal rate constant k2(S) (d−1) was derived from the linear equation, Eq. (8), using a = Si/(k2(S) X) for the UAF, and is listed in Table 4.
The effluent substrate COD concentration can be predicted by rearranging Eq. (9). Table 5 summarizes the substrate removal kinetic constants reported in the literature for different types of reactors and wastewaters. Many researchers have arrived at different values of the kinetic constants using various substrates and reactors.
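A corresponding sketch for the Grau second-order prediction follows from Eq. (8): since θ/E = a + bθ with E = (Si − Se)/Si, the effluent concentration is Se = Si(1 − θ/(a + bθ)). The rearrangement and the example inputs below are illustrative assumptions; the fitted a and b are the values reported above.

```python
def grau_effluent_cod(Si, theta, a=0.9619, b=0.918):
    """Effluent COD from the Grau second-order model: removal efficiency
    E = theta/(a + b*theta), so Se = Si*(1 - E)."""
    E = theta / (a + b * theta)
    return Si * (1.0 - E)

# Hypothetical operating point: 6.3 g/L influent COD at a 2-day HRT
print(round(grau_effluent_cod(6.3, 2.0), 2))
```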
Evaluation of kinetic models in UAF reactor
Based on the Monod model, the Y and μmax values determined in this study, 0.0297 mg VSS/mg COD and 0.011 d−1 respectively, are fairly close to those of Bhunia and Ghangrekar (2008), who reported Y = 0.083 mg VSS/mg COD and μmax = 0.058 d−1 for synthetic wastewater with a COD concentration of 300–400 mg/L in a UASB reactor. Alphenaar (1994) determined a similar yield coefficient, Y = 0.03 mg VSS/mg COD, for a volatile fatty acid mixture, but a larger μmax (0.51 d−1). Yousefzadeh et al. (2017) reported higher μmax values of 0.151–0.176 d−1 for diethyl phthalate removal using an anaerobic fixed-film baffled reactor (AnFFBR) and an upflow anaerobic fixed-film fixed-bed reactor (UAnFFFBR). Meanwhile, the Kd obtained in this study (0.1705 d−1) is larger than the 0.006 d−1 of Bhunia and Ghangrekar (2008), and the Ks value of 84.1 mg/L also differs from values reported by other researchers.
When applying the Stover-Kincannon model, the kinetic constants were found to be closest to those of Raja Priya et al. (2009), who reported slightly lower Umax and KB values of 3.4 g/L/d and 4.6 g/L/d, respectively, for formaldehyde-containing wastewater in a UASB, compared with the Umax of 6.57 g/L/d and KB of 6.31 g/L/d obtained in this study using the UAF. The remaining studies report values far from those obtained here. For instance, Rajagopal et al. (2013) reported Umax and KB values of 109.9 g/L/d and 109.7 g/L/d for fruit-canning wastewater, and 53.5 g/L/d and 49.7 g/L/d for cheese dairy wastewater, using a UAF packed with low-density polyethylene media filling about 80% of the active volume of the reactor. Higher Umax and KB values indicate that the microbial community achieved good biodegradation of the substrates and consequently stabilized the COD in the reactor (Yousefzadeh et al., 2017).
Similarly, the kinetic constants obtained using the Grau second-order model differ considerably from those of other kinetic studies, as shown in Table 5. Rajagopal et al. (2013) reached a similar conclusion: regardless of the substrate concentration, substrate removal rates depend mainly on the nature of the substrate, the microorganisms living in the reactor, and the reactor configuration. Kinetic parameters for high-rate reactors such as fixed-bed reactors are apparent values, as they embody all the mass transfer effects. As shown in Table 4, the substrate removal rate constant k2(S) clearly decreased as the HRT decreased, even as the microbial community in the reactor increased.
In conclusion, the kinetic coefficients obtained in this study show good agreement with all of the models applied. Thus, the results of the kinetic studies obtained from lab-scale experiments can be used in the design of a UAF with partially packed media and for estimating the treatment efficiency of full-scale reactors treating low- to medium-strength wastewater.
Conclusion
The performance of the UAF in treating synthetic rubber processing wastewater with a COD concentration of 5900–6600 mg/L was evaluated at different HRTs and OLRs. All kinetic models were found capable of describing the bio-kinetic behavior of the UAF reactor with good correlation. The kinetic coefficients derived for this wastewater treatment in a UAF partially packed with PVC support media agree particularly well with the Stover-Kincannon and Grau second-order models. Future research should ensure careful selection of the inoculum, since the optimum inoculum-to-substrate ratio depends on the source of the inoculum. Inoculums from different sources have different metabolic activities, so the optimum ratio required for anaerobic digestion of a particular feed may vary between inoculum sources. Using the same inoculum is therefore proposed so that comparable results can be obtained.
Declarations
Author contribution statement: NOR FAEKAH I.: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. FATIHAH S.: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data.
"Chemistry",
"Engineering"
] |
Finding exclusively deleted or amplified genomic areas in lung adenocarcinomas using a novel chromosomal pattern analysis
Background: Genomic copy number alterations (CNAs) that are recurrent across multiple samples often harbor critical genes that can drive either the initiation or the progression of cancer. To date, most researchers investigating recurrent CNAs have considered separately the marginal frequencies of copy gain and loss and have selected areas of interest based on arbitrary cut-off thresholds of these frequencies. In practice, these analyses ignore the interdependencies between a clone's propensity to be deleted or amplified. In this context, a joint analysis of the copy number changes across tumor samples may bring new insights into patterns of recurrent CNAs. Methods: We propose to identify patterns of recurrent CNAs across tumor samples from high-resolution comparative genomic hybridization microarrays. Clustering is achieved by modeling the copy number state (loss, no change, gain) as a multinomial distribution with probabilities parameterized through a latent class model, leading to nine patterns of recurrent CNAs. This model gives us a powerful tool to identify clones with contrasting propensities of being deleted or amplified across tumor samples. We applied this model to a homogeneous series of 65 lung adenocarcinomas. Results: Our latent class model analysis identified interesting patterns of chromosomal aberrations. About thirty percent of the genomic clones were classified as either "exclusively" deleted or "exclusively" amplified recurrent CNAs and can be considered non-random chromosomal events. Most of the known oncogenes or tumor suppressor genes associated with lung adenocarcinoma are located within these areas. We also describe genomic areas of potential interest and show that an increase in the frequency of amplification in these particular areas is significantly associated with poorer survival. Conclusion: Analyzing deletions and amplifications jointly through our latent class model highlights specific genomic areas with exclusively amplified or deleted recurrent CNAs, which are good candidates for harboring oncogenes or tumor suppressor genes.
Background
Chromosomal instability plays an important role in carcinogenesis, with numerical and structural genomic alterations leading to selective growth advantages [1]. In recent years, high-resolution array comparative genomic hybridization (aCGH) has replaced conventional metaphase CGH as the standard protocol for identifying segmental copy number alterations across the whole genome. The classical strategy of the aCGH technique is to co-hybridize genomic DNA from a cancer sample (labelled with one fluorochrome) with genomic DNA from a normal reference sample (labelled with a different fluorochrome) to the aCGH targets. These targets correspond to chosen genomic clones or non-overlapping oligonucleotides of different lengths that are spotted or directly synthesized onto the solid support. In practice, the distribution and length of the spotted array elements determine the detection sensitivity to various alteration sizes, with some recent platforms being able to detect alteration sizes of less than 100 kb [2].
In clinical cancer research, large collections of tumor samples are currently being analyzed using aCGH experiments. After assessing regions with copy gains or losses within each individual sample, the main challenge is to identify genomic areas where amplifications or deletions are recurrent across tumor samples and hypothesized to harbour oncogenes or tumor suppressor genes of interest. More precisely, the challenge is to distinguish between "bystander" and "driver" chromosomal aberrations, these latter changes conferring biological properties to the tumor that allow it to proliferate.
In order to identify these functionally and potentially clinically important chromosomal changes, classical approaches focus on loss and gain as separate cases and select aberrations that are deemed significant using ad hoc frequency thresholds or permutation-based methods [3][4][5]. A shortcoming of these methods is that they analyze copy loss and copy gain as separate events without jointly considering the chromosomal propensity for deletions and amplifications. However, genomic areas harboring oncogenes should exhibit a high frequency of amplification together with a low frequency of deletion, and areas harboring tumor suppressor genes the converse. Thus, the ability to identify these "driver" chromosomal aberrations should be improved by modeling jointly the occurrence of deletions and amplifications across the tumor samples.
To achieve this, we propose a novel strategy to identify patterns of recurrent copy number alteration (CNA) based on a latent class model framework. Here, a pattern is considered to be a model-based representation of a clone's propensity for exhibiting chromosomal aberrations (deletion and amplification) in a specific disease entity. Based on these patterns, we highlight genomic areas having the highest frequency of amplification together with the lowest frequency of deletion (so-called exclusively amplified CNAs) and vice versa (so-called exclusively deleted CNAs). A case study investigating CNAs in a homogeneous series of sixty-five early stage lung adenocarcinomas using 32K BAC arrays is analyzed to demonstrate the interest of this approach. In particular, we identify regions exhibiting a high rate of amplification together with a low rate of deletion that are likely to confer a selective advantage and probably harbor one or several oncogenes. We also analyse the potential impact of an accumulation of such chromosomal aberrations on patients' outcomes.
Data and preprocessing
The dataset considered in this study is based on a homogeneous series of 65 patients with stage IB lung adenocarcinomas (excluding large cell carcinomas) who underwent surgery (AP-HP, France). This study was approved by the Hôtel-Dieu hospital ethics committee. DNA was extracted from frozen sections using the Nucleon DNA extraction kit (BACC2, Amersham Biosciences, Buckinghamshire, UK), according to the manufacturer's procedures. For each tumor, two micrograms of tumor and reference genomic DNAs were directly labeled with Cy3-dCTP or Cy5-dCTP respectively and hybridized onto aCGH arrays containing 32,000 DOP-PCR amplified overlapping BAC genomic clones (average size of 200 kb) providing tiling coverage of the human genome. Hybridizations were performed using a MAUI hybridization station, and after washing, the slides were scanned on a GenePix 4000B scanner. For this analysis, we only considered BAC genomic clones mapping to autosomal chromosomes. The aCGH signal intensities were normalized using a two-channel microarray normalization procedure. For each sample, inferences about the copy number status of each BAC clone were obtained using the CGHmix classification procedure [6]. In practice, we compute the posterior probabilities of a clone belonging to one of the three defined genomic states (loss, modal/unaltered and gain copy state) from a spatial mixture model framework. Then, we assigned each clone to one of the two altered copy-number states (loss or gain) if its corresponding posterior probability was above a defined threshold value; otherwise the clone was assigned to the modal/unaltered copy state. This threshold value was selected to obtain the same false discovery rate of 5% for each sample. Here, a false discovery corresponds to a clone incorrectly defined as amplified or deleted by our allocation rule.
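One common way to implement the allocation rule just described is to threshold the posterior probabilities and tune the threshold so that the expected (Bayesian) false discovery rate among the called clones is 5%. The sketch below uses simulated posterior probabilities in place of the CGHmix output; the exact rule used in the study may differ.

```python
import numpy as np

# Simulated posterior probabilities per clone: columns = (loss, modal, gain)
rng = np.random.default_rng(2)
post = rng.dirichlet([1, 6, 1], size=1000)

def allocate(post, threshold):
    """Call a clone lost/gained only if that posterior exceeds the threshold."""
    calls = np.full(len(post), "modal", dtype=object)
    calls[post[:, 0] > threshold] = "loss"
    calls[post[:, 2] > threshold] = "gain"
    return calls

def expected_fdr(post, threshold):
    """Expected FDR = mean posterior probability of NOT being in the called
    state, averaged over the clones called lost or gained."""
    best = np.maximum(post[:, 0], post[:, 2])
    called = best > threshold
    return float(np.mean(1.0 - best[called])) if called.any() else 0.0

# Scan thresholds and keep the smallest one with expected FDR <= 5 %
for t in np.linspace(0.5, 0.99, 50):
    if expected_fdr(post, t) <= 0.05:
        break
print("threshold:", round(t, 3), "expected FDR:", round(expected_fdr(post, t), 3))
```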
Model
Let Y_i denote the three-dimensional random variable which records the number of deletions, amplifications and modal copies observed for genomic clone i (i = 1, ..., I) over the sample set of n tumors. Let L_i be an unobserved (latent) categorical allocation variable taking the values 1, ..., K with probabilities w_1, ..., w_K, respectively. Here, L_i indicates the index of the class to which genomic clone i belongs. These classes are a convenient representation for describing CNA patterns in terms of their propensity for amplification and deletion. The class variable is not observed and hence said to be latent. As seen below, we consider a latent class model with three levels (low, medium, high) for both amplification (j = 1, 2, 3) and deletion (j* = 1, 2, 3), leading to nine latent classes (K = 9).
For a genomic clone i belonging to class k = (j, j*), we assume that Y_i follows a multinomial distribution (here a trinomial distribution) whose conditional response probabilities for the loss copy state (deletion), gain copy state (amplification) and modal copy state are parameterized by latent class parameters specific to the deletion level j* and the amplification level j. Given these probabilities, the conditional distribution of Y_i given L_i = k is multinomial with index n and the class-specific probability vector. Thus, we have implicitly assumed that any dependence of copy number anomalies between clones is captured by the latent class structure. It follows that the marginal distribution of Y_i comes from a mixture model, Pr(Y_i) = Σ_k w_k Pr(Y_i | L_i = k), where the quantities w_k ≡ Pr(L_i = k) are the mixing proportions or weights, with 0 ≤ w_k ≤ 1 and Σ_k w_k = 1. For identifiability, ordering constraints are imposed on the deletion and amplification parameters across their levels.
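A minimal numerical sketch of the mixture just described is given below: for one clone, the counts of deletions, modal copies and amplifications over the n = 65 tumours are scored against nine multinomial components, and the posterior class probabilities follow from Bayes' rule. The weights and class probabilities used here are illustrative placeholders, not the fitted values.

```python
import numpy as np
from scipy.stats import multinomial

n = 65
w = np.full(9, 1.0 / 9)                          # mixing proportions, sum to 1
# (loss, modal, gain) probabilities for each of the nine illustrative classes
p = np.array([[d, 1.0 - d - a, a]
              for d in (0.05, 0.17, 0.34)        # low / medium / high deletion
              for a in (0.03, 0.15, 0.30)])      # low / medium / high amplification

y_i = np.array([11, 44, 10])                     # counts for one clone (sum = n)

# Mixture: Pr(y_i) = sum_k w_k * Multinomial(y_i | n, p_k); posterior by Bayes' rule
comp = np.array([multinomial.pmf(y_i, n, p_k) for p_k in p])
posterior = w * comp / np.sum(w * comp)          # Pr(L_i = k | y_i)
print(posterior.round(3), "-> Bayes allocation: class", posterior.argmax() + 1)
```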
We summarize the labelling of the nine latent classes in Table 1 and retain the double indexing k = (j, j*) when needed for ease of understanding.
Inference
For each latent class k = (j, j*), our purpose is to estimate the deletion and amplification parameters together with the posterior probability of belonging to each of the K classes for every genomic clone i. We consider a Bayesian framework, where the deletion and amplification parameters and the weights w_k are given prior distributions. The priors specify that these quantities are all drawn independently, with normal priors on the deletion and amplification parameters and a Dirichlet prior on the weights w_k. In practice, the deletion and amplification parameters are given independent normal prior distributions with large variance. The parameter δ of the symmetric prior Dirichlet distribution was set to 0.5 (Jeffreys' prior), instead of the usual value of 1 that corresponds to uniform weights, in order to be less informative. Inference for the parameters of interest was undertaken by sampling from their joint posterior distribution using Markov chain Monte Carlo (MCMC) samplers implemented in the WinBUGS software [7]. All results presented correspond to 5,000 sweeps of the MCMC algorithm following a burn-in period of 1,000 sweeps (the period for achieving stability of the algorithm). Summary statistics for the quantities of interest were calculated from the full output of the MCMC algorithm. Furthermore, the samples provide information on the quantities of prime interest, the vector of posterior probabilities of belonging to class k for each genomic clone i, Pr(L_i = k | Y_i), k = 1, ..., 9. These posterior probabilities are directly estimated as empirical averages from the output of the algorithm. Using these estimates, a probabilistic clustering of the data can be achieved. To be specific, we chose to apply the Bayes classification rule and assigned each clone to the class to which it had the highest probability of belonging. We stress that the classes capture chromosomal aberration patterns.
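The probabilistic clustering step can be sketched as follows: the posterior probability of each class for each clone is estimated as the empirical frequency of that class label over the MCMC sweeps, and the Bayes rule assigns the clone to its most probable class. The label array below is simulated for illustration in place of the WinBUGS output.

```python
import numpy as np

rng = np.random.default_rng(0)
sweeps, n_clones, K = 5000, 3, 9
labels = rng.integers(1, K + 1, size=(sweeps, n_clones))   # stand-in MCMC labels

# Empirical posterior probabilities Pr(L_i = k | data), one row per clone
post = np.stack([(labels == k).mean(axis=0) for k in range(1, K + 1)], axis=1)
allocation = post.argmax(axis=1) + 1                       # Bayes classification rule
print(post.round(3))
print("allocations:", allocation)
```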
In this work, we compared seven different latent class models with various levels of amplification and deletion (corresponding to 2, 3 and 4 levels of copy gain and copy loss). For each model, we computed the Deviance Information Criterion (DIC) as introduced by Spiegelhalter et al. [8] and extended for mixture models as proposed by Richardson [9]. Models with small DIC provide a better fit than those with high DIC criteria. Thus the number of latent levels can be adapted to the particular cancer investigated and the observed chromosomal patterns in the sample.
Chromosomal pattern analysis
In our dataset, several competing models were tenable, ranging from six to nine components. We heuristically chose to favor the nine-component model, which provides a good fit and allows a sufficient number of components for finely describing the different levels of genomic aberrations across the whole dataset, supporting a complex mesh of copy number alterations in lung carcinogenesis.
Probabilistic clustering of the BACs obtained from our latent class model analysis is shown in Figure 2. We observed a mixture of broad and focal contiguous genomic areas with the same patterns of CNAs.
Table 2 displays, for the nine classes, the joint estimated average probabilities for amplification and deletion. The probability of amplification ranges from 3.0% to 29.7%, whereas that of deletion ranges from 5.4% to 34.5%. Note that arbitrary probability cut-offs were not imposed to define the classes; rather, the observed propensities were flexibly clustered through the latent class model. Table 3 summarizes the number of clones allocated to each class (and the corresponding percentage) when applying the Bayes classification rule. The class with the highest levels of deletion and amplification (k = 9) is empty. The class with a medium rate of deletion and a low rate of amplification (k = 2) regrouped the highest number of clones (9,509).
Some interesting patterns emerge from Tables 2, 3 and Figure 2. From a biological point of view, four sets of genomic clones have patterns that are particularly worth highlighting.
The first set is composed of clones from class k = 1 that exhibit simultaneously very low deletion and amplification rates. This group may be interpreted as "refractory" clones with an aberration rate below the chromosomal background (which corresponds to random chromosomal aberrations as defined below). As seen from our results, this set is small, gathering only 5.3% of the total number of clones. The second set is composed of clones from classes k = 2, 4 and 5 with medium values of either deletion or amplification rates, which can be considered the chromosomal background rate of aberrations. This set gathers about two-thirds of the total number of clones and may be interpreted as regrouping clones with random chromosomal aberrations.
The third and most interesting set is composed of approximately 9,000 clones from classes k = 3 and k = 7 with a very high rate of either deletion or amplification associated with a refractory status (below the chromosomal background rate of aberration) for the converse copy state. We refer to the clones in class k = 7 as "exclusively amplified" recurrent CNAs and those in class k = 3 as "exclusively deleted" recurrent CNAs. It can be hypothesized that these "exclusive" behaviors reflect a selective advantage for tumor growth for one state (e.g. amplification) associated with a selective disadvantage of the converse state (e.g. deletion). Thus, it is likely that this set contains "driver" clones harboring functionally important changes that give a selective advantage to tumor cells.
The last set is composed of clones belonging to class k = 6 and k = 8 that exhibit a complex pattern with high and medium values for both amplification and deletion. These classes may be interpreted as regrouping genomic regions that contain multiple genes that contribute to cancer, some of which being selected for copy gain and other for copy loss. In particular, we identified genomic clones located within cytogenetic band 16q23 that are classified in class k = 6 and harbor both the tumor suppressor gene WWOX and the oncogene MAF.
Modeling jointly the occurrence of amplifications and deletions across the tumor samples allows us to identify such patterns. To assess the biological relevance of the patterns found, we examined whether known lung cancer genes were classified as "exclusively amplified" or "exclusively deleted" recurrent CNAs. We found that, with the exception of PTEN, all the oncogenes and tumor suppressor genes known to be associated with quantitative genomic changes in lung adenocarcinoma [10][11][12] were classified as "exclusively amplified" (k = 7) or "exclusively deleted" (k = 3) recurrent CNAs (Table 4). It is worth noting that the PIK3CA gene (3q26.3 locus), described as specifically amplified in another histological subtype (squamous lung carcinomas) [9], was not found within an "exclusively" recurrent CNA, emphasizing the histological homogeneity of our series and the specificity of the "exclusively" amplified or deleted classes.
In Figure 3, we look in greater detail at three selected chromosomes (Chromosome 2, 11 and 14) harboring genomic areas classified as "exclusively amplified" recurrent CNAs.
In chromosome 2, we identified a focal area located within the 2p23 locus which harbors the ALK oncogene (anaplastic lymphoma receptor tyrosine kinase). This gene which is known to play a role in lymphomas has been recently shown to be activated in lung cancer either by gene fusion with EML4 or amplification [13,14].
In chromosome 11, we identified a short area located within the locus 11q13.2 which harbors the well-known oncogene CCND1. In a validation analysis, we analyzed protein expression by immunohistochemistry and found that CCND1 amplification was significantly related to gene over-expression (data not shown). We also identified a second small genomic area with "exclusively amplified" recurrent CNAs located within the locus 11q13.4-13.5. This area contains several candidate genes including the Neu3 gene (human plasma membrane-associated sialidase), which is upregulated in several human cancers and is known to interact with EGFR. Except for these loci, most of the chromosome harbors clones from class k = 2, with medium deletion rates and low amplification rates, which can be considered random chromosomal aberrations.
In chromosome 14, we identified the recently described focal area of amplification located within the 14q13.3 locus, which harbors the NKX2-1 gene [11]. This gene encodes the well-known TTF1 (thyroid transcription factor), a protein which is expressed in normal lung and thyroid tissues and in their related adenocarcinomas. The location of the NKX2-1 gene within an "exclusively amplified" recurrent CNA favors the hypothesis that the TTF1 gene product may have a functional role in lung carcinogenesis instead of just being a marker of primary lung origin.
We then compared our results with those obtained from previously used methods that rely on arbitrary thresholding rules (frequency cutoffs of 20%, 25% and 30%) or permutation-based approaches. As seen in Table 4 (where the two percentages given in each cell represent the frequency of amplification and deletion, respectively), an arbitrary threshold of 20% leads to the selection of the known oncogenes/tumor suppressor genes, whereas the widely used 25% threshold discards interesting genes such as EGFR-1, c-MET, CCND1, NKX2-1 and E2F. However, the 20% threshold selects a high proportion of the genome (50.5% of the total number of clones), whereas our method selects only 31.4% (9,335 clones), which is comparable to the 25% threshold (33.6% of the total number of clones).
We also analyzed our data using the method proposed by Klijn et al. [4], which has previously been shown to outperform the one proposed by Diskin et al. [3]. The Klijn et al. method (called KC-SMART) is implemented in an R/Bioconductor package [15], and the null hypothesis is obtained by shuffling the non-discretized data (log-ratio data) over the entire genome. Considering a false discovery rate level of 5% seems inappropriate since it leads to selecting too many genomic areas (>80%). For a family-wise error rate of 5% (with a 4 Mb kernel width), we selected 3,663 (12.3%) recurrent deletions and 2,524 (8.5%) recurrent amplifications. Forty-nine percent of these recurrent amplifications are classified by our approach as "exclusively amplified" recurrent CNAs, the others belonging to classes with medium amplification rates. We observe that the KC-SMART selection of amplified areas misses important genomic areas that we classified as "exclusively amplified", such as those harboring the MET gene. Moreover, no genomic area belonging to class 8 was selected even when considering various kernel widths. This is not surprising since null hypotheses for detecting amplification or deletion marginally are highly dependent on the definition of the "complementary" state (e.g. for deletion the "complementary" state corresponds to the modal or gain copy state). Of the 3,663 recurrent deletions selected by KC-SMART, 34.7% and 30.7% are classified by our approach in classes 3 and 2 respectively, whereas the other clones belong to classes with medium deletion rates. This selection does not recognize some genomic regions that we classified as "exclusively deleted", such as those harboring the WWOX tumor suppressor gene. As could be expected, this procedure selects a subset of amplified (respectively deleted) clones that have a variety of deletion (respectively amplification) rates, whereas our modeling approach is aimed at refining this characterization by focusing on clones with contrasting patterns of amplification and deletion.
Relationship between chromosomal patterns and clinical outcome
Finally, we analyzed the impact of chromosomal aberrations on relapse-free survival (gain and loss considered separately since they have distinct impact on the disease) calculated from the date of the patients' surgery until either disease related death, disease recurrence or last follow-up examination. More specifically, we investigated whether chromosomal pattern information obtained by our latent class model could be useful for distinguishing genomic regions prone to non-random chromosomal event (signal) and with potential impact on clinical outcome from those prone to random chromosomal event (noise).
In practice, for copy gain we calculated for each patient two different scores that measure the proportion of copy gains over selected genomic regions. The first score is computed over the 4,485 genomic clones prone to non-random chromosomal events that belong to "exclusively amplified" regions (class k = 7 as defined previously). The second score is computed over the 17,432 genomic clones prone to random chromosomal events (classes k = 2, k = 4 and k = 5).
The median value of the score measured over genomic clones from class k = 7 was 28.8% [first quartile = 16.1, third quartile = 42.4], whereas it was 23.6% [first quartile = 11.7, third quartile = 34.2] for genomic clones from classes k = 2, k = 4 and k = 5. The results of the Cox proportional hazards regression model, considering each score as a continuous variable, showed that an increasing proportion of copy gains within "exclusively amplified" regions (class k = 7) was associated with a statistically significant higher risk of relapse (p < 0.05). In contrast, the proportion of amplifications in regions prone to random chromosomal events was not significantly predictive of outcome. In Figure 4, we plot the Kaplan-Meier curves obtained when dichotomizing the score computed over the "exclusively amplified" regions into high (above the third quartile) versus low (below the third quartile) (chi-square statistic = 7.4, p = 0.006).
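A sketch of this outcome analysis in Python is given below, using the lifelines package (an assumption of this sketch; any survival-analysis toolkit would do): a per-patient score is computed as the proportion of class-7 clones with copy gain, a Cox model treats the score as a continuous covariate, and a log-rank test compares groups split at the third quartile. All data and variable names here are hypothetical stand-ins for the study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Hypothetical stand-ins: 65 patients x 500 clones gain calls (0/1), a mask of
# class-7 ("exclusively amplified") clones, follow-up times in months, and
# event indicators (1 = relapse/death, 0 = censored).
gain = rng.integers(0, 2, size=(65, 500))
class7 = rng.random(500) < 0.3
time = rng.uniform(5, 60, size=65)
event = rng.integers(0, 2, size=65)

# Per-patient score: proportion of copy gains over class-7 clones
score = gain[:, class7].mean(axis=1)

# Cox proportional hazards model with the score as a continuous covariate
df = pd.DataFrame({"time": time, "event": event, "score": score})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "p"]])

# Dichotomise at the third quartile and compare the two groups (cf. Figure 4)
high = score > np.quantile(score, 0.75)
res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high],
                   event_observed_B=event[~high])
print(res.test_statistic, res.p_value)
```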
The same analysis was conducted for copy loss. We found no statistically significant difference for the score computed over "exclusively deleted regions" (class k = 3).
Discussion
In contrast to leukemia, lymphoma and sarcoma, where a specific cytogenetic abnormality is usually present, epithelial malignant tumors such as lung adenocarcinomas are often characterized by aneuploidy (complex and multiple chromosome aberrations), which may reflect an alternative form of genetic instability called chromosomal instability [16]. Chromosomal instability leads to numerical and structural abnormalities that are observed at the gross chromosomal level rather than the nucleotide level. Balanced translocations are rare, and the observed chromosomal instability leads to imbalanced aberrations in most cases (gain or loss of genetic material). Genomic gains lead to over-expression of oncogenes whereas genomic losses lead to under-expression of tumor-suppressor genes, both resulting in a selective advantage for the cancer cell. The sequential acquisition of genetic alterations occurs in individual cells within a population and leads to waves of clonal expansion due to the relative growth advantage that each new alteration confers to the cell.
When analyzing aCGH experiments on multiple samples of patients, the challenge is to distinguish CNAs that are likely to represent non-random chromosomal events and are thought to involve the critical genes (drivers) from those which are randomly altered during pathogenesis. Given the vast amount of data obtained from high resolution aCGH, biostatistical modeling is required for the discovery of novel regions with propensity for non-random chromosomal events.
In this work, we consider a latent class model-based approach for capturing chromosomal aberration patterns taking into account the interdependencies among propensity of alterations. The primary data processed by our model are the number of deletions and amplifications in each sample that are obtained from a pre-processing of the aCGH signals. A number of algorithms are available to do this. Here, we chose to use CGHmix [6] to label the clones as it has the benefit of taking into account spatial dependencies along the chromosome. Our latent class model is applicable to any preprocessing of the data, of course its output will depend on the initial classification data for each clone in each patient.
Figure 4: Relapse-free survival for high- and low-risk groups according to the proportion of chromosomal amplifications. RFS curves for the 65 lung adenocarcinomas considering high- and low-risk groups. The high-risk group comprises patients with a high proportion of amplification (above the third quartile, solid line), whereas the low-risk group comprises those with a low proportion of amplification (below the third quartile, dashed line). These proportions are computed over genomic clones prone to non-random chromosomal events that belong to "exclusively amplified" regions (class 7 as defined in our model).

In our dataset, we favor the nine-component model, but several competing models are tenable, ranging from six to nine components. In practice, we think that finding the "best" fit to the data is not the main interest; rather, the goal is a good balance between a reasonable fit and sufficient flexibility for finely describing the different levels of genomic aberrations across the whole dataset. This is why we propose the nine-component model as a prime candidate to be estimated when the samples are sufficiently informative.
Considering the present series of stage IB lung adenocarcinomas, our results show that most of the oncogenes and tumor-suppressor genes known to play a role in lung adenocarcinomas are located within exclusively amplified and exclusively deleted regions, respectively. This suggests that these regions play a substantial functional role in the selective advantage of tumor cells. It is worth noting that this selective process seems to play an important role, since about one-third of the genome is classified as exclusively amplified or deleted. Previous studies on various tumors (breast, colorectal, esophageal, endometrioid carcinomas) have shown that an increasing number of chromosomal aberrations correlates with poor prognosis [17]. In our study, we showed that the accumulation of amplifications occurring within exclusively amplified genomic regions is related to relapse-free survival, whereas genomic clones prone to random chromosomal aberrations blurred the impact of copy gains on survival. This result emphasizes that all copy gains may not be equivalently linked to the disease process, and that a subset of clones associated with contrasting patterns between gains and losses over tumor samples could be a more relevant entity. Thus, averaging copy gains within a tumor may be too coarse a measure.
As seen from the data, the strong interdependencies between copy loss and copy gain clearly justify our joint modeling as compared with a simple marginal approach, with or without permutation procedures. In particular, our approach avoids having to define an arbitrary cutoff for the marginal frequency across the samples and shows that such a cutoff may depend on the chromosomal aberration studied (loss/gain copy).
Constructing background distributions from marginal approaches for deletion and amplification, as is commonly done, rather than considering the joint (multinomial) distribution, can be misleading when these events are not independent. As an example, when considering the marginal rate of copy loss, the observed deletion rates for the two distinct genomic areas that harbor the HDAC4 gene (histone deacetylase, chromosome 2q37) and PDZRN4 (PDZ domain containing zinc finger 4, chromosome 12q12) are the same (16.9%). However, the observed marginal amplification rates are clearly different, 30.5% and 7.7% for the HDAC4 and PDZRN4 areas respectively, advocating the need to consider two different chromosomal patterns for these genomic areas. In our model, these two genomic areas are classified into two different classes: the HDAC4 area falls in class k = 8 (complex pattern with high levels of both amplification and deletion) whereas the PDZRN4 area falls in class k = 2 (background aberration rate). In this case, analyzing marginal deletion rates implicitly leads to defining a hybrid state, such as the 'non-deletion state', for the null hypothesis, which depends strongly on the copy gain state. Our strategy, which is a modeling rather than a hypothesis testing approach, helps to solve this problem by considering copy losses and gains jointly through our multinomial mixture model.
Our method is well suited for an explicit dissection of the complex null hypothesis model. Here, it allows us to distinguish between regions with medium levels of copy loss/gain, which can be considered random chromosomal events (background), and regions with refractory patterns.
In future studies, we think that these latter regions should be investigated more thoroughly, since they may harbor critical regions of the genome that are highly resistant to chromosomal instability. With such a complex null hypothesis, computing adjusted p-values from resampling-based methods is not straightforward and depends crucially on the null hypothesis model.
Our method leads to prioritizing genomic areas prone to non-random chromosomal aberrations, but finding driver genes requires functional studies. In this setting, it is worthwhile to correlate copy number changes in exclusively amplified/deleted regions with gene expression changes in order to prioritize those that are functionally involved in the tumor process.
Conclusion
We proposed to identify patterns of chromosomal aberrations across tumor samples from high-resolution comparative genomic hybridization microarrays by modeling copy number states as a multinomial distribution with probabilities parameterized through a latent class model. This model allows distinguishing genomic regions prone to non-random chromosomal aberrations, with potential impact on clinical outcome, from those prone to random chromosomal aberrations. In a homogeneous series of lung adenocarcinomas, we show that most of the known oncogenes or tumor suppressor genes associated with this tumor type are located within regions with an exclusive propensity for either copy loss or copy gain. We also highlight new genomic areas of potential interest and show that an increase in the frequency of amplification in these particular genomic areas is significantly associated with poorer survival. These results suggest that new insights on chro-
"Biology"
] |
A Radio Continuum Study of NGC 2082
We present radio continuum observations of NGC 2082 using the ASKAP, ATCA and Parkes telescopes from 888 MHz to 9000 MHz. Some 20 arcsec from the centre of this nearby spiral galaxy, we discovered a bright and compact radio source, J054149.24-641813.7, of unknown origin. To constrain the nature of J054149.24-641813.7, we searched for transient events with the Ultra-Wideband Low Parkes receiver, and compared its luminosity and spectral index to those of various nearby supernova remnants (SNRs) and fast radio burst (FRB) local environments. Its radio spectral index is flat (${\alpha} = 0.02 \pm 0.09$), which makes it unlikely to be either an SNR or a pulsar. No transient events were detected with the Parkes telescope over three days of observations, and our calculations show that J054149.24-641813.7 is two orders of magnitude less luminous than the persistent radio sources associated with FRB 121102 and FRB 190520B. We find that the probability of finding such a source behind NGC 2082 is P = 1.2%, and conclude that the most likely origin for J054149.24-641813.7 is a background quasar or radio galaxy.
Introduction
In the absence of an active galactic nucleus (AGN), a spiral galaxy's radio emission primarily derives from non-thermal synchrotron radiation from supernova remnants (SNRs), and thermal bremsstrahlung from Hii regions (Condon 1992;Filipović and Tothill 2021b,a). Thus, deep and wide radio surveys from the new generation of radio telescopes such as the Australian Square Kilometre Array Pathfinder (ASKAP) and MeerKAT can shed important light on the processes by which star formation shapes the interstellar medium (ISM).
NGC 2082 is a G-type spiral galaxy (of SB(r)b morphology) in the Dorado constellation. It has an absolute B-band magnitude of M B = 12.79 (Lauberts and Valentijn 1989), a diameter of 10.16 kpc, and is located at a distance of 18.5 Mpc (Olivares E. et al. 2010) and redshift z = 0.00395. Unlike some other galaxies in the Dorado constellation (e.g. NGC 1566), NGC 2082 remains poorly studied, with its most notable feature being a type II supernova, SN1992ba (Evans and Phillips 1992).
Here, we study the radio properties of NGC 2082 using ASKAP, Australia Telescope Compact Array (ATCA) and Parkes radio telescope observations. We will also draw on Hubble Space Telescope (HST) observations. Section 2 presents our observations and data analysis of NGC 2082. Section 3 gives our results and discussion, and conclusions are presented in Section 4.
Observations & Data
NGC 2082 has been observed in the ASKAP-EMU 888 MHz radio continuum survey of the Large Magellanic Cloud (LMC; Pennock et al. 2021; Filipović et al. 2022), as well as in the ATCA 20 cm mosaic survey. We have also made new observations with the Parkes radio telescope, and obtained new and archival data from ATCA (pre-CABB) and the HST.
CABB
We observed NGC 2082 on 2019 November 30th using the Australia Telescope Compact Array (ATCA) (project code C3275, with the 1.5C array configuration).
The miriad (Sault et al. 1995) and karma (Gooch 1995) software packages were used for reduction and analysis. Imaging was completed using the multifrequency synthesis invert task with natural Briggs weighting (robust=0 for all images), and beam sizes of 4.5 × 4.1 arcsec, 1.9 × 1.8 arcsec, and 1.3 × 1.0 arcsec for the 2100, 5500, and 9000 MHz images, respectively. The mfclean and restor algorithms were used to deconvolve the images, with primary beam correction applied using the linmos task. We follow the same process with the Stokes Q and U parameters to produce polarisation maps, except with a beam size of 5 × 5 arcsec (see Section 3.2 below).
Australian Square Kilometre Array Pathfinder
NGC 2082 was observed serendipitously in the ASKAP-EMU radio continuum survey of the Large Magellanic Cloud, at the edge of the 120 deg² field. This survey was performed at 888 MHz with a 288 MHz bandwidth and a 13.9 × 12.1 arcsec beam size.
Parkes Radio Telescope
We observed NGC 2082 on 2021 June 14 th , 15 th and 18 th with the Parkes radio telescope (project code PX075) using the Ultra-Wideband-Low (UWL) receiver (Hobbs et al. 2020), which delivers radio frequency coverage from 704 MHz to 4032 MHz. All observations were pointed at J054149.24-641813.7 and executed using the transient search mode where data are recorded with 2-bit sampling every 64 µs in each of the 0.125 MHz wide frequency channels (26624 channels across the whole band).
The full UWL band was split into multiple 512 MHz subbands for the burst search. The search was performed using the pulsar searching software package PRESTO (Ransom 2001). Radio-frequency interference (RFI) was identified and masked using the PRESTO routine RFIFIND with a 1 s integration time. To determine the optimal dispersion measure (DM) steps of the search, we used the DDPLAN.PY routine of PRESTO for a DM range of 200 to 3000 cm−3 pc. Data were then dedispersed at each of the trial DMs using the PREPDATA routine, with RFI removal based on the mask file produced by RFIFIND.
Single pulse candidates with a signal-to-noise ratio larger than seven were identified using the SINGLE_PULSE_SEARCH.PY routine for each dedispersed time series and for different boxcar filtering parameters (from 1 to 300 samples). Burst candidates were examined manually.
Hubble Space Telescope
NGC 2082 was first imaged by the Hubble Space Telescope in 1997, following the type II supernova SN1992ba (Evans and Phillips 1992), revealing a bright, face-on spiral galaxy (Fig. 1). Fig. 1 is a 3-colour image created with APLpy (Robitaille and Bressert 2012), using archival HST data (Carollo et al. 2002), where the red channel uses I-band data (F814W filter), the blue channel uses B-band data (F435W filter), and the green channel is a pseudo-green channel constructed by stacking the red and blue channels (B+I band). The HST data are based on observations made with the NASA/ESA Hubble Space Telescope and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). No detection of SN1992ba is reported; it cannot be seen in any of these images.
Results and Discussion
A striking feature in all our radio images of NGC 2082 is a strong point radio source (J054149.24-641813.7) positioned 20 arcsec from the galaxy centre, as seen by the ATCA contours in the bottom-left image of Fig. 1. We also note no detection of SN1992ba in any of our images. The 9000 MHz ATCA observations, which have our highest resolution, show an unresolved point radio source regardless of the parameters used in the data reduction. The top-right subplot in Fig. 1 provides a better look at an 888 MHz emission peak with a flux density of 0.0013 Jy beam−1, located on the opposite side of the galaxy centre from J054149.24-641813.7. It is unlikely that the two radio sources are related. Finally, we note that the HST observations with the F435W and F814W filters show no optical counterparts to either source, and there are no counterparts at other wavelengths.
Spectral index
In Table 1 we list the flux densities of J054149.24-641813.7, measured using CARTA (https://cartavis.org/) and treating the source as a point source. We assume flux density errors of <10 per cent. We estimate a flat radio spectral index of α = +0.02 ± 0.09, suggesting that the emission is predominantly of thermal origin if the source is located in NGC 2082 (Fig. 2). Such a flat spectral index would be very unusual among SNRs and radio pulsars (Urošević 2014; Bates et al. 2013; Dai et al. 2015), unless this source is an unresolved pulsar wind nebula (PWN). However, a background galaxy (quasar) could explain this radio spectrum (see Section 3.5).
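The spectral index quoted above is defined through S_ν ∝ ν^α and can be estimated with a straight-line fit in log-log space. The sketch below is illustrative only; the flux density values are placeholders rather than the measurements in Table 1.

```python
import numpy as np

# Placeholder flux densities (Jy) at the three ATCA frequencies (MHz);
# substitute the measured values from Table 1 of the paper.
freq_mhz = np.array([2100.0, 5500.0, 9000.0])
flux_jy = np.array([4.1e-3, 4.0e-3, 4.2e-3])   # hypothetical values
flux_err = 0.10 * flux_jy                       # assumed <10 per cent errors

# Fit log10(S) = alpha * log10(nu) + c.  For Gaussian errors the weight passed
# to polyfit is 1/sigma, and sigma(log10 S) ~ flux_err / (flux * ln 10).
sigma_log = flux_err / (flux_jy * np.log(10))
coeffs, cov = np.polyfit(np.log10(freq_mhz), np.log10(flux_jy), 1,
                         w=1.0 / sigma_log, cov=True)
alpha, alpha_err = coeffs[0], np.sqrt(cov[0, 0])
print(f"spectral index alpha = {alpha:+.2f} +/- {alpha_err:.2f}")
```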
To measure the flux densities of the entire extended emission of NGC 2082, we use the method described in and , which includes careful region selection that also excludes J054149.24-641813.7. We measure reliable NGC 2082 flux densities at two frequencies (888 and 2100 MHz; Table 1), from which we estimate a spectral index of α = −0.15 ± 0.23. Such a flat radio spectral index is unusual for spiral galaxies (Gioia et al. 1982), but is consistent with thermal emission from Hii regions across NGC 2082. As the nucleus of NGC 2082 does not show any compact radio source or emission above 0.1 mJy beam⁻¹, we suggest this might account for the unusually flat radio spectral index.
Polarisation
We also investigate whether any polarisation from J054149.24-641813.7 or NGC 2082 can be detected in our ATCA images. The fractional linear polarisation (P) was calculated using the equation P = (√(S_Q² + S_U²) / S_I) × 100%, where P is the mean fractional linear polarisation and S_Q, S_U, and S_I are the integrated intensities for the Q, U, and I Stokes parameters, respectively. We calculate P_5500 MHz = 6 ± 2% (see Fig. 3a) and P_9000 MHz = 8 ± 4% (see Fig. 3c). The associated polarisation intensity maps are shown in Figs. 3b and 3d, respectively. This weak polarisation associated with J054149.24-641813.7 is most easily explained if the source is of background origin (see Section 3.5). In addition to the very different radio spectral indices of J054149.24-641813.7 and SNR 1987A, which indicate different emission origins, J054149.24-641813.7 is probably too bright to be an SNR originating in NGC 2082.
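The fractional polarisation defined above is straightforward to evaluate from the integrated Stokes intensities. The sketch below is illustrative; the Stokes values and uncertainties are placeholders, not the measurements behind Figs. 3a-3d.

```python
import numpy as np

def fractional_polarisation(s_q, s_u, s_i, sig_q, sig_u, sig_i):
    """Mean fractional linear polarisation P = sqrt(Q^2 + U^2) / I (as a percentage),
    with simple first-order error propagation."""
    p_lin = np.hypot(s_q, s_u)                    # sqrt(Q^2 + U^2)
    p_frac = p_lin / s_i
    # Partial derivatives for error propagation.
    dq = s_q / (p_lin * s_i)
    du = s_u / (p_lin * s_i)
    di = -p_lin / s_i**2
    sigma = np.sqrt((dq * sig_q)**2 + (du * sig_u)**2 + (di * sig_i)**2)
    return 100.0 * p_frac, 100.0 * sigma

# Hypothetical integrated intensities (mJy) at 5500 MHz.
p, dp = fractional_polarisation(s_q=0.17, s_u=0.16, s_i=4.0,
                                sig_q=0.04, sig_u=0.04, sig_i=0.4)
print(f"P = {p:.0f} +/- {dp:.0f} per cent")
```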
Fast radio bursts (FRBs) are extremely bright transient events of unknown origin (Lorimer et al. 2007).
Our Parkes observations (see Section 2.3) over 3 days detected no transient events. Despite this, considering both FRB 121102 and 190520B show sporadic outbursts (Rajwade et al. 2020;Dai et al. 2022), it is plausible J054149.24-641813.7 could host a repeating FRB and we observed during a quiescent period.
If intrinsic sources associated with J054149.24-641813.7 are implausible, the most likely remaining possibility is an extragalactic background source, such as a quasar, radio galaxy, or AGN. If so, we might expect to see some Hi absorption; however, there are currently no high-resolution Hi data for NGC 2082. The flat spectral index, together with the weak polarisation in the 5500 and 9000 MHz images, argues in favour of a background origin for J054149.24-641813.7.
Our observations (Table 1) show that the flux density at 5500 MHz is ∼4.0 mJy. From Wall (1994), we find that for observations at 5500 MHz there are ∼15 sources deg⁻² with flux densities ≥4.0 mJy. The probability of finding a source of such brightness behind NGC 2082 is then P = 15 deg⁻² × A, where A = πr² is the area of NGC 2082 on the sky in deg². Given the radius of NGC 2082, r = 0.016 deg (de Vaucouleurs et al. 1991), we calculate P = 1.2%.
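The chance-alignment estimate above is a short piece of arithmetic; a minimal sketch reproducing it, using the source density and radius quoted in the text, is:

```python
import math

source_density = 15.0   # sources per deg^2 brighter than ~4.0 mJy at 5500 MHz (Wall 1994)
radius_deg = 0.016      # radius of NGC 2082 (de Vaucouleurs et al. 1991)

area_deg2 = math.pi * radius_deg**2     # sky area covered by the galaxy
prob = source_density * area_deg2       # expected number ~ probability for small values

print(f"A = {area_deg2:.1e} deg^2, P = {100 * prob:.1f}%")   # ~1.2%
```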
Conclusions
The nearby spiral galaxy NGC 2082 was found to contain a bright, compact radio source, J054149.24-641813.7 (Fig. 1), which is most likely of background origin. The flux densities reveal that J054149.24-641813.7 has a flat spectral index, indicating that the emission may be of thermal origin. We compare the luminosity of J054149.24-641813.7 to SNR 1987A, QRS 121102, and QRS 190520B, finding that J054149.24-641813.7 is likely too bright, and its spectrum too flat, to be a supernova remnant, and is probably not bright enough to be a persistent radio source with an embedded FRB progenitor.
Acknowledgements
The Australia Telescope Compact Array (ATCA) and Australian SKA Pathfinder (ASKAP) are part of the Australia Telescope National Facility which is managed by CSIRO. Operation of the ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. The ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of the ASKAP, the Murchison Radio-astronomy Observatory, and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. SD is the recipient of an Australian Research Council Discovery Early Career Award (DE210101738) funded by the Australian Government. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research made use of APLpy, an open-source plotting package for Python (Robitaille and Bressert 2012). This research is based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). We thank the anonymous referee for a constructive report and useful comments.
Data Availability All data are publicly available:
• Parkes (project code PX075): https://data.csiro.au/
• ATCA CABB (project code C3275) and Pre-CABB (project code C466): https://atoa.atnf.csiro.au/query.jsp
• ASKAP (project code AS101): https://data.csiro.au/
• HST (proposal ID 9395): https://hla.stsci.edu/hlaview.html
Author Contribution Miroslav Filipović and Shi Dai contributed to the original discovery, conception and design of this study. Parkes data collection and analysis were performed by Joel Balzan and Shi Dai. The first draft of this manuscript was written by Joel Balzan and all authors commented on previous versions of the manuscript. Rami Alsaberi observed and reduced the ATCA data. All authors read and approved the final manuscript.
Funding No funding was acquired for this research.
Declarations
Conflict of interest All authors declare that they have no conflicts of interest.
"Physics"
] |
Efficient atomic clocks operated with several atomic ensembles
Atomic clocks are typically operated by locking a local oscillator (LO) to a single atomic ensemble. In this article we propose a scheme where the LO is locked to several atomic ensembles instead of one. This results in an exponential improvement compared to the conventional method and provides a stability of the clock scaling as $(\alpha N)^{-m/2}$ with $N$ being the number of atoms in each of the $m$ ensembles and $\alpha$ is a constant depending on the protocol being used to lock the LO
Atomic clocks are typically operated by locking a local oscillator (LO) to a single atomic ensemble. In this article we propose a scheme where the LO is locked to several atomic ensembles instead of one. This results in an exponential improvement compared to the conventional method and provides a clock stability scaling as (αN)^{-m/2}, with N the number of atoms in each of the m ensembles and α a constant depending on the protocol used to lock the LO.
Atomic clocks provide very precise time measurements useful for a broad range of areas in physics. The quantum noise of the atoms limits the stability of atomic clocks, resulting in the standard quantum limit where the stability scales as 1/√N, with N the number of atoms [1,2]. Various ways of improving the resolution have been suggested, such as using entangled states with reduced atomic noise [3-6] to push the resolution to the Heisenberg limit, where it scales as 1/N [7-12]. Another approach to increasing the stability is to use optical atomic clocks, where the higher operating frequency leads to an improved stability [13-17]. Since an atomic clock is typically operated through Ramsey spectroscopy [18], the resolution can also be enhanced by increasing the Ramsey time T, resulting in an improvement scaling as 1/√T [19-21]. For clocks with trapped atoms, where there are no other limitations, T is limited only by the decoherence in the system. In practice this decoherence often originates from the frequency fluctuations of the local oscillator (LO) used to drive the atomic clock transition [20]. Hence, the stability can also be increased simply by devising methods to extend the Ramsey period by stabilizing the LO [22].
In this letter we suggest a scheme where the frequency of the LO is locked to the atomic transition using several ensembles of atoms. This procedure allows the Ramsey period to be increased each time another ensemble is used. As a result we find that the stability of the clock can increase exponentially with the number of ensembles. Fig. 1(a) illustrates the idea behind the scheme. The feedback from the first ensemble locks the frequency of the LO, thus reducing the noise to the atomic noise. Having reduced the noise in the LO, the second ensemble can be operated with a longer Ramsey time. Through a second feedback the noise of the LO can be further reduced, as shown in the simulation in Fig. 1(b). The scheme can provide an exponential improvement in the stability with the total number of atoms. In order for the clock to be stable we need NγT₁ ≫ 1, and hence the protocol requires a minimum number of atoms to improve the performance. With the conventional Ramsey protocol we find that the scheme works for a minimum ensemble size of 20 atoms. To further optimize the performance of the scheme we study an adaptive measurement protocol for estimating the LO frequency offset, which extends the applicability of the scheme down to ensembles with only 4 (7) atoms for white (1/f) noise in the LO. This makes the scheme relevant for atomic clocks based on trapped ions, which are typically constructed with only a few ions [19]. A related procedure involving multiple measurements on a single ensemble was proposed in Ref. [22]. By using multiple ensembles our procedure avoids disturbances from the measurements affecting later measurements. Recently and independently of this work a manuscript appeared which treats essentially the same locking scheme that we suggest [23]. Taking the different figures of merit into account, that work arrives at results consistent with ours.
We will now describe the locking of the LO to the atomic transition using Ramsey spectroscopy. We model an ensemble of N atoms as a collection of spin-1/2 particles with total angular momentum J. We define the angular momentum operators Ĵ_x, Ĵ_y and Ĵ_z in the usual way, and initially the atoms are pumped to have J along the z-direction, ⟨Ĵ_x⟩ = ⟨Ĵ_y⟩ = 0. In Ramsey spectroscopy the atoms are illuminated by a near-resonant π/2-pulse from the LO, followed by the Ramsey time T of free evolution, and finally another near-resonant π/2-pulse is applied. The Heisenberg evolution of Ĵ_z is Ĵ_3 = cos(δφ)Ĵ_y + sin(δφ)Ĵ_z, where δφ = δωT is the acquired phase of the LO relative to the atoms. At the end of the Ramsey sequence Ĵ_3 is measured and used to make an estimate δφ_e = −arcsin(2Ĵ_3/N) of δφ. The feedback loop then steers the frequency of the LO towards the atomic transition by applying a frequency correction Δω = −αδφ_e/T to the LO, where α sets the strength of the feedback loop. The operation of an atomic clock thus consists of repeating a cycle of initialization - Ramsey sequence - measurement - feedback. The total time of this clock cycle is denoted T_c, and we assume that T_c ∼ T, i.e. we assume negligible Dick noise [24].
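As a rough illustration of the feedback cycle just described, the sketch below simulates a single-ensemble lock in simplified form: the LO frequency offset is driven by white noise plus a slow drift, each cycle the accumulated phase δφ = δωT is estimated with Gaussian projection noise of variance 1/N (a large-N approximation replacing the arcsin estimator above), and a correction Δω = −αδφ_e/T is applied. This is a minimal sketch, not the authors' simulation code; the noise strength, drift, N, gain, and number of cycles are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1000          # atoms in the ensemble (assumed)
T = 1.0           # Ramsey time (arbitrary units)
drift = 1e-4      # slow linear drift of the unlocked LO frequency per cycle (assumed)
sigma_w = 0.05    # white frequency noise per cycle (assumed)
alpha = 0.5       # feedback gain
cycles = 20000

correction = 0.0
locked_offsets, unlocked_offsets = [], []

for k in range(cycles):
    omega_free = drift * k + rng.normal(0.0, sigma_w)   # unlocked LO offset this cycle
    omega = omega_free + correction                     # offset of the locked LO
    delta_phi = omega * T                               # phase acquired during the Ramsey time
    # Phase estimate with Gaussian projection noise ~ 1/sqrt(N) (large-N limit).
    delta_phi_est = delta_phi + rng.normal(0.0, 1.0 / np.sqrt(N))
    correction -= alpha * delta_phi_est / T             # frequency feedback towards the atoms
    locked_offsets.append(omega)
    unlocked_offsets.append(omega_free)

print("mean |offset|, locked  :", abs(np.mean(locked_offsets)))
print("mean |offset|, unlocked:", abs(np.mean(unlocked_offsets)))
```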
We now consider an atomic clock with two atomic ensembles operated with different Ramsey times and show how this can improve the stability of the clock. These considerations can then easily be extended to several ensembles. Note that we assume the intrinsic linewidth of the atoms to be negligible, such that the atomic linewidth is limited only by the Ramsey time. The first ensemble is operated with Ramsey time T₁ and we assume that the second ensemble is operated with Ramsey time T₂ = nT₁, where n is an integer. We introduce two discrete time scales describing ensembles one and two, respectively: ensemble one is measured at t_k = kT₁ and ensemble two is measured at t_s = sT₂ = s·nT₁. The frequency offset of the LO between times t_{k−1} and t_k is then δω(t) = δω₀(t) + Δω₁(t_{k−1}) + Δω₂(t_{s−1}) (Eq. (1)), where δω₀(t) is the frequency fluctuation of the unlocked LO, Δω₁(t_{k−1}) is the sum of the frequency corrections applied up to time t_{k−1} from the first ensemble, and Δω₂(t_{s−1}) is the sum of the frequency corrections applied up to time t_{s−1} from the second ensemble (t_{s−1} ≤ t_{k−1}). The feedback loops are described by update equations involving δφ_{e1}(t_{k−1}) and δφ_{e2}(t_{s−1}), the estimated phases from the first and second ensembles at times t_{k−1} and t_{s−1}, respectively. Using Eq. (1) we can write the phase of the LO relative to the atoms of the second ensemble at time t_s as the sum of Δφ_{s−1} = ∫₀^{T₂} Δω₂(t_{s−1}) dt, the accumulated phase due to the feedback of the second ensemble, and the accumulated phase due to the frequency fluctuations of the LO when locked by the feedback of the first ensemble. For now we assume that T₂ ≫ T₁, such that the feedback of the first ensemble has stabilized the LO, but later we will relax this assumption. From Eqs. (3)-(4) we then derive a difference equation for δφ₂. From this expression we see that the evolution of the second phase δφ₂ is essentially driven by the noise δφ of the stabilized LO from the first step, but is stabilized by the second feedback loop described by αδφ_{e2}.
To solve Eq. (6) we need to characterize the width of the noise of the stabilized LO from the first stage, ⟨δφ²⟩ = ∫₀^{T₂}dt ∫₀^{T₂}dt′ ⟨δω(t)δω(t′)⟩. From Eqs. (2) and (5) we can derive a difference equation for δφ(t_k) = ∫₀^{T₁} δω(t_k − t′) dt′, which is the phase acquired by the LO relative to the first ensemble between times t_{k−1} and t_k (we can neglect the feedback from the second ensemble since T₂ ≫ T₁). Here δφ₀(t_k) = ∫₀^{T₁} δω₀(t_k − t′) dt′ is the phase of the unlocked LO. In comparison with Eq. (6), we see that the evolution of the phase δφ is driven by the noise of the unlocked LO but is stabilized by the first feedback loop described by αδφ_{e1}. To solve this equation we follow Ref. [25], where the locking of the LO to a single ensemble is described. First we derive a differential equation from Eq. (7) in the limit N ≫ 1, treating Ĵ_x, Ĵ_y, and Ĵ_z as Gaussian variables and considering for now an LO subject to white noise. Assuming that the atoms start out in a coherent spin state, we can solve this equation to obtain Eq. (8), in which we have defined the parameter γ̄ = 1/(NT₁) that characterizes the noise of the stabilized LO. This noise is effectively white for both white and 1/f noise in the unlocked LO (Fig. 1(b) and Ref. [25]). The second ensemble thus sees an effective white noise in the LO with γ̄ = 1/(NT₁).
We now return to Eq. (6). Writing δφ₂(t) ∼ δω(t)T₂, the stability of the clock after running for a time τ can be expressed as in Eq. (9), where ω is the frequency of the atomic transition. Following similar arguments as before, we can derive and solve a differential equation from Eq. (6) to obtain an expression for ⟨δφ₂(t)δφ₂(t′)⟩. Inserting this into Eq. (9) and taking the limit τ ≫ T₂ results in Eq. (10), which describes how the stability improves with T₂ and N. The longest T₂ we can allow is determined by how well the LO is stabilized by the first ensemble, as contained in γ̄, and we parameterize it by T_{2,max} = β₂/γ̄. In a similar fashion we assume that T_{1,max} = β₁/γ for the first ensemble. With these parameterizations we can express the stability as Eq. (11). With white noise in the unlocked LO we can pick β₁ = β₂.
As previously noted, the noise of the LO will also be approximately white, with γ̄ ∼ 1/(NT₁), after locking it to the first ensemble, also for other types of noise, e.g. 1/f noise. In that case it is desirable to have β₂ ≠ β₁, but we still expect β₁/β₂ to be of order unity. Eq. (11) shows that by locking the LO to two ensembles of uncorrelated atoms the stability can be significantly improved. If NγT₁ ≫ 1, the stability obtained from Eq. (11) is much better than the single-ensemble result in Eq. (10) (with T₂ → T₁). The arguments leading to Eq. (11) can be generalized in a straightforward way to show that if the LO is locked to m ensembles, each containing N atoms, the stability of the clock is σ_γ(τ) = (β₁/β)^{(m−1)} √(γ/(ω²τ)) (NγT_{1,max})^{−m/2} (since the noise of the LO is white after locking it to the first ensemble we use β = β₂ = … = β_m). By continuing the procedure we thus improve the stability exponentially. In our analytical calculations above we have assumed N ≫ 1. To investigate the performance for smaller N, we simulate an atomic clock locked to between 1 and 4 atomic ensembles, each with atom numbers from N = 20 to N = 100. From the simulations we can generalize to the case where the LO is locked to m ensembles. We simulate the full quantum evolution of the atomic state through the Ramsey sequences and subsequent measurements, and implement the feedback on the LO as described around Eq. (1). The assumption of T₂ ≫ T₁ can be relaxed by applying a phase correction in the measurement [26]. The number of atoms required in each ensemble to increase the Ramsey time by a factor a at each level is set by the white-noise level of the stabilized LO. Using Eq. (8), and remembering that β parameterizes the maximal Ramsey time for white noise, we have T₂/(NT₁) = γ̄T₂ = β. Assuming T₂ = aT₁, we find that N ∼ a/β atoms are required in each ensemble to increase the Ramsey time by a factor of a at each level. The minimum number of atoms required for our protocol to work is thus obtained by setting a = 2.
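The minimum-ensemble-size condition N ∼ a/β derived above, and the 2^{m−1}-type stability gain quoted later, amount to a couple of lines of arithmetic. The snippet below simply evaluates those expressions for the β values discussed in the text; it is a worked example, not part of the authors' simulations.

```python
import math

def n_min(a, beta):
    """Atoms needed per ensemble to stretch the Ramsey time by a factor a,
    keeping the phase-noise width of the stabilized LO below beta (N ~ a/beta)."""
    return math.ceil(a / beta)

# Doubling the Ramsey time at each level (a = 2):
print(n_min(2, 0.1))   # conventional Ramsey protocol, beta ~ 0.1 -> 20 atoms
print(n_min(2, 0.3))   # adaptive protocol, beta ~ 0.3 -> 7 atoms (SQL-based estimate;
                       # the paper reports 4 atoms for white noise, where the lock beats the SQL)

# Relative improvement in sigma^2 when locking to m ensembles instead of one,
# for a fixed factor-of-two Ramsey-time increase per added ensemble.
for m in range(1, 5):
    print(m, 2 ** (m - 1))
```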
To determine β we investigate the errors that limit the Ramsey time T for an LO subject to white noise characterized by γ. For experiments or simulations running with a fixed Ramsey time there is always a finite probability that phase jumps occur that are large enough to spoil the measurement strategy, since Ramsey spectroscopy with projective measurements is only effective for phases below about π/2. In our simulations we see these phase jumps as an abrupt breakdown as we increase T. Simulating a clock running for a time τ = 10⁶T with a single ensemble of N = 10⁵ atoms, we see the stability increase with T until a maximum at T_max ∼ 0.1/γ is reached. Increasing T beyond this point results in a rapid decrease in the stability. From this we conclude that Ramsey spectroscopy with projective measurements only allows for β ∼ 0.1 and thus N_min = 20. To determine β₁ for an LO subject to 1/f noise we perform a similar simulation where the noise spectrum of the LO is S(f) = γ²/f (f is frequency). From this simulation we find that β₁ ∼ 0.1, as for white noise. Note that this construction introduces a weak (logarithmic) dependence on the number of steps that we simulate [26].
We have simulated clocks with an unlocked LO subject to both white and 1/f noise with the constraint β = 0.1. In Fig. 2 the stability of the clocks is plotted against the ensemble size N. Fig. 2 confirms that the scheme works down to atom numbers of N = 20, where we gain a factor of ∼2^{m−1} in σ_γ²(τ) by locking the LO to m ensembles, for both white and 1/f noise. Furthermore, the numerical results agree well with the analytical calculations. We obtain practically the same long-term stability for 1/f noise as for white noise, since the first feedback whitens the noise at small frequencies (cf. Fig. 1b).
The conventional Ramsey protocol considered so far has a lower limit of N_min = 20 for our protocol to work. This limit is due to the inability of the conventional protocol to effectively resolve phases larger than π/2. In Ref. [30] we presented an adaptive protocol for estimating the phase, which effectively resolves phases up to ±π. Again simulating a clock running for a time τ = 10⁶T₁ with a single ensemble of N = 10⁵ atoms, we find that this protocol enables us to extend the Ramsey time to β ∼ 0.3 for white noise and to β₁ ∼ 0.2 for an LO subject to 1/f noise [26]. However, the type of weak measurements described in Ref. [30] is hard to implement for ensembles of few atoms. We have therefore modified the protocol such that individual atoms are read out one at a time and a Bayesian procedure similar to that of Refs. [31, 32] is used for the phase estimation and atomic feedback. We perform intermediate feedbacks during the measurements to rotate the atomic state to be almost in phase with the LO. Due to these rotations the protocol can resolve phases up to ±π, as does the protocol in Ref. [30]. This protocol is described in detail in the supplemental material [26]. With this adaptive measurement strategy we simulate clocks locked to between 1 and 4 ensembles for atom numbers from N = 4 to 34, with an unlocked LO subject to both white and 1/f noise and with the constraint β = 0.3. The stability of the clocks is plotted against the ensemble size N in Fig. 3. For the adaptive protocol we can apply the scheme of locking to several ensembles down to ensemble sizes of N = 4 (7) for white (1/f) noise, where we gain a factor of ∼2^{m−1} in σ_γ²(τ) by locking the LO to m ensembles. The minimal number of atoms is higher for 1/f noise since the adaptive protocol is not as effective there as for white noise, where we have a better understanding of the a priori distribution in the Bayesian procedure [26]. It should be noted, however, that in principle it is only in the first ensemble that we need more atoms than for white noise, since the feedback of the first ensemble whitens the noise. The adaptive protocol is thus more effective for the subsequent ensembles.
In conclusion, we have demonstrated a scheme for locking the LO in an atomic clock to m ensembles of N atoms each. For this scheme the stability of the clock scales as √γ (γT₁N)^{−m/2}, where T₁ is the Ramsey time of the first ensemble. Our scheme thus provides an exponential improvement in the stability with the number of atoms. For the conventional Ramsey protocol our scheme is applicable down to ensemble sizes of N = 20 atoms, while it is applicable down to ensemble sizes of N = 4 (7) using an adaptive protocol. This makes the scheme relevant for atomic clocks with trapped ions. The performance of the protocol could be improved further by considering squeezed states, but this is beyond the scope of this article.
This supplemental material to our article "Efficient atomic clocks operated with several atomic ensembles" describes the details of our numerical simulations of atomic clocks locked to several atomic ensembles and the details of the modified adaptive protocol in which the atoms are read out one at a time. Furthermore, we show how the assumption of T₂ ≫ T₁ made in the article can be relaxed by applying a phase correction in the measurement of the second ensemble, and how we find the limit of the free evolution time.
PHASE CORRECTIONS
The Ramsey sequence and the subsequent estimate of the drifted phase of the LO relative to an ensemble of atoms is described in the article. Eq. (1) -(3) in the article describes the frequency offset of the LO (δω(t)) between time t k−1 = (k − 1)T and t k = kT when the LO is locked to two ensembles. We will now generalize this formalism to the case where the LO is locked to m ensembles. Assuming that the j'th ensemble is operated with Ramsey time T j = n j−1 T 1 (n is an integer describing how many times the Ramsey time can be increased for each added ensemble) the frequency offset of the LO between time t k−1 = (k−1)T 1 and t k = kT 1 is where δω 0 (t k ) is the frequency fluctuations of the unlocked LO and ∆ω m (t sj n j−1 ) is the sum of the frequency corrections applied up to time t sj n j−1 from the j'th ensemble (s j is found by rounding (k −1)/n j−1 down to the nearest integer). Note that the index s j n j−1 should be read as s j times n j−1 describing the exponential increase in the Ramsey time each time another ensemble is used. The iterative equation for ∆ω j (t sj n j−1 ) is where δφ ej (t sj n j−1 ) is the estimated phase from the j'th ensemble at time t sj n j−1 and α sets the strength of the feedback loop (for now we assume equal strengths for all feedback loops). α determines how long time the clocks needs to run before the LO is effectively locked by the feedbacks (The LO is locked after a time ∼ T j /α). In the article we assumed that T 2 T 1 such that the feedback of the first ensemble had effectively locked the LO before the measurement of the second ensemble. In the general setup of locking the LO to m ensembles this corresponds to assuming that n 1. We will now show how we can apply a phase correction in the measurement of the j'th ensemble such that we can relax this assumption. The phase correction will compensate for the fact that the information from the last measurements on the first (j − 1) ensembles has not been fully exploited by the feedback loops before the measurement on the j'th ensemble. Note that we assume that the phase correction is only applied to the measurement and not to the LO.
The phase of the LO relative to the j'th ensemble just before the measurement at time t sj n j−1 is where φ s+(sj−1)n j−1 = T1 0 δω(t s+(sj−1)n j−1 − t )dt and Φ correct j sj n j−1 is the phase correction applied in the measurement of the j'th ensemble at time t sj n j−1 . Using Eq. (S12)-(S13) we can write where δφ s+(sj−1)n j−1 is the accumulated phase between time t (sj −1)n j−1 +s−1 and t (sj −1)n j−1 +s due to the frequency fluctuations of the unlocked LO and the feedback corrections applied up to time t (sj −1)n j−1 . For simplicity we have replaced the time dependence by an index such that δφ ei s n i−1 +(sj−1)n j−1 is the phase estimate from the i'th ensemble at time t s n i−1 +(sj −1)n j−1 . To fully exploit all information from the measurements on the first (j−1) ensembles between time t (sj −1)n j−1 and t sj n j−1 , we choose a phase correction of Φ correct j sj n j−1 = φ correct j,1 Here we assume that when two or more ensembles are to be read out at the same instant in time, ensembles with a shorter Ramsey time are measured before the ones with longer Ramsey times such that the results from these measurements can be used as a correction for the ensembles with a longer Ramsey time.
where φ s+(sj−1)n j−1 = T1 0 δω(t s+(sj −1)n j−1 − t )dt is the accumulated phase of the LO relative to the atoms in the first ensemble between times t s−1+(sj −1)n j−1 and t s+(sj −1)n j−1 . According to Eq. (S17), Φ j sj n j−1 is effectively the accumulated errors between the estimated phases and the actual phases for the (j − 1)'th ensemble between times t (sj −1)n j−1 and t sj n j−1 (this is seen by considering Eq. (S17) for j = 1, 2, . . .). Φ j sj n j−1 is thus the accumulated phase of the LO between time t (sj −1)n j−1 and t sj n j−1 minus the phase change already measured by the first j − 1 ensembles, i.e. it does not require further running time to incorporate the information acquired in the first measurements. As opposed to the feedback loop, which corrects for e.g. frequency drifts by changing the frequency of the LO, the phase corrections directly correct the phase. This phase locking ensures a more rapid convergence, which is important when we want to apply the LO to the subsequent ensembles. With the phase corrections Φ correct j sj n j−1 we can therefore relax the assumption of n 1. Since the noise of the LO is white after stabilizing it to the first ensemble the subsequent frequency corrections from the other ensembles could be replaced with merely phase corrections of the LO, which would simplify the above procedure by removing the need for phase corrections in the measurements. We have however chosen to consider frequency corrections to keep a consistent treatment of the feedback in all stages.
In our simulations we are simulating a clock with a LO locked to m ensembles running for a long but finite time. Similar to our description of the phase corrections Φ correct j sj n j−1 above there will be some remaining information from the last measurements, which have not been fully exploited by the feedback loops when our simulation stops. In our simulations we therefore include an additional phase correction Φ correct final to the LO after the final measurement. In principle the influence of the last few measurements could also have been reduced by running the simulation for a longer time but by doing the phase correction we reduce the required simulation time. With the phase correction the mean frequency offset of the LO (ω(τ )) after running the clock for a total time of τ = lT 1 is where φ s = T1 0 δω(t s − t )dt is the phase of the LO relative to the atoms at time t s and Φ correct final is the final phase correction of the LO. Using Eq. (S12)-(S13) and assuming that the j'th ensemble is operated with Ramsey time T j = n j−1 T 1 we can writeω(τ ) as: where δφ 0 s is the accumulated phase between time t s−1 and t s due to the frequency fluctuations of the unlocked LO and δφ ej s n j−1 is the estimated phase from the j'th ensemble at time t s n j−1 . We find that the ideal performance is With this phase correction the mean frequency offset is where T m is the Ramsey time of the m'th ensemble,Φ m s is the accumulated phase of the stabilized LO relative to the atoms in the m'th ensemble at time t sn m−1 and φ em s is the estimate of that phase. Eq. (S23) shows that the final phase correction effectively incorporate the remaining information from the measurements that has not yet been exploited by the feedback loop. Thus the mean frequency offset simply depends on how well we estimate the phase of the m'th ensemble and this last measurement is effectively a measurement of the accumulated errors of the phase estimates in the previous (m − 1) ensembles. We use Eq. (S23) to determine the stability of the clock, which is given by σ γ (τ ) = (δω(τ )/ω) 2 1/2 .
MODIFIED ADAPTIVE MEASUREMENTS
The adaptive measurement procedure presented in Ref. [30] can effectively resolve phases between ±π due to the inclusion of rotations of the atomic state. However, the dispersive interaction between the probe light and the atoms assumed in Ref. [30] would be very challenging to implement for small numbers of atoms. We have therefore modified the procedure such that the weak measurements are obtained by reading out individual atoms one at a time. Such a procedure is much easier to implement in, e.g., ion clocks, where individual addressing of ions is feasible. Based on the measurement record we estimate the phase using a Bayesian procedure similar to that of Refs. [31, 32]. The subsequent feedback on the remaining atoms attempts to rotate the atoms into phase with the LO, as in Ref. [30].
We will now describe the details of the modified protocol. At the end of the Ramsey sequence, i.e. after the second π/2 pulse, an atom can either be detected in a spin up state s = 0 or a spin down state s = 1. The probability of measuring s = 0, 1 depends on the acquired phase δφ of the LO relative to the atoms during the free evolution in the following way According to Bayes theorem we can write the probability density of δφ conditioned on the measurement result as where P (s) = P (δφ)P (s|δφ)d(δφ) is the total probability of measuring s and P (δφ) is the a priori probability distribution of δφ, which is determined from characterizing the frequency fluctuations of the LO. We choose a Gaussian distribution with zero mean and variance γT as the a priori distribution. This a priori distribution is exact for a LO subject to white noise but a better a priori distribution could possibly be found for 1/f noise. We will however use this a priori distribution in both cases, which results in our modified protocol not being as effective for 1/f noise as for white noise in the LO. Note that this inaccuracy in our a priori distribution only introduce a less ideal performance in our phase estimate. In our numerical simulations we retain the full information about the phase evolution so that our suboptimal assumption about the a priori distribution only degrade the performance of the scheme. We estimate the phase based on the measurement as where P (S m |δφ, nm k=1 δφ e k ) is the probability of obtaining the measurement record S m conditioned on a drifted phase δφ with a total feedback of nm k=1 δφ e k applied during the measurements ( ni k=1 δφ e k is the feedback experienced by the i th atom before it is read out). Note that in general n i = i − 1, i.e. we might read out more than one atom before we do a phase estimate and a subsequent feedback on the remaining atoms. In our simulations we group the measurements such that we perform ∼ 4 feedbacks in total as in the protocol of Ref. [30] for uncorrelated atoms. The final phase estimate δφ e nm+1 after having read out m atoms is where we have used Bayes theorem as described above. All the phase estimates and the feedbacks are performed after the final π/2 pulse in the Ramsey sequence. Thus the final estimate of the drifted phase is δφ e = nm+1 k=1 δφ e k , i.e. the sum of the rotations performed during the measurements and the final phase estimate.
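A minimal sketch of the single-atom Bayesian update described above is given below. The measurement likelihood P(s = 0 | δφ) = (1 + sin δφ)/2 is an assumed convention chosen for illustration (the sign convention is not spelled out here), the Gaussian prior with variance γT follows the text, and the intermediate feedback rotations used in the full protocol are omitted for brevity; grid resolution and example values are arbitrary.

```python
import numpy as np

def bayesian_phase_estimate(outcomes, gamma_T, grid=4001):
    """Estimate the LO phase from single-atom readouts (s = 0 spin up, s = 1 spin down).

    Prior: Gaussian with zero mean and variance gamma_T (as in the text).
    Likelihood: P(s=0 | dphi) = (1 + sin(dphi)) / 2  -- an assumed convention.
    Returns the posterior-mean phase estimate.
    """
    dphi = np.linspace(-np.pi, np.pi, grid)
    posterior = np.exp(-dphi**2 / (2.0 * gamma_T))       # unnormalized Gaussian prior
    for s in outcomes:
        p_up = 0.5 * (1.0 + np.sin(dphi))
        posterior *= p_up if s == 0 else (1.0 - p_up)
        posterior /= np.trapz(posterior, dphi)            # renormalize after each atom
    return np.trapz(dphi * posterior, dphi)

# Example: six atoms read out one at a time for a modest prior width gamma*T = 0.3.
print(bayesian_phase_estimate(outcomes=[0, 0, 1, 0, 0, 1], gamma_T=0.3))
```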
LIMIT OF THE FREE EVOLUTION TIME
As described in the beginning of the main article the stability of an atomic clock increases with the Ramsey time T . For clocks with trapped atoms T is essentially only limited by the decoherence in the system, which in practice often originates from the LO. This decoherence is what results in the phase offset between the LO and the atoms after the period of free evolution in the Ramsey sequence. For experiments and simulations running with fixed Ramsey times there is a finite probability that a phase jump occurs, which is large enough to spoil the feedback strategy used to lock the LO to the atomic transition. This can result in the feedback jumping to a state with a phase difference of 2π (so called fringe hops [33]) or the measurement strategy can fail leading to ambiguous results. For the conventional Ramsey protocol this happens for phase jumps larger than π/2 while the adaptive protocol breaks down for phase jumps larger than π [30]. The probability of these phase jumps increases with T since the width of the distribution of the phase jumps σ 2 δφ increases with T , e.g. for white noise σ 2 δφ = γT . One could include correction strategies for the errors introduced by these phase jumps (e.g. running an ensemble with different Ramsey times would correct for the fringe hops) but for simplicity we do not consider this in our simulations.
As shown in the main article, the minimum number of atoms required in each ensemble in order to increase the Ramsey time by a factor of a at each level of our protocol is N ∼ a/β (Eq. (S29)), where β parameterizes the maximal Ramsey time T_max for an LO subject to white noise. Note that, equivalently, β is the maximal allowed width of the distribution of phase jumps, i.e. σ²_δφ,max = β. The requirement expressed in Eq. (S29) ensures that when we increase the Ramsey time of the next ensemble by a factor of a compared to the Ramsey time of the previous ensemble, we still keep σ²_δφ ≤ σ²_δφ,max for the noise seen by the next ensemble. Note that the minimum number of atoms required for our protocol to work is found by setting a = 2.
To determine β we simulate an atomic clock with only a single ensemble of N = 10⁵ atoms and an LO subject to white noise characterized by a strength γ. Furthermore, to determine β₁ (see Eq. (11) and above in the article) for an LO subject to 1/f noise, we perform a similar simulation but with a 1/f noise spectrum of the LO, i.e. S(f) = γ²/f, where S(f) denotes the noise spectrum and f is frequency. Note that we define the noise spectrum through S(f)δ(f + f′) = ⟨δω(f)δω(f′)⟩, where δω(f) is the Fourier transform of the frequency fluctuations δω(t) of the LO; S(f) is thus the frequency noise spectrum. In these simulations we do not simulate the full quantum evolution of the atomic state as we do for the simulations presented in the article. Instead we approximate the probability distributions of Ĵ_{x,y,z} with Gaussian distributions, as in Ref. [30]. This Gaussian approximation is legitimate since N ≫ 1. Furthermore, it is desirable to have a weak feedback strength α for white noise in the unlocked LO, since a strong feedback increases the width of the phase noise of the locked LO. For white noise in the unlocked LO we therefore simulate the limit α ≪ 1, such that the phases are uncorrelated. For 1/f noise we use a feedback strength of α = 0.5, since a stronger feedback is desirable to lock the LO more rapidly. The high number of atoms ensures that when we increase the Ramsey time T of the clock we see the onset of the phase jumps as an abrupt breakdown, which is not blurred by the atomic noise in our phase estimates. In our simulation the clock runs for a time τ = 10⁶T, i.e. for l = 10⁶ steps of T (for 1/f noise we average over 100 independent runs with 10⁴ steps of T). The onset of the breakdown will in principle have a weak (logarithmic) dependence on the number of steps that we simulate, which we do not expect to change our results significantly [30]. Fig. S4 shows the results of our simulations for both the adaptive protocol of Ref. [30] and the conventional Ramsey protocol. The adaptive protocol allows for γT ∼ 0.3 and 0.2 for white and 1/f noise, respectively, while the conventional protocol only allows for γT ∼ 0.1 for both white and 1/f noise.
The adaptive protocol that we have used here is that of Ref. [30], since the modified adaptive protocol leads to similar results for large atom numbers, where the breakdown is most apparent, but is harder to simulate. Fig. S4 shows that the conventional protocol allows for β ∼ 0.1 for white noise and β₁ ∼ 0.1 for 1/f noise in the unlocked LO, while the adaptive protocol allows for β ∼ 0.3 for white noise and β₁ ∼ 0.2 for 1/f noise. With a = 2 in Eq. (S29), the minimum number of atoms required for the protocol of locking to several ensembles to work is thus N_min = 20 for the conventional Ramsey strategy, while the adaptive strategy can extend the applicability down to N_min = 7 atoms. We expect β of the modified adaptive protocol to be identical to that of the adaptive protocol in Ref. [30], since both rely on the rotation of the atomic state to resolve phases between ±π. In our numerical simulations of the modified protocol we have therefore set β ∼ 0.3 and β₁ ∼ 0.2 for 1/f noise. Note that in our simulations of the full protocol of locking an atomic clock to several atomic ensembles we still include the possibility of disruptive phase jumps. However, imposing the limits on β (β₁) identified from Fig. S4 for all steps in the protocol ensures that we do not see any significant effect of them. The probability of disruptive phase jumps for the duration of the simulations is simply negligible, i.e., the probability of phase jumps large enough to spoil the feedback strategy in a Ramsey sequence is well below 10⁻⁶. Note that the feedback strength is set to α = 0.01 for white noise and α = 0.5 for 1/f noise in the LO in our simulations of the full protocol. The strong feedback strength of α = 0.5 is only used for the first ensemble for 1/f noise, since the noise seen by the other ensembles is white and a weaker feedback strength is thus desirable.
In our above estimates of N_min we have assumed that the adaptive protocol leads to a stability at the SQL. This is only true for large N, and there are corrections for small N. From our simulations we find that with the modified adaptive protocol, and white noise in the unlocked LO, the feedback of the first ensemble stabilizes the LO to a white noise floor below 1/(NT_{1,max}) (see Eq. (8)), i.e. better than what we expect from the SQL. We can thus extend the applicability of the protocol to atom numbers below N = 7. As shown in Fig. 3 in the main article, we find that we can go as low as 4 atoms and still have the feedback of the first ensemble lowering the noise floor of the LO by a factor of two, such that the second ensemble can be operated with twice as long a Ramsey time while keeping the width of the phase noise below β = 0.3. For 1/f noise the minimum number of atoms is still N = 7. This slightly worse performance is a result of the shorter required Ramsey time for the first ensemble (β₁ ∼ 0.2) and the incomplete characterization of the a priori probability distribution in the Bayesian approach (see the section "Modified adaptive measurements" above). The latter results in the feedback only stabilizing the LO to the SQL of ∼1/(NT_{1,max}) also for small N, and the minimum number of atoms is thus 7, as seen from Eq. (S29) with a = 2. Note, however, that in principle it is only in the first ensemble that we need more atoms than for white noise, since the feedback of the first ensemble will have whitened the noise affecting the subsequent ensembles. The modified adaptive protocol is thus more effective for the subsequent ensembles.
"Physics"
] |
Improved storage mitigates vulnerability to food-supply shocks in smallholder agriculture during the COVID-19 pandemic
Millions of smallholder farmers in low-income countries are highly vulnerable to food-supply shocks, and reducing this vulnerability remains challenging in view of climatic changes. Restrictions to limit the spread of the COVID-19 pandemic produced a severe supply-side shock in rural areas of Sub-Saharan Africa, including through frictions in agricultural markets. We use a large-scale field experiment to examine the effects of improved on-farm storage on household food security during COVID-19 restrictions. Based on text message survey data we find that the prevalence of food insecurity increased in control group households during COVID-19 restrictions (coinciding with the agricultural lean season). In treatment households, equipped with an improved on-farm storage technology and training in its use, food insecurity was lower during COVID-19 restrictions. This underscores the benefits of improved on-farm storage for mitigating vulnerability to food-supply shocks. These insights are relevant for the larger, long-term question of climate change adaptation, and also regarding trade-offs between public health protection and food security.
Introduction
When COVID-19 started spreading globally in early 2020, many countries responded with severe restrictions to protect public health (Weible et al., 2020). Such restrictions are likely to have adverse food security effects particularly in low-income countries. In Sub-Saharan Africa (SSA), which had the highest prevalence of food insecurity even before COVID-19 (FAO et al., 2020), COVID-19 related restrictions are expected to aggravate already high levels of food insecurity (Barrett, 2020;Food Security Information Network, 2020), though empirical evidence on such effects remains scarce.
COVID-19 restrictions are tantamount to a severe, unfamiliar (to farmers), and completely unexpected shock to the food supply system. Prior to COVID-19, farmers in large parts of SSA, except for areas previously affected by the Ebola virus disease, were unfamiliar with policy interventions that aim to curb the spread of an infectious disease. Movement restrictions, for example, disrupt local agricultural markets and labor supply for agricultural production and processing, and school closures cause school feeding programmes to cease (Food Security Information Network, 2020). In addition to poor urban households, which have received the most attention in this context, smallholder farming households are presumably also highly vulnerable to sudden food supply shocks (Frelat et al., 2015; IPCC, 2014). Smallholder farmers are the backbone of food production in SSA (Torero, 2020). Although smallholder farms are usually less than 2 ha in size, they account for the largest share of food production (Frelat et al., 2015) and are thus critical to food security in SSA (Herrero et al., 2010).
Smallholders' food stocks could, potentially, mitigate various types of food supply shocks, such as those emanating from a bad harvest, or COVID-19 restrictions. However, high storage losses make holding food stocks over extended periods of time unattractive. Losses gradually increase with time and are estimated at 25.6% of the maize production in the region on average, in the absence of suitable storage technologies (Affognon et al., 2015, c.f. also African Postharvest Losses Information System (APHLIS), 2020, for detailed data across regions, crops, and years).
Reducing storage losses could allow smallholders to store their harvest longer, which would increase the quantities available for consumption by households and communities. Higher stock levels may help farmers to prepare for expected, periodic scarcities, such as the agricultural lean season, and also contribute to mitigating (unpredictable) supply shocks. In the lean season, which is the time shortly before a new harvest is brought in, food insecurity often increases as smallholders' own food stocks are depleted and rising food prices limit access to food on markets (Kaminski et al., 2016). Improved on-farm storage has been shown to reduce lean-season food insecurity among smallholder farmers in Tanzania, yet little is known about its potential to limit the adverse food security effects of a severe aggregate food security shock.
To assess the potential of a simple and cheap technology to this end, hermetic storage bags, we build on a large field experiment (randomized control trial) in Kakamega county, Western Kenya, which we initiated in September 2019. The study region is typical for many rural areas in SSA, which are characterized by smallholder farming and where the rural population is vulnerable to climate change related shocks to staple food production (maize). The experimental intervention consisted of low-cost hermetic storage bags that minimize storage losses (Likhayo et al., 2016;Ndegwa et al., 2016), as compared to polypropylene bags used by the vast majority of smallholder farmers. For example, evidence from on-farm trials in Kenya shows that maize stored in common polypropylene bags without chemical protectants was associated with losses of around 2.6-2.8% after 3 months of storage, 10-15% after 6 months of storage, and 30% after 9 months of storage (Likhayo et al., 2016). In contrast, maize stored in hermetic storage bags incurred losses of only 0.5-1.8% after 9 months of storage (Ndegwa et al., 2016). Hermetic storage limits atmospheric oxygen, thus causing desiccation of insects and other pests that damage stored grains (Murdock et al., 2012) and restricts fungal growth (Williams et al., 2014), when appropriately used. However, adoption rates of this technology are still very low, including in our study region (Channa et al., 2019).
The COVID-19 crisis and the ensuing lockdown in Kenya beginning in mid-March 2020 came completely unexpected, both to the farmers in our experiment and the research team. It thus created conditions for a quasi-experiment, where we can examine food-security outcomes not only in response to a randomly allocated treatment condition (hermetic storage bags), but also assess how the treatment performs (relative to control) under conditions of a severe food security shock. Specifically, we collected monthly data on household food insecurity for a full harvest cycle covering the time between the main harvest preceding and succeeding the start of COVID-19 restrictions.
COVID-19 restrictions in Kenya
In Kenya, the first set of COVID-19 restrictions was imposed on 16 March 2020 (Coronavirus, 2020). In a first stage, these restrictions included measures limiting or prohibiting social events and international travel, as well as the closing of schools. They were quickly followed by a "dusk-to-dawn curfew" (7 pm to 5 am), effective from 25 March, and the cessation of movement in and out of metropolitan areas (including Nairobi and Mombasa) as of 8 April. These tight lockdown measures were gradually eased after 6 July 2020. Until that date, only a few selected measures had been relaxed (e.g. on 6 June, curfew hours were adjusted to 9 pm-4 am), and some variation existed in terms of the counties affected by a cessation-of-movement order.
For smallholder farming households in Kakamega county, the imposed restrictions had tangible effects on daily life. Farmers pointed out in focus group discussions that agricultural markets were severely distorted, inhibiting farmers from selling surplus stocks and limiting their ability to purchase food on the markets. Although agricultural markets were, in principle, allowed to operate, market stalls were obliged to ensure social distancing and hygiene standards, and many market participants were unable to meet those requirements (e.g. if masks or hand sanitizer were unavailable or social distancing simply not feasible). This in turn prompted authorities to close many market stalls.
Additionally, movement restrictions, in particular the nightly curfew, had effects on agricultural trading routes for market participants who were unable to return home by the time of the curfew. Furthermore, conditional on the type of schools their children were supposed to attend, some families reported that the closing of schools implied that children missed out on school meals. Finally, several farmers also mentioned that they were anxious to go to the markets, especially at the beginning of COVID-19 restrictions, and decided to stay home. Taken together, COVID-19 restrictions had strong effects on the daily lives of smallholder farmers, which plausibly affected their household's food security situation.
Methods
To analyze the effects of an improved on-farm storage intervention on smallholder farming household's food security during COVID-19 restrictions, we build on a matched-pair, cluster randomized control trial (RCT).
Setting
Our RCT is undertaken with a representative sample of farmer groups in Kakamega county, Western Kenya. The study region is typical for many areas in SSA, characterized by smallholder farming with high agricultural production potential, yet prevailing food insecurity and poverty. In Kakamega, maize is the staple food and the predominant sales crop. Geographically, all of Kakamega's 12 sub-counties and 59 out of 60 wards are covered by our study. Our sample of farmer groups was randomly selected from a census list of farmer groups in Kakamega, which we established in collaboration with local authorities. Supplementary Fig. 1 provides a map of the study region and farmer group locations. The study was approved by the ETH Zurich Ethics Commission (EK, 2018-N-51) and icipe's Science Committee (no approval number used). The study design is registered in the American Economic Association (AEA) RCT Registry .
Experimental design
We used a matched-pair, cluster-randomization design, as suggested by Imai et al. (2009), who show that from the perspective of efficiency, power, bias, and robustness, pairing should be done whenever feasible. Baseline variables were used for pair-wise matching; specifically, food security, the fraction of female participants in clusters, cluster size, mean maize yield and mean market distance (Bruhn and McKenzie, 2009). To minimize spillover effects from treatment to control groups, random allocation was done at the level of spatial clusters of farmer groups, applying a 5 km geographic radius. Spatial clustering resulted in 62 experimental clusters, consisting of a total of 285 farmer groups (5 ′ 444 smallholder households). 3 ′ 220 smallholder households participated in surveys during the observation period for this analysis. Supplementary Table 1 presents the sample characteristics. The table shows that treatment and control group baseline characteristics do not substantially differ in the sample used in this study, i.e. the measurement rounds before and after COVID-19 restrictions (Panel A). Likewise, when comparing the sample used here with a sample of participants who were originally recruited but did not respond to the survey rounds before and after COVID-19 restrictions (Panel B), we do not find substantial differences in baseline characteristics. A notable exception from these findings are female-headed households, which participated to a slightly lesser extent in the survey rounds before and after COVID-19 restrictions (see also Discussion section).
Improved on-farm storage intervention
The intervention for treatment clusters consisted of five hermetic storage bags per household, with a capacity of 100 kg of maize per bag, and a standardized training session on their use. The hermetic bags were sourced in a competitive process according to the procurement rules of ETH Zurich. The selected bag was of the brand "AgroZ". The training session was developed by the authors, based on materials provided by the UN World Food Programme. The interventions were implemented from 3 to 15 September 2019 by icipe.
Measurement
The RCT as a whole focuses on a variety of outcomes presumably affected by the experimental intervention, primarily food security and associated health outcomes. The analysis in this paper focuses on self-assessed food security, which we measured via the reduced Coping Strategies Index (rCSI) (Maxwell et al., 2008, 2014). The rCSI, a 5-item questionnaire, assesses the magnitude of measures taken by households to deal with food insecurity problems and tracks short-term fluctuations in food insecurity (Vaitla et al., 2017) (see Supplementary Table 2 for details). We applied standard thresholds (Vaitla et al., 2017) to classify rCSI values into food (in)security categories, using the threshold for food insecurity (≥5). Supplementary Table 3 shows the results of a robustness check applying an alternative threshold value for food security (Maxwell et al., 2014). As we used a 30-day recall period in our surveys, whereas the threshold values are provided for 7-day recall windows, we rescaled our rCSI values accordingly. The choice of a 30-day recall period reflects the frequency of our data collection (monthly) and the benefits of an uninterrupted and continuous measurement of household food security in the observation period. While we acknowledge that a longer recall period comes with a potential disadvantage in terms of the reliability of our measurement at a specific point in time, our choice also reduces the extent to which short-term changes (e.g. daily) could bias our analysis.
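To make the recall-period rescaling concrete, the sketch below classifies households as food insecure from a 30-day rCSI score by scaling it to a 7-day-equivalent value before applying a threshold. Both the scaling rule and the threshold value are placeholders chosen for illustration; the study's exact rescaling and thresholds follow Vaitla et al. (2017) and Maxwell et al. (2014).

```python
import numpy as np

# Hypothetical 30-day rCSI scores for a handful of households.
rcsi_30day = np.array([0, 6, 14, 30, 55], dtype=float)

# Placeholder rescaling: assume coping behaviour is spread evenly over the month,
# so a 30-day score maps to a 7-day-equivalent score by the factor 7/30.
rcsi_7day_equiv = rcsi_30day * (7.0 / 30.0)

# Placeholder threshold for "food insecure" on the 7-day scale (illustrative only).
THRESHOLD_7DAY = 4.0
food_insecure = rcsi_7day_equiv >= THRESHOLD_7DAY

print(rcsi_7day_equiv.round(1))
print(food_insecure)                          # [False False False  True  True]
print(f"prevalence: {food_insecure.mean():.0%}")   # 40%
```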
Survey methods
Data was collected through SMS-based mobile phone surveys, an efficient and effective method to collect data at high frequency in our study area. Supplementary Table 4 shows the dates of all survey rounds. Of specific interest are the survey rounds just before and after the start of COVID-19 restrictions in Kenya. The survey round measuring food insecurity before COVID-19 restrictions was sent out on 14 March 2020 at 1pm (Eastern African Time) and was open for completion until 18 March at 3am. The follow-up survey, conducted after COVID-19 restrictions, was sent out on 11 April 2020 at 1pm and was open for completion until 16 April at 3am. Respondents received a phone credit (airtime), valued at 20 Kenyan Shilling, upon completion of a survey. All survey participants received equal airtime incentives, irrespective of experimental assignment or answers.
To facilitate the interpretation of the empirical results presented in this paper, the research team organized a series of focus group discussions with five different farmer groups in October 2020. The focus of these discussions was to explore the extent to which COVID-19 restrictions affected farmer's lives, what kind of expectations farmers had at different stages of the pandemic, and the kind of coping strategies farmers engaged in to mitigate adverse effects.
Statistical analysis
The intent-to-treat (ITT) effect, i.e. the total effect of the treatment on outcomes of interest irrespective of experimental compliance (Gerber and Green, 2012), was estimated as the weighted average of within-pair mean differences between treatment and control clusters. We use arithmetic weights w_k = n_1k + n_2k, i.e. the sum of the numbers of observations in the two clusters of each pair indexed by k, as suggested by Imai et al. (2009). To control for potential differences between experimental groups before COVID-19 restrictions were enacted, we further estimate the ITT effect based on household-level differences between our measurements immediately before and after COVID-19 restrictions.
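A minimal sketch of this matched-pair estimator is shown below: for each pair, the difference in cluster means (treatment minus control) is weighted by w_k = n_1k + n_2k and averaged. The variable names and toy data are illustrative; the study's actual estimation also involves the pre/post differencing and inference procedures described in the text.

```python
import numpy as np

def itt_pairwise(pairs):
    """Weighted average of within-pair mean differences.

    `pairs` is a list of (treated_outcomes, control_outcomes) arrays, one tuple
    per matched pair of clusters. Weights are w_k = n_1k + n_2k (Imai et al., 2009).
    """
    diffs, weights = [], []
    for treated, control in pairs:
        treated, control = np.asarray(treated, float), np.asarray(control, float)
        diffs.append(treated.mean() - control.mean())
        weights.append(len(treated) + len(control))
    return np.average(diffs, weights=weights)

# Toy example: outcome = 1 if a household is food insecure, 0 otherwise.
pairs = [
    ([0, 1, 0, 0, 1], [1, 1, 0, 1]),   # pair 1: treated vs control cluster
    ([0, 0, 1], [0, 1, 1, 1, 1]),      # pair 2
]
print(f"ITT estimate: {itt_pairwise(pairs):+.3f}")
```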
Results
Our results show that the prevalence of food insecurity increased in control group households during COVID-19 restrictions and the contemporaneous agricultural lean season. In treatment households, equipped with an improved on-farm storage technology and training in its use, food insecurity was lower during COVID-19 restrictions.
Sharp increase in food insecurity following COVID-19 restrictions
In our control group households, the prevalence of food insecurity was relatively stable prior to the COVID-19 crisis (see Fig. 1), i.e. between the main maize harvest in 2019 (around October 2019) and the start of Kenyan COVID-19 restrictions (mid-March 2020). In the 30 days immediately before the implementation of COVID-19 restrictions, 40.8% of households were food insecure. However, within 30 days of COVID-19 restrictions, the prevalence of food insecurity increased significantly by 8 percentage points (or 19.6%) to 48.8%. This increase amounts to a sudden change in the prevalence of food insecurity as compared to prior months (see Fig. 1). The prevalence of food insecurity among control group households subsequently remained at elevated levels until July. The food security situation then improved as COVID-19 restrictions in Kenya were eased and the agricultural lean season ended with the new main harvest around September 2020. Taken together, our results suggest a strong food security shock to which smallholder households were exposed during the COVID-19 pandemic (see also the Discussion section). Equally interesting, however, are our findings for the treatment group.
On-farm storage mitigates food security shock during COVID-19 restrictions
In contrast to our control group households, the prevalence of food insecurity among treatment households increased only slightly in the 30 days immediately following COVID-19 restrictions. Among treatment households, 39.5% of households were food insecure in the 30 days before COVID-19 restrictions (see Table 1). Within 30 days of COVID-19 restrictions, this prevalence increased by 3.7 percentage points (or 9.4%) to 43.2%, which is significantly less than in the control group. To examine whether the experimental intervention (improved storage) affected the change in the prevalence of food insecurity before and after COVID-19 restrictions, we additionally estimate the treatment effect based on household differences between the two measurement rounds. We find that the experimental intervention mitigated part of the increase in food insecurity observed immediately following COVID restrictions (see Table 1).
In subsequent months, food insecurity remained lower relative to control, albeit not significantly so in all measurement rounds (see Fig. 2 and Supplementary Table 4). In treatment households the initial food security shock observed during the COVID-19 pandemic was strongly buffered, whereas the effect was smaller in the subsequent period of prolonged food security stress (see Fig. 3). This latter finding may be explained by farmers' expectations on the duration of the restriction (see the Discussion section for details).
To illustrate the substantive meaning of our results, we extrapolate our findings to all smallholder households in the county of our study (an estimated 1.62 million people, with 90% of households growing maize, the staple crop we focus on; Ministry of Agriculture, Livestock and Fisheries, 2017). Given that our sample was drawn from a census list of farmer groups in the county, it is reasonable to assume these households are very similar in nature to the households in our sample, which implies that approximately 595,000 people would have been food insecure in the 30 days before COVID-19 restrictions. The number of food-insecure people would then have increased by an estimated 117,000 people if none of the households had received the five hermetic storage bags and training in their use. In contrast, the number of food-insecure people would have increased only by 54,000 people in the 30 days following COVID-19 restrictions if all households had access to hermetic storage.
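The extrapolation can be reproduced as a back-of-the-envelope calculation. In the sketch below we apply the 90% maize-growing share to the county population and multiply by the reported prevalence figures; treating the published numbers in this way is our reading of the text, not a statement of the authors' exact procedure.

```python
# Back-of-the-envelope reproduction of the extrapolation quoted above.
county_population = 1_620_000
maize_share = 0.90
relevant_population = county_population * maize_share      # ~1.458 million people

pre_covid_prevalence = 0.408     # control group, 30 days before restrictions
increase_control = 0.080         # +8.0 percentage points without storage bags
increase_treated = 0.037         # +3.7 percentage points with storage bags

print(round(relevant_population * pre_covid_prevalence))   # ~595,000 food insecure
print(round(relevant_population * increase_control))       # ~117,000 additional
print(round(relevant_population * increase_treated))       # ~54,000 additional
```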
Discussion
In this paper, we examined the effects of an improved on-farm storage technology (hermetic storage bags and training in their use) on smallholder household food security during the COVID-19 pandemic. The findings show that food insecurity suddenly increased after the implementation of COVID-19 restrictions, but that the experimental intervention significantly curbed this increase. The ITT we find is of comparable magnitude to the effects of direct cash transfers to smallholder farmers reported by Banerjee et al. (2020). Their experiment was implemented in a neighbouring county of Western Kenya, where smallholders were provided with either 0.75 USD per day (for 2 years prior to the pandemic) or 500 USD as a lump-sum payment. The authors find that cash transfer recipients were 4.9 to 10.8 percentage points less likely to report experiencing hunger during the COVID-19 pandemic (control mean: 68%), as measured via phone surveys between late April and late June 2020. These food security benefits are similar to what we have reported here, yet the costs of the cash transfer interventions are substantially higher compared to our intervention, where the provision of hermetic storage bags and training in their use incurred costs of about 20 USD per household.

Fig. 1. Prevalence of food insecurity in control group households before and after COVID-19 restrictions. The figure shows the prevalence of household food insecurity, as measured by the reduced Coping Strategies Index (rCSI) with a 30-day recall period for each survey round. Prevalence is calculated based on the weighted mean of all observations per survey round in the control group (for weights used, refer to section 3.6). Lines show the prevalence of food insecurity, i.e., the percentage of food-insecure households for each 30-day period. The dotted vertical lines represent the start (16 March 2020) and easing of COVID-19 restrictions in Kenya (6 July 2020). The average number of observations per survey round (control group only) is 1,077 (total number of observations in the control group is 14,007; see Supplementary Table 4 for details on the number of observations for each survey round).

Table 1
Effects of improved on-farm storage on the prevalence of food insecurity in the 30 days before and after the start of COVID-19 restrictions. The table presents the effects of improved on-farm storage on the prevalence of food insecurity, expressed as the percentage of food-insecure households, as measured for the 30 days before and after the start of COVID-19 restrictions. Prevalence of food insecurity is based on the standard threshold (≥5) (Vaitla et al., 2017). ITT = Intent-to-treat. Negative ITT values correspond to favorable outcomes. CI shows 95% bootstrapped confidence intervals, lower (lo) and upper (up). P values are based on non-parametric two-tailed t tests. The bootstrap is based on 1000 replications. Sample sizes by number of pairs (m), total number of observations (n), and number of observations in control (n0) and treatment conditions (n1) are reported in the last column.

Fig. 2. Prevalence of food insecurity in treatment
Our results further suggest that the intervention was more effective in buffering the food supply shocks that occurred early in the COVID-19 pandemic, whereas the subsequent food security stress was curtailed to a lesser extent. These results may reflect smallholder households' expectations about the duration of COVID-19 restrictions. At the outset, COVID-19 restrictions were announced to be in place for 30 days (Coronavirus: Kenya introduces tight restrictions, 2020). Our results are consistent with smallholder farmers initially expecting the COVID-19 restrictions to be lifted quickly. Subsequently, as restrictions were extended, farmers became more worried and anticipated that COVID-19 restrictions would remain in place for longer periods of time. Our results may hence reflect a situation where smallholder farmers, on average, used additional food stocks (enabled through improved on-farm storage) to safeguard against the first short-term shock, but had limited stocks left to fully buffer a prolonged period of COVID-19 restrictions and the gradually increasing lean season food stress. These interpretations are substantiated by farmers' statements in our focus group discussions. Interestingly, farmers also mentioned during the discussions that they had since adapted their initial expectations. The perception of a substantial number of households was that if a second lockdown became necessary, potentially due to a second COVID-19 wave, it might be implemented for a significantly longer period. This consideration had already prompted many families to adjust both their consumption and storage behavior regarding the recent harvest, as farmers indicated during the discussions.
Our study analyzes the effects of an experimental intervention during a period of increased food insecurity that coincided with COVID-19 restrictions. Hence we are unable to completely disentangle the relative contribution of COVID-19 restrictions to the observed increase in food insecurity from other factors, in particular the agricultural lean season, which are also likely to have affected household food insecurity at the same time. For several reasons, we are, however, confident that COVID-19 restrictions contributed, at least partially, to increased food insecurity. First, the prevalence of food insecurity in control group households increased suddenly in the 30 days immediately following COVID-19 restrictions. If these effects were due to the agricultural lean season, which is a predictable and familiar shock, we would expect a more gradual increase in food insecurity over time. Second, in our study area, the lean season typically begins later in the harvest year, namely in April or May (Burke et al., 2019), as farmers bring in another smaller (secondary) harvest around January or February. Third, no other major events (apart from the lean season) occurred during COVID-19 restrictions that could explain the sudden increase in food insecurity. While other areas in Kenya experienced severe problems with desert locust in the observation period (Roussi, 2020), our study area was not affected. With movement restrictions in place, and locust outbreaks primarily having affected pastoralist areas, spillover effects on food insecurity in our study region from affected areas elsewhere are unlikely. Taken together, we consider it likely that the observed increase in food insecurity in the first 30 days of COVID-19 restrictions was primarily due to this policy intervention. It is still possible that agricultural seasonality amplified food insecurity to some extent afterwards, as COVID-19 restrictions progressed. However, this does not undermine our main finding that treated households experienced a smaller increase in food insecurity under conditions where both treatment and control households experienced a food supply shock that appears to have been aggravated by COVID-19 restrictions at a time (lean season) where households often have a higher risk of food insecurity.
Our study uses SMS-based surveys, collected at a monthly frequency, over the duration of one full harvest cycle during which the COVID-19 restrictions occurred. SMS-based surveys allowed us to collect a continuous dataset uninterrupted by COVID-19 restrictions. However, given the self-assessed nature of our food security measure, there could be a concern that recipients strategically responded to the surveys in order to elicit support during the COVID-19 pandemic. If such bias were systematically different between experimental conditions, it would bias our ITT estimates. We consider the risk of a systematic response bias limited. Prior research has shown that response bias is reduced in self-administered surveys (such as SMS-based surveys) as compared to face-to-face interviews due to the lack of personal interaction between respondents and interviewers (Krumpal, 2013). Furthermore, participants were informed that all data collection is kept separate from the team conducting the intervention, limiting incentives for strategic responses (e.g. households overreporting food insecurity to obtain more government or NGO support). Related to our measure of food insecurity, we note that the literature has proposed two different thresholds to classify households into food insecurity categories (see Section 2.4), and hence re-estimate our model with the alternative threshold proposed in Maxwell et al. (2014). We find that the substantive results remain very similar; the experimental treatment significantly reduced the prevalence of food-insecure households following COVID-19 restrictions (see Supplementary Table 3).
Another potential issue is whether our data collection mode (SMS-based surveys) may have affected the balance between treatment and control group characteristics (covariates). However, Supplementary Table 1 (Panel A) shows that baseline characteristics of the treatment group do not substantially differ from those of the control group in the sample used in this study, i.e. in the SMS-based survey rounds before and after COVID-19 restrictions. Furthermore, when comparing the sample used here with a sample of participants who were originally recruited but did not respond in any survey round in our observation period, we find that baseline characteristics are remarkably similar (Supplementary Table 1, Panel B). The only notable exception is that female-headed households appeared less likely to participate in any of our survey rounds in the observation period, but this applies to treatment and control group households alike, which limits the risk of bias in our treatment effect estimates. However, female-headed households have been shown to be on average more strongly affected by food insecurity (Kassie et al., 2014) and more vulnerable to sudden food system shocks (Kumar and Quisumbing, 2013), and the increase in food insecurity during COVID-19 restrictions estimated here may hence represent a lower bound.
Yet another discussion is merited by the fact that we look mainly at maize, a calorie-rich food. It would be interesting to also investigate to what extent households have access to more nutrient-dense foods as well, and whether such access differs between our treatment and control groups. COVID-19 related food security discussions have in fact paid considerable attention to nutrient-dense foods and their link with immune system functioning. However, in our specific case random allocation to treatment or control should lead to a very similar distribution of observable and non-observable confounding factors in treatment and control groups, including available food other than maize. It would be interesting, nevertheless, to investigate substitution processes between food types that different types of households may engage in as the availability of maize changes.
Finally, our work examined short-term impacts of an improved on-farm storage technology during the COVID-19 pandemic. We can, of course, not yet offer robust evidence on longer-term benefits of improved storage under conditions where farmers are confronted with arguably more common shocks, such as a bad harvest, or climatic changes that impact agricultural output over multiple years. However, evidence from a somewhat similar, but smaller-scale experiment in Tanzania suggests that improved on-farm storage is likely to reduce persistent food insecurity among smallholder farmers as well.
Conclusion
Both policy-makers and scientists have become increasingly interested in how food security in low-income countries could be improved not only through increasing agricultural production, but also through reducing post-harvest losses (Sheahan and Barrett, 2017). Low-cost and easy-to-use technologies to that end are particularly interesting in low-income smallholder farming contexts (Godfray et al., 2010). Some such technologies exist, but adoption rates are still low (Channa et al., 2019), meaning that there is, presumably, a large unexploited potential for improving food security with a technology that is cheap, can be implemented even in the short run, and has no negative ecological implications (Affognon et al., 2015).
Our RCT in Kenya assesses the benefits of such a technology (hermetic storage bags) with regard to food security and a range of associated health implications (see section 2 for details). The recent COVID-19 restrictions provide an opportunity for a first analysis of data from this research effort, both with respect to the food security effects of a policy-induced shock and the benefits of improved storage under such conditions. The findings show that smallholder households' food insecurity increased during COVID-19 restrictions, but that improved on-farm storage curbed these increases.
The main policy implication of our research is that greater efforts should be undertaken to promote the adoption and appropriate use of low-cost and easy-to-use technologies for improved on-farm storage. Such action could help not only in attenuating the painful trade-offs between protecting public health and maintaining food security, as in the current COVID-19 crisis, but also in reducing the vulnerability of smallholder farmers to longer-term, climate change-induced or other types of shocks to the food system. Thus our work also contributes to the scarce, but growing literature that considers improved on-farm storage as an important climate change adaptation strategy, which is especially important as climatic changes may further increase post-harvest losses (Stathers et al., 2013; Lybbert and Sumner, 2012). Higher temperatures and more erratic precipitation can increase the risk of fungal growth (and associated foodborne pathogens, such as aflatoxin) and of insect infestation in stored produce (Fanzo et al., 2018; Stathers et al., 2013; Lybbert and Sumner, 2012). The resulting post-harvest losses and the risk of foodborne pathogens can, however, be mitigated by improved storage (Fanzo et al., 2018), which renders investing in improved on-farm storage solutions even more important (Stathers et al., 2013; Lybbert and Sumner, 2012).
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Author contributions
M.H. and M.B. jointly conceived the project, acquired funding, developed the design and methods, collected, curated and analyzed the data, and contributed to writing of the paper; T.B. and U.E. contributed to the development of the study design, fundraising, and writing of the paper. M.K. contributed to the development of the study design, interpretation of the data, writing of the paper, and coordinated the fieldwork and field interventions.
Data and materials availability
Data and code are available from the authors. Further details on the study design, which also includes surveys on household health that are used for other research activities within the larger project, are available at the AEA RCT Registry.

| 7,301 | 2020-12-09T00:00:00.000 | ["Economics"] |
Maxwell construction and multi-criticality in uncharged generalized quasi-topological black holes
We demonstrate the existence of N-tuple critical points of uncharged AdS black holes in generalized quasi-topological (GQT) theories. The criticality is shown to have a geometrical interpretation described by Maxwell's equal area rule. We present a compact reformulation of the area rule and identify a criterion for the emergence of such points. Using this criterion, we construct several multi-critical points with genuine GQT densities, including quadruple and quintuple points.
Introduction
It is well known that black holes behave like thermodynamic systems that radiate a thermal flux of particles [1] and whose equilibrium states are governed by the four laws of black hole mechanics [2], reinterpreted as the four laws of thermodynamics. For asymptotically AdS black holes, the negative cosmological constant can be identified as a thermodynamic pressure that validates the extended first law of thermodynamics [3][4][5][6]. In this context black hole mass is interpreted as enthalpy, and thus the mechanics of black holes can be understood in terms of chemical thermodynamics [7,8]. Over the past decade, the perspective of black hole chemistry has led to the discovery of a number of rich properties, including Van der Waals phase transitions [9], re-entrant phase transitions [10,11], superfluid-like phase transitions [12][13][14], and triple points [11,[15][16][17].
A recent interesting discovery in black hole chemistry was that of multi-critical points. These were first seen in Einstein gravity coupled to non-linear electrodynamics [18], but were shortly afterward found to be present in multiply rotating Kerr-AdS black holes [19], and in Lovelock gravity [20]. In the latter case multi-critical behaviour can even occur for asymptotically flat black holes [21]. An N-th order multi-critical point occurs when N distinct phases merge at a single value of pressure and temperature, generalizing the notion of a triple point (with N = 3). Explicit examples of quadruple and quintuple points have been found in all cases examined so far.
In this paper, we demonstrate an alternative method for finding multi-critical points in black hole phase transitions. Previous methods exploited the fact that each extremum of the temperature (regarded as a function of horizon radius) corresponds to a cusp in the Gibbs free energy [18,20]. 2N − 2 distinct extrema were thus required to obtain N distinct phases, each with its own swallowtail in the Gibbs free energy diagram. The intersection points of corresponding swallowtails will merge if two adjacent inflections of T(r_+) occur at the same temperature. Extending this to all the inflections will then yield an N-tuple critical point where all such intersections merge. However, the number of parameters used in this approach is more than needed; in fact only approximately half of the horizon radii of the extrema are needed. This redundancy results in extremely low efficiency in computing the (finely tuned) parameters needed to obtain a multicritical point, as iteratively tuning the various parameters such that all inflections occur at the same temperature becomes very time-consuming. This problem is particularly acute when dealing with more complicated higher-curvature theories. Indeed the situation deteriorates when considering black holes with large numbers of thermodynamic degrees of freedom, since considerably high precision is required because of the finely tuned nature of multi-criticality.
The new method overcomes these difficulties. Inspired by Maxwell's equal area rule [22], we provide an equivalent but more compact description of multi-criticality, which can be perfectly adapted to the construction without the redundancy introduced in previous methods. Instead of iteratively manipulating input parameters, our approach directly indicates whether a given set of non-redundant parameters can or cannot have multi-critical points. If they can, we obtain their accurate values in thermodynamic phase space with notably less computation.
To illustrate our method we consider black holes in higher curvature theories of gravity. Specifically, we consider generalized quasi-topological (GQT) gravity. The reasons for this are as follows.
For the past two decades there has been a revival of interest in higher curvature gravity in the theoretical physics community. Such theories have proven to be significant in a variety of contexts in physics, including string theory, holography, the AdS/CFT correspondence, tests of general relativity, and black hole thermodynamics. The gravitational action becomes renormalizable when supplemented with higher-curvature terms [23], making such theories candidates for a quantum theory of gravity. String-theoretic versions of quantum gravity motivate the possibility of higher dimensional spacetimes, and the addition of higher-curvature corrections allows for broader generalizations of the Einstein-Hilbert action to dimensions larger than four. These higher-curvature theories provide toy models for studying the AdS/CFT correspondence and allow for holographic study of Conformal Field Theories (CFTs).
But the inclusion of higher order curvature terms comes at a price: it can yield equations of motion containing higher derivatives of the metric that give rise to instabilities and negative energy modes [24,25]. Notably, this inconsistency with general relativity (GR) is absent in some classes of higher-curvature theories [26][27][28][29] in which only a massless spin-2 graviton can propagate to infinity. This subclass of higher-curvature gravity theories is considerably more promising than the others and thus warrants further investigation.
Generalized quasi-topological (GQT) gravities, a class of recently proposed higher-derivative theories, satisfy the requirements noted above. Theories in the GQT class characterize generalizations of GR in any dimension and to any order in curvature insofar as they contain non-hairy black hole solutions and second-order differential equations for the metric when linearized about any maximally symmetric background [30][31][32]. Generally, the bulk part of their action can be written as in (1.1), where Λ is the cosmological constant, S^(k)_n are independent densities constructed from different contractions of n Riemann tensors and the metric, α_{n,k} is the associated k-th higher-curvature coupling, and the Newtonian constant G is set to 1 for simplicity. Quite remarkably, in the context of gravitational effective field theory, any higher-curvature theory can be mapped into a subset of GQT theories via field redefinition [30]. Furthermore, there exists a subset of the parameter space of higher-curvature couplings where these theories only allow a massless spin-2 graviton propagating at the linearized level.
In general, the field equations of this class contain metric derivatives up to fourth order. However, this is not so for a static spherically symmetric (SSS) ansatz (1.2), where dΩ²_{d−2,κ} describes the (d − 2)-dimensional transverse surface of constant curvature normalized to κ = +1, 0, −1, denoting spherical, flat and hyperbolic topologies, respectively. The metric function f(r) is fully determined by the vanishing of a total derivative of a second order differential or algebraic expression, with the constant of integration related to the ADM mass. Theories with an algebraic equation of motion for f are identified as quasi-topological (QT) gravities [33][34][35], and they likely satisfy a Birkhoff theorem [35][36][37][38]. However, theories in the QT class only exist in d ≥ 5. Another interesting GQT subclass is that of Lovelock theories [26,27], which are the most direct generalizations of GR in that the field equations are always second order differential equations for any metric. Similar to the QT class, for the ansatz (1.2), the metric function f is fully characterized by a single algebraic equation that only differs from that of the QT theories by an overall constant. Thus Lovelock gravity corresponds to a subclass of QT theories.
However, Lovelock theories seem too restrictive: Einstein gravity is the only possible Lovelock theory in d = 4, and a Lovelock curvature density of order n yields non-trivial dynamics only if d > 2n + 1. So the first non-trivial Lovelock theory appears in d = 5, corresponding to Gauss-Bonnet gravity with n = 2, whereas non-trivial GQT gravity theories exist for any order n ≥ 3 in d ≥ 4 [30,31]. In addition, as far as (1.2) is concerned, GQT theories allow multiple inequivalent densities at a given order in d ≥ 5, but QT (Lovelock) theories have one unique density at any order [31]. Thus GQTs constitute a much broader class of higher-curvature theories than have already been specified.
We therefore choose to illustrate our approach in GQT gravity theories. The thermodynamics of these theories have not been explored to the same extent that Lovelock theories have. Previous studies have been carried out in limited contexts, with only a few low order couplings in dimensions not much larger than four [39][40][41][42][43][44]. Inspired by the multi-critical behaviour found for a broad range of black holes in different contexts [18][19][20][21], our interests lie both in demonstrating our method and in understanding multi-criticality in GQT gravity.
Our paper is organized as follows. In section 2 and appendix A we review some properties of GQT black hole solutions. In section 3, we provide an interpretation of multi-criticality and introduce the K-rule, obtained by the reformulation of the Maxwell area rule. We develop in section 4 a method based on the K-rule, carry out a discussion on its feasibility, and construct quadruple and quintuple points based on this approach.
Fundamentals of thermodynamics of GQT black holes
Evaluated on a static spherically symmetric metric of the form (1.2), the GQT class of curvature order n ≥ 2 has exactly n − 1 inequivalent densities in d ≥ 5 [31]. As required by the integrability of the field equation for f(r), on-shell GQT Lagrangian densities should be total derivatives of the form (2.1) [31], where k labels one of the n − 1 inequivalent densities, S^(n,j) is given by (2.2), and the λ^(k)_{n,j} are constrained coefficients such that S_n is induced by a real off-shell GQT density. The constraints (2.3), one of which takes the schematic form Σ_j λ^(k)_{n,j} j(j − 1) = 0, are then required to obtain the most general Lagrangian density [31]. The first constraint ensures that all densities contribute with a power of r^(d−1); the second ensures that the field equations are of second order when linearized about constant curvature backgrounds. Note that any linear combination of on-shell densities still satisfies (2.1), (2.2) and (2.3); therefore any density can be decomposed into n − 1 independent densities. In other words, it is sufficient to study one particular choice of λ^(k)_{n,j}. Incorporating the constraints (2.3), the choice (2.4) of λ^(k)_{n,j} forms a family of GQT densities, with the remaining coefficients identical to 0. This simple choice is employed in our analysis, but the results in the remaining part of this section hold generally.
Upon integrating the equations of motion, we obtain (2.5) [30], where M is the ADM mass [45][46][47][48] and a prime denotes a derivative with respect to r.
For j = 0, 1, F^(n,j) becomes an algebraic quantity (having no derivative terms), which implies that the density defined by (2.4) with k = 1 is purely QT (Lovelock). To keep the notation consistent with common definitions, we separate QT densities from GQT ones (2.4) and define (2.7) as the QT densities. The remainder, with k ≥ 2, are genuine GQT densities. Despite the complexity of (2.5), the thermodynamics of GQT black holes is determined by two simple equations: f(r_+) = 0, which defines the outermost black hole horizon at r = r_+, and f′(r_+) = 4πT, which defines the temperature of the black hole. These two relations are given explicitly in [31], where the couplings of the lowest two orders are set to α_{0,1} ≡ −(d − 1)(d − 2)/(2ℓ²) and α_{1,1} ≡ 1/2 for consistency with Einstein gravity. The parameter ℓ here is the AdS length, and so α_{0,1} is identical to the cosmological constant Λ. In the context of black hole chemistry, all couplings except for α_{1,1} are identified as thermodynamic variables, with corresponding conjugate potentials. As a well-known consequence in GR, the first law of thermodynamics and the Smarr relation hold as well in the extended phase space, where the pressure P is a redefinition of α_{0,1} and V is its corresponding conjugate volume (2.13). The thermodynamic quantity S is the Wald entropy [49], given in [31]. This quantity is not always positive, and in such situations it has been common to simply discard solutions for which this is the case. However, ambiguities exist in the definition of the black hole entropy. For example, adding to the Lagrangian a term proportional to the induced metric on the horizon will, without having an effect on the other properties of the solution, shift the entropy by an arbitrary constant. One example is that of adding an Euler density to the action [44]. We shall therefore retain solutions with S < 0 in our considerations, appropriately indicating in our figures where this occurs.
Henceforth we shall consider the Gibbs free energy G = M − TS for investigating phase transitions. The global minimum of G yields the most stable thermodynamic phase at any given temperature.
Before continuing, we note that physical theories should only propagate one type of massless spin-2 graviton on constant curvature backgrounds. This in turn implies that the effective Newtonian constant must have the same sign as the one in general relativity, which means that, for the class of metrics (1.2) having asymptotically AdS solutions, we shall only consider black holes with f_∞ > 0, h′(f_∞) < 0 and γ² > 0 as satisfying the requisite physical criteria. We discuss these issues in appendix A.
Geometric interpretation of interphase equilibrium
We seek to obtain the conditions under which three or more phases merge at a particular temperature and pressure. The Gibbs free energy provides a diagnostic for this. Its global minimum as a function of the temperature T determines the thermodynamically stable state of the system for a given fixed choice of the other thermodynamic parameters. The presence of swallowtails in the Gibbs free energy indicates multiple phases, with first order phase transitions between two distinct phases taking place at the intersection point of the swallowtail. There must be N − 1 swallowtails in order to have N distinct phases. Whenever the intersection points of j different swallowtails coincide, there is a j-th order multicritical point, where j ≤ N. Previous methods for finding multiple phases and N-tuple critical points exploited the fact that, regarding temperature as a function of horizon radius, each extremum of T(r_+) corresponds to a cusp in the Gibbs free energy [18,20]. Hence N distinct phases require 2N − 2 distinct extrema. If two adjacent inflections of T(r_+) occur at the same temperature, then the intersection points of the corresponding swallowtails will merge. If this takes place for all the inflections, then all such intersection points will merge, corresponding to an N-tuple critical point. These critical points can be found by finely tuning the other thermodynamic parameters.
Here we demonstrate an alternate method that is considerably more efficient. We start with a brief review of the Maxwell construction [22]. It is well known that the multiplicity of the Gibbs free energy G(P, T) corresponds to the non-monotonic behavior of the pressure P(V, T). As illustrated in figure 1, a full oscillation AaBbC in the pressure at a fixed temperature T* leads to a swallowtail in the Gibbs phase diagram. With some abuse of notation, integrating dG along the loop A → b → a → C yields (3.1). The second equality holds because the temperature is fixed, the fourth comes from integration by parts, and the last expression follows from P(V_A) = P(V_C) = P*, where P* characterizes the swallowtail intersection point in the Gibbs phase plot. The geometric interpretation of (3.1) is obvious: P* corresponds to a pressure that partitions the oscillatory parts of the P-V diagram into equal areas.
It is useful to define the function K(V, V_i) and its derivative K′(V, V_i) as in (3.2). It is obvious that K(V_A, V_A) = 0 and that the last expression of (3.1) can be rewritten as (3.3). However, for any two points in thermodynamic phase space whose volumes satisfy (3.3), the relation (3.3) alone does not imply that their difference in free energy is zero. It is also necessary to ensure that P(V_C) = P(V_A) so that (3.1) holds. In a plot of G vs. P, this requirement is equivalent to the condition that A and C are the same point. This in turn implies (3.4), where the first condition ensures that P(V_A, T*) = P*. Geometrically, (3.3) and (3.4) ensure that A and C are the same point in the Gibbs energy diagram, and the continuity of K between A and C guarantees that this point is on some closed loop. A true self-intersection point (or double point) therefore emerges. We note that if the second derivative vanishes at some point, then K will no longer be an extremum there. This is illustrated for point C in the rightmost diagram of figure 1 by the red curve. The pressure will then be an extremum at this point (as shown in the leftmost diagram in figure 1), and the corresponding part of the curve in the free energy diagram will get reflected through P*, as shown by the red curve in the middle diagram in figure 1. By convention, we still regard this as a double point. These considerations can easily be generalized to any N-tuple point. We say that an N-tuple point exists at (P*, T*) if and only if the K-rule is satisfied: namely, the function K(V, V_0) has N real zeros {V_n} for some fixed V_0 and K′ vanishes at all of those roots; that is, the conditions (3.6) are satisfied by exactly N different values of {V_n}, including V_0 itself. Since the above argument about multicriticality is quite general, we would expect these considerations to apply to any thermodynamic system and any conjugate pair of thermodynamic quantities, such as the temperature and the entropy.
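As a concrete illustration, the sketch below applies the K-rule to an ordinary double (coexistence) point of the reduced Van der Waals fluid, used here purely as a stand-in equation of state. The specific form K(V, V_0) = ∫_{V_0}^{V} [P(V′, T) − P(V_0, T)] dV′ is our reading of (3.2), and the temperature and initial guesses are arbitrary choices, so this is a sketch rather than a reproduction of our black hole computations.

```python
# Minimal numerical sketch of the K-rule for a double point, using the reduced
# Van der Waals fluid as a stand-in equation of state.  K = K' = 0 at a second
# root reproduces Maxwell's equal-area construction.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def P(V, T):                      # reduced Van der Waals equation of state
    return 8.0 * T / (3.0 * V - 1.0) - 3.0 / V**2

def K(V, V0, T):                  # assumed form of the "area" function (3.2)
    return quad(lambda v: P(v, T) - P(V0, T), V0, V)[0]

def equal_area_conditions(x, T):
    V0, V1 = x
    return [K(V1, V0, T),          # K(V1, V0) = 0  (equal areas)
            P(V1, T) - P(V0, T)]   # K'(V1, V0) = 0 (same pressure)

T = 0.90                           # reduced temperature below the critical point
V_liq, V_gas = fsolve(equal_area_conditions, x0=[0.6, 2.5], args=(T,))
print(V_liq, V_gas, P(V_liq, T))   # coexistence volumes and pressure P*
```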
Multiple phases and N-tuple critical points
We shall now construct multiple phases and N-tuple points for GQT black holes based on the K-rule introduced in section 3. The procedure is simple (a schematic numerical sketch of step 2 is given after the list).

1. Write the function K as in (4.1), where P is given by (2.9) (which can be regarded as the equation of state), V is identified as the thermodynamic volume defined in (2.13), and {α_{n,k}} is the set of undetermined couplings.

2. Apply the K-rule to N positive distinct values of r_+ (where r_0 is taken to be any one of these values), then solve for P*, T*, {α_{n,k}} from the 2N − 1 independent equations (3.6). This implies that a minimum of 2N − 3 non-zero higher-curvature couplings is required.

3. Check whether the solution P*, T*, {α_{n,k}} provides a real N-tuple point, in the sense that exactly N roots solve (3.6) as desired. If not, change the choice of as many horizon radii as needed until a real N-tuple point occurs.
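The following sketch shows how the 2N − 1 residual equations of step 2 can be assembled for a generic pressure function. The concrete form of P, the volume map, and whether a particular choice of radii admits a solution are deliberately left open; this is a schematic scaffold under those assumptions, not the exact computation used to produce the figures.

```python
# Assembling the 2N-1 K-rule residuals of step 2 for N chosen horizon radii.
# `P(v, T, alphas)` and `volume(r)` are placeholders for the equation of state
# (2.9) and the thermodynamic volume (2.13); their concrete forms are not given here.
from scipy.integrate import quad
from scipy.optimize import fsolve  # noqa: F401  (used in the schematic call below)

def k_rule_residuals(unknowns, radii, P, volume):
    """unknowns = [P_star, T_star, alpha_1, ..., alpha_{2N-3}] for N = len(radii)."""
    P_star, T_star, *alphas = unknowns
    V = [volume(r) for r in radii]
    residuals = []
    # K(V_n, V_0) = 0 for n = 1, ..., N-1  (equal-area conditions)
    for Vn in V[1:]:
        residuals.append(quad(lambda v: P(v, T_star, alphas) - P_star, V[0], Vn)[0])
    # K'(V_n, V_0) = 0, i.e. P(V_n) = P_star, for n = 0, ..., N-1
    for Vn in V:
        residuals.append(P(Vn, T_star, alphas) - P_star)
    return residuals   # (N-1) + N = 2N-1 equations for 2N-1 unknowns

# usage (schematic): fsolve(k_rule_residuals, x0, args=(radii, P, volume))
```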
We pause to make a few supplementary comments regarding the feasibility of the method. First of all, K is constructed from P and V instead of T and S because we want K to be a simple function such that the equations in step 2 are solvable: the definition (4.1) fulfills this requirement since K is in fact a polynomial in r_+. For convenience, K is defined as a function of the radius rather than the volume. It should be pointed out that the density S^(2)_{n≥3} is quasi-topological and becomes trivial in d = 2n. Therefore, for even dimensions, we exclude α_{d/2,2} from our considerations. We shall also restrict ourselves to spherical black holes with κ = 1 for simplicity. Since the pressure becomes a polynomial in r_+ (with the temperature T considered as a non-dynamical parameter), Descartes' rule of signs can be applied, which relates the largest number of oscillations in the region r_+ > 0 to the number of sign changes in the sequence of a polynomial's coefficients. Thus, in the next paragraph, we discuss the feasibility of our method by studying the possibility of N − 1 oscillations occurring in P when the signs of the couplings are manipulated.
The feasibility of step 2 can be seen by induction. As indicated by (2.9) or table 1, switching on an arbitrary genuine GQT coupling (k ≥ 2) always introduces three independent terms proportional to different powers of r_+ in the expression for the pressure, in addition to P_0, the expression with all higher-curvature couplings set to zero. Since P_0 has one sign change, turning on any particular coupling can introduce another sign change.
Respecting the fact that the temperature and the coupling are free, it is possible to make P have a full oscillation, which means that (3.6) has real solutions and a double point can be obtained. Similarly, for critical points involving more phases, we can obtain additional oscillations by including two additional couplings per oscillation, as long as they switch on at least two monomials in r_+ that differ from those already present. Hence not every choice of 2N − 3 higher-order densities yields an equilibrium state with four phases or more. For example, as highlighted in table 1, switching on {α_{7,2}, α_{8,4}, α_{9,6}, α_{10,8}, α_{11,10}} and keeping the other couplings zero only introduces 1/r_+^11, 1/r_+^12, 1/r_+^13 into the pressure. Together with P_0, a total of five monomials are present in the expression for the pressure, which is only enough to construct a triple point. In this example we must therefore avoid turning on more than three couplings that contribute to the same three powers of r_+.
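Since the counting argument above rests on Descartes' rule of signs, a small helper that counts sign changes in an ordered coefficient sequence makes the bookkeeping explicit. The example coefficient values below are made up for illustration and do not correspond to any particular theory.

```python
# Descartes' rule of signs: the number of positive roots of a polynomial (and
# hence the maximum number of oscillations of the pressure) is bounded by the
# number of sign changes in its ordered coefficient sequence.
def sign_changes(coeffs):
    nonzero = [c for c in coeffs if c != 0]        # drop vanishing coefficients
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# e.g. coefficients of P ordered by increasing power of 1/r_+ (illustrative values)
print(sign_changes([+1.0, -0.3, 0.0, +0.05, -0.002]))   # -> 3 sign changes
```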
The number 2N − 3 should be considered as the minimum number of couplings required by our method. This may not be the smallest number of couplings needed for the emergence of an N-tuple point in general, since it is possible to have another sign change internally among the three monomials corresponding to α_{n,k}. The explicit form of the new terms that are activated by switching on α_{n,k} is given in (4.2). We claim that there can be at most one sign change among these three terms. If the sign switched twice, the first two coefficients would have to have a negative product, namely (4.3). Since we restrict ourselves to genuine GQT densities (k ≥ 2) in d ≥ 5 only, this condition can be simplified to (4.4). Note that the dimension d cannot be any of 2n − k, 2n − k + 1, 2n − k + 2; otherwise one of the terms would vanish and it would no longer be possible to have two sign changes between two terms. Therefore the first two factors in (4.4) must be positive, which leads to (4.5), where k ≥ 2 is applied to determine the directions of the inequalities. Meanwhile, we require that the last two terms produce a sign change as well. Through a similar analysis, we arrive at (4.6). Our claim is thus proved, since (4.6) contradicts (4.5). Even if there is an extra internal sign change, a half oscillation in the pressure does not necessarily occur, since these variables are not only integers but also constrained relative to each other in a complicated way. However, notice that the range of n in (4.6) is proportional to d, implying that we have a rather large parameter space for n and k. Hence we can still expect that there exist some choices of d, k, n that can give rise to an N-tuple point with fewer than 2N − 3 couplings. As a consequence, we would also expect N − 1 to be a lower bound on the number of couplings needed for the occurrence of an N-tuple point. Because of these considerations, step 3 is added to guarantee that exactly N phases are obtained.
From the previous discussion, we can see that working out a general Gibbs phase rule is challenging. For neutral multi-rotating black holes [19], because the phase structure is invariant under the exchange of any two angular momenta, turning on any two additional couplings always creates a new phase and vice versa. However, in the GQT scenario this symmetry is broken between any two couplings, and theories with distinct values of k (even if they have the same n) differ considerably in their phase structures. A multi-critical point may not occur even if infinitely many couplings are turned on. This implies that the problem is not as simple as it is in Lovelock gravity, where k = 1 and each density (2.7) contributes similarly to the thermodynamics [20]. We leave this question for future investigation.
In order to obtain multiple phases and multi-critical points, the physical constraints, together with the requirement that P > 0 everywhere, must also be considered. Since it is difficult to find a case with positive γ² everywhere, we only impose γ² > 0 in a neighbourhood of each critical point. In practice, we keep manipulating the r_+'s until a critical point satisfying all constraints occurs. Under these considerations, we explicitly obtain a quadruple point (figures 2 and 3) and a quintuple point (figures 4 and 5) for two different spherical GQT black holes. Note that seeing the merging of multiple swallowtails requires high precision in the computations due to the finely tuned nature of multi-critical points. Both multi-critical points have negative Gibbs free energies, implying stable phase transitions. For the quintuple point, an extra coupling α_{3,2} is fixed to be 1 before running the procedure in order to make physical cases emerge more easily.
Compared with the previous methods, the advantages of our procedure are quite significant. First, the application of the Maxwell construction turns the problem into algebra, which enables our method to produce critical points with arbitrarily high precision, so that very tiny phase structures can be discovered easily. More importantly, the method is more efficient in the sense that it does not need any fine-tuning procedure such as those previously employed [18][19][20][21], where N-tuple points were obtained by manipulating thermodynamic variables so that a common point of inflection occurred between multiple maxima and minima of the temperature as a function of r_+. However, due to the additional physical constraint γ² > 0 induced by the non-algebraic nature of the equations of motion in genuine GQTGs, finding a physical multicritical point with a large N is still time-consuming.
Conclusions
We have exploited Maxwell's equal area law to find an interphase equilibrium for black holes with multiple phases, reformulating it into what we call the K-rule. Utilizing the K-rule, we developed a novel approach for constructing N-tuple points in the phase space of black holes.
We applied our results to GQT theories with 2N − 3 genuine couplings. Our analysis suggests that the minimum number of couplings required for the formation of an N-tuple point is likely confined between N − 1 and 2N − 3. We presented quadruple and quintuple points to illustrate the effectiveness of the method.
Future work would involve applying the K-rule to other kinds of black holes, particularly those whose horizon structures are not spherically symmetric. These include accelerating black holes in non-linear electrodynamics [50], multiply rotating black holes [21], and various black hole solutions in supergravity theories [51].

where the superscripts (±) indicate the sign of γ². In order that the homogeneous part be subdominant at large r, we require γ² > 0 and A = 0.
Apart from the correct asymptote, physical theories should only propagate one type of massless spin-2 graviton on constant curvature backgrounds. This implies that the effective Newtonian constant must have the same sign as the one in general relativity, which means the third term in (2.15) should become negative for positive mass, that is, h′(f_∞) < 0 [42].
Figure 1. The Maxwell equal-area construction implies that P = P* divides AaBbC into two regions, AaB and BbC, with equal areas. The red curve indicates the trajectories of the plots if K″ = 0.
Figure 5. The figure shows further magnifications around the quintuple point A in figure 4. We can clearly see that the intersection of five curves appears at A.
Table 1. The table shows a general pattern that the pressure follows when some genuine GQT densities (k ≥ 2) are turned on. The top element of each column indicates the power of 1/r_+, and each row contains all possible couplings with a constant sum of subscripts. The table tells what powers of 1/r_+ in the expression of the pressure are influenced by which couplings. For example, if α

| 6,567.4 | 2023-06-11T00:00:00.000 | ["Physics"] |
Stabilization/Solidification of Zinc- and Lead-Contaminated Soil Using Limestone Calcined Clay Cement (LC3): An Environmentally Friendly Alternative
Due to increased carbon emissions, the use of low-carbon and low-cost cementitious materials that are sustainable and effective has recently been gaining considerable attention for the stabilization/solidification (S/S) of contaminated soils. The current study presents a laboratory investigation of the low-carbon/cost cementitious material known as limestone-calcined clay cement (LC3) for the potential S/S of Zn- and Pb-contaminated soils. The S/S performance of the LC3 binder on Zn- and Pb-contaminated soil was determined via pH, compressive strength, toxicity leaching, chemical speciation, and X-ray powder diffraction (XRPD) analyses. The results indicate that the immobilization efficiency of Zn and Pb was solely dependent on the pH of the soil. In fact, with the increase in pH values after 14 days, the compressive strength increased to 2.5-3 times that of the untreated soil. The S/S efficiency was approximately 88% and 99%, with an increase in the residual phases up to 67% and 58% for Zn and Pb, respectively, after 28 days of curing. The increase in immobilization efficiency and strength was supported by the XRPD analysis, which showed the formation of insoluble metal hydroxides and related phases such as zincwoodwardite, shannonite, portlandite, hatrurite, anorthite, ettringite (AFt), and calcite. Therefore, LC3 was shown to offer green and sustainable remediation of Zn- and Pb-contaminated soils, while the treated soil can also be used as a safe and environmentally friendly construction material.
Introduction
Soil contaminated with heavy metals is a serious threat to sustainable development and global food security [1][2][3][4]. In contrast to water and air pollution, heavy metal pollution in soils is an invisible and unseen problem [1,[5][6][7][8][9]. Many of the world's contaminated sites have become dump sites for various industrial by-products that contain inorganic pollutants such as heavy metals. As these heavy metals come in contact with water, human health and the environment within the ecosystem become potentially at risk [10,11]. Among the hazardous heavy metals, zinc (Zn) and lead (Pb) are considered harmful pollutants that exist at elevated levels in most of the contaminated sites around the world [12]. Further, Zn and Pb are not only harmful to human health and the environment, but also lead to mechanical-chemical degradation of contaminated soils, which in turn results in unfavorable conditions for the redevelopment of contaminated sites. It is therefore imperative to identify a time- and cost-effective remediation method for the treatment of heavy metal-contaminated soils, so that the treated soils can be reused as safe and environmentally friendly construction materials. Stabilization/solidification (S/S) is considered to be the most appropriate method for the immobilization of heavy metal-contaminated soils due to its ease and workability among the available effective remediation methods [13][14][15][16]. Besides, the United States Environmental Protection Agency (USEPA) recognizes S/S as the best demonstrated available technology (BDAT) for treating hazardous metals [7,17,18]. The mechanisms involved in S/S treatment are as follows: stabilization refers to reducing the hazard potential by converting contaminants into their least soluble/toxic form [19,20], whereas solidification is the encapsulation of waste in a monolith of high structural integrity that involves both mechanical binding and chemical interaction between solidifying agents such as cementitious materials, which further restricts the movement of heavy metals by isolating them into less soluble/insoluble crystalline phases [14]. The performance of S/S depends on the nature of the contaminants (organic/inorganic) and the binders used. Inorganic heavy metals are commonly immobilized via chemical reaction and physical encapsulation by forming sparingly soluble metal hydroxides. Thus, the binder plays a key role in the S/S process, and the development of novel binders, specifically low-carbon/cost binders, has gained special attention recently. In a previous study, the authors revealed that partial replacement of ordinary Portland cement (OPC) with calcined clay (CC) and limestone (LS) has better immobilization efficiency for Zn-contaminated soils (Reddy et al. [21]). In addition, various hydration products, such as portlandite, ettringite, tri-calcium silicate, and wulfingite, were found to be responsible for the immobilization of Zn-contaminated soils. In addition, Wang et al. [6] reported that supplementary cementitious materials (SCMs) such as CC and LS have improved immobilization efficiency in treating both oxy-anionic As- and cationic Pb-contaminated soils. The leachability efficiency of CC and LS was approximately 96% and 99% for As and Pb, respectively. Further, the addition of LS to CC promotes the transformation of metastable hydroxyl-rich AFm to stable carbonate-rich AFm, which increases the degree of polymerization in calcined clay hydrates, resulting in enhancement of mechanical properties.
Therefore, partially replacing conventional cement binders with SCMs gives better performance in treating heavy metal-contaminated soils.
Recently, a new ternary blend known as limestone-calcined clay cement (LC 3) was successfully demonstrated in the authors' previous study on Zn-contaminated soils [21]. LC 3 is known as a low-carbon and low-cost binder since its production involves replacing clinker with low-grade calcined clays (CC) and limestone (LS). Typically, LC 3 is a ternary blend of 30% CC, 15% LS, and 5% gypsum combined with 50% cement clinker. Low-grade kaolinitic clay (kaolinite content > 40%), when calcined at 750 °C, undergoes dehydroxylation to form CC/metakaolin (MK) [6], as presented in Equation (1), which possesses high pozzolanic reactivity due to the presence of alumina- and silicate-rich phases. Further, when LS reacts with CC it produces carboaluminosilicate-rich mineral phases that are responsible for the formation of primary and secondary hydration products such as calcium silicate hydrate (C-S-H), calcium hydroxide (C-H), calcium aluminate silicate hydrate (C-A-S-H), and calcium aluminate hydrate (C-A-H) [14][15][16]. Furthermore, the production of 1 ton of cement produces 0.82 ton of CO 2, whereas 1 ton of CC produces 0.175 ton of CO 2 emissions. Therefore, replacing 50% of OPC with CC and LS reduces the carbon footprint by up to 40% [22][23][24], which makes the binder low-carbon/cost and also an environmentally friendly alternative material [24][25][26][27][28]. Although the influence of LC 3 has been validated for Zn alone, its effectiveness and the mechanisms involved in the immobilization of Zn and Pb when they co-exist are unknown and need additional investigation.
The objective of the study was to evaluate the feasibility of the LC 3 binder for the S/S of Zn- and Pb-contaminated soils, individually as well as combined, at elevated levels in terms of strength, toxicity leaching, chemical speciation, and XRPD analysis. The research aimed to provide scientific insights on the environmentally friendly alternative LC 3, namely: (1) to investigate the immobilization mechanisms involved in the soils treated with Zn and Pb; (2) to study the effect of curing time and binder dosage on physical strength and pH; and (3) to elucidate the hydration products responsible for the S/S of treated soils. This study demonstrates the feasibility of using the sustainable binder LC 3 for the treatment of contaminated soils, while the treated soil can be reused as a safe and environmentally friendly construction material.
Materials
Clean soil used in this study was collected from a nearby open area at Sardar Vallabhbhai National Institute of Technology, Surat, India. Approximately 250 kg of the soil sample was collected from a depth of 0.5-1.0 m. Later, the soil was homogenized, air dried, and passed through a 2 mm sieve before use. The soil was classified as CH as per the Unified Soil Classification System based on ASTM D2487 [29]; its initial water content, specific gravity, and pH were 19.7%, 2.59, and 6.8, respectively, and the detailed chemical composition of the clean soil is shown in Table 1. Further, the binder LC 3 was procured from Technological Action and Rural Advancement (TARA), Delhi, India, and the major oxides present were CaO, SiO 2, Al 2 O 3, and Fe 2 O 3 at 61.4%, 24.38%, 6.52%, and 4.31%, respectively. In addition, the remaining physicochemical parameters of the soil and the chemical composition of the LC 3 binder used in the study can be found in the authors' previous study [21].
Artificially Contaminated Soil and S/S Samples Preparation
The target metals used in the study were lead (Pb) and zinc (Zn), as they are considered the most commonly encountered heavy metals at contaminated sites worldwide [15,18]. Analytical grade zinc nitrate hexahydrate Zn(NO 3 ) 2 ·6H 2 O and lead nitrate hexahydrate Pb(NO 3 ) 2 ·6H 2 O were used, and the nitrate anion was chosen because it is inert and also eliminates unexpected precipitates with other ions during hydration and pozzolanic reaction [5,15]. Further, the required volume of stock solution was added to the air-dried soil until the stock solution content reached 29%, i.e., the optimum moisture content (OMC) of the soil, and the mixture was then left untouched for 14 days to ensure the necessary contact between the soil and the heavy metals Pb and Zn. A similar procedure for the preparation of artificially contaminated soil was reported by Du et al. [18,30]. Further, concentrations of 5000 and 10,000 mg/kg for both Zn and Pb, and a combination of both at 10,000 mg/kg, were used to represent typical field concentration levels. In addition, for comparison purposes, the untreated soil concentration was maintained at 10,000 mg/kg. The samples were designated as ZnU, PbU (untreated Zn and Pb), and Zn 0.5, Zn 1.0, Pb 0.5, Pb 1.0, and ZnPb 1.0 in the study. Furthermore, the binder LC 3 was added to the artificially contaminated soil at 8% on a predetermined dry soil weight basis. The soil-binder mixture was thoroughly mixed using an electronic mixer for 5-10 min in order to obtain a homogenous mix, until the water content reached the predetermined OMC and maximum dry density (MDD), which are shown in Table 2. The mixture was compacted in three layers in 5 cm-diameter and 10 cm-height PVC molds using a hydraulic jack until it reached the MDD. The molds were carefully sealed in polythene bags and demolded after curing periods of 3, 7, 14, 28, and 56 days. The mixing, curing, and compaction procedures followed ASTM C192 [31] to ensure similarity among all the samples.
Testing Methods
The primary objective of the study was to determine the mechanical strength, leaching, chemical speciation, and mineralogy of the untreated and LC 3 -treated specimens. Physical strength was determined using an unconfined compression strength (UCS) test as per ASTM D2166 [32] with a controlled strain rate of 1%/minute. Later, the crushed samples were taken for determination of leaching, chemical speciation, and mineralogy tests. In addition, pH values were measured in the leachate by using HANNA waterproof tester, as per ASTM standard [33].
Toxicity leaching was performed as per the standard toxicity characteristic leaching procedure (TCLP), EPA Method 1311 [34]. In total, 10 g of soil was mixed in TCLP fluid #1, i.e., a CH 3 COOH and NaOH mixture at pH 4.93 ± 0.05, and the soil-solution mixture was rotated for 18 ± 2 h at 30 rpm using an end-to-end shaker. The leachant solution was then separated by centrifugation/decantation at 3000 rpm for 8-10 min. Finally, the leachate was subjected to pH analysis and acidified using HNO 3 (to pH ≤ 2) before heavy metal analysis. All samples were tested in triplicate/quadruplicate to ensure repeatability, and average values are reported in the study. In addition, to understand the leaching performance, S/S efficiencies of heavy metals [35] were determined using Equations (2) and (3), where S represents the S/S efficiency and L is the leaching factor, defined as the heavy metal concentration in the leachate divided by the initial soil contamination.
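As a sketch of how Equations (2) and (3) are applied (the equations themselves are not reproduced in this excerpt), the following Python snippet computes the leaching factor and S/S efficiency from a leachate concentration; the S = 100 − L relation and the 20:1 TCLP liquid-to-solid conversion are assumptions consistent with the reported values (e.g., L = 11.89 and S ≈ 88% for Zn), and the numeric inputs are illustrative only.

```python
# Hedged sketch of the leaching factor L and S/S efficiency S described above.
# Equations (2) and (3) are not reproduced here; the forms below are inferred
# from the reported values (L = 11.89 with S ~ 88% for Zn is consistent with
# S = 100 - L). The 20:1 liquid-to-solid ratio of TCLP (EPA 1311) is assumed
# only to convert the leachate concentration to a per-kg-of-soil basis.

TCLP_LIQUID_TO_SOLID = 20.0  # liters of extraction fluid per kg of soil

def leaching_factor(leachate_mg_per_L, initial_soil_mg_per_kg):
    """Leaching factor L in percent: leached fraction of the initial contamination."""
    leached_mg_per_kg = leachate_mg_per_L * TCLP_LIQUID_TO_SOLID
    return 100.0 * leached_mg_per_kg / initial_soil_mg_per_kg

def ss_efficiency(L_percent):
    """Assumed S/S efficiency: S = 100 - L (percent immobilized)."""
    return 100.0 - L_percent

# Illustrative input (not a measurement from the study):
L_zn = leaching_factor(leachate_mg_per_L=59.5, initial_soil_mg_per_kg=10_000)
print(L_zn, ss_efficiency(L_zn))   # ~11.9 % leached, ~88.1 % S/S efficiency
```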
Further, chemical speciation analysis was performed using a modified Community Bureau of Reference three-step sequential extraction procedure (BCR-SEP) [36][37][38]. The test method comprised four phases: P1 = acid soluble phase, extraction in 0.11 mol/L CH 3 COOH at pH 2.8; P2 = reducible phase, extraction in 0.5 mol/L NH 2 OH·HCl at pH 1.5; P3 = oxidizable phase, oxidation in acid using 30% H 2 O 2 and extraction in 1 mol/L CH 3 COONH 4 , both at pH 2; and P4 = residual phase, extraction by total digestion using a (3:1) mixture of concentrated 70% HNO 3 and 30% HCl following the ISO 11466 protocol [39]. The P1 phase comprised heavy metals that were precipitated and co-precipitated in a carbonate phase and are present in a bioavailable form. The P2 phase was made up of iron (Fe) and manganese (Mn) oxides that can be mobilized under low pH (acidic) conditions. The P3 phase was incorporated into stable organic matter and sulfides that are mobilizable, but not bioavailable, during oxidation. The P4 phase contained primary and secondary minerals, which can hold the heavy metals within their crystal lattices [2,38]. The P4 phase is expected to persist in the contaminated soil for long periods and is very difficult to release, even under aggressive pH conditions. In addition, to measure the reliability of the sequential extraction procedure, the metal recovery rate (MRR), given in Table 4, is the most commonly used parameter [38,40]. It is defined as the sum of the four phases divided by the total concentration obtained from complete digestion, as presented in Equation (4).
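A minimal sketch of the MRR computation described by Equation (4), assuming the usual percentage form; the phase concentrations below are illustrative placeholders, not measured data.

```python
# Metal recovery rate (MRR) as described in the text for Equation (4): the sum of
# the four BCR-SEP phases divided by the total concentration from complete
# digestion. Expressing the result in percent is an assumption; the numbers are
# placeholders, not data from the study.

def metal_recovery_rate(p1, p2, p3, p4, total_digestion):
    """MRR (%) = (P1 + P2 + P3 + P4) / total digestion concentration * 100."""
    return 100.0 * (p1 + p2 + p3 + p4) / total_digestion

print(metal_recovery_rate(p1=2100, p2=1500, p3=900, p4=5100, total_digestion=10_000))  # ~96 %
```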
Moreover, XRPD analysis was performed after hydration stoppage in the treated samples. The samples were ground to pass a 75 µm mesh and tested using a Rigaku X-ray diffractometer with Cu-Kα radiation (λ = 1.540538 Å) over a 2θ range of 10°-70° with a step size of 0.02° under room temperature conditions. The system was operated at 45 kV and 30 mA, with a scan time of 20 s at each step in step-scan mode, and the XRPD results were analyzed using PANalytical X'Pert HighScore Plus software v.3 (Malvern Panalytical, Worcestershire, United Kingdom) [41] for phase identification of minerals. Figure 1 presents the leachate pH of Zn- and Pb-treated soil with varying curing periods. It can be seen that LC 3 treatment significantly increased the pH, by approximately 1.3-2.4 and 1.1-1.4 units for Zn and Pb, respectively, at 7 days of curing as compared to untreated soil. However, Pb failed to reach the remediation goal compared to Zn. This may be because Pb, even at a lower molar concentration than Zn, induces a more significant retarding effect on hydration in the system. For instance, at 28 days, mean pH values reached 8.33-9.12 and 7.22-7.94, i.e., increases of 2.66-3.45 and 1.98-2.7 units over the untreated soils, for Zn and Pb, respectively. Further, the pH values of Pb 1.0 and ZnPb 1.0 showed similar trends, indicating that Pb at higher concentrations retards the hydration mechanism in the system, which agrees well with Wang et al. and Xia et al. [17,42]. Furthermore, at 56 days of curing the pH values increased to 8.71-9.36 and 7.37-8.38, i.e., increases of 3.06-3.71 and 2.16-3.17 units. The increase in pH values is attributed to the chemical composition of the binder, which facilitates the release of OH − , Ca 2+ , and Al 3+ ions into the pore water, creating an alkaline environment in the system [43,44]. Further, the dissolution of aluminates and silicates expedites the pozzolanic reaction over time, which is responsible for the binding of heavy metals to form insoluble metal hydroxides [24,45,46]. Therefore, the increase in pH over time in the treated system supports the formation of various hydration products, such as [ZnAl(OH) 2 3 ], which is also validated by the XRPD analysis in Section 3.5 of this study. Figure 2 shows the leached Zn and Pb concentrations in the TCLP test. It was observed that the leached Zn and Pb concentrations exceeded the Hazardous Waste Management (HWM) rules [47] limits of 250 mg/L for Zn and 5 mg/L for Pb, suggesting that the soil is toxic and requires remediation. The average leached concentrations of LC 3 -stabilized soils decreased with increasing curing time. The leached Zn concentrations fell below the regulatory limit after 14 days of curing, whereas Pb reached the regulatory limit only after 28 days of curing, showing solidification efficiencies of approximately 88% and 99% with leaching factors of 11.89 and 1.22 for Zn and Pb, respectively. The decrease in the leached concentrations could be due to the increased pH values discussed in Section 3.1 and the formation of metal hydroxides in the presence of freely available Ca(OH) 2 and Ca 2+ ions in the binder [43]. Moreover, LS, when added with CC, reacts to form carbo-aluminates, which produce C-S-H- and C-A-H-based hydration products that tend to arrest the heavy metals in insoluble metal hydroxides, increasing the immobilization efficiency and reducing leaching.
Overall, it can be concluded that LC 3 stabilization promotes the immobilization of Zn- and Pb-contaminated soils, which further reduces leachability. The unconfined compressive strength results are summarized in Table 3. Moreover, increases in the strength values were observed even after 28 days of curing, which is because LC 3 can continue to gain strength for up to 365 days of curing [48] as a consequence of hydration reactions [45,49]. In addition, the binder LC 3 includes partial replacement of cement with calcined clay and limestone, and during the hydration process the combination of Ca(OH) 2 and CC increases the pozzolanic reactivity [24,28,41,43,50], which produces more binding phases, resulting in improved density and reduced pore spaces that ultimately lead to increased compressive strength. Therefore, the results demonstrate that Zn and Pb concentrations at higher levels have synergetic effects that could favorably affect the strength behavior of contaminated soils.
Chemical Speciation of Heavy Metals
Typically, four phases of soil sample are analyzed to recognize the environmental activity and bioavailability of heavy metals, namely acid soluble (P1), reducible (P2), oxidizable (P3), and residual phases (P4). Commonly, P1 and P2 phases are considered to be bioavailable in nature due to their weak binding capacity in the acidic/low pH environment [38]. The higher the proportion of P1 and P2 in an active fraction, the greater the heavy metal's ion mobility [36,51,52]. Therefore, the phases P1 and P2, particularly the P1 phase, are not stable and impose environmental risks resulting from leached heavy metals in the environment. Further, to assess the reliability of the phase extractions, the metal recovery rate (MRR) is given in Table 4. Figure 4a,b shows the histograms of Zn and Pb metal distribution after LC 3 treatment at 28 and 56 days of curing. As shown in Figure 4a, the P1 of Zn in LC 3 -stabilized soil was 21-32% lower than 61% of ZnU (untreated Zn) and approximately 37-51% of Zn in the stabilized soil was bound to the P4 phase. Besides, the P1 phase of Pb in the LC 3 -stabilized soil ranged from 33-46% lower than 58% of PbU (untreated Pb), and 36-44% of Pb in the stabilized soil was bound to the P4 phases after 28 days of curing. While at 56 days of curing, as shown in Figure 4b, the increase in the P4 phase in the stabilized soil ranged from 53-67% and 41-58% for Zn and Pb, respectively. The increase in the P4 phase thus promotes the development of highly insoluble and immobile complexes that are responsible for making the heavy metals less bioavailable in nature under low pH/acidic environmental conditions. It can be concluded that LC 3 stabilization/solidification results in the transformation of (acid-soluble) P1 phases of Zn-and Pb-contaminated soil into more insoluble (residual) P4 phases.
XRPD Analysis
The mineralogical analysis was conducted after 28 days of curing for both untreated and LC 3 -treated samples to examine the effect of hydration and pozzolanic reactions on the various phases of the contaminants, i.e., Pb and Zn, in the stabilized matrix, as shown in Figure 5. The identified hydration phases were the major cementitious products responsible for stabilization in the treated samples, and these hydrated products controlled the heavy metal migration in the LC 3 samples, which agrees well with Wang et al. [48]. The formation of aluminate- and silicate-based products after LC 3 treatment was also noticeable, which was due to the availability of carbo-aluminate phases in the hydroxyl-rich calcined clay and limestone [24,28,46]. The changes in the structural and crystalline phases were due to lime, which is effectively activated by calcined clay, further enhancing the formation of metal hydrate and hydroxide phases. These products are normally insoluble, promoting Zn and Pb immobilization, which improves soil stabilization and reduces the leaching of contaminated soils.
Conclusions
The present study investigated the role of LC 3 in the S/S of Zn- and Pb-contaminated soils and evaluated the S/S performance through analyses of unconfined compressive strength, chemical speciation, leaching, and XRPD. XRPD results showed that Zn and Pb had become an integral part of the crystalline phase, physically and chemically, forming Si- and Al-based carbo-alumino-silicate products such as Ca(OH) 2 , Ca 3 O 5 Si, CaAl 2 Si 2 O 8 , and Ca 6 Al 2 (SO 4 ) 3 (OH) 12 ·26H 2 O (AFt), which were responsible for the immobilization of heavy metals. The compressive strength results indicated that the addition of the LC 3 binder at 8% could improve the strength values by up to three times compared to untreated Zn- and Pb-contaminated soils. The pH transformation from acidic to alkaline after a 14-day curing period allowed the adsorption of heavy metals and the formation of various insoluble metal hydroxides. The leachability of Zn-contaminated soil reached the regulatory limit after 14 days of curing, whereas the Pb-contaminated and mixed (ZnPb) contaminated soils reached the limit only after 28 days of curing. Chemical speciation results indicated that the reduction in the acid-soluble phases and the increase in the residual phases significantly supported the formation of insoluble hydration products, which were responsible for the increased immobilization efficiency and strength. The results illustrate that LC 3 is a promising binder for solidifying/stabilizing contaminated soils, and the treated soils can be reused as safe and sustainable construction materials. | 5,839.4 | 2020-05-04T00:00:00.000 | [
"Materials Science"
] |
Guided-acoustic stimulated Brillouin scattering in silicon nitride photonic circuits
Coherent optomechanical interaction known as stimulated Brillouin scattering (SBS) can enable ultrahigh resolution signal processing and narrow-linewidth lasers. SBS has recently been studied extensively in integrated waveguides; however, many implementations rely on complicated fabrication schemes. The absence of SBS in standard and mature fabrication platforms prevents its large-scale circuit integration. Notably, SBS in the emerging silicon nitride (Si3N4) photonic integration platform is currently out of reach because of the lack of acoustic guidance. Here, we demonstrate advanced control of backward SBS in multilayer Si3N4 waveguides. By optimizing the separation between two Si3N4 layers, we unlock acoustic waveguiding in this platform, potentially leading up to 15× higher Brillouin gain coefficient than previously possible in Si3N4 waveguides. We use the enhanced SBS gain to demonstrate a high-rejection microwave photonic notch filter. This demonstration opens a path to achieving Brillouin-based photonic circuits in a standard, low-loss Si3N4 platform.
This supplementary note describes our simulation model. Based on our model, we discuss the influence of each waveguide parameter on the Brillouin gain coefficient. We also discuss the optimization procedures with a genetic algorithm. We verify our model by comparing the results to those of earlier published work.
Simulation Setup and Method
Fig. S1a shows the cross section of our optical simulation model, which matches the real SDS waveguide. The 2D acoustic simulation model shown in Fig. S1b is identical except for additional perfectly matched layers (PML) that significantly reduce reflections from the borders. We use COMSOL Multiphysics for both optical and acoustic simulations, and the material properties applied in our model are listed in Table S1. The procedure of our simulations is as follows. First, we simulate the optical modes of the pump and probe. Then, we calculate the electrostrictive stress tensor and the optical forces induced by the electrostriction effect. The influence of radiation pressure is ignored here because the refractive-index contrast between silicon nitride and silicon oxide is small. After the optical simulation, we map the electrostrictive stress tensor and optical forces to the acoustic model. Finally, the Brillouin gain coefficient is calculated based on the overlap between the optical forces (f n ) and the acoustic responses (u n ) [5], where ω s is the angular frequency of the Stokes wave and Ω is the angular frequency of the excited acoustic mode. The peak value of the Brillouin gain coefficient can also be simplified to the expression in [63], where γ e is the electrostrictive constant, η the opto-acoustic overlap, n p the refractive index, λ p the pump wavelength, ρ 0 the density, v a the speed of sound, Γ B the Brillouin linewidth, and A eff the effective area. We also apply the 3D acoustic simulation model shown in Fig. S1c to investigate the propagation of the excited acoustic mode. The acoustic simulation cross section is extruded for 10 wavelengths and followed by PML in the z direction. We map the acoustic response from the 2D model as the initial condition and monitor the wave as it propagates along the z direction. As depicted in Fig. S2, the acoustic field stays confined between the silicon nitride layers beyond 5 µm of propagation (10× the acoustic wavelength), clearly showing acoustic waveguiding.
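As a quick, hedged sanity check on the simulated Brillouin shift (this is not the overlap-integral calculation itself), the standard backward-SBS phase-matching relation ν_B = 2 n_eff v_a / λ_p can be evaluated with representative values; the effective index and acoustic velocity used below are illustrative assumptions, not fitted parameters of the model.

```python
# Back-of-the-envelope Brillouin frequency shift for backward SBS using the
# standard phase-matching relation nu_B = 2 * n_eff * v_a / lambda_p. This is only
# a sanity check; n_eff and v_a below are illustrative assumptions.

def brillouin_shift_GHz(n_eff: float, v_acoustic_m_s: float, pump_wavelength_m: float) -> float:
    """Brillouin frequency shift (GHz) for backward SBS."""
    return 2.0 * n_eff * v_acoustic_m_s / pump_wavelength_m / 1e9

# Example: an oxide-clad Si3N4 multilayer waveguide near 1550 nm
print(brillouin_shift_GHz(n_eff=1.54, v_acoustic_m_s=6600.0, pump_wavelength_m=1.55e-6))
# ~13.1 GHz, the same order as the 12-14 GHz range scanned in the optimization
```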
Influences of different parameters on SBS gain
The Brillouin gain coefficient is related to multiple waveguide parameters. To investigate the influence of different parameters, we first find the optimized geometry using a genetic algorithm, as described in the next section, then sweep each parameter individually to obtain the corresponding SBS gain profiles. The optimized geometry parameters are t int = 420 nm, t g = 200 nm, t c = 7350 nm, and w = 3100 nm. Fig. S3a-d shows the variation of g B when we sweep the waveguide width w, stripe thickness t g , stripe separation t int , and cladding thickness t c separately. As w increases, the SBS gain first increases because of improved acoustic confinement, then saturates when the increased effective mode areas of both the optical and acoustic fields cancel out the benefit of improved acoustic confinement. As t g increases, the SBS gain also increases at first, due to better overlap between the optical and acoustic fields. However, the optical mode becomes more concentrated within the stripes as t g increases further, leading to a reduced overlap and a smaller SBS gain. The oscillatory behavior of the gain coefficient with t int and t c is believed to be a result of a change in the resonance condition of the acoustic mode in the intermediate layer.
Optimization with Genetic Algorithm
We apply a genetic algorithm to optimize the geometry of the waveguide for a higher Brillouin gain coefficient [65]. In our gene pool, t g ranges from 100 nm to 300 nm with a step of 10 nm, t int ranges from 200 nm to 700 nm with a step of 10 nm, w ranges from 500 nm to 5000 nm with a step of 100 nm, and t c ranges from 7 µm to 9 µm with a step of 20 nm. These ranges and step sizes result in nearly 5 million combinations, which would require an unreasonably long time to process. Instead, we simulate 1760 SDS geometries based on a 14-round evolutionary genetic algorithm. The genetic algorithm is carried out as follows. First, we randomly generate 200 candidates from the gene pool. We simulate the SBS gain of each candidate from 12 GHz to 14 GHz. Then, we select the 80 elites with the highest peak g B . After that, we generate 80 kids by randomly pairing the 80 elites. Each pair gives birth to two kids: one inherits three genes from the mother and one gene from the father, while the other kid inherits the opposite. During the inheritance, there is a 15% chance that one of the genes mutates, i.e., the value of that gene increases or decreases by 20%. Finally, we also introduce 40 new candidates from the gene pool to increase the variety. The parents, the kids, and the new candidates then enter the next round of evolution. Fig. S4 shows the peak Brillouin gain coefficients versus the corresponding Brillouin shift of all the structures simulated with the genetic algorithm. The highest gain coefficient is around 1.2 m⁻¹W⁻¹ at 14 GHz. Fig. S5 shows the normalized acoustic displacement and electric fields, as well as the gain profiles, of selected geometries labelled a-d in Fig. S4. We observe that the linewidth of the SBS gain spectrum narrows as the gain coefficient increases, which indicates a longer acoustic lifetime. The extended acoustic lifetime is primarily achieved through reduced acoustic radiation, signifying better acoustic confinement in the intermediate layer.
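A minimal Python sketch of the evolutionary bookkeeping described above; the fitness function is a placeholder for the COMSOL peak-gain simulation, so only the selection, pairing, inheritance, and mutation logic is meaningful.

```python
# Sketch of the 14-round genetic algorithm described above. The geometry genes and
# their ranges follow the text; simulate_peak_gain() stands in for the 2D optical +
# acoustic simulation (12-14 GHz sweep), so its output is a dummy value.
import random

GENE_POOL = {
    "t_g":   [100 + 10 * i for i in range(21)],    # 100-300 nm, 10 nm step
    "t_int": [200 + 10 * i for i in range(51)],    # 200-700 nm, 10 nm step
    "w":     [500 + 100 * i for i in range(46)],   # 500-5000 nm, 100 nm step
    "t_c":   [7000 + 20 * i for i in range(101)],  # 7-9 um, 20 nm step
}                                                  # ~5 million combinations in total

def simulate_peak_gain(candidate):
    # Placeholder for the COMSOL-based SBS gain simulation.
    return random.random()

def make_candidate():
    return {k: random.choice(v) for k, v in GENE_POOL.items()}

def make_kid(mother, father, genes_from_mother):
    kid = {k: (mother if k in genes_from_mother else father)[k] for k in GENE_POOL}
    if random.random() < 0.15:                     # 15% chance that one gene mutates
        gene = random.choice(list(GENE_POOL))
        kid[gene] *= random.choice([0.8, 1.2])     # value changes by +/- 20%
    return kid

population = [make_candidate() for _ in range(200)]
for generation in range(14):
    elites = sorted(population, key=simulate_peak_gain, reverse=True)[:80]
    random.shuffle(elites)
    kids = []
    for mother, father in zip(elites[0::2], elites[1::2]):
        genes = set(random.sample(list(GENE_POOL), 3))   # one kid: 3 genes from mother
        kids += [make_kid(mother, father, genes),
                 make_kid(mother, father, set(GENE_POOL) - genes)]  # the other: the opposite
    newcomers = [make_candidate() for _ in range(40)]    # fresh candidates for variety
    population = elites + kids + newcomers               # 200 candidates enter the next round
```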
Box-shaped Waveguide
We also explored a different variant of the multilayer waveguide for a higher Brillouin gain coefficient. One possible solution is a box-shaped waveguide, i.e., adding sidewalls to improve acoustic confinement. Fig. S6 shows the normalized electric and acoustic displacement fields, as well as the Brillouin gain profile, of a box-shaped waveguide. The geometry parameters are based on the optimized symmetric double-stripe structure. The peak Brillouin gain coefficient is 1.4 m⁻¹W⁻¹, which is slightly higher than that of a double-stripe waveguide with the same geometry parameters. However, optical propagation losses in these waveguides are also higher, due to roughness of the sidewalls. Hence, in this work we focus on the double-stripe waveguide, which offers the best combination of Brillouin gain coefficient and propagation loss.
Benchmarking and Comparison with Previous Results
To verify our simulation model, we benchmark it against the geometries reported in [15] and [48]. Fig. S7a and b show our simulated normalized electric field and normalized displacement field for the waveguide in [15], which match quite well with the corresponding fields provided in the supplementary material of [15]. The peak Brillouin gain coefficient and the trend also match the results in [15]. The acoustic frequency at the peak is shifted from the result in [15] by about 300 MHz, which may be due to the different material properties applied in the models. Our simulated normalized electric field, normalized displacement field, and SBS gain profile for the waveguide in [48], shown in Fig. S7d and e, are also quite close to the original simulated and experimental results in [48]. Table S2 compares the double-stripe waveguide with recently reported silicon nitride [15,48], silicon [16], and chalcogenide [17] waveguides. Our waveguide has low propagation loss, a reasonable Brillouin gain coefficient, negligible two-photon absorption, and a non-suspended structure, making it a unique choice for higher-density integration of Brillouin circuits in a standard process. (Fig. S7 caption: benchmark simulations for the waveguides in [15] and [48]. a, Normalized electric field, b, normalized displacement field, and c, SBS gain profile of the waveguide in [15] simulated in our model. d, Normalized electric field, e, normalized displacement field, and f, SBS gain profile of the waveguide in [48] simulated in our model.)
SBS Gain Characterisations
In the gain characterisation setup, the probe laser is a Toptica DFB pro BFY laser operating around 1550 nm. We scan the laser wavelength using current control; the current tuning coefficient of this laser is 0.8 GHz/mA. The laser output is modulated using a Thorlabs LN05S-FC intensity modulator with a 10.075 MHz sine wave generated by a Hewlett-Packard 33120A function/arbitrary waveform generator (AWG). After modulation, the light is amplified by an Amonics AEDFA-PA-35 amplifier.
The pump laser is an Avanex A1905LMI, also operating around 1550 nm. Its output is modulated using a Covega LN81S-FC intensity modulator, driven by a 10 MHz sine wave produced by a Wiltron 69147A synthesized sweep generator. The light is then amplified using an Amonics AEDFA-33-B amplifier. To prevent crosstalk from signal mixing, the reference signal is created by mixing the synchronized output of the AWG (a TTL square wave at the same frequency as the sine wave) with the 10 MHz reference output of the sweep generator. These signals are mixed using a Mini-Circuits ZFM-3H mixer. The probe light is sent to a Discovery Semiconductor DSC30S photodiode, after which the signal is sent to an EG&G Princeton Applied Research model 5510 lock-in amplifier, which measures the amplitude of the signal. The main experimental parameters for the SBS characterization setup are listed in Table S3. Fig. S8 shows the same chip response as in Fig. 2b, but over a wider detuning range to include the response of the fiber pigtail. This fiber is a 1.4 m length of single mode fiber (SMF), which has a Brillouin gain of 0.14 m⁻¹W⁻¹ [66]. The amplitude of the fiber response is used to calculate the SDS Brillouin gain coefficient, as described in the Methods section of the main text. Table S4 compares the simulated and measured Brillouin properties of the SDS waveguides investigated in this work. We observe excellent agreement between simulations and experiments in the Brillouin frequency shift and linewidth. The maximum discrepancy in the Brillouin frequency shift is 270 MHz, which is only 2% of the SBS shift (13 GHz). The discrepancies in the linewidths are below 50 MHz, except for the narrowest waveguide of 1.1 µm. We observed larger discrepancies in the Brillouin gain coefficients: the measured results are consistently 24-50% lower than predicted from simulations, which can be explained by fabrication uncertainties in layer thickness and waveguide width (see Supplementary Note A). Similar trends and accuracy are also present in previously reported SBS experiments, including waveguides in chalcogenides [67], silicon [67], and silicon nitride [15,48].
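One hedged way to read this calibration: in the small-gain regime the lock-in amplitude is proportional to g_B · P_pump · L_eff, so the fiber pigtail with its known gain coefficient serves as an absolute reference. The sketch below is a generic illustration under that assumption, not the exact procedure of the paper's Methods section; all numerical inputs are placeholders.

```python
# Generic reference-fiber calibration of an on-chip Brillouin gain coefficient,
# assuming the measured response amplitude scales as gB * P_pump * L_eff in the
# small-gain regime. Not the paper's exact procedure; inputs are placeholders.
import math

def effective_length(physical_length_m: float, loss_dB_per_m: float) -> float:
    """L_eff = (1 - exp(-alpha * L)) / alpha, with alpha converted from dB/m to 1/m."""
    alpha = loss_dB_per_m * math.log(10) / 10.0
    return physical_length_m if alpha == 0 else (1.0 - math.exp(-alpha * physical_length_m)) / alpha

def chip_gain_coefficient(amp_chip, amp_fiber, gB_fiber, L_fiber, P_fiber, L_eff_chip, P_chip):
    """gB_chip from the amplitude ratio, assuming amplitude ~ gB * P * L_eff."""
    return gB_fiber * (amp_chip / amp_fiber) * (P_fiber * L_fiber) / (P_chip * L_eff_chip)

L_eff = effective_length(physical_length_m=0.10, loss_dB_per_m=15.0)   # placeholder chip values
print(chip_gain_coefficient(amp_chip=2.0, amp_fiber=1.0, gB_fiber=0.14,
                            L_fiber=1.4, P_fiber=0.1, L_eff_chip=L_eff, P_chip=0.08))
```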
To determine the linewidth of our measured Brillouin responses, we fitted our experimental results to a Lorentzian function of the form L(f) = A Γ² / ((f − µ)² + Γ²), where f is the frequency, A the amplitude, µ the center frequency, and Γ the half-width at half-maximum. We used the Python package lmfit to perform the fitting. Because the measurements have a high noise floor, especially for the lower-gain waveguides, we added an offset to the Lorentzian model. The experimental results and the fitted curves can be seen in Fig. S9. In the RF photonic notch filter we used the same Toptica and Avanex lasers as probe and pump as described above. The RF response of the setup is measured using a Keysight P5007A vector network analyzer (VNA).
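A minimal sketch of the Lorentzian-plus-offset fit with lmfit as described above; note that lmfit's built-in LorentzianModel uses `sigma` for the half-width at half-maximum and `amplitude` for the peak area (not the peak height), and the synthetic data below are placeholders rather than measurements.

```python
# Lorentzian + constant-offset fit with lmfit, mirroring the procedure described
# in the text. The data generated here are synthetic placeholders.
import numpy as np
from lmfit.models import LorentzianModel, ConstantModel

f = np.linspace(12.5e9, 13.5e9, 400)                       # detuning axis (Hz)
data = 1e-3 + 5e-3 / (1 + ((f - 13.0e9) / 0.1e9) ** 2)     # synthetic Brillouin response
data += np.random.normal(scale=3e-4, size=f.size)          # measurement noise

model = LorentzianModel(prefix="lor_") + ConstantModel(prefix="bg_")  # offset models the noise floor
params = model.make_params(lor_amplitude=1.5e6, lor_center=13.0e9,
                           lor_sigma=0.1e9, bg_c=1e-3)
result = model.fit(data, params, x=f)

print(result.params["lor_center"].value / 1e9, "GHz center frequency")
print(2 * result.params["lor_sigma"].value / 1e6, "MHz FWHM")   # FWHM = 2 * sigma (HWHM)
```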
The probe laser is modulated using a Covega LN81SFC modulator, and then sent to the first chip, which contains a bus with 8 all-pass ring resonators. These rings each have an FSR of 25 GHz and are fully tunable. One of these rings is used in the signal processing, and the other 7 are disabled. After the ring resonators, the light is amplified with an Amonics AEDFA-PA-35 amplifier and sent to the second chip, which is SBS active. The medium is pumped by the Avanex laser, which is amplified using an Amonics AEDFA-33-B amplifier. After the signal has passed through the gain chip, it is filtered using an EXFO XTM-50-SCL-U tunable bandpass filter to remove unwanted pump reflections. After this, the signal is amplified with a second Amonics AEDFA-PA-35 amplifier and sent to a Discovery Semiconductor DSC30S photodiode. The main experimental parameters for the microwave photonic notch filter with a ring resonator are shown in Table S5.
We further measured the noise figure and dynamic range performance of the notch filter. A two-tone RF signal, centered at 2 GHz with a spacing of 10 MHz, is generated from Wiltron 69147A and Rohde & Schwarz SMP02 signal generators. This two-tone RF signal drives the intensity modulator with input power varying from 0.5 to 5 dBm, and the output of the notch filter is recorded with an RF spectrum analyzer (Keysight N9000B). As shown in Fig. S10, the measured spurious-free dynamic range (SFDR) of the notch filter is around 100.5 dB·Hz^(2/3). The performance of this notch filter is summarized in Table S6.
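For context, a hedged sketch of the textbook two-tone relation SFDR = (2/3)(OIP3 − noise floor), shown only to illustrate how a value near 100.5 dB·Hz^(2/3) arises; the OIP3 and noise-floor numbers are placeholders, not the measured link parameters of this filter.

```python
# Textbook third-order SFDR relation for a two-tone test. The inputs below are
# placeholders chosen to illustrate a ~100.5 dB.Hz^(2/3) result; they are not
# the measured OIP3 or noise floor of this notch filter.

def sfdr_dB_Hz23(oip3_dBm: float, noise_floor_dBm_per_Hz: float) -> float:
    """Spurious-free dynamic range in dB.Hz^(2/3)."""
    return (2.0 / 3.0) * (oip3_dBm - noise_floor_dBm_per_Hz)

print(sfdr_dB_Hz23(oip3_dBm=20.0, noise_floor_dBm_per_Hz=-130.75))   # ~100.5
```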
Alternative RF Photonic Notch Filter Using an IQ Modulator
The ring-resonator RF photonic notch filter discussed in the main text of this paper is one of two schemes we used to create an RF photonic notch filter. The second method also employs cancellation of the mixing products of the sidebands to create a notch, but instead of a ring resonator it uses tailoring of the phase and amplitude of both optical sidebands [7,52,53]. A simplified schematic of this RF photonic notch filter is shown in Fig. S11a. A key component in this filter is the in-phase/quadrature (IQ) modulator (also known as the dual-parallel Mach-Zehnder modulator), often used to synthesize RF-modulated sidebands with the correct phase and amplitude relations prior to the narrowband processing using the SBS gain resonance. Table S7 lists the main experimental parameters of the SBS notch filter with the IQ modulator. Fig. S11b shows the working principle of the RF photonic notch filter. The RF input (I) is modulated onto the probe laser using the IQ modulator, creating an asymmetric dual-sideband modulation with the sidebands in antiphase (II). The on-chip SBS interaction with the probe light then amplifies a spectral region of the lower sideband (III), making the sidebands equal in amplitude only at the frequency of the SBS peak gain. The processed signal is then sent to a photodiode, resulting in an RF spectrum with a notch response due to the destructive interference between the mixing products of the sidebands and the optical carrier (IV).
In creating this notch filter we used the 1.4 µm waveguide, and a pump power of 33.5 dBm. This results in an SBS gain of 0.4 dB. We then tuned the IQ modulator to synthesize two sidebands with opposite phase, and an amplitude of the lower sideband that was 0.4 dB higher than that of the upper sideband. The resulting RF photonics notch filter response is depicted in Fig. S11d. The peak rejection of the filter was measured to be 30 dB and the 3 dB-bandwidth of the filter was 400 MHz.
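An illustrative calculation (not the paper's own analysis) of how residual amplitude and phase imbalance between the two mixing products limits the achievable notch depth, assuming the detected RF tone at the notch frequency scales as |1 − a·e^{jφ}|:

```python
# Illustration of notch-depth limits from amplitude/phase imbalance between the
# two mixing products at the notch frequency. With the sidebands nominally in
# anti-phase, the detected tone is taken to scale as |1 - a*exp(1j*phi)|, where a
# is the amplitude ratio after SBS amplification and phi the residual phase error.
import numpy as np

def notch_suppression_dB(amplitude_ratio: float, phase_error_rad: float) -> float:
    residual = abs(1.0 - amplitude_ratio * np.exp(1j * phase_error_rad))
    return -20.0 * np.log10(residual)

print(notch_suppression_dB(1.0, np.deg2rad(2.0)))    # ~29 dB: a 2 deg phase error already limits rejection
print(notch_suppression_dB(10 ** (0.4 / 20), 0.0))   # ~27 dB: a 0.4 dB amplitude mismatch alone
```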
This filter results in a notch that is less than half as wide as the notch created with the ring-resonator setup, but the rejection is much lower. This is the result of using the asymmetric double sideband, which leads to a much lower signal in the passband of the filter. The setup requires an additional optical amplifier to reach the -50 dB transmission that was observed in the ring-resonator setup. Increasing the Brillouin gain, by optimizing the waveguide's Brillouin gain coefficient and propagation losses, requires a larger amplitude difference between the sidebands, resulting in a filter with a higher passband. (Fig. S11 caption, continued: the RF sidebands are out of phase and their amplitude ratio is controlled to match the SBS gain magnitude generated by the silicon nitride waveguide. (III) SBS gain from the silicon nitride waveguide is used to equalize the sideband amplitude at the intended RF notch frequency. (IV) At the detector, the mixing products between the sidebands and the optical carrier lead to a notch filter due to RF cancellation. c, The measured high-rejection RF photonic notch filter response, obtained using only 0.4 dB of on-chip SBS gain; the 3 dB bandwidth of the filter is 400 MHz and the rejection is 30 dB.)
In this supplementary note we discuss the feasibility of a Brillouin laser based on the waveguide designs described in Supplementary Note A. To do so, we describe the boundary condition for the resonator and the threshold condition, and give the threshold for different scenarios of propagation loss, from the measured samples and below.
An integrated Brillouin laser consists of a ring resonator pumped resonantly with an external pump laser, while light at the Brillouin-downshifted frequency builds up on a cavity resonance. The restriction that both the pump and the SBS-shifted light must be resonant gives the condition Ω/2π = n · FSR, with n an integer and FSR the free spectral range of the resonator. We can describe the lasing threshold for the first mode [15] in terms of the loaded linewidth Δν L of the resonator, the length L of the resonator, and the coupling efficiency (not to be confused with a decay rate or coupling strength). The coupling efficiency is given here as the ratio of the energy coupling rate (2π Δν ext ) to the total decay rate (2π Δν L ), i.e., Δν ext / Δν L . By taking the loaded linewidth to be approximately the sum of the intrinsic linewidth and the coupling rate, i.e., Δν L = Δν ext + Δν 0 , the coupling rate can be optimized for a minimal lasing threshold, leading to a coupling efficiency of 1/3.
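A small helper sketch for the design constraints in this note: the ring length that makes the Brillouin shift an integer number of FSRs, the intrinsic linewidth implied by a propagation loss via the mean-field approximation, and the loaded linewidth at the threshold-optimal coupling efficiency of 1/3. The group index and loss values are illustrative assumptions, not the entries of Table S8.

```python
# Helper quantities for the Brillouin-laser feasibility discussion. The loss and
# group index below are illustrative assumptions only.
import math

C = 299_792_458.0  # speed of light (m/s)

def ring_length_m(brillouin_shift_Hz: float, n_group: float, n_fsr: int = 1) -> float:
    """Ring length L such that the Brillouin shift equals n_fsr * FSR, FSR = c / (n_g * L)."""
    return n_fsr * C / (n_group * brillouin_shift_Hz)

def intrinsic_linewidth_Hz(loss_dB_per_m: float, n_group: float) -> float:
    """Mean-field relation 2*pi*dnu_0 ~ v_g * alpha, with alpha converted from dB/m to 1/m."""
    alpha = loss_dB_per_m / (10.0 * math.log10(math.e))
    return (C / n_group) * alpha / (2.0 * math.pi)

dnu_0 = intrinsic_linewidth_Hz(loss_dB_per_m=0.22 * 100, n_group=1.75)  # 0.22 dB/cm, assumed n_g
dnu_L = dnu_0 / (1.0 - 1.0 / 3.0)   # loaded linewidth when dnu_ext / dnu_L = 1/3
print(ring_length_m(13e9, 1.75), dnu_0, dnu_L)   # ~1.3 cm ring; linewidths in Hz
```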
Finally, the intrinsic linewidth can be converted to a propagation loss via the mean-field approximation, i.e., 2π Δν 0 ≈ v g α / (10 log 10 (e)), with v g the group velocity and α the propagation loss in dB/m. Assuming optimized coupling for a minimal threshold, the threshold power can now be expressed as Equation (S5). Table S8 shows the calculated threshold power (Eq. (S5)) for various values of the optical propagation loss, ranging from 0.22 dB/cm to 0.01 dB/cm, for the four different SDS waveguide configurations a-d selected in Supplementary Note A and shown in Fig. S5. Also listed in Table S8 are the group index, calculated using the method given in [68], and the length L of the rings, calculated using Ω/2π = FSR = c/(n g L). | 4,555.6 | 2021-12-01T00:00:00.000 | [
"Physics"
] |
Analysis of Impulsive Boundary Value Pantograph Problems via Caputo Proportional Fractional Derivative under Mittag–Leffler Functions
: This manuscript investigates an extended boundary value problem for a fractional pantograph differential equation with instantaneous impulses under the Caputo proportional fractional derivative with respect to another function. The solution of the proposed problem is obtained using Mittag–Leffler functions. The existence and uniqueness results of the proposed problem are established by combining the well-known fixed point theorems of Banach and Krasnoselskii with nonlinear functional techniques. In addition, numerical examples are presented to demonstrate our theoretical analysis.
Introduction
Fractional differential equations (FDEs) have recently gained prominence and attention as a way to describe applications in a variety of domains, including chemistry, mechanics, fluid systems, electronics, electromagnetics, and other domains. The study of FDEs encompasses everything from the theoretical aspects of solution existence to the methodologies for discovering analytic and numerical solutions (see [1][2][3][4][5]). In both the physical and social sciences, impulsive differential equations have become essential mathematical models of phenomena. These equations are applied to describe the evolutionary processes that change their state abruptly at a certain moment. This problem has piqued the interest of researchers due to its rich theory and relevance in a wide range of scientific and technological disciplines, including mechanics, ecology, medicine, biology, and electrical engineering, (see [6][7][8][9]).
The impulsive FDEs investigated in the publications listed above do not involve constant coefficients. Impulsive fractional boundary value problems (BVPs) with constant coefficients have received little attention. In physics, however, impulsive FDEs with constant coefficients have a stronger foundation and play an important role. Hooke's and Newton's laws are employed in mechanics to explain the behavior of particular materials under the influence of external forces. Certain researchers propose revising the classical Newton's law, which is considered a generalized Nutting's law, in order to capture some possible modified qualities. On the other hand, a mass-spring-damper system is frequently exposed to short-term perturbations (an external force) that are sudden and manifest as instantaneous impulses involving the associated differential equations. In 2014, the author of [19] established certain necessary conditions for the existence of a solution to an impulsive fractional anti-periodic BVP with constant coefficients of the form: C D α t + k x(t) + λx(t) = f (t, x(t)), t ∈ J = J \{t 1 , . . . , t m }, J := [0, 1], ∆x(t k ) = y k , k = 1, 2, . . . , m, where λ > 0, C D α t + k denotes the Caputo fractional derivative of order α ∈ (0, 1), f ∈ C(J × R, R), the fixed impulsive times t k satisfy 0 = t 0 < t 1 < . . . < t m < t m+1 = 1, and ∆x(t k ) = x(t + k ) − x(t − k ) denotes the jump of x(t) at t = t k . The existence results were obtained with the help of Lipschitz and nonlinear growth conditions. In addition, the attributes and computational formulas of Mittag-Leffler functions are employed to construct examples. Based on the Banach contraction principle and Krasnoselskii's fixed point theorem, Zuo and co-workers [20] developed existence theorems in 2017 for impulsive fractional integro-differential equations of mixed type with constant coefficient and anti-periodic boundary conditions: C D α t + k x(t) + λx(t) = f (t, x(t), Kx(t), Sx(t)), t ∈ J = J \{t 1 , . . . , t m }, ∆x(t k ) = I k (x(t k )), k = 1, 2, . . . , m, where I k ∈ R, f ∈ C(J × R 3 , R), x(t + k ) and x(t − k ) represent the right and left hand limits of x(t) at t = t k , respectively, and K and S are linear operators. In 2020, Ahmed and co-workers [21] established the existence and uniqueness of the solution for an impulsive fractional pantograph differential equation with a broader anti-periodic boundary condition, where C D α 0 + denotes the Caputo fractional derivative of order α, f ∈ C(J × R 2 , R), and x(t + k ) and x(t − k ) represent the right and left limits of x(t) at t = t k . Using Banach's and Krasnoselskii's fixed point theorems, they established the existence and uniqueness of the solution for the impulsive problem (5). We recommend the manuscripts [22][23][24][25][26][27][28] and the references given therein for contemporary papers on the existence, uniqueness, and stability of impulsive FDEs. The qualitative features of non-impulsive/impulsive FDEs are increasingly being studied in research.
Recently, Jarad and co-workers [29] constructed a novel class of fractional operators built from the modified conformable derivatives. After that, Jarad and co-workers formulated the proportional fractional calculus and showed certain features of the proportional fractional derivatives and fractional integrals of a function with respect to another function. The kernel obtained in their setting contains an exponential function and is function dependent (as specified in Section 2) [30,31]. The proportional fractional operators have been applied to FDEs with and without impulsive conditions (see [32][33][34][35][36][37][38]). For more interesting work on FDEs, we refer the reader to [39][40][41][42][43][44][45][46] and the references cited therein. To the author's knowledge, few works have been published on impulsive Caputo proportional fractional BVPs with respect to another function involving a proportional delay term.
The existence and uniqueness results for the solutions of the following nonlinear impulsive pantograph fractional BVP under the Caputo proportional fractional derivative with respect to a particular function are considered in this manuscript: where Cρ k D α k ,ψ k t + k denotes the Caputo proportional fractional derivative operator with respect to another increasing differentiable function ψ k of order 0 < α k < 1 with 0 < ρ k ≤ 1, t ∈ J k := (t k , t k+1 ] ⊆ J := [0, T], k = 0, 1, . . . , m, and J := J \ {t 1 , t 2 , . . . , t m }. The goal of this manuscript is to use the fixed point theorems of Banach and Krasnoselskii to investigate the existence and uniqueness of solutions to the impulsive problem (6). The following are the main points of this manuscript: (i) We consider new impulsive pantograph differential equations with the Caputo proportional fractional derivative with respect to a certain function. (ii) Under the Caputo proportional fractional derivative, we explore broader proportional BVPs with constant coefficients.
The development of qualitative analysis of impulsive fractional BVPs is encouraged in this manuscript. Notice that the significance of this discussion is that problem (6) generates many types, including mixed types, of impulsive FDEs with boundary conditions. For instance, if we set ρ k = 1 in (6), then we obtain the Riemann-Liouville fractional operators [2] with ψ k (t) = t, the Hadamard fractional operators [2] with ψ k (t) = log t, the Katugampola fractional operators [47] with ψ k (t) = t µ /µ, µ > 0, the conformable fractional operators [48] with ψ k (t) = (t − a) µ /µ, µ > 0, and the generalized conformable fractional operators [49] with ψ k (t) = t µ+φ /(µ + φ), respectively. Several other special cases can be derived as well. To the best of the author's knowledge, some papers have established impulsive fractional BVPs [33][34][35], but few papers have focused on impulsive Caputo proportional fractional BVPs with respect to another function involving a proportional delay term. The remainder of the manuscript is organized in the following manner. Section 2 introduces some key concepts and lemmas linked to the major findings. We also present certain definitions of well-known fixed point theorems and construct the formulas for the solution, involving Mittag-Leffler functions, of the linear impulsive problem. We use Banach's and Krasnoselskii's fixed point theorems to analyze the existence and uniqueness of solutions for the impulsive problem (6) in Section 3. Finally, examples are provided to demonstrate the validity of our primary findings in Section 4, and Section 5 contains the conclusion of our findings.
Preliminaries
This part introduces the notations, definitions, and preliminary facts on generalized proportional fractional derivatives and fractional integrals that will be utilized throughout the manuscript. For more details, see [30,31,50,51]. Let P C(J , R) denote the space of piecewise continuous functions on the interval J , continuous on each subinterval (t k , t k+1 ], k = 0, 1, . . . , m. It is clear that P C(J , R) is a Banach space equipped with the norm x P C = sup t∈J {|x(t)|}. Let the norm of a measurable function σ : J → R be defined by σ L q (J ) = ( ∫ J |σ(t)| q dt ) 1/q . Then L q (J , R) is a Banach space of Lebesgue-measurable functions σ : J → R with σ L q (J ) < ∞.
where n = [Re(α)] + 1 and [Re(α)] represents the integer part of the real number α. The Caputo proportional fractional derivative of order α of the function f with respect to another function ψ is defined as in [30,31]. Next, we provide some properties of the classical and generalized Mittag-Leffler functions E α (·) and E α,β (·), which are used throughout this paper.
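For reference, the classical and two-parameter Mittag-Leffler functions mentioned above have the standard series definitions below; these are the textbook forms, which we assume coincide with the ones used in the manuscript (the manuscript's own displayed definitions are not reproduced in this excerpt).

```latex
% Standard series definitions of the Mittag-Leffler functions
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)},
\qquad \operatorname{Re}(\alpha) > 0,\; \beta \in \mathbb{C},
```

with E_{α,1}(z) = E_α(z) and E_{1,1}(z) = e^z.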
The following lemma is used to create an equivalent integral equation for the impulsive problem (6). For the sake of calculation in this manuscript, we set the notation: where t a , t b ∈ {t 0 , t 1 , . . . , t m , T} and c ∈ {α 0 , α 1 , . . . , α m }.
and Ω ≠ 0. Then the function x ∈ P C(J , R) given by (10) is a solution of the impulsive problem (12). Proof. Assume that x is a solution of (12). We consider the following several cases. For t ∈ [0, t 1 ), in view of Lemma 2, we have, In particular, for t = t 1 , we obtain, Using the impulsive condition in (12), For t ∈ [t 2 , t 3 ), proceeding as before, we obtain Repeating this process, for t ∈ [t k , t k+1 ), k = 0, 1, 2, . . . , m, we have From the boundary condition βx(0) + ηx(T) = γ, it follows that where Ω is defined by (11). In the last step, we insert the value x 0 into (13) to obtain (10). Conversely, it is easy to show by direct computation that the solution x(t) given by (10) fulfills the impulsive problem (12) with the boundary conditions.
Existence Analysis
This section investigates some sufficient conditions for the existence and uniqueness of a solutions to the impulsive problem (6) using Banach's and Krasnoselskii's fixed point theorems.
In view of Lemma 3, we define an operator Q : P C(J , R) → P C(J , R) by (14), where F x (t) = f (t, x(t), x(µt)). Notice that Q has fixed points if and only if the impulsive problem (6) has solutions.
Then, the impulsive problem (6) has a unique solution on J provided that condition (15) holds. Proof. Before proving this theorem, we convert the impulsive problem (6) into the fixed point problem x = Qx, where the operator Q is defined by (14). It is clear that the fixed points of the operator Q are solutions of the impulsive problem (6).
Step 2. We show that Q is a contraction.
For any x, y ∈ B r 1 and for each t ∈ J , we obtain the required estimate by applying Lemma 1 and (9). By (15), the operator Q is a contraction map. According to Banach's fixed point theorem, we can conclude that the impulsive problem (6) has a unique solution. Lemma 5 (Krasnoselskii's fixed point theorem [53]). Let D be a closed, convex, and nonempty subset of a Banach space E, and let Q 1 and Q 2 be operators such that: (i) Q 1 x + Q 2 y ∈ D whenever x, y ∈ D; (ii) Q 1 is compact and continuous; (iii) Q 2 is a contraction mapping. Then there exists z ∈ D such that z = Q 1 z + Q 2 z.
Then, the impulsive problem (6) has at least one solution on J provided that condition (21) holds. Proof. Let us define a suitable ball B r 2 = {x ∈ P C(J , R) : x ≤ r 2 }. Obviously, B r 2 is a bounded, closed, and convex subset of P C(J , R) for each r 2 > 0. Next, we define the operators Q 1 and Q 2 on B r 2 for t ∈ J as Step 1. We show that there exists r * 2 > 0 with Q 1 x + Q 2 y ∈ B r * 2 for each x, y ∈ B r * 2 . Suppose by contradiction that for any r 2 > 0 there exist x r 2 , y r 2 ∈ B r * 2 and t r 2 ∈ J such that |(Q 2 x r 2 )(t r 2 ) + (Q 1 x r 2 )(t r 2 )| > r 2 .
By using Lemma 1, (9), and (H 3 ) with the Hölder inequality, for any x ∈ B r 2 , we have By direct calculation with Lemma 1, (9) and (20), we have Dividing both sides in the above inequality by r 2 and taking the lower limit as r 2 → +∞, we obtain which contradicts (21). Then, there exists r 2 > 0 so that Q 1 x + Q 2 y ∈ B r 2 , for x, y ∈ B r 2 .
Step 2. We show that Q 2 is a contraction mapping on B r 2 .
Step 3. We show that Q 1 is compact and continuous on B r 2 . The continuity of f implies that Q 1 is also continuous. Next, we show that Q 1 is compact. By the same process as in the first part of Theorem 1, Q 1 (B r 2 ) is uniformly bounded on P C(J , R). We now show that Q 1 (B r 2 ) is equicontinuous on J k , for k = 1, 2, . . . , m.
Numerical Examples
This section presents three examples to illustrate our results.
Example 1.
Consider the following nonlinear impulsive Caputo proportional fractional BVPs.
Example 3. Consider the following impulsive fractional differential equation with boundary conditions.
Conclusions
A variety of novel forms of fractional derivatives have recently been constructed and employed to better describe real-world phenomena. The so-called generalized proportional fractional derivatives are among the most recently introduced fractional derivatives and extend the classical Riemann-Liouville and Caputo fractional derivatives. In this manuscript, impulsive proportional fractional pantograph differential equations with a constant coefficient and generalized boundary conditions were examined. The Mittag-Leffler functions were utilized to present the solutions of the proposed problem. The existence and uniqueness results are based on the well-known fixed point theorems of Banach and Krasnoselskii. Finally, to confirm the accuracy of the results, three numerical examples illustrating the implementation of our main conclusions have been provided. Along the way, we have shown that certain particular cases connected to the results of [18][19][20][21] follow from our discussion. This research enriches the qualitative theory literature on nonlinear impulsive fractional initial/boundary value problems involving a specific function; future research may address, for example, the linear Cauchy problem with a variable coefficient or convergence analysis. | 3,613.4 | 2021-12-02T00:00:00.000 | [
"Mathematics"
] |
Agent-based Individual Network Teaching System for Modern History Outline of China
Individualized distance teaching introduces individual services into traditional network teaching by considering the learning conditions of students at different levels and the relevance of the content, in order to individualize learning programs for students. Simultaneously, it provides students with the best-matched teaching resources. At present, common network teaching systems suffer from insufficient intelligence and a lack of individuation. An agent-based individual network teaching system is an autonomous intelligent system with immediate feedback. It provides real-time monitoring and information filtering functions as well as teaching analysis and collaborative learning functions. Based on this study of Agent technology and the current situation of network teaching, an Agent-based individual network teaching system was constructed. The practical application effect of the network teaching system was analyzed using the Modern History Outline of China course as the object of experiment. This paper provides theoretical and data-driven support for research on individualized distance teaching and network teaching systems, and examines potential development directions for future network teaching. Keywords—Agent, network teaching system, Modern History outline of China, distance teaching
Introduction
With the development of network technology, network teaching systems including an individualized distance teaching mode have gained rapid development. Simply speaking, network teaching refers to a modern teaching method with the help of a computer network. It is mainly used for distance teaching [1]. Network teaching mainly includes two modes. The first is open network teaching (i.e. sharing network resources). Students choose and learn independently. The other is online interactive teaching based on a network communication platform [2].
In recent years, individualized distance teaching has become the mainstream of the network teaching mode. The common individual distance teaching system can achieve interaction between students and the system. Students can autonomously choose learning content according to their interests. The system may generate recommended teaching strategies according to the students' feedback. Current network teaching technology is mainly applied in networked course teaching. The link between network teaching systems and expert pages and academic forums has been realized. The sound network teaching system is being constructed step by step to enable it to be applied for teaching more subjects. In the common individualized distance teaching system, Web-based distance teaching systems have insufficient intelligence and poor guidance. The introduction of Agent technology can effectively solve the problems of insufficient teaching resources and a single teaching mode and create individualized and digital distance teaching services. Students can more conveniently gain access to teaching resources they need.
Based on an analysis of domestic and overseas research on Agent technology and network teaching systems, a networked and expandable integrated teaching system was constructed using Agent technology theory, and empirical research was carried out using the Modern History Outline of China course as an example. This research addresses the defects of network teaching systems in the aspects of intelligence and individualization, enhances teaching effectiveness and the quality of distance education, and provides support for the further promotion of network teaching systems.
State of art
Since network technology is continuously developing and the popularity of network facilities is gradually improving, network teaching is attracting people's attention. The characteristics of network teaching such as convenience, strong interactivity and large resource capacity make it a research hotspot at home and abroad. Alvaro et al. [3] proposed a program of continuing medical education. Network science was proposed as a method to better comprehend the contribution of networking and interactivity among health professionals in professional communities regarding their learning and application of new practices over time. Feldstein et al. [4] reviewed interactions of 311 students reflected in the comments in a digital social learning community and adopted social network analysis to discuss the possibility of applying these interactions to evaluate students' critical thinking, communication, and collaborative feedback skills. The authors summed up the implications and recommendations for instructors who hope to apply Web 2.0 platforms and data to strengthen their understanding of students and class digital interactions and adopt the information to enhance courses. In another study, a research team chose colleges in China to study the design and application of a network teaching platform. Then they proposed corresponding strategies for enhancing research and development, improving application consciousness, and perfecting management to examine the function of a college network teaching platform in promoting teaching efficiency [5]. The research of domestic and overseas scholars shows that the network teaching system that is generally adopted at present lacks sufficient intelligence and individualization. The current system's usability is not strong and the interest appeal is low, so it is hard for students to combine network teaching with daily learning [6]. Additionally, current network teaching mostly shifts traditional teaching to a network without highlighting the features of network teaching such as strong interactivity, timely feedback and diversified teaching strategies [7]. Thus, more researchers are transforming the research emphasis to Agent-based individualized network teaching systems with stronger feasibility.
Intelligent Agent has been a hotspot recently. Since this technology has a prominent advantage in problem solving, it may become a new way for network teaching and will soon become an active research direction. For example, Agent technology is applied to design a distance intelligent teaching system that can conduct individualized learning designs for students. The analysis shows that Agent-based distance intelligent teaching systems are greatly different from previous network teaching systems. Students can gain an intelligent and individualized learning environment with strong interactivity through the system. This is critical for teaching efficiency and teaching quality improvement [8]. Some researchers propose that intelligent Agent systems can efficiently complete concept classification, problem description, problem solving and resource connection demands as well as provide a feasible way for individual recommendation of teaching resources and individual resource seeking [9].
Generally speaking, in most network teaching, one teacher synchronously teaches dozens or hundreds of students online. The biggest shortcoming of this technology is that the teaching process is teacher-centered and the same teaching scheme is applied for all students. Targeted instruction cannot be conducted for different students. Conversely, a private teacher may be employed for one-to-one teaching for students, but the defect of this method is that the cost is too large and rational allocation and utilization cannot be gained. Besides, due to geographical conditions and other conditional limits, most students cannot get the special tutorship of excellent teachers [10]. Agent-based individual network teaching systems will be the trend of network teaching development and have great significance for promoting system usability and meeting students' individual demands. However, Agent-based individual network teaching systems are not mature and need to be further improved. On this basis, an Agent-based individual network teaching system was constructed, and a relevant test is described in this paper.
The innovative advantages of the Agent-based individual network teaching system presented in this paper are as follows. First, the teachers in the network teaching system can provide corresponding learning strategies and learning resources for different students, so a single teacher can serve more students while ensuring teaching quality; meanwhile, students can raise questions according to their degree of understanding. Second, the Agent-based individual network teaching system can achieve a rational allocation of teachers and teaching resources and realize sharing of professional teachers' resources.
Theoretical construction
Definition of Agent
Agent technology refers to an encapsulated, purpose-designed computer system that can act flexibly and autonomously within a specific environment. Agent technology has the following characteristics [11].
First, an Agent has autonomy and can manage its own behavior and state without external intervention. Second, an Agent can exchange information with other Agents through its communication language to achieve cooperation, which is the sociality of an Agent. Third, an Agent can detect environmental changes in time and respond to them. Finally, an Agent can actively execute a series of actions. Besides, a well-designed Agent possesses other features such as learning ability, adaptive ability and target selection. Because of these advantages, Agent technology can be widely applied in multiple subjects and fields [11].
Structure of Agent
Cloud computing depends on the numerous personal and enterprise computer terminals connected to the internet. Because of this connection mode, cloud computing has the features of networking and integration, and its theoretical basis is mainly divided into two parts: software-as-a-service and Web MVC framework technology.
An Agent is composed of different modules, and the information interaction mode and the behavior and state control modes differ among the modules; together they form an organic whole. According to its structure, an Agent can be classified into three types. The first is a thinking-type Agent. It has basic logical reasoning ability and can reason properly about the environment and its own behavior. Such an Agent can simulate or model the awareness of the system users to achieve intelligent processing of individual behavior. A thinking-type Agent has high intelligence and handles individuals and the environment in an optimized way.
The second is a response-type Agent. It displays intelligence through perceptions and actions. Such an Agent gradually evolves by collecting perceptions and actions and improves the functioning of the whole system through continuous interaction with the environment. Compared with a thinking-type Agent, a response-type Agent has higher execution efficiency and can be integrated into the environment more quickly.
The third is a mixed-type Agent, developed on the basis of the above two types. It is a more rational mode of Agent construction, with high flexibility and efficiency. The mixed-type Agent is composed of a thinking sub-system and a response sub-system: the thinking sub-system is responsible for reasoning, while the response sub-system processes events that require no reasoning. As the figure shows, an Agent-based network teaching system model includes three models: a teaching model, a teaching data model, and a listening model. To be specific, the system consists of an expression layer, a logic layer and a data layer. XML and JSP were used to develop the system and establish a B/S application software system; XML is a meta-markup language, and JSP can be used to generate XML pages. The detailed design of an Agent-based network teaching system is as follows.
Teaching model. The teaching model is the core of an Agent-based network teaching system and is responsible for all teaching activities, including information publishing, learning exchange, teaching tutoring, and testing. The teaching model includes two modules, teaching preparation and teaching, which are expanded by different types of Agents.
The teaching preparation module is responsible for developing the teaching strategy, organizing teaching content and confirming the teaching links. It completes teaching preparation in the early period and adjustment in the later period. This module is realized through a strategy-making Agent and a teaching content organizing Agent. The strategy-making Agent performs the diagnosis and strategy-making functions. Before teaching, the system can make proper teaching schemes for students based on their features; during teaching, the system can change teaching strategies according to students' learning progress. The strategy-making Agent can adjust to dynamic environmental changes, which makes it applicable to virtual class teaching. The major task of the teaching content organizing Agent is to consider students' individual features and combine them with the teaching objective to form the teaching content [6]. Agents can carry out intelligent analysis of the students' learning progress and recommend the next teaching resource for students in accordance with their mastery of the existing resources. The detailed design of the strategy-making Agent in XML Schema grammar is as follows:
<xsd:element name="strategy making">
  <xsd:element name="student ID" type>
  <xsd:element name="cognitive level No." type>
  <xsd:element name="strategy">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="strategy No." type>
        <xsd:element name="teaching objective" type>
        <xsd:element name="precondition" type>
        <xsd:element name="conclusion" type>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:element>
Based on the teaching preparation, the teaching module is responsible for making the whole process of network teaching proceed smoothly, including tutoring, questioning/answering and testing. The Agent-based network teaching system completes the detailed teaching process through the integration of multiple Agents, including a retrieval Agent, an exchange Agent, a question answering Agent, and a testing Agent; the structure is shown in Fig. 2. The retrieval Agent has index and search functions and can provide students with a directional search of teaching resources. The exchange Agent takes charge of processing the exchange information received and sent, including one-to-one transmission and one-to-many transmission. The question answering Agent can automatically provide answers to the questions posed by students: students' questions are not sent directly to the teacher but are first matched against the common question library, and only when a satisfactory response cannot be found in the question library is the teacher contacted. The testing Agent is in charge of testing students. According to a user's request, it combines the individual learning progress to extract questions from the question library and form the test; meanwhile, it feeds students' answers back to the system as a part of the teaching data.
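To make the library-first matching flow of the question answering Agent concrete, the following Python sketch is offered as an illustration only; the class, field names and the string-similarity matching rule are hypothetical, since the paper does not specify an implementation.

import difflib

class QuestionAnsweringAgent:
    """Minimal sketch of the matching flow: question library first, teacher as fallback."""

    def __init__(self, question_library, similarity_threshold=0.75):
        # question_library: dict mapping stored question text -> stored answer
        self.library = question_library
        self.threshold = similarity_threshold

    def answer(self, student_question):
        # Find the most similar stored question (simple string similarity stands in
        # for whatever matching the real system would use).
        best_match, best_score = None, 0.0
        for stored_question in self.library:
            score = difflib.SequenceMatcher(None, student_question.lower(),
                                            stored_question.lower()).ratio()
            if score > best_score:
                best_match, best_score = stored_question, score
        if best_match is not None and best_score >= self.threshold:
            return self.library[best_match]            # answered automatically
        return self.forward_to_teacher(student_question)  # fallback path

    def forward_to_teacher(self, student_question):
        # Placeholder: in the described system this would notify the course teacher.
        return f"Forwarded to teacher: {student_question}"

# Example usage
agent = QuestionAnsweringAgent({"What caused the Opium War?": "See chapter 1, section 2."})
print(agent.answer("what caused the opium war"))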
Teaching data model. The teaching data model is mainly responsible for managing the transmission, delivery and replay of teaching video and audio data. It receives the users' instructions, returns information to the users and exchanges information. The teaching data model mainly includes three types of Agents: an instruction transformation Agent, an information display Agent and an information management Agent.
Listening model. First, the listening model contains basic information about each student's learning situation, i.e., data about the student's learning progress. Second, it should be able to express the learner's cognitive state and accurately reflect a specific student's comprehension of concepts and content. Third, the listening model uses information about the student's preferences, including learning ability and other dynamic features. In addition, the listening model records the student's learning process and learning features in detail. To cover the students' learning progress, cognitive level and interests, the listening model includes a coordinating Agent, a learning progress Agent, a cognition estimating Agent and an interest estimating Agent (Fig. 3). The coordinating Agent achieves interaction between students and the system; when students use the system, the coordinating Agent is responsible for managing the other three Agents. The learning progress Agent is mainly used to record whether students have studied the course and mastered certain knowledge points; the system can present the learning plan for users according to these records. The cognition estimating Agent classifies students by grading their cognitive level. The interest estimating Agent evaluates learning preferences based on the students' basic information and their preferences during the learning process.
Agent-based individual network teaching system construction for Modern History outline of China
Modern History Outline of China is a professional and basic required course for history majors. It is also an elementary course in the university. Modern History Outline of China is an important part of Chinese history. Since different students understand Chinese history differently, a network teaching method is suitable for this course. Based on the theory construction of Agent-based technology, an Agent-based individual network teaching system was further constructed. Empirical research of the system was carried out using the Modern History outline of China.
System construction
The Agent-based individual network teaching system included the teaching model, teaching data model and multiple listening models that are inter-connected by a network. The structure diagram of the system is in Fig. 4.
As shown in the figure, the teaching process of the Agent-based individual network teaching system is as follows. First, the teaching model collects the teaching video and audio signals, receives the course materials stored in the teaching data model and transmits them to the listening model through the network. Second, students conduct distance learning through the listening model and send their questions to the teaching data model through the listening model. The teaching data model can transform students' questions into a standard question code, look up the optimal answer in the central processing unit and transmit the optimal answer to the students. If there is no corresponding answer in the teaching data model, the question is sent to the teaching model and the teacher answers it online. The questions and answers are stored in the teaching data model. A teaching scene is shown in Fig. 5.
Fig. 5. Teaching scene: the teacher and student interact through a web-based inquiry learning system and a network writing learning system.
Detailed implementation
The key to the operation of an Agent-based individual network teaching system lies in the teaching data model. The detailed implementation process of the system is as follows: First, the system adopts two English letters to code different teaching courses. In this paper, AA is used as the course code of the Modern History Outline of China.
Second, the teaching data model takes a 12-bit question code as the standard question code of the course, as shown in Table 1. The first and second bits of the question code are the course code; the 3rd-6th bits are the course chapter code; the 7th-11th bits are the specific question code; and the 12th bit is the check bit. In practical teaching, the automatic questioning/answering is driven by bits 7-11 of the question code. The students' questioning interface is shown in Fig. 6. Third, the teaching data model also contains student files, including ID, gender, age, psychological features, course progress and communication method, as shown in Table 2. The ID is the student's username for login. Psychological features include IQ, temperament, stress, and psychological age; their numerical values are adjusted according to the students' answers to course questions. Course progress lists the student's progress and scores. The student file is continuously updated during the teaching process. After the teacher completes a teaching stage, the system may offer targeted tutoring for students according to their learning situations and make the next teaching plan.
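As an illustration of the 12-bit question code layout described above, the following Python sketch assembles and validates such a code. The paper does not state how the check bit (position 12) is computed, so a simple modulo-10 digit sum is assumed here purely for demonstration.

def build_question_code(course_code, chapter, question_no):
    """Assemble the 12-character question code described above.

    Layout (per the paper): positions 1-2 course code (letters),
    3-6 chapter code, 7-11 question code, 12 check character.
    The check rule is not specified in the paper; a modulo-10 digit sum
    is used here purely as an illustrative assumption.
    """
    if len(course_code) != 2 or not course_code.isalpha():
        raise ValueError("course code must be two letters, e.g. 'AA'")
    body = f"{course_code}{chapter:04d}{question_no:05d}"
    check = str(sum(int(c) for c in body if c.isdigit()) % 10)  # assumed rule
    return body + check

def is_valid(code):
    # Re-derive the assumed check character and compare with position 12.
    return len(code) == 12 and build_question_code(code[:2], int(code[2:6]), int(code[6:11])) == code

code = build_question_code("AA", 3, 27)   # e.g. chapter 3, question 27
print(code, is_valid(code))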
Effect check
Modern History Outline of China was used as the experimental course, with a total of 32 class hours. The experimental subjects were sophomores from the College of Marxism. Ninety students were selected at random from 500 students in the college as the experimental class and taught with the Agent-based individual network teaching system. Another 90 students were selected at random as the control class and taught with a traditional multimedia system. The teachers and teaching conditions of the experimental class and the control class had no significant differences, and the age, gender, physical qualities, and comprehension of the two classes also had no significant differences.
Students' learning effect was mainly judged according to the examination results. Teaching and examination were kept separate: the assessment was conducted by a teacher other than the course teacher, and the score was unrelated to class performance, so the assessment was more objective and fair. The test results at the end of teaching are shown in Table 3. The table shows that the scores of the experimental group are significantly higher than those of the control group. Students whose scores exceeded 80 accounted for 55.5% of the experimental group, versus 40% of the control group; the failure rate was 8.9% in the control group, versus less than 5% in the experimental group. Second, the survey results on students' recognition of the two network teaching systems are shown in Table 4. The table shows that 88.9% of students in the experimental group accepted the functions of the individual network teaching system, and students' recognition of this system is significantly higher than that of the traditional multimedia system. Some students thought the individual network teaching system improved their ability to actively learn and solve problems and greatly helped them learn the Modern History Outline of China content. The advantage of this system is that students could listen to the teacher anytime and anywhere through the course unit. If a student had doubts about the teaching content and hoped to gain special instruction from the teacher, he could ask questions and send them, through the listening unit, to the teacher responsible for the course. Such individual teaching instruction improved the students' problem solving, and after their questions were solved, their active learning ability was cultivated. The teacher could answer questions according to the different learning situations of each student and thus teach students in accordance with their aptitudes. Moreover, since one teacher can face several hundred students in the teaching process, the teacher's workload can decrease greatly, which significantly promotes teaching effectiveness. Meanwhile, the Agent teaching model covers teaching content organization, retrieval, exchange, questioning/answering and testing. The teacher imparts knowledge to the students and, more importantly, conveys his practical experience and ideas, so the knowledge develops through continuous accumulation. Although the application of intelligent Agents in distance teaching systems is a research hotspot and there have been various conceptions and experimental systems at home and abroad, the Agent-based network teaching system explored in this paper has strong research value and practical significance.
Conclusions
The Agent-based individual network teaching system showed significant effectiveness in the network teaching of the Modern History Outline of China course. The individuation and intelligence of Agent technology promoted the effectiveness of network teaching development, and Agent technology provided technological and theoretical support for distance education. The Agent-based individual network teaching system not only achieved a rational allocation of teachers and teaching resources, but also offered better network learning experiences for students and met their individual needs; it has high practical value. Because the researcher's time and research conditions were restricted, potential problems remain to be studied. Future research needs to focus on the organization of learning content in the teaching process, including the correlation among knowledge points of courses. | 5,006.4 | 2018-03-30T00:00:00.000 | [
"History",
"Computer Science",
"Education"
] |
Chemical Relationship among Genetically Authenticated Medicinal Species of Genus Angelica
The genus Angelica comprises various species utilized for diverse medicinal purposes, with differences attributed to the varying levels or types of inherent chemical components in each species. This study employed DNA barcode analysis and HPLC analysis to genetically authenticate and chemically classify eight medicinal Angelica species (n = 106) as well as two non-medicinal species (n = 14) that have been misused. Nucleotide sequence analysis of the nuclear internal transcribed spacer (ITS) region revealed differences ranging from 11 to 117 bp, while psbA-trnH showed variances of 3 to 95 bp, respectively. Phylogenetic analysis grouped all samples except Angelica sinensis into the same cluster, with some counterfeits forming separate clusters. Verification using the NCBI database confirmed the feasibility of species identification. For chemical identification, a robust quantitative HPLC analysis method was developed for 46 marker compounds. Subsequently, two A. reflexa-specific and seven A. biserrata-specific marker compounds were identified, alongside non-specific markers. Moreover, chemometric clustering analysis reflecting differences in chemical content between species revealed that most samples formed distinct clusters according to the plant species. However, some samples formed mixed clusters containing different species. These findings offer crucial insights for the standardization and quality control of medicinal Angelica species.
Previous research endeavors have sought to categorize Angelica species based on their chemical, genetic, or morphological differences.Chromatographic techniques such as TLC, HPLC, and LC/MS have been employed to discern A. sinensis, A. pubescens f. biserrata, A. dahurica, and other related Umbelliferae plants by analyzing coumarins, phthalides, phenolics, and polyacetylene [8].Additionally, seven Angelica species (A.gigas, A. acutiloba, A. tenuissima, A. dahurica, A. koreana, A. polymorpha, and A. decusriva) were authenticated through quantitative analysis of coumarins and micro-morphologies [9].Furthermore, HPLC was utilized to differentiate A. sinensis, A. acutiloba, A. acutiloba var.sugiyamae, and other related Umbelliferae herbs based on their chromatographic fingerprints [10].Lastly, quantitative analysis of coumarins and phenolics using HPLC enabled the chemical differentiation of three Angelica species of Dang-gwi (A. gigas, A. acutiloba, and A. sinensis) [11].
In this study, genetic analysis was employed as a tool to explore the phylogenetic relationships among various Angelica species as well as related plants within the Apiaceae family.These species were genetically classified using a combination of nrDNA internal transcribed spacer (ITS) and external transcribed spacer sequences, cpDNA sequences (rpsl6 intron, rpsl6-tmK, rpl32-trnL, and trnL-trnT), and macro-and micro-morphological characteristics [12].DNA barcoding regions, which included three chloroplast regions (rbcL, matK, and trnH-psbA) and the nuclear ITS region, were utilized to determine phylogenetic relationships among A. sinensis, A. biserrata and A. dahurica [13].Chloroplast genome sequences were used to establish the phylogenetic relationships of 33 Angelica species and 31 other Apioideae species [14].Another study reported the use of 5S-rRNA spacer domains and chemical components (ferulic acid and Z-ligustilide) as genetic and chemical markers, respectively, to compare species differences among A. gigas, A. sinensis, and A. acutiloba [15].However, there were limitations in the study, as the samples of Angelica species used in the chemical analysis were not guaranteed by their exact botanical species, and the chemical relationships among the Angelica samples in the genetic analyses were not confirmed.
As mentioned above, herbal medicines from Angelica species have been utilized for their medicinal purposes in Korean traditional medicine. However, there is controversy in defining the original species of Gang-hwal, of which the botanical origin is Ostericum koreanum Maximowicz in the Korean pharmacopeia [1]. Therefore, in this study, to find possible quantitative explanations for the differences among herbal medicines derived from the genus Angelica, we genetically identified eight species of the medicinal Angelica genus and two non-medicinal Angelica species, namely, A. polymorpha Maxim. (Korean name: Gunggungi) and Ostericum grossiserratum (Maxim.) Kitag. (=O. koreanum (Maxim.) Kitag., A. grosseserrata Maxim., and A. koreana Maxim.) (Korean name: Singamchae). Those two non-medicinal species were examined to define the original species of Gang-hwal. We used the ITS and psbA-trnH regions for genetic identification. Furthermore, we chemically distinguished these species for chemotaxonomic classification using quantitative HPLC analysis of forty-six marker compounds. Subsequently, we investigated the chemical relationships among medicinal Angelica species as well as non-medicinal species and thereby found feasible alternatives to medicinal species.
DNA Barcode Analysis
To identify the species among 120 samples derived from 8 medicinal species and 2 non-medicinal species of the Angelica genus, including 13 samples of A. acutiloba, 10 samples of A. biserrata, 15 samples of A. dahurica, 13 samples of A. decursiva, 11 samples of A. gigas, 24 samples of A. reflexa, 12 samples of A. sinensis, 8 samples of C. tenuissimum, 6 samples of A. polymorpha, and 8 samples of O. grossiserratum, the nucleotide sequences of the ITS and psbA-trnH regions were analyzed. In the ITS region, approximately 688-694 bases of amplified product sequences were examined for each species. No intraspecies variation was observed in either the ITS or psbA-trnH nucleotide sequences. The analysis revealed nucleotide sequence differences ranging from 11 bp to 117 bp depending on the species, with an efficient classification of the 10 species (sequence identity matrix range 0.832-0.978, Table S1). For the psbA-trnH region, approximately 307-350 bases of amplified product sequences were analyzed for each species. This region exhibited a 3-95 bp nucleotide sequence difference depending on the species (sequence identity matrix range 0.738-0.990, Table S2). The results of species discrimination in both the ITS and psbA-trnH regions were found to be consistent. The base sequence of each analyzed sample was confirmed for species identification through dual verification using both standard sample data and the NCBI database (Table 1).
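A sequence identity matrix such as those in Tables S1 and S2 can be derived from pairwise comparisons of aligned sequences. The following Python sketch shows one simple way to compute pairwise identity; it assumes the sequences come from the same multiple alignment (equal length), and the gap-handling rule is an assumption, since alignment tools differ on this point.

def pairwise_identity(seq_a, seq_b):
    """Fraction of identical positions between two aligned sequences.

    Assumes seq_a and seq_b come from the same multiple alignment and have
    equal length; positions where both sequences have a gap are ignored.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" and b == "-":
            continue
        compared += 1
        if a == b:
            matches += 1
    return matches / compared if compared else 0.0

# Toy aligned fragments (placeholders, not real barcode sequences).
aligned = {"ARE": "ATGC-TGACT", "ADA": "ATGCATGACT", "OGR": "ATGG-TGTCT"}
names = list(aligned)
matrix = {(i, j): pairwise_identity(aligned[i], aligned[j]) for i in names for j in names}
print(round(matrix[("ARE", "ADA")], 3))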
Note to Table 1: 'Purchased' means the samples were purchased from herbal companies in Korea; 'Provided (NG)' means the samples were provided under the name 'Nam-Gang-hwal'; 'Provided (BG)' means the samples were provided under the name 'Buk-Gang-hwal'; 'Collected' means the samples were collected from wild habitats.
Phylogenetic Analysis
In the PhyML + SMS (Maximum likelihood-based inference of phylogenetic trees with Smart Model Selection) tree constructed based on concatenated nucleotide sequences of the ITS and psbA-trnH regions (Figure 1), the phylogenetic tree displayed clear separation by species, thereby supporting the accuracy of the identification results based on the two DNA barcode regions.All samples derived from the genus Angelica were clustered together, except for ASI (A. sinense).ASI was closely grouped with L. jeholense and L. tenuissium (=CTE), distinct from other genus Angelica samples.Additionally, one of the non-herbal species, O. grossiserratum (OGR), was closely clustered with other Ostericum species, forming a distinct cluster.
Among the non-specific markers, xanthotoxol (10) and phellopterin (36) exhibited significantly higher levels in the ADA samples compared to both the AAC samples and APO samples, respectively. Similarly, the contents of angelol A (23), columbianetin acetate (30), and columbianadin (42) were notably elevated in the ABI samples compared to those in the ADA samples, ARE samples, and ADE samples, respectively. However, no significant quantitative differences were observed for prim-O-glucosyl-cimifugin (3) between the ADA and ARE samples nor for osthol (37) between the ABI and ARE samples.
Nodakenin (5) exhibited significantly higher contents in both the ADE and AGI samples, while umbelliferone (6) displayed elevated levels specifically in the ADE samples and benzoic acid (8) demonstrated increased content in both the ASI and CTE samples compared to the AAC samples.Moreover, the contents of marmesin (12), decursinol (15), decursin (38), and decursinol angelate (39) were notably higher in the AGI samples.Similarly, byakangelicin (17), byakangelicol (28), and imperatorin (34) exhibited elevated levels in the ADA samples, while xanthotoxin (22) was more abundant in the AAC samples.Additionally, bergapten (25) showed increased content in the ABI samples, while ostenol (26) and bisabolangelone (27) displayed higher levels in the ARE samples.Moreover, senkyunolide A (32) exhibited elevated content in the CTE samples and falcarindiol (43) demonstrated significantly higher levels in the OGR samples compared to other samples.
Chlorogenic acid (1) displayed significantly higher levels in the AGI and ARE samples, while ferulic acid (7) exhibited elevated content in the ARE and ASI samples.Senkyunolide I ( 9), senkyunolide H (11), ligustilide (35), and levistilide A (45) demonstrated increased levels in the ASI and CTE samples, while oxypeucedanin hydrate (14) and isoimperatorin (40) showed elevated contents in the ADA samples.Additionally, coniferyl ferulate (31) exhibited significantly higher levels in the APO and CTE samples compared to other samples.In contrast, caffeic acid (2) showed significantly lower contents in the ASI and CTE samples and 3-n-butyl-phthalide (33) demonstrated decreased levels in the AAC samples compared to other samples.
Chemometric Clustering Analysis
In the hierarchical clustering analysis (HCA), the majority of samples formed distinct clusters corresponding to their botanical species, with the occasional insertion of samples from other species.Notably, AAC samples (excluding AAC01, AAC06, and AAC12) and ABI05 clustered together, along with a close positioning of a mixture of samples (AAC12, ABI01, -02, ASI02, -03, -07, and CTE02).The remaining ABI samples were clustered with AAC06, ARE14, and ARE16.ADA samples formed a separate cluster without any interspersions.ADE samples were split into two distinct clusters within their species (ADE02−ADE08 vs. ADE09-ADE13).All AGI samples, along with AAC01 and ARE07, formed a distinct cluster.APO samples, except for APO04, also formed their own cluster.Although three samples (ARE07, ARE14, and ARE16) were positioned adjacent to other samples, ARE samples were divided into two separate clusters, closely associated with ADE samples and APO and ADA samples, respectively.The remaining ASI samples were combined with CTE01 and CTE07, near the cluster of CTE samples.OGR samples formed a separate cluster, distinctly different from other samples (Figure 3).
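The hierarchical clustering just described can be reproduced in outline with SciPy on a samples-by-marker-compounds content matrix. The sketch below uses random placeholder data, and the linkage method and distance metric (Ward, Euclidean) are assumptions, since they are not stated in this section.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Placeholder matrix: rows = samples, columns = contents of the 46 marker compounds.
contents = rng.random((12, 46))
sample_ids = [f"S{i:02d}" for i in range(12)]

# Ward linkage on Euclidean distances is one common choice (an assumption here).
Z = linkage(contents, method="ward", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
for sid, lab in zip(sample_ids, labels):
    print(sid, lab)
# scipy.cluster.hierarchy.dendrogram(Z, labels=sample_ids) would draw a tree
# analogous to the HCA dendrogram referred to as Figure 3.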
In the principal component (PC) score plot, most ADA samples and ARE samples exhibited negative PC1 scores and positive PC2 scores. Conversely, ASI and CTE samples displayed positive scores for both PC1 and PC2. The ABI samples, except for two samples, had negative PC2 scores but varied in their PC1 scores between positive and negative values. These sample distributions were distinctly discernible based on PC scores. However, there were compact distributions of AAC, ADE, AGI, APO, OGR, and some ARE samples near 'zero' PC scores. Similar to the clustering in Figure 3, the ARE samples were distributed as a Nam-Gang-hwal group including ARE07 and -20 and a Buk-Gang-hwal group including ARE13 and -14. Although samples were grouped by their respective species, there was notable overlap in their distributions (Figure 4).
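Similarly, a PC score plot of the kind described above can be obtained with scikit-learn. The sketch below again uses random placeholder data; autoscaling before PCA is assumed, as the preprocessing is not specified here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
contents = rng.random((12, 46))          # placeholder samples x marker compounds

# Autoscaling before PCA is a common (assumed) preprocessing step.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(contents))
for (pc1, pc2) in scores[:3]:
    print(f"PC1={pc1:+.2f}  PC2={pc2:+.2f}")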
Correlation Analysis
Chemical correlations among Angelica species were assessed by computing Pearson's correlation coefficient (r) for both intra-and interspecies comparisons, as outlined in Table 2 and illustrated in Figure 5.Among the Angelica species, AGI, APO, ASI, CTE, and OGR samples exhibited notably high intraspecies coefficients, with mean and median r values > 0.9.Following closely were the ADA and AAC samples, displaying mean r values ranging from 0.6 to 0.8 and median r values ranging from 0.8 to 0.9.However, distinct outliers were observed among the samples such as AAC01, AAC06, ASI03, ASI07, and CTE02 in terms of intraspecies coefficients.In contrast, ADE and ARE samples showed lower mean r values of 0.4−0.5 and median r values of 0.3−0.4,respectively.The ABI samples exhibited the lowest mean and median r values of coefficients, both below 0.2 (Figure S3).
Interspecies correlations presented a wider range of values compared to intraspecies correlations.AAC samples displayed relatively higher mean and median coefficients with ASI and CTE samples (both r > 0.5), while coefficients with other samples were generally below 0.2 (except for ARE samples with mean r values > 0.2) and even showed negative values with ADA samples.ABI samples showed mean coefficients below 0.2, with most median values being negative.ADA samples exhibited mean and median r values ranging from 0.2 to 0.4 with APO and ARE samples but negative values with samples from other species.ADE samples displayed relatively higher mean coefficients with APO samples (r > 0.4) and OGR samples (r > 0.5) and lower mean values with ARE samples (r > 0.1).AGI samples showed coefficients mostly close to zero, with both positive and negative values, when compared to other samples.APO samples showed higher coefficients with OGR samples (mean r value > 0.8 and median r value > 0.7), followed by higher values with ARE samples (mean and median r values > 0.4).However, the coefficients of CTE samples with OGR samples were negative in both mean and median values (Figure S4).
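The intra- and interspecies Pearson coefficients summarized above amount to computing r over all sample pairs within one species group and across two groups, then taking means and medians. A minimal sketch, with random placeholder profiles and hypothetical group labels:

import numpy as np
from itertools import combinations, product

rng = np.random.default_rng(2)
groups = {                                  # placeholder content profiles per species
    "AAC": rng.random((5, 46)),
    "ADA": rng.random((5, 46)),
}

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# Intraspecies: all pairs within one species group.
intra = [pearson(a, b) for a, b in combinations(groups["AAC"], 2)]
# Interspecies: all pairs taken across the two groups.
inter = [pearson(a, b) for a, b in product(groups["AAC"], groups["ADA"])]

print("AAC intra mean/median r:", np.mean(intra), np.median(intra))
print("AAC-ADA inter mean/median r:", np.mean(inter), np.median(inter))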
Pearson's correlation coefficient (r) analysis further confirmed the stronger associations of Nam-Gang-hwal samples with ADA samples (mean r value = 0.44) and Buk-Gang-hwal samples with AAC (mean r value = 0.34) and ADE samples (mean r value = 0.34), compared to the correlations of Buk-Gang-hwal samples with ADA samples (mean r value = 0.09) and Nam-Gang-hwal samples with AAC (mean r value = 0.13) and ADE samples (mean r values < 0), respectively. Meanwhile, APO and OGR samples displayed closer correlations with Buk-Gang-hwal samples (both mean r values > 0.5) than with Nam-Gang-hwal samples.
Discussion
Recently, entire cp genome analysis has been widely used for gene-based species discrimination or identification research.Those versatile characteristics make the cp region specifically applicable to plants, and its genetically informative feature is quite an attractive approach to botanical identification.However, the cp genome analysis has a limitation on the samples being processed via multiple steps.Therefore, DNA barcoding and phylogenetic analysis were used to identify the specific botanical species among samples of the Angelica genus.Previous studies highlighted the effectiveness of ITS as a robust tool, but analysis of the psbA-trnH region was also conducted in this study to increase result accuracy [22,23].Despite some reported issues with indels in the psbA-trnH region due to its non-coding region, it remains an efficient tool for analyzing herbal medicines due to its relatively short length and abundant variation compared to other cpDNA barcode regions such as rbcL and MatK [24].The results of this study also revealed that while the psbA-trnH sequence length is approximately half that of ITS, the variation in sequence between species is similar to that of ITS.
The taxonomic classification of A. reflexa (Korean name: Gang-hwal), of which the Korean botanical name is the same as the Korean herbal name, has been quite complex.It was initially classified as A. koreana by Maximowicz (1886) [25] and later transferred to the genus Ostericum (O.koreanum) by Kitagawa (1936) [26] due to its external morphological similarity.Subsequently, Kitagawa (1971) [27] recognized the taxonomic identity between A. koreana (=O.koreanum) and O. grossiserratum, treating A. koreana (=O.koreanum) as a synonym of O. grossiserratum.This is the reason why O. koreanum still remains a synonym of O. grossiserratum.However, molecular phylogenetic study indicated that A. koreana (=O.koreanum) may be independent from O. grossiserratum [28].Furthermore, both external morphological examination and molecular phylogeny using nuclear DNA ITS sequences have shown that commercial medicinal plants cultivated as Gang-hwal are neither A. koreana nor O. grossiserratum [29].After careful observation of morphological and anatomical characters and examination of relevant specimens, Lee et al. (2013) [5] ultimately proposed this as a new species of Angelica, named A. reflexa.Some researchers still consider Gang-hwal to be A. genuflexa due to strong morphological similarity and regard A. reflexa as a synonym of A. genuflexa [29].However, careful observation [5] and our DNA barcode analysis support the difference between A. reflexa and A. genuflexa.As shown in Figure 1, all A. reflexa (ARE) samples were located within the 'Angelica' clade, not within the 'Ostericum' clade, while O. grossiserratum was clearly classified as genus Ostericum and separated from A. genuflexa.According to the results of phylogenetic analysis, A. genuflexa was closely grouped with A. biserrata (ABI) and A. polymorpha (APO) rather than A. reflexa.Moreover, the herbal samples of Gang-hwal acquired for this study were clearly identified as A. reflexa, not O. koreanum or O. grossiserratum.
Interesting findings were observed for the three Angelica species commonly used as Dang-gwi (Figure 1).Despite A. gigas (AGI) and A. acutiloba (AAC) being grouped within the same Angelica clade, they are distinctly separated into different clusters (AGI clustered with ADE).Additionally, A. sinensis (ASI) was clustered with L. sinense and L. jeholense, positioned within the Ligusticum clade including CTE samples rather than the Angelica clade.
The roots of A. reflexa are utilized in Korean herbal markets to produce two distinct types of Gang-hwal, known as 'Nam-Gang-hwal' through seed-propagation and 'Buk-Gang-hwal' via root propagation [40,41].These differing cultivation methods yield variations in the chemical compositions between Nam-Gang-hwal and Buk-Gang-hwal [35].In this study, the ARE samples were also categorized into the Nam-Gang-hwal group and the Buk-Gang-hwal group.These differences in sample types led to the division of ARE samples into separate groups in both the HCA dendrogram and the PC score plot.Additionally, Nam-Gang-hwal samples exhibited a closer chemical relationship with the ADA samples, while Buk-Gang-hwal samples showed a closer relationship with the ADE and AAC samples, forming clusters of closely related samples [42].
The chemical relationships among three Angelica species of Dang-gwi were depicted differently in the dendrogram and Pearson's coefficients.In the dendrogram, the AAC samples and ASI samples were distinctly separated into individual clusters, while the AGI samples formed a secondary cluster, situated apart from the clusters of AAC and ASI samples.Pearson's coefficients also highlighted the chemical heterogeneity of AGI samples from AAC and ASI samples, with mean r values < 0.2.However, a higher r-value of AAC-ASI (mean r value > 0.5) indicated a closer chemical relationship between AAC and ASI samples compared to the combination with AGI samples.Previous studies have reported chemical heterogeneity among three Angelica species of Dang-gwi, but in contrast to these findings, A. gigas and A. acutiloba exhibited a closer relationship than with A. sinensis samples [10,14].
Overall, the chemical relationship between ASI and CTE samples appears to reflect their phylogenetic relevance, despite their difference in therapeutic activities.The ARE samples (i.e., Nam-Gang-hwals) exhibited chemical and phylogenetic relevance with ADA samples, while ARE samples (i.e., Buk-Gang-hwals) showed relevance with AAC and ADE samples.The APO samples, classified as non-medicinal species, displayed partial chemical and genetic relevance with ABI, ADA, and ARE samples and a stronger correlation with OGR samples.Although the OGR samples, another non-medicinal species, exhibited the lowest genetic relationship with other species, their chemical relationship with other species varied, either parallel or opposite to the genetic results, depending on the statistical tools used.
This study has several limitations: (1) the unequal distribution of samples among species; (2) the limited representativeness of the selected 46 marker compounds to whole chemical characteristics of all species samples; (3) inconsistencies in the chemical relationships among the samples in the chemometric analyses; and (4) the lack of comparison between ARE samples and their therapeutic analogous Notopterygium species.Despite these limitations, this study represents the first attempt to genetically authenticate and chemically classify medicinal Angelica species, as well as non-medicinal species.The quantitative explanation of the differences among the Angelica species, including the alternative use of non-medicinal species, would be further supported by pharmacological study.
Preparation of Genomic DNA
The genomic DNA was extracted from the samples following the instructions provided in the NucleoSpin ® Plant II kit manual (Macherey-Nagel, Dueren, Germany).This process involved the use of a PL1 lysis buffer during a lysis step that lasted at least 2 h.For certain samples, an additional step was incorporated, which involved the use of 10% cetyltrimethyl ammonium bromide (CTAB) and 0.7M NaCl.This extra step was employed to remove phenolic compounds and polysaccharides after the extraction of DNA using the kit.
Determination of DNA Sequences of PCR Product
The PCR product, separated from the agarose gel using the MagListo™ 5M PCR/Gel Purification Kit (Bioneer, Daejeon, Republic of Korea), was cloned using the TOPcloner™ TA Kit (Enzynomics, Daejeon, Republic of Korea).The DNA sequences of the cloned PCR product were then determined through analysis performed by Bioneer (Daejeon, Republic of Korea).
Analysis of DNA Sequences and Preparation of Phylogenetic Tree
DNA sequences were analyzed using ClustalW multiple sequence alignment (Bioedit, v7.7.1) and confirmed with multiple sequence alignment in MAFFT (MAFFT, v7) [45].To verify the polymorphisms represented by IUPAC symbols in the sequence data, all sequences were generated at least twice.The chromatograms of nucleotide sequences, provided by the Bioneer sequencing service, were compared.Evolutionary analyses were conducted in MEGA X (ver.10.0.5).Phylogenetic analysis of two concatenated DNA barcode regions (ITS and psbA-trnH) was constructed using the PhyML + SMS/OneClick method, which showed a workflow of MAFFT, BMGE, and PhyML + SMS (maximum likelihood-based inference of phylogenetic trees with Smart Model Selection) [46].All analyzed sequences were compared with NCBI GenBank using BLAST [47].Newly determined nucleotide sequences were deposited in NCBI GenBank.Two other subfamilies of the Apiaceae, Eryngium regnellii (Saniculoideae) and Centella asiatica (Mackinlayoideae), were used as outgroups.NCBI data used in the phylogenetic tree analysis were represented with the accession number and scientific names listed in the NCBI database.
Analytical Sample Preparation
Before use, all samples (each in triplicate) were thoroughly dried and then ground to a powder, which was homogenized through a 500 µm testing sieve (Chunggyesanggong-sa; Gunpo, Republic of Korea). A precisely weighed portion of the powder (500 mg) was extracted with 5 mL of methanol for 30 min using an ultrasonic extractor (Power Sonic 520; Hwashin Tech, Daegu, Republic of Korea). The extract was then centrifuged at 10,000 rpm for 10 min. The supernatant was transferred to a 1.5-mL microtube and gently dried using a nitrogen-blowing concentrator (MGS2200; Eyela, Tokyo, Japan). The residue was re-dissolved in HPLC-grade methanol at a concentration of 10,000 µg/mL and filtered through a 0.2 µm syringe filter (BioFact, Daejeon, Republic of Korea) before HPLC injection.
HPLC Analytical Conditions
The quantitative analysis of the marker compounds was performed using an Agilent 1200 liquid chromatography system (Agilent Technologies, Palo Alto, CA, USA), equipped with an autosampler, degasser, quaternary solvent pump, and diode array detector. The acquired data were processed using ChemStation software (Rev. B.04.03; Agilent Technologies Inc., USA). A Capcell Pak MG II C18 column (4.6 mm × 250 mm, 5 µm; Shiseido, Tokyo, Japan) was used to separate the forty-six marker compounds at 35 °C with a flow rate of 1 mL/min and an injection volume of 10 µL. The mobile phases were pumped via gradient elution by mixing water containing 0.1% TFA (solvent A) and acetonitrile (solvent B). The percentages of mobile phase (solvent B) at the corresponding retention times were as follows: 15% for 0-2 min, 15-50% for 2-30 min, 50% for 30-32 min, 50-75% for 32-55 min, and 75% for 55-58 min, after which the column was re-equilibrated to 15% until the end of the analysis. The detection wavelengths of the diode-array detector were set at 230, 250, 270, 280, and 325 nm.
Validation of the HPLC Method
The stock solution was prepared by dissolving each marker compound in methanol at a concentration of 1000 µg/mL, and the working solution for the calibration curve was generated by serial dilution of the stock solution to seven different concentrations.The correlation coefficients (r 2 ) were determined to assess the linearity of the calibration curve.The limit of detection and the limit of quantification were established as signal-to-noise (S/N) ratios of 3 and 10, respectively.
Precision, indicative of the repeatability of the analytical method, was assessed by analyzing low and high concentrations of the stock solutions three times within one day (intraday precision) and over three consecutive days (interday precision).Precision was expressed as relative standard deviations (RSDs): RSD (%) = (standard deviation/ mean) × 100.
The accuracy of the analytical method, represented by recovery, was evaluated by adding low and high concentrations of the marker compounds to the sample solutions. The equation for calculating recovery was as follows: Recovery (%) = [(detected concentration - initial concentration)/spiked concentration] × 100.
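The validation quantities described in this section reduce to simple calculations, sketched below with placeholder numbers (the concentrations and peak areas are illustrative, not measured values):

import numpy as np

# Linearity: correlation coefficient (r^2) of a 7-point calibration curve.
conc = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)   # ug/mL, placeholder
area = 12.3 * conc + np.random.default_rng(3).normal(0, 5, conc.size)
r = np.corrcoef(conc, area)[0, 1]
print("r^2 =", round(r**2, 4))

# Precision: RSD (%) = (standard deviation / mean) x 100 of repeated injections.
repeats = np.array([101.2, 99.8, 100.5])
print("RSD (%) =", round(repeats.std(ddof=1) / repeats.mean() * 100, 2))

# Accuracy: Recovery (%) = (detected - initial) / spiked x 100 for a spiked sample.
initial, spiked, detected = 40.0, 20.0, 59.4
print("Recovery (%) =", round((detected - initial) / spiked * 100, 1))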
Table 1 .
Species identification of the samples based on DNA barcode analysis of the ITS and psbA-trnH regions.
Table 2 .
Summary of the Pearson's correlation coefficients of the samples. | 6,094.2 | 2024-04-30T00:00:00.000 | [
"Chemistry",
"Medicine",
"Biology"
] |
New Method Based on Multi-Threshold of Edges Detection in Digital Images
Edges characterize object boundaries in an image and are therefore useful for segmentation, registration, feature extraction, and identification of objects in a scene. Edge detection is used to classify, interpret and analyze digital images in various fields of application such as robotics, sensitive military applications, optical character recognition, infrared gait recognition, automatic target recognition, detection of video changes, real-time video surveillance, medical images, and scientific research images. There are different methods of edge detection in digital images, each suited to a particular type of image, but most of these methods have defects in the quality of their results. Decreasing the computation time is needed in most time-critical applications, especially with large images, which require more processing time. Thresholding is one of the powerful methods used for edge detection. In this paper, we propose a new method based on different multi-threshold values using Shannon entropy to solve the problems of the traditional methods. It minimizes the computation time and produces a high-quality edge image. Another benefit is the easy implementation of the method. Keywords: image processing; multi-threshold; edges detection; clustering
I. INTRODUCTION
In many applications of image processing, the gray levels of pixels belonging to the object are quite different from the gray levels of the pixels belonging to the background. Thresholding then becomes a simple but effective tool in edge detection to separate objects from the background. Edge detection using thresholding is of significant importance in many research areas [1,2]. Since the edge is a prominent feature of an image, edge detection is the front-end processing stage in object recognition and image understanding systems. The detection results benefit applications such as automatic target recognition [3], medical image applications [4], and detection of video changes [5].
An edge can be defined as the boundary between two regions separated by two relatively distinct gray level properties [6]. The causes of the region dissimilarity may be factors such as the geometry of the scene, the radiometric characteristics of the surface, the illumination and so on [7]. An effective edge detector reduces a large amount of data but still keeps most of the important features of the image. Edge detection refers to the process of locating sharp discontinuities in an image. These discontinuities originate from different scene features such as discontinuities in depth, discontinuities in surface orientation, changes in material properties and variations in scene illumination [8,9].
Most of the classical methods for edge detection are based on the derivative of the pixels of the original image: the gradient operators, the Laplacian and the Laplacian of Gaussian (LOG) operators [7]. Many operators have been introduced in the literature, for example Roberts, Sobel and Prewitt [10][11][12][13][14]. Edges are mostly detected using either the first derivatives, called the gradient, or the second derivatives, called the Laplacian. The Laplacian is more sensitive to noise since it uses more information because of the nature of the second derivatives.
Gradient-based edge detection methods, such as Roberts, Sobel and Prewitt, use two linear filters to process vertical edges and horizontal edges separately in order to approximate the first-order derivative of the pixel values of the image. Marr and Hildreth achieved this by using the Laplacian of a Gaussian (LOG) function as a filter [15]. The method of [9] used a 2-D gamma distribution; the experiments showed that it obtained very good results but with a large time complexity due to the large number of constructed masks. To solve these problems, this study proposes a novel approach based on information theory, namely entropy-based thresholding. The proposed method decreases the computation time, and its results are very good compared with the well-known Sobel [16] and Canny [17] gradient results.
The outline of the paper is as follows. In Section 2, we present the classical edge detection methods related to this paper. Image thresholding based on Shannon entropy is presented in Section 3. Section 4 describes the proposed edge detection algorithm. In Section 5, we demonstrate the effectiveness of the proposed algorithm on real-world and synthetic images and compare its results against several leading edge detection methods. Conclusions and future work are presented in Section 6.
II. CLASSICAL EDGE DETECTION METHODS
The five most frequently used edge detection methods are used for comparison: the gradient operators (Roberts, Prewitt, Sobel), the Laplacian of Gaussian (LoG or Marr-Hildreth) and the Gradient of Gaussian (Canny) edge detectors [17,18]. Readers who would like to learn more about this subject are referred to [19,20,21], which evaluate edge detection algorithms according to different criteria. The details of the methods are as follows:
A. Roberts edge detector:
It was one of the first edge detectors and was initially proposed by Lawrence Roberts in 1963. It performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image and thus highlights regions of high spatial frequency, which often correspond to edges [18]. In the most common usage of this technique, the input to the operator is a grayscale image, as is the output. Pixel values at each point of the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point, as shown in Figure 1.
B. Prewitt edge detector:
It is based on the idea of the central difference and measures two components. The Prewitt edge detector is an appropriate way to estimate the magnitude and orientation of an edge. Although differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x and y directions, compass edge detection obtains the orientation directly from the kernel with the maximum response. The operator is limited to 8 possible orientations; however, experience shows that most direct orientation estimates are not much more accurate. This gradient-based edge detector is estimated in the 3×3 neighbourhood for eight directions, as shown in Figure 2. All eight convolution masks are calculated, and the mask with the largest modulus is then selected [18].
C. Sobel edge detector:
The Sobel operators are named after Irwin Sobel. The Sobel operator relies on the central difference, but gives greater weight to the central pixels when averaging. The Sobel operator can be thought of as a 3×3 approximation to the first derivative of a Gaussian kernel. The Sobel operators are shown in the masks below (one rotated by 90° with respect to the other) [18].
Gx = [-1 0 +1; -2 0 +2; -1 0 +1]   Gy = [-1 -2 -1; 0 0 0; +1 +2 +1]
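As a brief illustration (not part of the original paper), applying these kernels by 2-D convolution and combining the two responses gives the gradient magnitude; a minimal Python sketch using SciPy:

import numpy as np
from scipy.signal import convolve2d

# Standard 3x3 Sobel kernels for the x and y directions.
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = GX.T

def sobel_magnitude(image):
    """Approximate gradient magnitude of a grayscale image (2-D float array)."""
    gx = convolve2d(image, GX, mode="same", boundary="symm")
    gy = convolve2d(image, GY, mode="same", boundary="symm")
    return np.hypot(gx, gy)

img = np.zeros((8, 8)); img[:, 4:] = 1.0     # simple vertical step edge
print(sobel_magnitude(img)[4])                # strongest response at the step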
D. Laplacian of Gaussian Edge detection (LOG)
This LOG operator smooths the image through convolution with a Gaussian-shaped kernel and then applies the Laplacian operator. The Laplacian of Gaussian edge detection mask is shown in Fig. 4 (LOG gradient estimation operator).
E. Canny edge detector:
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny's aim was to discover the optimal edge detection algorithm. In this context, an "optimal" edge detector means: Good detection: the algorithm should mark as many real edges in the image as possible.
Good localization: edges marked should be as close as possible to the edges in the real image.
Minimal response: a given edge in the image should only be marked once, and where possible, image noise should not create false edges.
The method can be summarized below: [22] 1) The image is smoothed using a Gaussian filter with a specified standard deviation, to reduce noise.
2) The local gradient and edge direction are computed at each point using different operator.
3) Apply non-maximal suppression to the gradient magnitude. 4) Apply thresholding to the non-maximally suppressed image.
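In practice, steps 1-4 above are available as library routines; for instance, OpenCV bundles the gradient computation, non-maximal suppression and thresholding into a single call. A small sketch (the threshold values are arbitrary placeholders):

import cv2
import numpy as np

img = (np.random.default_rng(4).random((64, 64)) * 255).astype(np.uint8)
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)   # step 1: Gaussian smoothing
edges = cv2.Canny(blurred, 50, 150)            # steps 2-4 combined
print(edges.shape, edges.dtype)                # binary edge map (0 or 255)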
III. SHANNON ENTROPY AND IMAGE THRESHOLDING
Entropy is a concept in information theory used to measure the amount of information [23]. It is defined in terms of the probabilistic behavior of a source of information. In accordance with this definition, a random event E that occurs with probability P(E) carries an amount of self-information I(E) = -log P(E). The amount of self-information of the event is inversely related to its probability. If P(E) = 1, then I(E) = 0 and no information is attributed to it; in this case, the uncertainty associated with the event is zero. Thus, if the event always occurs, no information would be transferred by communicating that the event has occurred. If P(E) = 0.8, then some information would be transferred by communicating that the event has occurred. The base of the logarithm determines the unit used to measure the information. If the base of the logarithm is 2, then the unit of information is the bit. If P(E) = 1/2, then I(E) = -log2(1/2) = 1 bit. That is, 1 bit is the amount of information conveyed when one of two possible equally likely events occurs. An example of such a situation is flipping a coin and communicating the result (Head or Tail) [24,25]. The basic concept of entropy in information theory has to do with how much randomness is in a signal or in a random event. An alternative way to look at this is to ask how much information is carried by the signal. Entropy is a measure of randomness. Consider a probabilistic experiment in which the output of a discrete source is observed during every unit of time (signaling interval). The source output is modeled as a discrete random variable Z, referred to as a set of source symbols [26]. The set Z of source symbols is referred to as the source alphabet, Z = {z_1, z_2, z_3, ..., z_k}.
The source symbol probabilities are P = {p1, p2, p3, ..., pk}. This set of probabilities must satisfy the condition sum(pi) = 1, with 0 ≤ pi ≤ 1. The average information per source output, denoted S(Z) [26], is the Shannon entropy and may be written as S(Z) = −sum(pi log2 pi), i = 1..k, where k is the total number of symbols. If a system can be decomposed into two statistically independent subsystems A and B, the Shannon entropy has the extensive (additivity) property S(A+B) = S(A) + S(B); this formalism has been shown to be restricted to the Boltzmann-Gibbs-Shannon (BGS) statistics.
Let f(x, y) be the gray value of the pixel located at the point (x, y) in a digital image. The result of thresholding the image function f(x, y) at gray level t is a binary function ft(x, y) such that ft(x, y) = b0 if f(x, y) ≤ t and ft(x, y) = b1 otherwise; typically, b0 and b1 are taken to be 0 and 1, respectively. In general, a thresholding method determines the value t* of t based on a certain criterion function. If t* is determined solely from the gray level of each pixel, the thresholding method is point dependent [24,25].
Let p1, p2, ..., pk be the probability distribution for an image with k gray levels. From this distribution, for a candidate threshold t, we derive two probability distributions, one for the object (class A) and the other for the background (class B), given by pA: p1/P(t), ..., pt/P(t) and pB: pt+1/(1−P(t)), ..., pk/(1−P(t)), where P(t) = p1 + p2 + ... + pt. The Shannon entropy of each distribution is defined as HA(t) = −sum over class A of (pi/P(t)) log2(pi/P(t)) and HB(t) = −sum over class B of (pi/(1−P(t))) log2(pi/(1−P(t))), with S(t) = HA(t) + HB(t). We try to maximize this information measure between the two classes (object and background): when S(t) is maximized, the luminance level t that maximizes the function is considered to be the optimum threshold value t*.
In the proposed scheme, a binary image is first created by choosing a suitable threshold value using Shannon entropy. The Threshold procedure finds the suitable threshold value t* for a grayscale image f. It can be described as follows: Procedure Threshold.
Input:
A grayscale image f of size m × n with histogram H.
Output:
The threshold value t* of f.
Begin
Step 1: Let f(x, y) be the original gray value of the pixel at the point (x, y), x=1..m, y=1..n .
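A hedged Python sketch of such an entropy-based threshold selection is given below; it follows the standard maximum-entropy (Kapur-type) criterion described in Section III and is not necessarily the authors' exact procedure. The histogram array hist (e.g., 256 gray-level bins of H) is an assumption.

import numpy as np

def entropy_threshold(hist):
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                               # gray-level probabilities p_i
    best_t, best_s = 0, -np.inf
    for t in range(1, len(p) - 1):
        pa = p[:t + 1].sum()                      # cumulative probability of class A (object)
        pb = 1.0 - pa                             # class B (background)
        if pa == 0.0 or pb == 0.0:
            continue
        a = p[:t + 1] / pa
        b = p[t + 1:] / pb
        ha = -np.sum(a[a > 0] * np.log2(a[a > 0]))   # Shannon entropy of class A
        hb = -np.sum(b[b > 0] * np.log2(b[b > 0]))   # Shannon entropy of class B
        if ha + hb > best_s:
            best_s, best_t = ha + hb, t
    return best_t                                  # optimum threshold t*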
IV. THE PROPOSED MULTI-THRESHOLD ALGORITHM
This section presents the concept of object connectivity. It introduces a technique of edge detection based on entropy and geometric properties of the object. Geometric properties such as connectivity, projection, area, and perimeter are important components in binary image processing. An object in a binary image is a connected set of pixels. In what follows, we present some definitions related to the connectivity of pixels in a binary image [25]. Connected pixels: a pixel f0 at (i0, j0) is connected to another pixel fn at (in, jn) if and only if there exists a path from f0 to fn, that is, a sequence of points (i0, j0), (i1, j1), ..., (in, jn) such that the pixel at (ik, jk) is a neighbor of the pixel at (ik+1, jk+1) and fk = fk+1 for all 0 ≤ k ≤ n − 1.
8-connected:
The pixel at location (i, j) has, in addition to its four immediate (horizontal and vertical) neighbors, four diagonal neighbors; together, these eight pixels are known as the 8-connected neighbors. Thus, two pixels are eight-neighbors if they share at least a common corner. This is shown in Figure 5(c). To obtain edge detection, we classify all pixels that satisfy the criterion of homogeneousness and detect all pixels on the borders between different homogeneous areas. In the proposed scheme, a binary image is first created by computing a threshold value with Shannon entropy using the Threshold procedure. Region labeling in this system is done using 4-neighbor or 8-neighbor connectivity; a common alternative would be to use 4-neighbor connectivity instead (Figure 5).
The Edge Detection Procedure can be described as follows (using the 4-connected or diagonal 4-connected):
Procedure Edge Detection; Input:
A grayscale image f of size m×n and its threshold value t*.
Output: The edge detection image g of f.
Begin
Step 1: Create a binary image: for all x, y, if f(x, y) ≥ t* then set the binary value at (x, y) to 1, else set it to 0. Step 2: Initialization of the output edge image of size m×n: g(x, y) = 0 for all x and y.
Step 3: Checking for edge pixels: for all 1 < j < m and 1 < i < n, if the binary value at (i, j) differs from that of one of its selected neighbors, set g(i, j) = 1. End For. End Procedure.
and t3 according to the condition: for all 1 < j < m and 1 < i < n do: IF ((f(i,j) >= t2 and f(i,j) < t1) or f(i,j) >= t3) THEN A(i,j) = 1 ELSE A(i,j) = 0. 5) Apply the EdgeDetection procedure to the matrix A to obtain the edge detection image g.
End Algorithm.
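The neighbor-comparison step of the EdgeDetection procedure above can be sketched in Python as follows; this is an illustrative reading that uses 4-connected neighbors and is not a verbatim reproduction of the authors' pseudocode, and the binary matrix A is assumed to come from the thresholding step.

import numpy as np

def edge_from_binary(A):
    m, n = A.shape
    g = np.zeros_like(A)
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            neighbors = (A[i - 1, j], A[i + 1, j], A[i, j - 1], A[i, j + 1])
            if any(A[i, j] != v for v in neighbors):
                g[i, j] = 1                        # border pixel between homogeneous areas
    return g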
V. EXPERIMENTAL RESULTS AND DISCUSSION
In order to test the method proposed in this paper and compare it with other edge detectors, common gray-level test images with different resolutions and sizes are processed by the proposed method and by the gradient-of-Gaussian (Canny), Laplacian-of-Gaussian (LoG or Marr-Hildreth), Prewitt, Roberts, and Sobel methods, respectively.
The performance of the proposed scheme is evaluated through the simulation results using MATLAB. Prior to the application of this algorithm, no pre-processing was done on the tested images.
As the algorithm has two main phases (a global and local enhancement phase for the threshold values, and a detection phase), we present the results of implementation on these images separately. Here, in addition to the original gray-level function f(x, y), we have used a function g(x, y) that is the average gray-level value in a 3×3 neighborhood around the pixel (x, y).
Though the proposed entropic edge detector excels as a shape and detail detector, it has some drawbacks. It does not always produce fully thinned edges, and weak edges are not eliminated; for some applications, however, these may be required.
This detector has another distinctive feature: it retains the texture of the original image. This feature can be utilized for the identification of fingerprints, where the ridges may have different intensities. We are experimenting on several images to come up with a useful selection guideline. Second, the Peak Signal-to-Noise Ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation [26]. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale. This paper presents a new algorithm for edge detection based on the Shannon entropy of the image histogram. The objective is to find the best edge representation while minimizing the computation time. A set of experiments in the domain of edge detection is presented, and the edge detection performance is compared to previous classic methods such as LoG, Prewitt, Roberts and Sobel. The analysis shows that the proposed method outperforms those methods in execution time and is also easy to implement. The significance of this study lies in decreasing the computation time while generating edge maps of suitable quality. The entropic edge detector presented in this paper uses Shannon entropy with multiple threshold values. As already pointed out in the introduction, the traditional methods give rise to an exponential increase in computation time.
Experimental results have demonstrated that the proposed scheme for edge detection can be used for different gray-level digital images. Another benefit comes from the easy implementation of this method. An important future investigation will be the study of edge detection for automatic target recognition, medical image applications, and the detection of video changes.
"Computer Science",
"Engineering"
] |
Antimicrobial resistance in urinary isolates from inpatients and outpatients at a tertiary care hospital in South-Kivu Province (Democratic Republic of Congo)
Background The rate of antimicrobial-resistant isolates among pathogens causing urinary tract infections (UTIs) in the Democratic Republic of Congo (DRC) is not known. The aim of the current study was to determine this rate at the Bukavu Provincial General Hospital (province of South-Kivu, DRC). Findings A total of 643 isolates (both from inpatients and outpatients) collected from September 2012 to August 2013 were identified using biochemical methods and tested for antimicrobial susceptibility. The isolates were further screened for Extended-Spectrum Beta-Lactamase (ESBL) production. The beta-lactamase AmpC phenotype was investigated in 20 antibiotic-resistant isolates. Escherichia coli (58.5%), Klebsiella spp. (21.9%) and Enterobacter spp. (16.2%) were the most frequent uropathogens encountered. Rare uropathogens included Citrobacter spp., Proteus spp., and Acinetobacter spp. Resistance was significantly more frequent among inpatient isolates (22.1% of isolates) than among outpatient isolates (8.4% of isolates) (p-value <0.001). Antibiotic-resistant isolates displayed resistance to common antimicrobial drugs used for UTI treatment in South Kivu province, namely ciprofloxacin, ampicillin and third-generation cephalosporins. The ESBL phenotype was present in 92.9% of antibiotic-resistant isolates. Only amikacin, nitrofurantoin and imipenem displayed satisfactory activity against antibiotic-resistant isolates. Conclusions This study confirms the presence of antibiotic-resistant uropathogens (mainly ESBL-producing isolates) at the Bukavu General Hospital. This study should serve as a wake-up call and help to raise awareness about the threat to public health of antibiotic resistance in this DRC province.
Accordingly, the aim of this study was to monitor the rate of resistant urinary pathogens isolated from patients with UTIs (both from community and hospital settings) attending Bukavu Provincial Hospital (South Kivu, DRC) from September 2012 to August 2013, as well as to determine the pattern of antibiotic resistance to commonly used antimicrobial agents in the province.
Study population and bacterial isolates
This cross-sectional study was conducted in both outpatients and inpatients suspected of UTI at the Bukavu Provincial Hospital. This hospital is one of the main healthcare facilities of Bukavu, a city of more than 500 000 inhabitants. The hospital also serves as the health reference center for the province of South-Kivu. It has 385 beds with 6400 admissions and 4900 outpatients per year.
Urinary samples were collected in sterile universal containers from patients presenting with UTI symptoms (dysuria, pelvic pain, with or without fever) from September 2012 to August 2013. Exclusion factors were structural or functional abnormalities of the genitourinary tract. Catheterized patients were also excluded. Samples were cultured within 30 minutes of collection. Samples displaying pyuria (white blood cell count greater than 5 per high-power field upon light microscopic examination) and/or bacteriuria of at least 10^5 CFU per mL were further processed for culture and subsequent antimicrobial susceptibility testing. Each specimen was cultured using a 10 μL calibrated loop to inoculate cystine lactose electrolyte-deficient (CLED) agar and incubated at 37°C for 16-24 hours, and the number of colonies was counted. A growth of >10^5 colony-forming units/mL of one type of organism was considered as significant bacteriuria. Identification was performed using standard biochemical tests [25].
Ethical approval for the study was granted by the Institutional Review Board (IRB) of the Université Catholique de Bukavu, DRC. The study complied with the World Health Organization and international guidelines on antibiotic surveillance for which no recommendation for an informed consent has been issued. In order to ensure confidentiality, samples were analyzed anonymously.
Antimicrobial susceptibility testing
The isolates were tested by the disk diffusion method on Mueller-Hinton agar II, and the results were interpreted according to the guidelines of the European Committee on Antimicrobial Susceptibility Testing (EUCAST, 2013) [26]. Antibiotic disks were purchased from Bio-Rad (Nazareth Eke, Belgium). The following antibiotics were tested: amikacin, ampicillin, amoxicillin, amoxicillin-clavulanic acid, ceftriaxone, ceftazidime, imipenem, trimethoprim-sulfamethoxazole, ciprofloxacin, and nitrofurantoin.
Isolates showing resistance to at least one cephalosporin were tested for ESBL production by the double-disk synergy test on Mueller-Hinton agar (Bio-Rad, Nazareth Eke, Belgium), using ceftazidime and ceftriaxone placed at a distance of 20 mm from a disk containing amoxicillin plus clavulanic acid. A clear-cut enhancement of the inhibition zone in front of either the ceftazidime or the ceftriaxone disk towards the clavulanic acid-containing disk (also called "champagne-cork" or "keyhole") was interpreted as positive for ESBL production [27]. E-test strips (BioMérieux, Marcy l'Etoile, France) were used for confirmation of ESBL production. Minimum inhibitory concentrations (MIC) of cefotaxime and ceftazidime with and without clavulanic acid were determined after 16-18 hours of incubation on Mueller-Hinton plates inoculated with a suspension of isolates at a fixed density (0.5 to 0.6 McFarland standard). The test was performed and interpreted according to the manufacturer's instructions. Escherichia coli ATCC 35218 and Klebsiella pneumoniae ATCC 700603 strains were used as ESBL-negative and ESBL-positive controls, respectively. Twenty ampicillin- and/or third-generation cephalosporin-resistant isolates (n = 20) were tested for the presence of the beta-lactamase AmpC phenotype, using the cefoxitin-cloxacillin disk diffusion test as described by Tan et al. [28]. Multi-drug resistance was defined as non-susceptibility to at least one agent in three or more antimicrobial categories [29]. All multi-drug resistant isolates were cryopreserved at −80°C for further studies.
Statistical analysis
Statistical analyses were performed using the SPSS statistical package release 12.0 for Windows (SPSS, Inc., Chicago, IL). Differences in group proportions and categorical variables were assessed using the chi-square test. A p-value <0.05 was considered as statistically significant.
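Illustrative only: the inpatient versus outpatient comparison of multidrug-resistance proportions can be reproduced approximately with a chi-square test in Python/SciPy. The counts below are reconstructions from the reported percentages (about 65% of 643 isolates from inpatients; MDR in 22.1% versus 8.4%), not the authors' raw data.

from scipy.stats import chi2_contingency

inpatient_total, outpatient_total = 418, 225       # approx. 65% / 35% of 643 isolates
inpatient_mdr = round(0.221 * inpatient_total)     # ~92
outpatient_mdr = round(0.084 * outpatient_total)   # ~19

table = [[inpatient_mdr, inpatient_total - inpatient_mdr],
         [outpatient_mdr, outpatient_total - outpatient_mdr]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")     # p < 0.001, consistent with the reported result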
Findings
Clean-catch midstream urine specimens (n = 2724) were processed during the 12-month study period (from September 1st, 2012 until August 31st, 2013), among which 1130 (41.5%) and 1594 (58.5%) were sampled from outpatients and inpatients, respectively. Of these 2724 samples, 643 (23.6%) yielded significant growth of a single organism. Among positive samples (n = 643), 35.0% and 65.0% were isolated from outpatients and inpatients, respectively. The mean age of the study population was 27.2 years, with a range of 0-75 years. Children between 0 and 17 years represented 20.6% of all patients. The female to male ratio was 1.73.
Escherichia coli was the most frequent uropathogen isolated (376 out of 643; 58.5%) at the Bukavu General Hospital, both in outpatients and in inpatients. Klebsiella spp. and Enterobacter spp. represented 21.9% and 16.2% of uropathogens, respectively. Rare uropathogens included Citrobacter spp., Proteus spp., and Acinetobacter spp. A summary of the antimicrobial susceptibility patterns of the most frequent uropathogens is presented in Table 1. Of the 643 isolates, 16.3% displayed an MDR phenotype. The multidrug resistance rate was higher in inpatient isolates (22.1%) than in outpatient isolates (8.4%) (p-value <0.001). Among these MDR isolates, 92.4% displayed an ESBL phenotype. Nitrofurantoin susceptibility for E. coli, Enterobacter spp. and Klebsiella spp. was 69.6%, 78.9%, and 83.3%, respectively. Uropathogen susceptibility rates to amikacin were 42.1%, 69.6% and 77.8% for E. coli, Enterobacter spp. and Klebsiella spp., respectively. Regarding imipenem, all isolates but one were susceptible in vitro. This single imipenem non-susceptible isolate was identified as an Enterobacter spp. Preliminary molecular data obtained on 25 isolates showed that the CTX-M group 1 gene was the most common in the ESBL-producing isolates assayed. Regarding beta-lactamase AmpC, none of the 20 isolates assayed displayed this phenotype.
Discussion
The steady increase of bacterial resistance to antibiotics is a cause of global concern. Infections caused by resistant microorganisms often fail to respond to standard treatment, resulting in prolonged illness and a greater risk of death [30]. Available studies from several Sub-Saharan African countries have highlighted unexpectedly high levels of resistance of uropathogens to common antibiotics [9-12,14,16,17,31]. Unfortunately, there are no data on rates of antibiotic resistance of uropathogens in DRC. In that respect, recent data on Salmonella isolates from blood cultures in DRC have shown high levels of antibiotic resistance [32-34].
Accordingly, the goal of this work was to provide a benchmark for the prevalence and patterns of antibiotic resistance of bacterial pathogens involved in UTIs at the Bukavu General Hospital. This study confirmed that E. coli was the most common bacterial uropathogen isolated from both inpatients and outpatients at the Bukavu Hospital. Besides E. coli, other bacterial species such as Klebsiella spp. and Enterobacter spp. were also frequently encountered. Noteworthy, we were not able to isolate any Group B Streptococcus (GBS), although GBS have been reported as causative agents of UTIs [35]. This possible bias might be linked to the sole use of CLED medium for uropathogen isolation. It has indeed been previously reported that the identification of Streptococcus agalactiae on CLED agar can be challenging [36].
Limitations also include the possibility of selection bias due to the fact that the Bukavu Hospital mostly deals with patients who have been treated with antibiotics prior to their visit. Finally, our study may have underestimated the prevalence of UTIs by using a threshold of 10^5 CFU/mL of urine as a prerequisite for urine culture. Several studies have indeed shown that using lower CFU threshold values improves the identification of UTIs [37,38].
Our study found that the overall prevalence of antibacterial drug resistance was lower than recently reported in neighboring countries [12,20]. A recent study in neighboring Rwanda reported resistance rates of 59.2%, 32.1% and 41.3% for amoxicillin/clavulanic acid, ceftriaxone and ciprofloxacin, respectively [20].
Regarding the pattern of resistance in the 105 drug-resistant isolates, there was a strikingly low susceptibility to ampicillin, amoxicillin, amoxicillin/clavulanic acid, cefuroxime, ceftazidime, ceftriaxone, ciprofloxacin and sulfamethoxazole/trimethoprim (see Table 2). Conversely, susceptibility rates for the rare pathogens reported in this study (Citrobacter spp., Proteus spp. and Acinetobacter spp.) should be interpreted with caution, as they were based on a restricted number of isolates and could therefore result in observational bias with overestimation of effect size. Interestingly, this study showed that ESBL production was the main mechanism of resistance both in outpatients and inpatients at the Bukavu hospital. The global susceptibility rate of uropathogens to nitrofurantoin was 78.6%. The susceptibility of antibiotic-resistant isolates to amikacin, nitrofurantoin and imipenem was high, which is a significant observation in terms of the management of UTIs. To the best of our knowledge, this is the first study in DRC (South-Kivu Province) on the rate of resistant and/or ESBL-producing uropathogens.
Carbapenems have been recently introduced in the province for the treatment of ESBL-producing Enterobacteria, and this study demonstrated very high susceptibility among uropathogens. Based on our findings, we advocate the prescription of nitrofurantoin as the first-line antibiotic for UTI treatment in South Kivu. Despite the high susceptibility to imipenem, it is critical to warn physicians about the danger of prescribing this compound as a first-line antibiotic for UTI treatment, because this habit would likely promote the emergence of imipenem-resistant bacteria. Likewise, several other antibiotics previously reported as active against uropathogens in other studies [7,39-41] but not currently prescribed in South Kivu should be tested against MDR uropathogens. These include drugs such as fosfomycin [42], piperacillin/tazobactam [39], fosfomycin/trometamol [40], cefepime, tigecycline, temocillin [7,41] and other carbapenems [19]. Most importantly, it will be important to put emphasis on public and professional education towards a rational use of antibiotics, i.e., antibiotherapy based on the susceptibility patterns of pathogens, and on the promotion and evaluation of medical and veterinary practice guidelines. Curbing the spread of antibiotic-resistant pathogens in the province and in the country will also entail dealing with other critical issues which should be investigated, including controlling the influx of counterfeit drugs and addressing the affordability of healthcare in the province.
Conclusions
High rates of ESBL-producing Gram-negative bacteria were found among inpatients and outpatients at the Bukavu Hospital in DR Congo. Most of these ESBL-producing isolates were also multidrug resistant, except for amikacin, nitrofurantoin and imipenem for which susceptibility was high.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
LMI, LK, OV, RBC and JLG participated in the design of the study. LMI oversaw the whole collection of data. LMI and LK were responsible for the laboratory assays. LMI drafted the first manuscript and all co-authors participated in the manuscript revision. All authors read and approved the final manuscript.
"Medicine",
"Biology"
] |
Copper-mediated thiol potentiation and mutagenesis-guided modeling suggest a highly conserved copper-binding motif in human OR2M3
Sulfur-containing compounds within a physiologically relevant, natural odor space, such as the key food odorants, typically constitute the group of volatiles with the lowest odor thresholds. The observation that certain metals, such as copper, potentiate the smell of sulfur-containing, metal-coordinating odorants led to the hypothesis that their cognate receptors are metalloproteins. However, experimental evidence is sparse: so far, only one human odorant receptor, OR2T11, and a few mouse receptors have been reported to be activated by sulfur-containing odorants in a copper-dependent way, while the activation of other receptors by sulfur-containing odorants did not depend on the presence of metals. Here we identified an evolutionarily conserved putative copper interaction motif CC/CSSH, comprising two copper-binding sites in TMH5 and TMH6, together with the binding pocket for 3-mercapto-2-methylpentan-1-ol, in the narrowly tuned human receptor OR2M3. To characterize the copper-binding motif, we combined homology modeling, docking studies, site-directed mutagenesis, and functional expression of recombinant ORs in a cell-based, real-time luminescence assay. Ligand activation of OR2M3 was potentiated in the presence of copper. This effect of copper was mimicked by ionic and colloidal silver. In two broadly tuned receptors, OR1A1 and OR2W1, which did not reveal a putative copper interaction motif, activation by their most potent, sulfur-containing key food odorants did not depend on the presence of copper. Our results suggest that a highly conserved putative copper-binding motif is necessary for a copper-modulated and thiol-specific function of members from three subfamilies of family 2 ORs. Electronic supplementary material The online version of this article (10.1007/s00018-019-03279-y) contains supplementary material, which is available to authorized users.
In 1978, Crabtree [27] postulated that Cu(I) ions, because of their high affinity for thiols, coordinate them within the active center of odorant receptors, thereby constituting a sensitive thiol detector [27]. In the same year, Day [33] suggested that transition metals may be involved in the olfaction of certain functional groups, such as pyridines [33].
In 1996, Turin [34] published the so-called vibrational theory of olfaction [34], proposing that electron transfer, which is ubiquitous in biology, e.g., for photosynthesis, respiration, and nitrogen fixation, takes place in the OR protein by reducing the disulfide bond via a zinc ion [34].
In 2003, Wang et al. [28] postulated the "HxxC[DE]"amino acid-motif (with x as a hydrophobic residue) in the second extracellular loop (ECL 2) of ORs to be crucially involved in the coordination of Cu 2+ -or Zn 2+ -ions and odorants within their receptors. They could observe a conformational change of this motif from pleated sheet to an α-helix in the presence of Zn 2+ , suggesting that ECL 2 becomes engaged in odorant binding [28]. The consensus sequence "HxxC[DE]" can be found in 74% of all human ORs, which led them to propose ORs as metalloproteins [28]. The role of metals in mammalian olfaction is the subject of recent reviews [35,36].
In addition to ORs, other GPCRs have also been suggested to coordinate metal ions. For example, the binding of ligands in the opioid receptor is enhanced by manganese [37]. By introducing Cu 2+ , Zn 2+ , or Ni 2+ ions into cyclam rings of AMD3100, the response of the CXCR4 chemokine receptor could be increased up to 50-fold [38]. Mutational analysis revealed that the enhancing effect could be eliminated by changing the single amino acid Asp262 in TMH 6 [38]. Furthermore, the two melanocortin receptors MC1 and MC4 have been shown to be enhanced by Zn 2+ [39]. MC1 is expressed in melanocytes and controls skin tanning. MC4 expresses in certain regions of the hypothalamus in the brain, and within the intestinal tissue. It is involved in the regulation of autonomic responses as well as the regulation of energy homeostasis. Possible interaction sites were indicated as Cys271 (ECL 3) and Asp119 (extracellular end of TMH 3) [39]. Transition metals such as copper, zinc, and iron play an important role for the homeostasis of brain neurons [40,41]. Aron et al. [42] showed that metals like copper can serve as dynamic signals that bind and regulate protein function at external allosteric sites in addition to their function as static metabolic cofactors [42].
Yokoi et al. [43] investigated dietary nickel deprivation on olfaction in rats and observed a decreased sniffing rate [43]. Since olfactory CNG channels are suppressed by nickel [44,45], they suggested that nickel ions play a physiological role in olfactory function.
Viswaprakash et al. [46] reported zinc to enhance the odorant-induced responses in olfactory receptor neurons [46]. They observed an enhancement of signaling in these neurons, however, only using nanoparticles, but not using Zn 2+ ions [46]. Furthermore, the use of copper, gold, or silver nanoparticles did not show a similar effect as compared to zinc nanoparticles [46]. Also Vodyanoy [47] investigated zinc nanoparticles, and came up with a model that predicted that one metal nanoparticle binds two receptor molecules to create a receptor dimer, which is consistent with the evidence that many GPCRs form dimers or larger oligomers [47]. In a later study, they showed that nanomolar suspensions of zinc nanoparticles enhance responses by a factor of 5 [48].
In 2012, Duan et al. [29] suggested Cu 2+ ions to be an essential co-factor for the interaction of mouse OR Olfr1509 (MOR244-3) with its agonist (methylthio)methanethiol [29]. Since increasing the copper concentration in the cell-based assay led to a significant increase of the sensitivity of the receptor, whereas chelating agents decreased the receptor's sensitivity, they postulated that thiols and copper ions form a complex, which renders the receptor very sensitive for thiols [29]. Based on this study, and by combining receptor modeling/ligand docking, site-directed mutagenesis, and functional expression of recombinant mutant OR, Sekharan et al. [30] identified three amino acid positions within TMH 3 and TMH 5, His105 3.33 , Cys109 3.37 , and Asn202 5.42 , which supposedly form a Cu-binding site within receptor Olfr1509 [30].
However, the mechanisms underlying the very sensitive detection of thiols by humans in general are still unresolved. So far, the Cu dependence of ORs' responsiveness to thiols has been demonstrated for one human OR [31] and a few mouse ORs [29,32]. Most of the thiol-responsive human ORs identified so far are members of family 2 of ORs [5, 6, 11-13, 31, 49]. Recently, we identified one narrowly tuned thiol-responding human receptor, OR2M3, as well as two broadly tuned receptors with overlapping thiol agonist spectra [5,6]. One causative mechanism for differences in tuning breadth may be the size of the respective ligand-binding pockets. Baud et al. suggested this for the mouse receptors Olfr73 and Olfr74 [50]. Here, the ligand cavity size showed an accessible volume of 200 Å 3 for the broadly tuned Olfr73, and 250 Å 3 for the narrowly tuned Olfr74 [50].
We, therefore, hypothesized that rather narrowly tuned, thiol-specific ORs may exhibit a Cu potentiating effect on their responsiveness to thiols, and that these ORs have rather size-restricted binding pockets with limited degrees of freedom for alternative docking of thiols into their receptors. In contrast, broadly tuned ORs with larger binding pockets will lack a Cu potentiating effect on thiol activation, but, among many chemically diverse odorant agonists, may nevertheless also detect certain thiol structures.
Here, we investigated narrowly tuned OR2M3 with its agonist 3-mercapto-2-methylpentan-1-ol [6], a common and potent KFO from heated onions, which for thousands of years have been used worldwide as a food and in complementary medicine [51]. We compared OR2M3 with two most recently characterized broadly tuned receptors, OR2W1 and OR1A1, with three of their known agonists, 2-phenylethanethiol, 3-mercaptohexyl acetate, and allyl phenyl acetate [5], in a cell-based, online cAMP-luminescence GloSensor™ assay [52].
We used site-directed mutagenesis and functional expression of recombinant mutant ORs to investigate cognate human OR/KFO pairs in the presence and absence of Cu 2+ -ions. We rationalized the docking of specific thiols into the binding pockets of their respective ORs with QM/MM models involving chelation of copper by these thiols and compared the size of thiol-binding pockets between narrowly tuned thiol-specific OR2M3 and broadly tuned OR2W1. We found the effect of copper to be mimicked by ionic and colloidal silver.
PCR-based site-directed mutagenesis
All receptor variants used were generated by PCR-based site-directed mutagenesis in two steps. Gene-specific primers (mutation primers) were used according to Table S2 and Table S3. The mutation primers, which carried the changed nucleotides, were designed overlapping.
Step one PCR was carried out in two PCR amplifications, one with the forward gene-specific primer and the reverse mutation-primer, the other with the forward mutation-primer and the reverse gene-specific or vector-internal primer.
Both PCR amplicons were then purified and used as template for step two. Here, the two overlapping amplicons were annealed using the following program: denaturation (98 °C, 3 min), ten cycles containing denaturation (98 °C, 30 s), annealing (start 58 °C, 30 s), and extension (72 °C, 2 min). After this, full-length gene-specific forward and reverse primers were added. The amplicons were then sub-cloned as described above.
Cell culture and transient DNA transfection
We used NxG 108CC15 cells [55], a neuroblastoma x glioma hybrid, and HEK-293 cells [56], a human embryonic kidney cell-line, as a test cell system for the functional expression of recombinant OR [52].
For experiments, cells were plated in a 96-well format (white 96-well plate, Nunc, Roskilde, Denmark) with a density of 7500 cells per well for NxG 108CC15 cells and 12,000 cells per well for HEK-293 cells. On the next day, the transfection was performed by using the lipofection method with each 100 ng/well of the corresponding plasmid DNA as well as with 50 ng/well of the transport protein RTP1S [57], G protein subunit Gαolf [54,58], olfactory G protein subunit Gγ13 [59], and the pGloSensor™-22F [60] (Promega, Madison, USA) using Lipofectamine ® 2000 (#11668-027, Life Technologies, USA). The pGloSensor™-22F is a genetically engineered luciferase with a cAMP-binding pocket, which allows measuring a direct cAMP-dependent luminescence signal. As a control the transfection was performed with the vector plasmid pI2-dk(39aa rho-tag) (aa, amino acids) [53,54] which is lacking the coding information of an OR together with Gαolf, RTP1S, Gγ13, and cAMP-luciferase pGloSensor™-22F (mock). The amount of transfected plasmid DNA was equal in OR-transfected and mock-transfected cells.
Luminescence assay
Luminescence assays were performed 42 h post-transfection as reported previously [52]. For experiments without copper, the cells were loaded with a physiological salt buffer (pH 7.5) containing 140 mmol/L NaCl, 10 mmol/L HEPES, 5 mmol/L KCl, 1 mmol/L CaCl 2 , 10 mmol/L glucose, and 2% of beetle luciferin sodium salt (Promega, Madison, USA). For the luminescence measurements, the Glomax ® MULTI + detection system (Promega, Madison, USA) was used. After an incubation of the cells for 1 h in the dark, the basal luminescence signal of each well was recorded. Afterwards, the odorant, serially diluted in the physiological salt buffer, was applied to the cells. Odorant stock solutions were prepared in DMSO and diluted 1:1000 in the physiological salt buffer to obtain a final DMSO concentration of 0.1% DMSO on the cells. To keep all measurement conditions the same, the water-soluble substances were also dissolved in DMSO. For odorants which were only slightly soluble, we added Pluronic PE-10500 (BASF, Ludwigshafen, Germany) to the buffer. The final Pluronic PE-10500 concentration on the cells was 0.05%.
Real-time luminescence signals for each well were measured 4 min after the odorant application.
For the measurements with copper, we added a final concentration of 10 µmol/L of a 10 mmol/L CuCl 2 solution to the normal measurement buffer as described above. The assay was performed as reported above.
Data analysis of the cAMP-luminescence measurements
The raw luminescence data obtained from the Glomax ® MULTI + detection system were analyzed using Instinct Software (Promega, USA). Data points of basal level and data points after odorant application were each averaged. From each luminescence signal, the corresponding basal level was subtracted.
For concentration-response relations, the baseline-corrected data set was normalized to the maximum amplitude of the reference odorant-receptor pair. The data set for the mock control was subtracted and EC 50 values and curves were derived from fitting the function [61] to the data by nonlinear regression (SigmaPlot 10.0, Systat Software). All data are presented as mean ± SD.
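As a hedged sketch of this fitting step, a standard Hill-type concentration-response function can be fitted by nonlinear regression in Python; the exact function of ref. [61] used in SigmaPlot is not reproduced here, and the concentration and response values below are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, top, ec50, n_h):
    """Four-parameter Hill equation for concentration-response data."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** n_h)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # µmol/L, illustrative
resp = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.80, 0.95, 1.00])  # normalized amplitude

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 1.0, 1.0])
bottom, top, ec50, n_h = params
print(f"EC50 = {ec50:.2f} µmol/L, Hill coefficient = {n_h:.2f}")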
Homology modeling and docking
Details on the homology model approach for OR2M3 and OR2W1: we used the default setting of the MPI Bioinformatics Toolkit server (https://toolkit.tuebingen.mpg.de/#/tools/hhpred), which uses the Modeller program [62] to build the homology model. We built the homology model of OR2M3 using the X-ray structure of the M1 muscarinic receptor as a template (5CXV.pdb) [63]. The comparative protein modeling with available X-ray structures indicates a high sequence identity between the OR2M3 and the M1 receptor transmembrane helix regions. Figure S1 shows the sequence alignment of the human M1 muscarinic receptor (green) and human olfactory receptor OR2M3 (red) as obtained using the Multiple Sequence Viewer implemented in Maestro (Schrödinger Release 2016-3: Maestro, Schrödinger, LLC, New York, NY, 2016). The TMH domains were obtained using the transmembrane hidden Markov model (TMHMM) analysis, as applied to model OR5AN1 [64] and OR2T11 [31], using the TMHMM server (http://www.cbs.dtu.dk/services/TMHMM/), which is based on Bayesian analysis of a pool of transmembrane proteins with resolved structures. As shown in Figure S2, OR2M3 residues with a posterior TMH probability greater than 0.2 were assigned to the transmembrane domain. Similarly, the homology model of OR2W1 was built using the same template (5CXV.pdb), and the TMH regions were obtained by TMHMM analysis. Figure S3 shows the superposition of structures corresponding to the sequence alignment of the TMH regions of OR2M3 (red) and OR2W1 (blue) with the human M1 muscarinic receptor (green). As shown in Figure S4, OR2W1 residues with a posterior TMH probability greater than 0.2 were assigned to the transmembrane domain.
Docking setup
All docking calculations were carried out in the Schrödinger Suite (Small-Molecule Drug Discovery Suite 2016-3, Schrödinger, LLC, New York, NY, 2016). The initial coordinates of the OR2W1 structure were obtained from the homology model as described in the homology model section. The Glide SP (standard precision) protocol implemented in the Schrödinger Suite was applied for docking (Schrödinger Release 2016-3: Glide, Schrödinger, LLC, New York, NY, 2016). The receptor was checked for steric clashes as well as for correct protonation states in the protein. The protonation states of all titratable residues (pH = 7) were assigned using PROPKA calculations [65,66] implemented in Schrödinger's Maestro v.9.3 software package (Schrödinger Release 2016-3: Maestro, Schrödinger, LLC, New York, NY, 2016) and also by visual inspection. The receptor was then optimized by applying the OPLS_2005 force field [67]. The ligand was docked into the OR2W1 homology model using the GLIDE module [68][69][70] implemented in Schrödinger's Maestro v.9.3 software package. GlideScore (standard precision) was used to rank the different ligands. GlideScore is an empirical scoring function that estimates the ligand-binding free energy. It includes force field (electrostatic and van der Waals) contributions and terms rewarding or penalizing interactions known to influence ligand binding. As it approximates a binding free energy, more negative values represent tighter binders.
Molecular dynamic simulation
Molecular dynamics simulations were carried out using the CHARMM36 force field implemented in the NAMD2 software [71]. The best docked pose of 3-mercaptohexyl acetate was used as the initial structure. The docking method is described in the docking section. The initial model system was inserted into a water box. After equilibration, production-run MD simulations were carried out in 2 ns segments within the NPT ensemble at 298 K and 1.0 atm using the Langevin piston, for a total of 62 ns of simulation time for the system (Fig. S5). Electrostatic interactions were treated with the Particle Mesh Ewald (PME) method, and van der Waals interactions were calculated using a switching distance of 10 Å and a cutoff of 12 Å. The integration time step was set to 1 fs.
QM/MM calculations
QM/MM calculations were performed on homology models using ONIOM method [72] as part of the Gaussian 09 software package [73]. The QM layer included the ligand, copper (if present), the copper/ligand-binding residues, and any waters in the binding pocket. For OR2M3, the QM residues were C202, C203, and T105 in site 1 and M118, C241, and H244 in site 2. DFT/M06-L [74,75], and multiple basis sets were used to describe the QM layer. The 6-31G(d) basis set [76] was applied to carbon, hydrogen, nitrogen, sulfur, and oxygen atoms, and the Stuttgart 8s7p6d2f and 6s5p3d2f ECP10MWB contracted pseudopotential basis set [77] was applied to the copper atom. The MM layer consisted of the remaining protein and was modeled with the AMBER96 force field [78].
Phylogenetic analysis
For sequence comparison, we used CLC Main Workbench 6.5. We used the same software to perform the ClustalW alignment of the transmembrane regions (TMH) 1-7 and the extracellular loop 2 of all human ORs, family 2 ORs and OR2M3, and 46 homolog receptors as well as 60 orthologs from mouse, rat, chimp, and dog of the three family 2 OR subfamilies M, T, and V (Table S5). We created sequence logos using WebLogo 2.8.2 [79,80]. The localization of the TMHs of human OR2M3 and OR2W1 was taken from HORDE [81]. The evolutionary history of ORs was inferred using the Neighbor-Joining method [82]. Trees are drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer each phylogenetic tree. The evolutionary distances were computed using the Poisson correction method [83] and are in the units of the number of amino acid substitutions per site. All evolutionary analyses were conducted in MEGA7 [84].
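For orientation only, a Neighbor-Joining reconstruction from a protein alignment can be sketched with Biopython as below; the study itself used CLC Main Workbench and MEGA7 with Poisson-corrected distances, which this sketch does not reproduce (a simple identity-based distance is used instead), and the alignment file name is hypothetical.

from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: a ClustalW alignment of OR TMH regions.
alignment = AlignIO.read("or_family2_tmh_alignment.aln", "clustal")

calculator = DistanceCalculator("identity")        # p-distance; differs from MEGA7's Poisson correction
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)          # Neighbor-Joining tree [82]
print(nj_tree)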
Copper ions enhanced a thiol agonist action on narrowly tuned OR2M3 but not on broadly tuned OR2W1
Thiols are among the best KFO agonists of the two broadly tuned human odorant receptors OR1A1 and OR2W1 [5], whereas OR2M3 has recently been demonstrated to specifically respond to only 3-mercapto-2-methylpentan-1-ol out of some 190 KFOs [6]. However, the influence of transition metal ions on these thiol/receptor interactions has not been tested so far. We, therefore, examined the effects of different metal ions at a final concentration of 30 µmol/L on the activation of OR2M3 by its agonist 3-mercapto-2-methylpentan-1-ol, an important KFO in heated onions [6], using the cAMP-dependent luminescence-based GloSensor™ assay ( Fig. 1a, b, Fig. S6a, Table S6).
We next tested for the optimal Cu 2+ concentration. As originally suggested by Crabtree [27], the active form of copper involved in ligand coordination is likely Cu 1+ , owing to the naturally reducing environment in cells. Therefore, the Cu 2+ added in our experiments was likely reduced to Cu 1+ . We found that adding Cu 2+ at a concentration of 10 µmol/L gave the highest potentiation of the efficacy of 3-mercapto-2-methylpentan-1-ol in activating OR2M3, which was about fourfold higher than under control conditions without Cu 2+ supplementation (Fig. 1c, d, Fig. S7, Table S7). Importantly, supplementation with Cu 2+ > 10 µmol/L decreased both 3-mercapto-2-methylpentan-1-ol-induced receptor signaling and the odorant-independent constitutive activity of OR2M3 (Fig. 1c). Cu 2+ supplementation of 10 µmol/L, however, did not inhibit the basal activity of OR2M3 (Fig. 1c), and was, therefore, used in all experiments testing a potentiating effect of copper throughout the present study. Cu 2+ at 10 µmol/L had little effect on the EC 50 value of 3-mercapto-2-methylpentan-1-ol on OR2M3 (Table 1). Notably, the receptor/agonist pair OR2M3/3-mercapto-2-methylpentan-1-ol, when measured in the absence of copper, showed a Hill coefficient of 0.89 ± 0.41 (n = 4), whereas in the presence of copper (10 µmol/L), the Hill coefficient increased about twofold to 1.93 ± 0.71 (n = 4).
Cys179 of the HxxC[DE]-Motif in ECL 2 of ORs plays a copper-independent role for a receptor function
To challenge the hypothesis of a metal-coordinating role of the HxxC[DE] motif of ECL 2 in the majority of ORs posed by Wang et al. [28], we established variants of OR2M3 and OR2W1, changing the conserved cysteine at position 179 to an alanine by site-directed mutagenesis. For OR2M3, we further established the variant OR2M3 C 179 Y, which has been described as a single-nucleotide polymorphism (SNP) [85], and a variant in which we changed the cysteine to a serine. Already in the absence of any copper supplementation, all of these receptor variants displayed a complete loss-of-function when tested with their respective agonists (Fig. 3b-e), suggesting a rather general role of at least Cys179 for the tertiary structure of ORs [86][87][88]. Moreover, we prepared SNP-based haplotypes: for position 105 3.33 , we changed the threonine to an isoleucine or to an alanine. Both, however, have a minor allele frequency (MAF) of only 0.008 [89]. Furthermore, we changed the glycine at position 109 3.37 to an arginine [85]. Since cysteine contains an S atom and is a polar amino acid, its free thiol group can build S-S bonds after oxidation. Furthermore, the binding of copper or other metal ions such as zinc and iron often occurs at cysteine residues in metalloproteins [90][91][92][93]. Therefore, and since Cys112 3.40 is in the vicinity of positions 105 3.33 and 109 3.37 , we exchanged Cys112 3.40 for a serine or an alanine. We found that all OR variants with mutations at positions 105 3.33 , 109 3.37 , or 112 3.40 were not functional anymore (Fig. 4b). However, the presence of copper (10 µmol/L) rescued 3-mercapto-2-methylpentan-1-ol function in OR2M3, at least for the mutations T 105 A, C 112 A, and C 112 S (Fig. 4b, Fig. S8a, b, and Table 2). This suggested that other positions in OR2M3 were involved in a copper interaction.
(Figure legend: OR1A1 (a, c, e), OR2W1 (b, d, f), and endogenously expressed GPCRs, adenosine receptors A2A/A2B (g), and sphingosine-1-phosphate receptor 4, S1P4 (h). Data were mock control-subtracted, normalized to each receptor's maximum amplitude in response to the respective substance measured in the absence of Cu 2+ , and shown as mean ± SD (n = 3-6). RLU relative luminescence unit. Curves represent best fits to the data in the absence (black) or presence (blue) of Cu 2+ , with EC 50 values given in Table 1.)
Zhang et al. [32] showed for the murine receptor Olfr1019 that Cys203 5.42 is involved in binding copper.
OR2M3 contains two adjacent cysteines at positions 202 5.41 and 203 5.42 . We, therefore, changed these cysteines to either an alanine or a serine. OR2M3 variants carrying the mutations C 202 A, C 202 S, or C 203 A revealed a complete loss-of-function, even in the presence of Cu 2+ , with a strongly diminished 3-mercapto-2-methylpentan-1-ol function in C 203 A (Fig. 4b, Fig. S8c, and Table 2).
We further investigated a putative functional role of the cysteines at positions 202 5.41 and 203 5.42 in OR2M3. Indeed, at least position 203 5.42 aligns with a putative odorant-binding pocket suggested by Man et al. [88]. Furthermore, we investigated the haplotype with one SNP, OR2M3 C 203 Y, which has an MAF of < 0.01 [89]. In our hands, OR2M3 C 203 Y showed a complete loss-of-function, in the absence or presence of Cu 2+ . Altogether, our results point to a functional role of both cysteines, Cys202 5.41 and Cys203 5.42 , in OR2M3.
Our concentration-response relation of 3-mercapto-2-methylpentan-1-ol on OR2M3 wt in the presence of Cu 2+ revealed a Hill coefficient close to 2 (see Fig. 1d), suggesting positive cooperativity of at least two binding sites for Cu 2+ and/or 3-mercapto-2-methylpentan-1-ol. Indeed, within copper-dependent human OR2T11, two distinct copper-binding sites have been reported previously, constituted by positions Met56 2.39 , Met133 4.37 , Arg135 4.39 , and Cys138 4.42 , and positions Met115 3.46 , Cys238 6.33 , and His241 6.36 [31]. Positions Cys238 6.33 and His241 6.36 in OR2T11 correspond to positions Cys241 6.33 and His244 6.36 in OR2M3, respectively. Since positions Cys238 6.33 and His241 6.36 are part of the putative copper-binding CSSH(L) motif in OR2T11, which is close to the cytoplasmic region, similar to other candidate pentapeptides previously proposed for metal-binding sites at the end of TMH 6 [34], we mutated the corresponding positions Cys241 6.33 and His244 6.36 in OR2M3 by changing the respective amino acids to an alanine. In our hands, both OR2M3 variants were not functional anymore, in the absence or presence of Cu 2+ (Fig. 4b), suggesting both positions to be necessary for a potentiating effect of Cu 2+ on 3-mercapto-2-methylpentan-1-ol function in OR2M3.
Fig. 4 Testing amino acid positions of proposed copper/odorant-binding pockets [30][31][32] by site-directed mutagenesis in OR2M3. a Schematic snake diagram of OR2M3 with localization of mutated amino acid positions within TMH 3-6. Putative odorant interaction sites proposed by Man et al. [85] are given as red circles. b Effect of 3-mercapto-2-methylpentan-1-ol (20 µmol/L) on OR2M3 mutants, in the absence (black) or presence (blue) of Cu 2+ . Data were normalized to the OR2M3 wt signal in response to 3-mercapto-2-methylpentan-1-ol (20 µmol/L), measured in the absence of Cu 2+ . Shown are mean ± SD (n = 3). RLU relative luminescence unit, 3MMP 3-mercapto-2-methylpentan-1-ol. Concentration-response curves for all mutant receptors are given in Supplemental Figure S8, and EC 50 values are given in Table 2.
SNPs in close vicinity of predicted copper/odorant-binding positions affected the 3-mercapto-2-methylpentan-1-ol function of OR2M3
Previously, Man et al. [88] determined 22 amino acid positions which constitute a putative, generalized, and conserved odorant-binding pocket within OR orthologs. We, therefore, investigated the effects of three SNPs that occur in the putative binding pocket of OR2M3 near proposed copper-interacting positions [30], by testing these haplotypes against 3-mercapto-2-methylpentan-1-ol in the GloSensor™ assay (Fig. S9). For the variant OR2M3 M 206 I, we observed a complete loss-of-function (Fig. S9c). Compared to OR2M3 wt, OR2M3 Y 104 C displayed a gain of function with respect to the amplitude but a higher EC 50 value (1.02 ± 0.18 µmol/L, Fig. S9b), whereas OR2M3 I 207 L had a diminished amplitude and a higher EC 50 value (1.41 ± 0.04 µmol/L, Fig. S9d). All SNPs in OR2M3, however, have either no reported MAF or a rather low MAF (< 0.01).
Homology modeling and QM/MM studies revealed two copper-coordinating sites within narrowly tuned OR2M3
Figure 5a shows the structural model of OR2M3 obtained using the X-ray crystal structure of the human M1 muscarinic receptor as a template (5CXV.pdb) [63]. The homology model provides valuable insights into the proposed odorant-binding site, including a highly conserved disulfide S-S bond thought to be critical for structural stability. The structural model of OR2M3 shows that the disulfide bond forms between Cys97 3.25 of TMH 3 and Cys179 of extracellular loop 2 (ECL 2) (Fig. 5a). Two binding sites for Cu(I) were identified in OR2M3 (Fig. 5, Fig. S10). The binding sites are supported by site-directed mutagenesis and activation profiles, showing a lack of response to ligand when mutating the key amino acid residues responsible for copper binding (Fig. S10).
The first ligand/copper-binding site (site 1) shares similarities with the copper-binding site suggested for Olfr1509 (MOR244-3) [30], with one Cu(I) bound to His105 3.33 , Cys109 3.37 , and Asn202 5.42 ; the copper-binding site in Olfr1509 is close to the extracellular domain. In contrast, OR2M3 involves two binding sites. Site 1 has an accessible volume of 695.26 Å 3 for 3-mercapto-2-methylpentan-1-ol (Fig. 5b), and involves Thr105 3.33 of TMH 3 and residues Cys202 5.41 and Cys203 5.42 from TMH 5 (Fig. 5c). The Cu(I) S1 ion has a trigonal planar configuration with S C202 , S C203 and a weak interaction with N C203 , with distances of 2.18 Å, 2.18 Å and 2.78 Å, respectively, as shown in Fig. 5d, which depicts the QM/MM structural model for OR2M3. It is also important to note that water molecules form hydrogen bonds with residue Thr105 3.33 . Upon ligand (3-mercapto-2-methylpentan-1-ol) binding, the active sites undergo coordination rearrangements. Figure 5e shows the QM/MM structure of the binding site with the ligand. At site 1, the Cu(I) S1 ion has a trigonal planar geometry with the S (thiolate form) of the ligand, S C202 and S C203 . The distances between Cu(I) S1 and the S (thiolate form) of the ligand, S C202 and S C203 are 2.26 Å, 2.33 Å, and 2.38 Å, respectively. The distance between the Cu(I) S1 ion and N C203 elongated to 3.14 Å from 2.78 Å. While the S atom of the ligand coordinates with Cu(I), the ligand O (alcoholic) forms a strong H-bond with the HO of Thr105 3.33 at a distance of 1.80 Å, which elongates the OH T105 -O W1 H-bond. We also find that water molecule W1 forms a new but weak H-bond with Cys202 5.41 , S C202 -HO W1 , with a distance of 2.38 Å.
Moreover, the ligand OH group forms an additional strong hydrogen bond. The second binding site (site 2) includes residues Met118 3.46 of TMH 3, His244 6.36 and Cys241 6.33 of TMH 6, and a water molecule (W4) (Fig. S10a). Unlike at site 1, the Cu(I) S2 ion forms a tetrahedral configuration with S M118 , S C241 , N H244 , and O W4 , with distances of 2.41 Å, 2.16 Å, 2.00 Å, and 2.36 Å, respectively (Fig. S10b). Water molecule W4 also shows an H-bond (N C241 -HO W4 ) with a distance of 2.18 Å. Upon ligand binding at site 2, Cu(I) S2 accommodates a distorted tetrahedral geometry with S ligand , S C241 , N H244 , and O W4 , with distances of 2.29 Å, 2.18 Å, 2.02 Å, and 3.06 Å, respectively (Fig. S10c). The distance between residue Met118 3.46 and the Cu ion extends to 4.54 Å from 2.41 Å. A strong H-bond is also observed between OH ligand and O W4 with a distance of 1.98 Å. In addition, the water W4 shows a weak H-bond interaction with the S (thiolate) of the ligand at a distance of 2.21 Å.
Modeling did not support evidence for a putative copper-binding site within the broadly tuned OR2W1
Sekharan et al. [30] proposed Cys109 3.37 of Olfr1509 to coordinate the copper ion in the receptor [30]. To investigate if we can induce a copper-enhancing effect also in OR2W1, we changed the respective amino acids of OR2W1 to the amino acids of Olfr1509 at the positions 105 3.33 and 109 3.37 . We performed the point mutation OR2W1 M 105 H, because histidine can act as a ligand of metal ion complexes and possibly can induce a copper-enhancing effect in OR2W1. We observed, however, a loss-of-function with all three tested ligands, suggesting that histidine prevents the formation of the functionally active network of contact sites in OR2W1 (Fig. 6b-d).
We then tried to induce a copper-enhancing effect by introducing a cysteine at position 109 3.37 . Again, we observed a loss-of-function with all three tested ligands (Fig. 6b-d). Nevertheless, after introducing an alanine at position 109 3.37 , the OR2W1 variant S 109 A was still functional, but displayed agonist-specific and Cu 2+ -dependent differences in potency and efficacy (Fig. 6b-d, Fig. S8j-l, Table 3). Since the free thiol group of cysteine can build S-S bonds, we further investigated Cys112 3.40 in OR2W1 and exchanged the cysteine for an alanine. OR2W1 C 112 A did not respond to any of its agonists, in the absence or presence of Cu 2+ (Fig. 6b-d).
Our results so far suggested that in OR2M3, two cysteines at positions 202 5.41 and 203 5.42 coordinate copper in the ligand-binding pocket. Similarly, Cys203 5.42 has recently been shown to be involved in copper binding in Olfr1019 [32]. OR2W1, however, lacks cysteines at these positions. We, therefore, tested whether cysteines at positions 202 5.41 and 203 5.42 will introduce a copper-dependent ligand response in OR2W1. When changing the leucine at position 202 5.41 to a cysteine or to a serine, in our hands, both OR2W1 variants L 202 S or L 202 C were still functional, but displayed agonist-specific and Cu 2+ -dependent differences in potency and efficacy (Fig. 6b-d).
For 2-phenylethanethiol, both OR2W1 L 202 S and OR2W1 L 202 C displayed lower amplitudes in the presence of Cu 2+ , as compared to OR2W1 wt (Fig. 6b), and a lower potency (Fig. S8d, g, Table 3), although both potency and efficacy of 2-phenylethanethiol were already diminished in the absence of Cu 2+ . For the 'black currant'-like smelling 3-mercaptohexyl acetate, we observed increased amplitudes compared to OR2W1 wt for both OR2W1 L 202 S and OR2W1 L 202 C (Fig. S8e, h). The EC 50 values for OR2W1 L 202 S were higher as compared to the wild type, with or without supplemental copper, but were lower for OR2W1 L 202 C (Table 3). The non-KFO allyl phenyl acetate revealed concentration-response relations for both OR2W1 Leu202 5.41 variants, with amplitudes reduced by half as compared to OR2W1 wt (Fig. S8f, i). Under both Cu 2+ conditions, and compared to OR2W1 wt, the EC 50 values of allyl phenyl acetate for OR2W1 L 202 S were smaller, as observed also for 3-mercaptohexyl acetate, but were higher for OR2W1 L 202 C, as compared to 3-mercaptohexyl acetate (Table 3).
Fig. 6 Testing amino acid positions of proposed copper/odorant-binding pockets [30][31][32] by site-directed mutagenesis in OR2W1. a Schematic snake diagram of OR2W1 with localization of mutated amino acid positions within TMH 3-6. Putative odorant interaction sites proposed by Man et al. [85] are given as red circles. Effect of 2-phenylethanethiol (300 µmol/L) (b), 3-mercaptohexyl acetate (300 µmol/L) (c), and allyl phenyl acetate (300 µmol/L) (d) on OR2W1 mutants in the absence (black) or presence (blue) of Cu 2+ . Data were mock control-subtracted, normalized to the OR2W1 wt signal of each ligand, measured in the absence of Cu 2+ , and displayed as mean ± SD (n = 3). RLU relative luminescence unit, 2PHE 2-phenylethanethiol, 3MAc 3-mercaptohexyl acetate, APAc allyl phenyl acetate. Concentration-response curves for all mutant receptors are given in Supplemental Figure S8, and EC 50 values are given in Table 3.
We further inserted the second cysteine at position 203 5.42 in OR2W1, and tested OR2W1 L 202 C/G 203 C with all three ligands. This variant, however, was not functional anymore ( Fig. 6b-d).
Our results additionally identified amino acids Cys241 6.33 and His244 6.36 as being necessary for a potentiating effect of copper on 3-mercapto-2-methylpentan-1-ol function in OR2M3. Indeed, these positions previously have been reported to coordinate copper in OR2T11 [31]. We mutated the first and last positions of the CSSH motif in OR2W1, Cys241 6.33 , and His244 6.36 , by changing each amino acid to an alanine, and combined each with the double-mutation OR2W1 L 202 C/G 203 C. In our hands, both OR2W1 variants, OR2W1 L 202 C/G 203 C/C 241 A and OR2W1 L 202 C/G 203 C/ H 244 A were not functional anymore for all three tested compounds, in the absence or presence of Cu 2+ (Fig. 6b-d).
For several odorant receptors, tyrosines at positions 252 6.44 and 259 6.51 have been shown to be involved in ligand binding [10,32,50,94-99]. We, therefore, exchanged these tyrosines for alanines and tested the resulting OR2W1 variants against the three agonists 2-phenylethanethiol, 3-mercaptohexyl acetate, and allyl phenyl acetate. Both OR2W1 Y 252 A and OR2W1 Y 259 A, as well as the double mutant OR2W1 Y 252 A/Y 259 A, were non-functional (Fig. 6b-d). Figure 7a shows the homology model of OR2W1. Figure 7b shows the accessible volume for the ligand in OR2W1, which is 1138.07 Å 3 , compared to 695.26 Å 3 for OR2M3 (see Fig. 5b). We performed molecular dynamics simulations to assess the dynamic stability of ligands in the binding site of OR2W1. The docking calculations show that the ligands 3-mercaptohexyl acetate and allyl phenyl acetate bind by forming an H-bond with Tyr259 6.51 in OR2W1 (Fig. 7c-h). The residue Tyr259 6.51 is at the top of the TM region and close to the extracellular loop. 3-Mercaptohexyl acetate (Fig. 7d + g) and allyl phenyl acetate (Fig. 7e + h) show similar results. However, 2-phenylethanethiol does not show any H-bond with Tyr259 6.51 , but rather π-π stacking with Tyr252 6.44 (Fig. 7c + f). The binding site also comprises Met105 3.33 , Ser109 3.37 , and Cys112 3.40 ; however, we did not find any H-bonding interactions of these residues with the ligand. Rather, they appear to be important for stabilizing the ligand-binding site (see Fig. 7c-e, H-bond between Met105 3.33 and Ser109 3.37 ).
Docking and molecular dynamics simulation of broadly tuned OR2W1 revealed a ligand-binding site about twice as large as that of narrowly tuned OR2M3
It is known that dynamic binding modes determine agonistic and antagonistic ligand effects in GPCRs [100]. Our 62 ns simulation of OR2W1 in a water box without a membrane revealed a stabilization to ~ 4.5 Å after the first 12 ns (Fig. S5), and, for the ligand 3-mercaptohexyl acetate, a free-binding energy of − 22.67 ± 2.02 kcal/mol. In the last 50 ns of the simulation, the ligand was hydrogen bonded with the Tyr259 6.51 side chain 64.33% of the time, with the Val199 5.38 backbone 48.34%, and with Gly203 5.42 only 0.16%. Thus, dynamic modeling confirmed Tyr259 6.51 as the major ligand-interaction site via an H-bond, as proposed by our static model.
Fig. 7 (caption, continued) Ligand-OR2W1 interactions for 2-phenylethanethiol (f), 3-mercaptohexyl acetate (g), or allyl phenyl acetate (h). Polar residues (blue), hydrophobic residues (green), negatively charged residues (red), glycine (beige), π-π stacking (green line), and H-bonding interactions (dashed magenta line).
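The hydrogen-bond occupancies quoted above (e.g., 64.33% of frames for Tyr259 6.51) are fractions of trajectory frames in which a geometric H-bond criterion is met. A minimal sketch of that bookkeeping is shown below; the distance/angle cutoffs and the synthetic trajectory are assumptions for illustration only, not the criteria used in the original simulations.

```python
# Minimal sketch (hypothetical arrays): estimating H-bond occupancy over an MD
# trajectory from per-frame geometric criteria, as used for the Tyr259/ligand contact.
import numpy as np

def hbond_occupancy(dist_nm, angle_deg, d_cut=0.35, a_cut=150.0):
    """Percent of frames in which donor-acceptor distance and D-H...A angle
    satisfy a common geometric H-bond definition (cutoffs are assumptions)."""
    dist_nm = np.asarray(dist_nm)
    angle_deg = np.asarray(angle_deg)
    bonded = (dist_nm <= d_cut) & (angle_deg >= a_cut)
    return 100.0 * bonded.mean()

rng = np.random.default_rng(0)                       # fake trajectory, 5000 frames
d = rng.normal(0.33, 0.05, 5000)
a = rng.normal(155.0, 15.0, 5000)
print(f"occupancy: {hbond_occupancy(d, a):.2f} % of frames")
```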
Discussion
Thiols are important carriers of information-as key food odorants, determining the aroma of foods [6,21], as body odors [18,19], or as environmental odors [20]. The observation that thiols, compared to other volatiles, frequently display particularly low odor thresholds [23][24][25]101], sprouted various theories trying to explain this behavior, with the aim to gain an understanding of odorant information coding at the receptor level. Common to all of these theories is the association of a thiol-receptor interaction with participation of transition metal ions such as copper, zinc, or nickel [27-31, 46, 47]. However, so far, only a single human cognate thiol odorant/receptor pair was identified (2-methyl-2-propanethiol/OR2T11, [31]), which could be used to put these theories to the test.
In the present study, we now identified OR2M3 as a further copper-sensitive human receptor, which showed a three-to-sixfold potentiation of its specific ligand's efficacy by copper and silver ions (and colloidal silver), although this cognate receptor/ligand combination also functions in the absence of supplemented metal ions, as shown previously [6].
In an aqueous solution, Cu 2+ is more stable than Cu + (in spite of Cu + having a filled d-subshell), because the solvation energy of Cu 2+ is significantly larger than that of Cu + and thus overcompensates the second ionization energy. For silver, however, the relative energies of the two oxidation states are switched: Ag + is more stable than Ag 2+ . The reason is that the filled 4d shell of Ag + is not sufficiently effective at shielding the nuclear charge, making the second ionization energy so high that it is not compensated by the solvation energy of Ag 2+ . As a result, silver is usually found as Ag + in aqueous environments, forming rather unstable complexes with very low coordination numbers (e.g., 2). However, the incomplete solvation of Cu 2+ in the constrained cavity of the ligand-binding site of an OR might make Cu + , with its filled d-subshell, the more stable form. Therefore, it is natural to expect that silver could mimic copper, as found for the activation of OR2T11, an effect that was also modeled computationally [31]. In addition, it is well known that both silver [102] and copper clusters bind thiolates [103], so they are expected to produce similar effects in OR cavities that are sufficiently large to bind metallic nanoparticles. In part, the significant effect of silver is due to the fact that, unlike copper, there is no background silver in the cell culture medium, so the effect of added silver appears larger. Because nanoparticulate silver is an environmental contaminant, our findings on the interaction of silver NPs with ORs may be relevant to deleterious exposure of aquatic animals, fish, and birds to environmental silver.
We recently identified OR2M3 as a highly specific, narrowly tuned receptor for the thiol KFO 3-mercapto-2-methylpentan-1-ol [6]. In contrast, for the other thiol-specific human receptor, OR2T11, its activation by 2-methyl-2-propanethiol has been previously identified to entirely depend on the presence of copper ions [31]. Li et al. [31] showed that OR2T11 responded to nine monothiols or α-mercaptothioethers, although this receptor has not been characterized to be narrowly or broadly tuned with respect to its natural ligand spectrum [31]. Thiols, however, have been shown to also activate broadly tuned receptors, i.e., OR1A1 and OR2W1 [5]. Notably, in the present study, we did not observe any potentiating effect of copper on their thiol agonists 2-phenylethanethiol and 3-mercaptohexyl acetate. In the case of 2-phenylethanethiol activating OR1A1, the presence of copper ions rather decreased its efficacy, suggesting a negative allosteric action of copper on this cognate ligand/receptor combination. In contrast, copper ions markedly reduced the potency of the same odorant in activating OR2W1, here suggesting an orthosteric competitive action of copper ions on a 2-phenylethanethiol/receptor interaction. The lack of any enhancing effect of copper on the efficacy or affinity of a homologous series of C 5 -C 8 aliphatic thiols on OR2W1 has recently been shown by Li et al. [31].
Our results support the notion that narrowly tuned receptors with a specificity for certain metal-coordinating thiols, e.g., OR2M3, have fewer degrees of freedom in accommodating multiple ligands in their binding pocket, which may be one causative factor that determines narrow tuning in these ORs. Indeed, our modeling study revealed that the accessible volume in OR2M3 for its ligand is smaller by a factor of 1.6 than the accessible volume in OR2W1 for its three investigated ligands. For non-metal-coordinating ORs, previous studies demonstrated that receptor responses depend on the molecular volume of an odorant, showing that affinity and/or efficacy became optimal when the molecular volume of an odorant matches the size of its binding pocket within the receptor [104,105]. Baud et al. [50] reported two mouse receptors, Olfr73 and Olfr74, to be broadly and narrowly tuned, respectively. Broadly tuned Olfr73, in their hands, however, had the smaller calculated accessible volume compared to narrowly tuned Olfr74, which they attributed to smaller ligand sizes [50].
Our model of OR2M3 predicted two amino acid positions that may form H-bonds with the ligand 3-mercapto-2-methylpentan-1-ol: Thr105 3.33 and Cys202 5.41 . Of these, Thr105 3.33 also has a ligand-binding function in the human receptors OR1A1 (Ile105 3.33 ) [64], OR1G1 (Met105 3.33 ) [9], and OR3A1 (His108 3.33 ) [106], and in the mouse receptors Olfr73 (MOR174-9; mOR-EG) and Olfr74 (MOR144-4; mOR-EV) (Cys106 3.33 ) [50,98] (Fig. S13, Table 4). A ligand-binding function of position Cys202 5.41 was previously reported for human OR1G1 (Ile201 5.41 ) and OR7D4 (Ala202 5.41 ) [97] and for mouse Olfr544 (MOR42-3) (Thr205 5.41 ) [107] (summarized in Fig. S13 and Table 4). Of these two ligand-interacting amino acid positions in OR2M3, Thr105 3.33 overlaps with a modeled, generalized odorant-binding pocket in ORs proposed by Man et al. [88]. For OR2W1, our model suggested two amino acid positions to be involved in ligand binding (summarized in Fig. S13 and Table 5), one of which (Tyr252 6.44 ) overlaps with the 22 amino acid residues proposed by Man et al. [88]. Depending on the polarity of a ligand, the binding pocket in OR2W1 supports hydrophobic contacts with non-polar amino acid residues, which has been suggested as the dominant mode of interaction between ligands and broadly tuned ORs, favoring multiple binding modes through opportunistic interactions [9]. Our data may very well be interpreted in line with such a concept of multiple ligand-specific binding modes, which may induce different OR conformations and signaling responses, and thus may be the mechanistic basis for ORs to be broadly tuned [9,50].
Beyond modeling the ligand-bound receptor, in silico docking of heavy metal ions into an OR model is a further challenge. The amino acid cysteine has a thiol function, and many cofactors in proteins and enzymes feature cysteinate-metal centers. A cysteine residue in ECL 2, Cys179, plays a major role in the tertiary structure of ORs by forming a disulfide bond [28]. In our hands, Cys179 mutants showed a complete loss-of-function, in the absence or presence of copper ions. Moreover, our docking model did not suggest a direct interaction of Cys179 with the ligand or the copper ion, consonant with the idea of Cys179 being a structural requirement for ORs rather than a ligand-contact site.
In the present study, our model of OR2M3, together with site-directed mutagenesis and functional testing of OR2M3 mutants, instead identified two positions, Cys202 5.41 and Cys203 5.42 , constituting copper-coordinating binding site 1 ('CC'). Another two positions, Cys241 6.33 and His244 6.36 , supposedly constitute copper-coordinating binding site 2 ('CSSH'). The latter has been identified as a copper-coordinating site also in OR2T11 ('CSSHL') [31]. In the same study, another copper-coordinating site has been proposed in the cytoplasmic regions of TMH2 and TMH4 of OR2T11 [31]. In our study, the presence of two copper-binding sites in OR2M3 is corroborated by steeper concentration-response curves of its agonist 3-mercapto-2-methylpentan-1-ol in the presence of copper, with a Hill coefficient of 1.9, suggesting their cooperativity [108]. The observation that copper concentration-dependently inhibited its potentiation of a 3-mercapto-2-methylpentan-1-ol-dependent activation of OR2M3 may be due to the ability of copper to coordinate the thiol, removing it and making it unavailable to activate the receptor. Copper-coordinating site 2 ('CSSH') in our model overlaps with the 'ionic lock' region at the cytoplasmic end of TMH6 of GPCRs, involved in G protein interaction (for review, see [109,110]). Also in ORs, this motif has been suggested as a zinc-binding motif, involved in G protein interaction [34]. GPCRs are allosterically modulated receptors [111,112], often displaying constitutive activity [113,114]. For the first time, here, we show that copper concentration-dependently inhibited a constitutive activity of OR2M3 in the absence of ligand. This suggests that copper acts as an inverse agonist on OR2M3. To validate an inverse agonist action of copper on ORs, further studies, using the [35S]GTPγS binding assay [115], may reveal the effect of copper on ORs' constitutive activation of their heterotrimeric G protein. Our results also support the notion of an allosteric, ligand-independent interaction of copper with site 2 ('CSSH') in this receptor. Our findings are in line with reports on copper and other transition metals as dynamic allosteric regulators of protein function at external allosteric sites [42]. Knowledge of the protein structure of a receptor is critical for an understanding of its ligand interactions. However, no high-resolution crystal structure of an OR has so far been reported. Given that ORs have only about 25% sequence identity with class A GPCRs, homology modeling of ORs may be of limited informative value. Nevertheless, the strategy of combining site-directed mutagenesis, functional experimental analysis, in silico homology modeling, and docking simulations has proven successful in uncovering mechanisms of odorant/receptor interactions and OR structure-function relationships [10,30,31,94,96,98,107,116-118]. A phylogenetic analysis will, therefore, add information on the relevance of conserved amino acid positions or motifs in ORs [96,119-122].
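The Hill coefficient of 1.9 cited above comes from fitting concentration-response data to a Hill-type equation. The following sketch shows such a fit on synthetic data; the concentration range, noise level, and parameter values are assumptions and do not reproduce the paper's measurements.

```python
# Minimal sketch (synthetic data): fitting a Hill equation to a concentration-
# response curve to obtain EC50 and the Hill coefficient n, whose value near 2
# was taken above as evidence for cooperativity of the two copper sites.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    return bottom + (top - bottom) * c**n / (ec50**n + c**n)

conc = np.logspace(-7, -3, 9)                        # mol/L, assumed test range
resp = hill(conc, 0.0, 1.0, 3e-5, 1.9)               # synthetic "measurements"
resp += np.random.default_rng(1).normal(0, 0.02, conc.size)

popt, _ = curve_fit(hill, conc, resp, p0=[0, 1, 1e-5, 1], maxfev=10000)
print(f"EC50 = {popt[2]:.2e} mol/L, Hill coefficient = {popt[3]:.2f}")
```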
In the present study, our phylogenetic analysis demonstrates that at least the cysteine and histidine of site 2 of the putative copper-binding motif ('CxxH') are conserved in all ORs. The entire 'CSSH' motif, however, is highly conserved in human family 2 ORs, and 100% conserved only within subfamilies 'M, T, V' of family 2 ORs, which harbor the closest human homologs of OR2M3 [6], and of OR2T11, the only other copper-sensitive human receptor reported, so far. In contrast, both cysteines of site 1 are not conserved over family 2 ORs or all human ORs. However, both sites together are 100% conserved only in human receptors of subfamilies 'M, T, V' of family 2 ORs, but also in their orthologs from, e.g., chimp, mouse, or cow. Our phylogenetic and mutational analysis, and our homology modeling/ docking studies, altogether suggests that the entire motif ('CC'/'CSSH') is necessary for a potentiating effect of copper, and predicts members from at least these three subfamilies of human ORs to be narrowly tuned, thiol-specific, and copper-modulated receptors. Further experiments are needed to identify the ligands for at least all family 2 ORs, and to clarify whether a copper-sensitive, specific detection of thiol odorants is idiosyncratic to human subfamilies M, T, and V of family 2 ORs, and their orthologs.
Recently, however, an enhancing effect of copper on the odorant activation of mouse receptors Olfr1509, Olfr1508, and Olfr1019 has been demonstrated [30][31][32], albeit these receptors lack site 1 (in our model: Cys202/Cys203 in TMH5, 'CC') and possess only a 'CxxH' site 2. Here, other QM/MM-and site-directed mutagenesis-based copper-coordinating amino acids have been proposed. The corresponding human orthologs are from families 4 and 5 of ORs, suggesting that different copper-binding sites within ORs may have developed in different phylogenetic clades.
Here, we show that the specific thiol function of human OR2M3 is modulated by copper ions. Our homology modeling/docking studies together with receptor functional expression studies suggest that this copper sensitivity is mediated by two copper-binding sites within narrowly tuned OR2M3. This putative copper-binding motif is exclusively found in subfamilies M, T, and V of family 2 ORs, and appears to be conserved across their mammalian orthologs, suggesting a conserved copper-sensitive and specific thiol function of these receptors. | 11,447.8 | 2019-08-21T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Fluoride Coatings on Magnesium Alloy Implants
After several years of research and development, it has been reported that magnesium alloys can be used as degradable metals in some medical device applications. Over the years, fluoride coatings have received increasing research attention for improving the corrosion resistance of magnesium. In this paper, different methods for preparing fluoride coatings and the characteristics of these coatings are reviewed for the first time. The influence of the preparation conditions of fluoride coatings, including the magnesium substrate, voltage, and electrolyte, on the coatings is discussed. Various properties of magnesium fluoride coatings are also summarized, with an emphasis on corrosion resistance, mechanical properties, and biocompatibility. We screened experimental studies and papers addressing the application of magnesium fluoride coatings in living organisms, and selected for detailed review and classification the literature aimed at enhancing the performance of in vivo implants. We searched PubMed, SCOPUS, Web of Science, and other databases for 688 relevant papers published between 2005 and 2021 (the last 16 years), citing 105 of them. Furthermore, this paper systematically discusses future prospects and challenges related to the application of magnesium fluoride coatings to medical products.
Introduction
Recently, with the rapid increase in the number of tissue injury repair procedures, metals have been widely used for the replacement and regeneration of injured tissues owing to their high mechanical properties [1]. Their common applications include scaffolds [2,3], bone plates [4,5], bone nails [6], wound closing devices [7], artificial joint prostheses [8], and guided tissue/bone regeneration membranes [9]. Nonbiodegradable metals used in traditional metal implants include stainless steel, titanium, and cobalt-chromium alloys [7,10]. Despite their excellent biocompatibility and mechanical properties, these implants can cause inflammatory reactions because of the release of toxic ions and often require secondary surgical removal [11,12]. Moreover, the stress shielding effect of conventional bone implants often impedes healing because of the disparity in elastic modulus between the conventional metals and bone [10].
Fortunately, as a biodegradable metal, magnesium is preferred because it is a biologically essential trace element with an elastic modulus similar to that of bone, so implants can support fracture healing without the need for secondary surgical removal [13]. The ideal clinical biodegradable metal must be suited to reconstruction of the injured tissue under biologically nontoxic preconditions, providing absolute mechanical protection in the early stages and gradually degrading at an acceptable rate as the tissue heals [7]. Despite the developments in research on magnesium alloys over the past decades, clinical studies on magnesium alloys can be traced back to 1878, when Edward C. Huse first used magnesium wires to ligate blood vessels [14]. Nevertheless, the current bottleneck limiting the clinical application of magnesium is its extremely rapid degradation rate in vivo, which may result in the accumulation of local gas pockets, an alkalinization effect, an increase in osmotic pressure, and even a rapid decrease in the mechanical strength of the implants [2,15]. Currently, there are two ways to control the degradation rate of magnesium: composition modification and alloy surface treatment. The properties of magnesium alloys can be influenced by changing the amount and percentage of alloying elements, such as Al, Li, Ca, Y, Mn, Zn, Zr, and rare earths [16]. The ideal magnesium alloy coating has properties such as corrosion resistance, degradability, and biocompatibility for clinical applications [17]. Surface modifications are classified according to the method of coating preparation, which includes mechanical [18], physical [19], chemical [20], and biological or biomimetic approaches. A chemical coating is formed by the reaction between the magnesium substrate and the coating solution, which makes the coating strongly bonded to the substrate [21]. Since its formation is based on chemical reactions, it is more sensitive to thermodynamics and kinetics [17]. Typical chemical coating techniques include chemical conversion, plasma electrolytic oxidation (PEO), thermal treatment, and electrodeposition [15]. Among them, chemical conversion is often used as a pretreatment [12]. PEO, also known as microarc oxidation (MAO), uses plasma arc discharge at the substrate/electrolyte interface to react with the electrolyte and sinter the substrate surface to form a coating [10]. The PEO layer is usually more stable than the chemical conversion layer, but its porous surface may lead to pitting corrosion [12]. The fluoride coating, tightly bonded to the substrate and insoluble in water, is formed via chemical reactions between fluorine and magnesium by the specific methods listed above. The main degradation products, Mg 2+ and low concentrations of F − , have both been shown to enhance osteogenesis [5,22]. Furthermore, F − ions have been proven to have antibacterial properties in dentistry [23]. As a burgeoning coating, the fluoride coating has been validated to improve the corrosion resistance of magnesium to a certain extent while also meeting the requirements of an ideal coating, such as self-degradability and biocompatibility, making it a promising coating [24-27].
Currently, there is only one review of immersion fluoride conversion coatings for medical magnesium alloys [28]; no review covering fluoride coatings more broadly is available. Therefore, this paper reviews the advances in fluoride coatings for medical magnesium alloys, with the aim of discussing the pros and cons of existing fluoride coatings from the perspectives of preparation methods, coating structures and properties, and challenges and suggestions for further research.
Growth of Fluorinated Coatings
The fabrication of a dense, homogeneous, and biocompatible fluorinated coating on the surface of magnesium alloys by chemical conversion, that is, HF acid immersion treatment, is a widely used approach to enhance the corrosion resistance of magnesium alloys [29,30].
Subsequently, Mg 2+ reacts with F − and OH − in the solution to form a compound on the surface of the substrate (equations (4)-(6)). Because Mg(OH) 2 is extremely unstable under acidic conditions, it can undergo an exchange reaction (equation (7)), in which the OH − within the Mg(OH) 2-x F x coating is replaced by F − (Figure 1) [31]. The above reaction was also accelerated by increasing the concentration of HF in the conversion solution [31].
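Equations (4)-(7) referenced above are not reproduced in this excerpt. A plausible reconstruction, assuming the standard film-forming and hydroxide-to-fluoride exchange chemistry described in the text (the numbering and exact stoichiometry are assumptions), is:

$$\mathrm{Mg^{2+} + 2\,OH^{-} \rightarrow Mg(OH)_{2}}$$
$$\mathrm{Mg^{2+} + 2\,F^{-} \rightarrow MgF_{2}}$$
$$\mathrm{Mg^{2+} + (2-x)\,OH^{-} + x\,F^{-} \rightarrow Mg(OH)_{2-x}F_{x}}$$
$$\mathrm{Mg(OH)_{2-x}F_{x} + (2-x)\,HF \rightarrow MgF_{2} + (2-x)\,H_{2}O}$$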
Fluoride coatings have received increasing attention in recent years, particularly methods such as immersion fluorination, microarc fluorination [32], and ultrasonic immersion fluorination [24]. Composite fluoride coatings derived from the abovementioned methods, such as hydroxyapatite/magnesium fluoride composite coatings [33], fluoride-treated and sol-gel film composite coatings [34], and composite coatings with fluoride as a pretreatment, an electrolyte, or an additive [35-41], as well as other composite and multilayer coatings, are not discussed in detail in this paper because there are no strict standards for their conceptual classification.
Anodic Fluorination.
Anodic fluorination is the replacement of the normal electrolyte with an electrolyte containing fluorine on the basis of anodic oxidation. Anodic fluorination uses the metal as an anode and forms a porous coating on the metal surface by electrolytic oxidation. After AF treatment, the surface of the sample shows a coral-like and shale-like morphology. Compared with untreated specimens, the treated specimens showed better corrosion resistance. A better coating impedance effect appeared at relatively low voltages, which is consistent with the experimental expectations. In the low-voltage treatment group, the corrosion resistance decreased in the order AF10 > AF30 > AF20. The samples treated at 10 V showed the lowest current density and a relatively high corrosion potential. The thickness of the magnesium fluoride film increases with increasing voltage, reaching a peak at AF60. However, the bond between the coating and the substrate is not strong enough, and the coating tends to peel off as the coating thickness increases. Therefore, the samples treated at 10 V have the best corrosion resistance [32].
As with microarc fluorination, the thickness and microstructure of the coating can be changed by varying the applied voltage under fixed electrolyte conditions. Anodic fluorination is also more environmentally friendly and economical than microarc fluorination because of the lower applied voltage and lower electrolyte concentration [26].
Immersion Fluorination.
Immersion is a popular technique for preparing coatings. The desired properties can be obtained by modifying the composition of the deposited layer. The traditional method of immersion fluorination involves immersing magnesium in a certain concentration of HF solution at a specific temperature for a certain amount of time before removing it [42]. Table 1 shows the characteristics of different HF-coated magnesium alloys prepared under various parameters. A thin fluoride film with MgF 2 as the main component was formed on the surface of the magnesium alloys [43]. The coating obtained by immersion fluorination can effectively decrease the degradation rate of magnesium alloys in vivo. Meanwhile, the coating showed good biocompatibility [44-46]. This method is appreciated for its simplicity, low cost, and easy control, although the coating formed is loose and porous and may easily peel off [47].
Under different treatment conditions, the coating obtained may comprise Mg(OH) 2 , MgF 2 , and other substances, as shown in equations (8)-(10). Among them, the hydroxide in the coating was shown to have a negative impact on the corrosion resistance of the coating [54]. Since equations (8) and (9) have similar thermodynamic tendencies, it is assumed that both reactions occur spontaneously and simultaneously when magnesium alloys are in contact with water and hydrofluoric acid. Thus, the rate of each reaction depends on the HF concentration [42]. When the acid concentration is too low, the Mg(OH) 2 level in the coating becomes so high that the coating loses its protective effect. According to equation (10), a higher concentration of HF converts the Mg(OH) 2 generated during the treatment into MgF 2 . Barajas et al. [31] found that the layer obtained at 10 vol% HF (approximately 2.2 μm) was thicker and had a higher F content than that obtained at 4 vol% HF (approximately 1.9 μm). The authors concluded that although the 10% HF-treated coating had more cracks, the higher F content of the coating might be a reasonable explanation for its better corrosion resistance. However, an overly high concentration of HF may lead to thinning of the coating, which may be attributed to the dissolution rate of the magnesium substrate being faster than the generation rate of the conversion coating [55]. Additionally, treatment time has also been proven to affect the coating thickness [31,44], which in turn affects the corrosion resistance of magnesium alloys by changing the probability of defects or "active spots" on the surface of the magnesium substrate, that is, the number of through-holes [56]. Usually, the thickness curve rises with treatment time and eventually flattens out (Figure 2) [31], but the formation rate of the coating gradually decreases, which may be related to the thickening of the coating preventing the HF from reacting with the internal magnesium [57]. da Conceicao et al. [54] treated AZ31 with HF acid over a concentration gradient from 12 to 49 vol%. It was found that the coatings of the samples treated with high concentrations of HF acid were thinner and formed more slowly than those treated with low concentrations, leading to lower corrosion resistance due to the slower formation rate. Treatment at low concentration for a long time resulted in a high hydroxide content in the converted layer, explaining the low corrosion resistance of the 12 vol% HF-treated coatings. Nevertheless, some studies have shown that magnesium alloys after alkaline pretreatment, which commonly refers to the reaction of magnesium alloys with high concentrations of NaOH to produce Mg(OH) 2 , can develop thicker MgF 2 coatings than those without the alkaline pretreatment [3,46,50]. Furthermore, the crystalline phase on the alloy surface is also an essential factor affecting the formation of fluorinated magnesium coatings. Casanova et al. [58] analyzed the morphology of fluoride conversion coatings synthesized on Elektron 21 and AZ91D alloys. In a hydrofluoric acid (HF) solution, AZ91D and Elektron 21 alloys form MgF 2 coatings, which provide good corrosion protection, and the presence and nature of different intermetallic phases play a pivotal role in coating growth. The preferential dissolution of the reactive β-phase (Mg 17 Al 12 ) in the AZ91D alloy promoted the growth of MgF 2 coatings.
Conversely, the microstructure of the Elektron 21 alloy was more homogeneous, and the intermetallic Mg 12 (Nd x Gd 1-x ) phase remained stable, allowing the formation of continuous and homogeneous coatings. The microstructure of fluoride conversion coatings on AZ31 in relation to the degradation mechanism was studied by Barajas et al. The authors performed chemical conversion coating of the AZ31 Mg alloy at 4 and 10% HF concentrations with immersion times of 24-168 hours [31]. During the conversion process, most of the metal particles in the α-Mg matrix on the surface of AZ31 were dissolved, but SEM observation revealed undissolved metal particles, and EDX analysis showed that these corresponded to rare earth-containing dispersoids (La, Ce, and Nd). This may be due to the fact that Al is active under HF treatment conditions, and the phase that dissolved preferentially during the conversion process was the Al x Mn y particles.
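Equations (8)-(10), referenced in the discussion of immersion fluorination above, are likewise not reproduced in this excerpt. A plausible reconstruction, assuming the standard reactions of magnesium with water and hydrofluoric acid and conversion of the hydroxide into the fluoride (the numbering is an assumption), is:

$$\mathrm{Mg + 2\,H_{2}O \rightarrow Mg(OH)_{2} + H_{2}} \qquad (8)$$
$$\mathrm{Mg + 2\,HF \rightarrow MgF_{2} + H_{2}} \qquad (9)$$
$$\mathrm{Mg(OH)_{2} + 2\,HF \rightarrow MgF_{2} + 2\,H_{2}O} \qquad (10)$$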
Ultrasonic Immersion Fluorination.
One of the surface improvement methods that is both effective and environmentally friendly is HF treatment. It enhances the corrosion and abrasion resistance of magnesium alloys by forming a thin and uniform fluoride coating that adheres to the alloy surface. However, this method is not applicable to clinical settings. Previous studies have shown that ultrasonic treatment during fluoride coating can improve the corrosion resistance of Mg alloys and produce denser and smoother coatings; this method is called ultrasonic immersion fluorination [24].
When immersion is performed in a 28 kHz ultrasonic environment, the coatings produced by HF and UHF treatment are identical in thickness and composition; however, UHF reduces porosity and cracking, exhibiting better corrosion resistance. The electrochemical tests showed that the UHF coating had the highest electronic impedance and corrosion potential, as well as the lowest corrosion current density. Similarly, the mass loss test showed that the UHF-coated alloy exhibited a lower mass loss than the HF-coated and bare samples. Therefore, ultrasonic fluoride-coating treatment of magnesium alloys is promising for biomaterials in various medical applications [24].
Lellouche et al. reported on the antimicrobial and antibiofilm activities of nanosized magnesium fluoride (MgF 2 ) nanoparticles (NPs) synthesized in an ionic liquid using microwave chemistry [59]. Nanosized MgF 2 NPs have also been obtained by a water-based synthesis using ultrasonic immersion: ultrasonic chemical irradiation of aqueous solutions of [Mg(Ac) 2 ·(H 2 O) 4 ] containing hydrofluoric acid resulted in well-crystallized, spherical MgF 2 NPs. Antimicrobial properties against two common bacteria (Escherichia coli and Staphylococcus aureus) were greatly improved. Using the ultrasonic chemical process described, a glass surface was coated, and the ultrasonically prepared magnesium fluoride crystals were shown to have an inhibitory effect on bacterial colonization within seven days.
MAF.
The treatment of magnesium alloys in highly concentrated fluoride solutions using the microarc oxidation technique, also known as MAF, has the advantages of a short treatment time and almost no crack formation. For MAF, ammonium hydrogen fluoride and hydrofluoric acid are selected as the electrolytes. The higher the concentration of fluoride ions in the electrolyte, the more corrosion-resistant the fluoride coating; thus, a high concentration of HF (46%) is preferred as the electrolyte [26,27]. In the electrolyte, a current is applied at a constant voltage for a very short duration, using the magnesium alloy as the cathode and a graphite rod as the anode. According to the electrochemical and immersion tests, the best stability and corrosion resistance of the fluoride coating are achieved at 200 V, while too high a voltage leads to flaking of the coating [25]. The coatings prepared by MAF are dense but porous, with MgF 2 as the main component; further, the corrosion resistance of the alloy is determined by factors such as the pore size and surface roughness of the coating. Compared to HF and UHF, the coating structure produced by MAF is much denser and forms a coral-like structure on the surface of the alloy, resulting in a higher surface roughness that is proportional to the voltage [32]. Cell proliferation was significantly more enhanced on the treated samples than on the bare Mg alloys [25,60].
Properties
Magnesium alloys show promise for biomedical applications owing to their biodegradability [61], Young's modulus similar to that of bone, good biocompatibility, and osteogenesis. The ideal magnesium alloy implant maintains mechanical integrity during early implantation, provides absolute support, and eventually degrades as the bone defect or fracture is repaired, without the requirement for secondary surgical removal [62]. In particular, magnesium, known to be one of the most essential substances in the human body, exists in human bone and soft tissue without obvious toxicity [25,63-67], and excess magnesium is easily excreted. The extremely high rate of magnesium alloy degradation in humans, however, severely limits their clinical applications. Based on the different properties of magnesium alloy fluoride coatings, the following is a comprehensive review of the effect of fluoride coatings on magnesium alloys, regardless of the preparation technique and experiment type (in vivo/in vitro). We hope to offer some valuable suggestions for improving the corrosion resistance, mechanical properties, substrate bonding strength, biocompatibility, bone integration and osteogenic activity, and antimicrobial properties of magnesium alloy fluoride coatings.
Corrosion Resistance.
Poor corrosion resistance is a significant issue for magnesium implants. Under electron microscopy, fluoride films are seen to consist of fine particles, which mitigate defects such as voids and cracks on the metal surface.
Thus, fluoride coatings can improve corrosion resistance through surface modification. Fluoride coatings on magnesium alloys demonstrated excellent corrosion resistance in in vitro immersion experiments. Li et al. [68] made screws and tensile specimens from magnesium alloys as substrates and used HF to obtain HF-coated magnesium alloy samples. After immersing the HF-coated and bare magnesium alloy samples in a simulated body fluid (HBSS), the immersed screw samples were subjected to scanning electron microscopy (SEM) (Figure 3) [68] and mass loss measurement. The calculations showed that the corrosion rate of the coated screw samples was only one-quarter that of the uncoated samples because of the protection of the uniform and dense MgF 2 coating. They also performed tensile tests and corrosion rate tests on tensile specimens after immersion, and the MgF 2 -coated samples showed a lower pitting corrosion rate than the bare samples, resulting in good mechanical properties even after one month of immersion.
In addition to HF-coated magnesium alloys, varying the parameters of different surface modification methods can also affect the corrosion resistance of magnesium alloys by changing the coating characteristics (Table 2). The majority of the findings indicate that the electrical parameters have the greatest influence on the coating morphology and phase composition [72-80]. Heydarian et al. [70] used a magnesium alloy as the substrate; the coating generated at a high voltage maintained corrosion resistance for 28 days without significant substrate corrosion. The study also reported that applying higher voltages was more conducive to increasing the coating thickness, and that the further incorporation of fluoride resulted in an increase in the MgF 2 content in the inner layer of the coating, which contributed to the formation of coatings with stronger barrier properties.
Anodic polarization experiments were performed on untreated AZ31 magnesium alloy and on AZ31 treated by immersion in 4 and 10 vol% HF, and the electrochemical parameters were extracted, as shown in Table 3 [31]; the majority of the coatings provided a protection range (Epit − Ecorr) to the metal substrate, and the fluoride coatings reduced the corrosion current density and enhanced the corrosion resistance of the alloy. Compared to previous studies [26], Dai et al. [32] used a low-voltage fluorination method to obtain coatings with controlled corrosion rates under safer conditions. This further confirms that MAF technology still has broad application prospects and research value for magnesium alloy coatings. The above results show that the operating voltage has a significant influence on coating thickness.
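For readers who want to relate the corrosion current densities discussed here to degradation rates, the sketch below applies the standard Faraday's-law conversion (ASTM G102 form) for magnesium; the icorr value in the example is an arbitrary placeholder, not a value reported for any of the coatings above.

```python
# Minimal sketch: converting a measured corrosion current density into a
# corrosion rate via Faraday's law (ASTM G102 form).
def corrosion_rate_mm_per_year(icorr_uA_cm2, equiv_weight_g=12.15, density_g_cm3=1.74):
    """K = 3.27e-3 mm·g/(µA·cm·yr); defaults are for pure magnesium (Mg -> Mg2+ + 2e-)."""
    K = 3.27e-3
    return K * icorr_uA_cm2 * equiv_weight_g / density_g_cm3

print(corrosion_rate_mm_per_year(10.0))   # e.g. 10 µA/cm2 -> ~0.23 mm/yr
```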
Additionally, the preparation of fluoride coatings in an ultrasonic environment shows promise in the medical field. Sun et al. [24] treated an AZ31 magnesium alloy with HF in a 28 kHz ultrasonic environment. The ultrasonic treatment allowed hydrogen to escape during coating, resulting in a reduction in scratches and microporosity, as well as a significant increase in the corrosion resistance of the HFU coating over the HF coating. The electrochemical corrosion test results are represented by the curves shown in Figure 6 [24]. The HFU coating had the lowest corrosion current density, highest corrosion potential, and highest electronic impedance, and showed a noticeably higher corrosion resistance in the mass loss tests.
In summary, the formation conditions of the fluoride coating, such as voltage, current, and external conditions, determine its characteristics and corrosion resistance.
Mechanical Property.
Magnesium alloys must have mechanical properties that meet the demands of the bone-healing process in the human body during degradation. The mechanical properties of medical magnesium alloy implants are critical for the success of fracture fixation and cardiovascular surgery [63]. The more widely used metal implants, such as titanium, stainless steel, and cobalt-chromium alloys, require secondary surgical removal. Their higher Young's modulus leads to a mechanical mismatch between the bone and implant, triggering a stress shielding phenomenon, which causes resorption of the surrounding bone [81]. Compared to polymeric materials, magnesium alloys have better mechanical properties and a Young's modulus (44 GPa) closer to that of natural human bone (7-25 GPa) [82]. The protective effect of fluoride coatings on the mechanical properties (compressive, tensile, and bending properties) of biodegradable magnesium alloys in recent years is reviewed as follows [83]. Drynda et al. [84] characterized such a fluoride conversion layer: due to the small crack size (width < 10 μm; length < 250 μm), no large tensile stress is generated, and the Mg(OH) 2 formed is relatively dense, which can separate the magnesium alloy from the electrolyte and delay the corrosion process.
Dvorsky et al. [52] measured the compressive, tensile, and flexural properties of different magnesium-based materials after HF treatment, as shown in Figure 7 [52]. The mechanical properties of pure magnesium samples improved after fluorination, with the best mechanical properties achieved after 24 h of fluorination. The MgF 2 coating formed after 1 h of immersion was thin and provided only a slight improvement in the mechanical properties; after 96 h of immersion, the coating was thicker, and brittleness increased. For the WE43 magnesium alloy, the exact opposite result was observed: a significant deterioration of the mechanical properties as the immersion time increased, which could be related to the inhomogeneous fluoride layer and the YF 3 phase. Therefore, the interaction between the substrate and the coating is also a vital factor affecting the mechanical properties.
Li et al. [68] compared the mechanical properties of Mg-Zn-Zr (MZZ) alloy samples, before and after the fluorination treatment, after immersion in SBF solution for various durations, as shown in Figure 8 [68]. The yield strength (YS), ultimate tensile strength (UTS), and elongation (EL) of the fluorinated samples were much higher than those of the bare samples from day 3 to day 20 of immersion, whereas the maximum corrosion rate (CRmax) of the coated samples was only approximately 50% that of the bare samples. These results indicate that the MgF 2 coating can mitigate the effects of pitting corrosion on the magnesium matrix and contributes to maintaining mechanical integrity.
Bonding with the Substrate.
The prerequisite for a qualified coating to perform its surface functions is sufficient bonding to the substrate; one study compared bonding strengths, as shown in Figure 9 [85], and confirmed that the highest bonding strength was achieved at 50 s. Dai et al. [32] prepared fluoride coatings on magnesium alloy substrates. The surface morphologies of the generated coatings were compared at different voltages, and SEM images were obtained, as shown in Figure 10 [32]. Large areas of coating peeling appeared on the surface of the samples at voltages higher than 50 V. As stated in the study, the release of the plasma causes microporosity on the surface of the coating, leading to a coral-like appearance, which is required for the adhesion of the coating to the substrate. Excessively high voltages can roughen the coral-like structure, reducing adhesion. Furthermore, Heydarian et al. [70] used the PEO technique to treat AZ91 magnesium alloy in an aluminate electrolyte. Comparing the magnesium fluoride coatings prepared at different voltages and observing the denseness and peeling of the coatings under SEM confirmed that the voltage significantly affects the bond strength of the coating to the substrate. Consequently, it is possible to control the conditions during fluorination treatment to obtain fluoride coatings with better bond strength, which will have tremendous significance in clinical applications.
Biocompatibility.
Magnesium alloy has good biocompatibility as a medical implant material [22,68,86-88]. The fluorine coating degrades and releases fluoride ions to surrounding tissues. A moderate amount of fluoride promotes tooth and bone growth and healing. In contrast, excessive fluoride in the body can lead to dental and skeletal fluorosis, affect the intellectual development of adolescents and the function of endocrine glands, and damage the gonads and other soft tissues such as the heart, liver, lungs, and kidneys. Therefore, when using fluoride as an implant coating, the advantages of fluoride in enhancing bone quality and accelerating calcification should be exploited as much as possible while avoiding any harm to the body. Extensive in vivo and in vitro experiments lay the foundation for the clinical application of magnesium fluoride implants.
The MgF 2 -coated alloy improves its corrosion resistance while remaining non-cytotoxic, favoring cell adhesion and proliferation, and not causing inflammation. In vitro cytotoxicity tests confirmed that the fluoride-coated AZ31B alloy is not toxic to human bone marrow mesenchymal stem cells (BMMSC) [49]. Jo et al. [89] performed an in vitro cellular response examination of preosteoblasts using cell proliferation assays and alkaline phosphatase (ALP) assays, indicating that hydroxyapatite coatings with MgF 2 as an intermediate layer also enhanced the level of cell proliferation and differentiation. HA/MgF 2 -coated magnesium had higher corrosion resistance than bare magnesium, as well as a higher bone-to-implant contact (BIC) ratio in the cortical bone region of the rabbit femur at 4 weeks after implantation. Durisin et al. [90] observed only nonspecific inflammation and mucosal thickening in an in vivo study using a novel magnesium alloy scaffold placed in the paranasal sinus, confirming that the Mg-2 wt% Nd alloy scaffold coated with MgF 2 has excellent biocompatibility while retaining functionality. These advantages make MgF 2 -coated magnesium alloys promising for long-term therapeutic applications in various medical fields. Regarding in vivo experiments, Carboneras et al. [91] observed the performance of nasal MgNd 2 implants coated with MgF 2 over 6 months and found slow, histocompatible degradation of the implants without repeated bacterial infections. Drynda et al. [84] observed the biocompatibility of fluoride-coated magnesium-calcium alloy scaffolds in a subcutaneous mouse model; none of the samples showed tissue inflammatory reactions or extensive proliferative effects compared to bare magnesium implants, while corrosion resistance in vivo was improved, suggesting that magnesium fluoride coating may be a good strategy to reduce biodegradation of magnesium-based alloys.
Bone Integration and Osteogenic Activity.
Jiang et al. [22] prepared an MgF 2 coating on an Mg-Zn-Zr alloy and implanted it into the femoral condyles of rabbits. The changes in corrosion resistance, biocompatibility, and osteogenic activity of the coated alloy were observed at the histological and micromorphological levels. It was concluded that the MgF 2 coating was effective in reducing the rate of in vivo degradation of the Mg-Zn-Zr alloy. The bone tissue and mineral content gradually increased, demonstrating that the MgF 2 /Mg-Zn-Zr alloy promotes the formation of new bone on the alloy surface in vivo. Furthermore, the biological properties of the coating exhibited excellent biocompatibility and bioactivity.
Sun et al. [92] conducted a similar study, coating degradable Mg-3Zn-0.8Zr cylinders with a Ca-P layer or an MgF 2 layer; an uncoated Mg-3Zn-0.8Zr alloy was used as the control group. The specimens were implanted in the bone marrow of white rabbits. During postoperative observation, SEM results showed a large number of cells, ample fibrillar collagen, and Ca-P products on the surface of the MgF 2 -coated implants. Micro-CT results revealed a slight decrease in volume (23.85%) and an increase in new bone volume (new bone volume fraction of 11.56% and tissue mineral density of 248.81 mg/cm 3 ) for the MgF 2 -coated implants after 3 months when compared to the uncoated and Ca-P composite-coated implants. As the samples degraded, new bone trabeculae gradually formed, which was associated with a large number of active osteoblasts and osteocytes. The arrangement of newly formed bone trabeculae in the MgF 2 -coated samples (Figure 11(f)) [92] was much greater and more compact than in the rest of the specimens.
The bone trabeculae were well-structured and largely consistent with the original bone, which is in full accordance with Parfitt's study on the morphology of bone remodeling units [93].
Sun et al. [6] implanted fluorine-coated AZ31B magnesium alloy screws in rabbit mandibles and femurs and discussed how fluorine coating enhances corrosion resistance and promotes bone formation of AZ31B magnesium alloy at the histological and immunohistochemical levels ( Figure 12) [6]. Fluorine coating has been shown to enhance the corrosion resistance and bone formation of AZ31B magnesium alloy by upregulating type I collagen and BMP-2 expression (BMP-2 stimulates osteoclast differentiation and participates in bone tissue reconstruction) [94]. Nevertheless, due to the short observation time and complexity in vivo, further studies are required to clarify the exact mechanism by which degradation products affect osteogenesis.
Antibacterial Properties.
When fluoride-coated magnesium alloys are used as surgical implants, the antibacterial requirements of the implants are strict because of the complexity of antibiotic treatment and wounds and the repetitive nature of surgery [95,96]. There are various methods to improve the antimicrobial properties of the surface, such as reducing the formation of surface biofilm through the coating properties or a special coating space structure, and adding antimicrobial elements to the coating to change the environment or physiological function of bacteria. Antimicrobial research on fluorinated coatings mainly focuses on the porous structure, which changes the surface pH, and on antimicrobial fluoride release.
Ren et al. [97] investigated the behavior of pure Mg and AZ31 alloys against Escherichia coli and Staphylococcus aureus with and without surface coatings. Their paper focuses on the effects of surface pH, porosity, cracking, and coating density on surface antibacterial ability. The antimicrobial ability of pure Mg is high because of its very rapid degradation, resulting in a significant increase in the surrounding pH to 10. Alkaline environments are not conducive to the growth and reproduction of Escherichia coli and Staphylococcus aureus. Robinson et al. [98] suggest that the degradable nature of Mg in physiological solutions causes a rapid increase in the Mg 2+ concentration and the pH of the solution, with the latter supposedly being the cause of the bacterial inhibitory effect of Mg. Interestingly, if the Mg-based metal surface is covered with a porous layer, not only can a relatively low degradation rate be obtained, but an antibacterial function is also acquired to some extent. However, for the fluorine-containing coatings on pure Mg and AZ31 alloys, the antimicrobial capacity is lost because the surface coating is too dense, slowing down the release of Mg 2+ and leaving the pH of the surrounding tissue almost unchanged.
Due to the degradable nature of fluoride coatings, their fluoride-releasing properties are unquestionable. In oral studies, fluoride has been combined with other substances to release fluoride to improve its antimicrobial properties and prevent secondary caries. Although there are few studies related to the antimicrobial properties of MgF 2 coatings associated with magnesium alloys, the study of fluoride releases to improve antimicrobial properties can provide a reference for the antimicrobial properties of fluorinated coatings. Zheng et al. [99] combined zirconia nanoparticles with fluorine (F-ZrO 2 ) and investigated the effect of fluorine content on surface colonization. As shown in Figure 13 [99], the number of colonies decreased significantly with the addition of fluorine, indicating that F-ZrO 2 has a significant antibacterial effect on Streptococcus pyogenes. Since the 20th century, fluoride has been shown to reduce the acid resistance of bacteria [100], and the application of fluoride-releasing materials has become a way to apply fluoride topically.
Challenges and Perspectives
Magnesium-based materials are limited in clinical applications because of the progressive decrease in mechanical properties caused by their fast degradation rate in the body fluid environment. Fluorination techniques are currently the most efficient and feasible solution for the surface modification of magnesium alloys. Although many studies have been reported on the use of magnesium and its alloys, more extensive studies are still necessary to better evaluate the potential of fluoride coatings. The mechanical properties of magnesium materials, as well as the changes in their corrosion resistance, must be thoroughly evaluated. Further optimization of corrosion-resistant fluoride coating technology is also a subject for further research. In addition, the effects of elemental fluorine entering human tissue fluids on biological organisms require extensive research data to support their safety.
Data Availability
All data, figures, and tables in this review paper are labeled with references.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors' Contributions
ChuanYao Zhai, Chun Yu Dai, and Xun Lv conceptualized the study, developed methodology, were responsible for formal analysis, and wrote the original draft. Biying Shi conceptualized the study and wrote the original draft. Yu Ru Li and Di Fan contributed to data curation and wrote the original draft. Yifan Yang validated and supervised the study. Professor Eui-Seok Lee, Professor Yunhan Sun, and Professor Heng Bo Jiang validated the study, responsible for resources, supervised the study, and responsible for project administration and funding acquisition. ChuanYao Zhai, Chun Yu Dai, and Xun Lv contributed equally to this work. | 7,785.4 | 2022-03-07T00:00:00.000 | [
"Materials Science",
"Medicine",
"Engineering"
] |
IN VITRO EVALUATION OF BIOCONTROL AGENTS AND FUNGICIDES ON WOOD DECAY FUNGI-GANODERMA ASSOCIATED WITH MORTALITY OF TREE LEGUMES
An experiment was conducted to isolate a number of biocontrol agents, Trichoderma spp., from infected spawn packets of oyster mushroom at the National Mushroom Development and Extension Centre, Savar, Dhaka, Bangladesh. These biocontrol agents were used as antagonists against four wild wood decay fungi of Ganoderma, viz., G. lucidum-1, G. lucidum-2, G. lucidum-3, and G. applanatum, and two cultivated isolates, G. lucidum-4 and G. lucidum-6, under in vitro conditions. An in vitro trial of Trichoderma spp. against Ganoderma was performed by dual culture and by treating with volatile, non-volatile, and natural untreated metabolites of the biocontrol agents. In dual culture, all the Trichoderma species showed 70-100% mycelial inhibition of G. lucidum-1 and G. lucidum-2, 55.6-100% inhibition of G. lucidum-3, 20-66.7% of G. applanatum, 100% of G. lucidum-5, and 75-100% of G. lucidum-6. Effects of heat-killed extracts of Trichoderma spp. on the growth of G. lucidum-2 (wild) and G. lucidum-6 (cultivated) were also evaluated. The fungicides Bavistin and Dithane M-45 were also used to investigate the mycelial growth inhibition of Ganoderma spp.
Introduction
Tree legumes are important throughout the tropics as sources of forage, firewood, charcoal, green manure, and timber (Hughes and Styles, 1989). Ganoderma spp. are important wood-decaying fungi, occurring on conifers and hardwoods across the world. They are known as white-rot fungi, which are able to decay lignin as well as cellulose (Adaskaveg and Gilbertson, 1994). Ganoderma species cause root and stem rot diseases that result in losses of crops and trees worldwide (Miller et al., 1994). Seven-year-old trees had 10-15% mortality at moist sites due to Ganoderma lucidum (Pathak, 1986). Stressed and damaged Canary Island date palms often become afflicted by Ganoderma applanatum. Large numbers of trees have been killed in ten-year-old plantations due to Ganoderma spp. in Peninsular Malaysia (Lee, 2000). Tree mortality generally increases with time in areas where Ganoderma disease is already present. Control of root rot diseases is difficult, as the pathogens survive on woody material in the soil. Green mould disease caused by Trichoderma spp. is one of the serious problems of oyster mushroom and white button mushroom cultivation. It causes large economic losses to mushroom growers (Hatvani et al., 2007). The present investigation was carried out to evaluate the potential of fungi as biological control agents (BCA) and fungicides against Ganoderma pathogenic to tree legumes.
The cultural and microscopic characteristics of Ganoderma lucidum were determined according to Schwarze and Ferner (2003) and Fernando (2008). The efficacy of Trichoderma isolates was evaluated against Ganoderma (4 wild, 2 cultivated) by the dual culture technique as described by Dennis and Webster (1971). The pathogens were inoculated using the precolonized agar plate method as described by Forley and Deacon (1985). The effect of released volatile metabolites of Trichoderma isolates on the mycelial growth of the Ganoderma spp. was evaluated by the method described by Dennis and Webster (1971). The effect of non-volatile metabolites on the tested fungi was evaluated according to Kaur et al. (2006). The effect of natural untreated metabolites, assessed by the dipping culture disc method, was determined as mentioned by Ashrafuzzaman and Aminur (1992). Different concentrations (30, 50, and 70 ppm) of two fungicides, namely Bavistin and Dithane M-45, were used to test the mycelial growth inhibition of Ganoderma spp. on PDA medium using the food poison technique. All of the inoculated and non-inoculated plates were incubated at 28±2ºC, and the percent mycelial inhibition was calculated using the formula given by Kaur et al. (2006).
Mycelial inhibition (%) = [(C − T) / C] × 100,
where C = radial growth of control plates and T = radial growth of treated plates.
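As a quick illustration, the inhibition percentage can be computed directly from the two radial-growth measurements. The following Python sketch is not part of the original study and uses hypothetical growth values.

def mycelial_inhibition(control_growth_mm, treated_growth_mm):
    # C = radial growth of the control plate, T = radial growth of the treated plate
    c, t = control_growth_mm, treated_growth_mm
    return (c - t) / c * 100.0

# Hypothetical example: 80 mm control colony vs. 20 mm treated colony
print(f"Inhibition: {mycelial_inhibition(80.0, 20.0):.1f}%")  # 75.0%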
Inhibition of Ganoderma spp. by biocontrol agents
In vitro dual culture tests against wild Ganoderma spp. revealed that the inhibition ranges of Ganoderma lucidum-1, 2, 3 and G. applanatum were 85-100%, 70-100%, 55.6-100% and 55-67.7%, respectively, due to T. harzianum, T. koningii, T. viride (green strain) and T. viride (yellow strain) (Table 1). In the present study, Trichoderma overgrew the pathogens in some cases, which indicates the mycoparasitic nature of Trichoderma spp. Cultivated G. lucidum-4 and -6 were inhibited 75-100% by Trichoderma spp. at 7 days after incubation (Table 1). Similarly, in the dual culture technique, the maximum suppression of Ganoderma applanatum (72%) and G. lucidum (75%) over the control was noted with Trichoderma harzianum (Srinivasulu and Raghava, 2009). Idris et al. (2008) also recognized Trichoderma spp. as well-known antagonists of many plant-pathogenic Ganoderma spp. in oil palm. Trichoderma viride effectively inhibited the growth of G. lucidum under in vitro conditions (Lingan et al., 2007). Trichoderma atroviride was also consistently and highly competitive against most wood decay fungi (Schubert et al., 2008). Red root disease of rubber (Ganoderma pseudoferreum) was inhibited by Trichoderma spp. (Ogbebor et al., 2010). The mycelial growth of G. lucidum was inhibited successfully by T. viride, T. harzianum and T. virens, with 66.55%, 63.99% and 62.12% inhibition, respectively, after 96 hrs of incubation (Chakrabarty et al., 2013). It has been revealed that Trichoderma spp. coil round the hyphae of Ganoderma spp. both sparsely and intensely, followed by penetration of Trichoderma spp. into the hyphae of Ganoderma spp. and, finally, lysis of the host mycelium (Srinivasulu and Raghava, 2009).
Effect of volatile, non-volatile and natural untreated metabolites
The current study confirmed that the volatile metabolites had a fungistatic rather than a fungicidal effect. Volatile metabolites secreted by Trichoderma spp. showed a significant effect in controlling Ganoderma spp. The inhibition ranges for Ganoderma lucidum-1, 2, 3 and G. applanatum are given in Table 2. Volatile metabolites of T. viride showed greater inhibition than those of the other isolates. In the present study, the average inhibition of Ganoderma spp. by non-volatile compounds was recorded as 0-33.3%, and T. viride was found more effective than the others (Table 2). The present results are supported by earlier workers. Trichoderma viride, T. hamatum and T. harzianum were reported to be very effective in producing volatile and non-volatile metabolites against Ganoderma lucidum and G. applanatum (Srinivasulu and Raghava, 2009). Bruce et al. (2000) reported that volatile metabolites of T. viride have a significant effect on wood decay fungi. Idris et al. (2008) reported 318 isolates of Trichoderma tested against pathogenic Ganoderma. The natural untreated metabolites of Trichoderma spp. showed variable inhibitory effects on the studied organisms. T. viride (green strain) showed the maximum inhibition of the test fungi except G. applanatum (Table 2). There is a lack of information regarding the effect of natural untreated metabolites on Ganoderma spp.
During the present investigation, the aggressiveness of the Trichoderma spp. studied varied somewhat from that reported by the previously mentioned workers. This might be due to the difference in site of isolation: in the literature, Trichoderma spp. were collected from the soil rhizosphere, whereas in the present study the isolates were collected from spent mushroom compost.
Effect of fungicides on Ganoderma spp.
In vitro fungicidal effects on the studied organisms were highly significant. Bavistin showed complete mycelial inhibition of all selected organisms at 30, 50 and 70 ppm concentrations (Table 4), while Dithane M-45 was not satisfactory compared to Bavistin. The present results are in conformity with previous findings. Data were recorded after 7 days of incubation and represent the mean of three replications; values sharing the same letters do not differ significantly at the 5% level of significance; G1 = Ganoderma lucidum-1, G2 = G. lucidum-2, G3 = G. lucidum-3, G4 = G. applanatum.
It can be concluded that both the biocontrol agent Trichoderma and Bavistin were effective in controlling Ganoderma infection. Therefore, either Trichoderma or Bavistin is preferable for controlling stem and root rot of higher plants such as tree legumes.
Table 1.
In vitro mycelial growth inhibition (%) of Ganoderma spp. by four Trichoderma spp. in the dual culture technique at 32±2ºC.
Table 2.
In vitro mycelial growth inhibition (%) of Ganoderma spp. by four Trichoderma spp. at 28±2ºC due to volatile, non-volatile and naturally untreated metabolites.
Table 3.
Effects of heat-killed extracts of Trichoderma spp. on the mycelial growth of G. lucidum-2 (wild) and G. lucidum-6 (cultivated) at three different temperatures.
| 1,827 | 2017-02-27T00:00:00.000 | ["Biology", "Agricultural And Food Sciences"] |
A Switched-Capacitor Low-Pass Filter with Dynamic Switching Bias OP Amplifiers
ARTICLE INFO. Article history: Received 28 September 2017; Accepted 8 November 2017; Online 18 November 2017.
ABSTRACT. A switched-capacitor low-pass filter employing folded-cascode CMOS OP Amps with a dynamic switching bias circuit, capable of processing video signals and enabling low power consumption and operation at wide bandwidths and low power supply voltages, is proposed. In this filter, the charge transfer operations through two-phase clock pulses during the on-state period of the OP Amps and the non-charge-transfer operation during their remaining off-state period are separated. Through simulations, it was shown that the low-pass filter with an OP Amp switching duty ratio of 50 % is able to operate at a 14.3 MHz high-speed dynamic switching rate, allowing processing of video signals, with a dissipated power of 68 % of that observed in the static operation of the OP Amps with a full charge transfer operation without separation of the cycle period. A gain below -31 dB in the frequency response, which is suitable, was obtained above 6 MHz within the stop-band. Especially high attenuation at 5 MHz was achieved under the optimized condition of the OP Amp load capacitances (4 pF).
Introduction
Switched-capacitor (SC) techniques are appropriate for realizing various filters that can be integrated in monolithic ICs (Integrated Circuits) using CMOS (Complementary Metal-Oxide-Semiconductor) technology. Conventional active RC filters cannot be realized as monolithic ICs because they rely on resistors. By contrast, CMOS SC techniques, which are suitable for realizing analog signal processing ICs, hold particular promise for video-signal-bandwidth circuits because they replace resistors with switched-capacitor pairs of small capacitance. It has been demonstrated that SC techniques using CMOS operational amplifiers (OP Amps) are useful for implementing analog functions such as filtering [1][2][3][4][5]. Although CMOS OP Amps are suitable for such filter ICs, the use of several OP Amps results in large power consumption. In particular, the power consumption of OP Amps in high-speed operation becomes large because they must have wideband properties. Therefore, their use is currently limited to low-frequency passbands of at most a few hundred kHz (that is, applications of low-speed signal processing, such as analog voice signals).
Until now, several approaches have been considered to decrease the power consumption of OP Amps, including the development of ICs that work at low power supply voltages [6]. A clocked current bias scheme for folded-cascode OP Amps suitable for achieving a wide dynamic range has been proposed to decrease the power consumption of the OP Amp itself [7,8]. Because that circuit requires complicated four-phase bias-current control pulses and biasing circuits, it results in a large layout area and is not suitable for high-speed operation. A control method using power supply switching has been proposed for audio signal processing as another approach to decreasing the power consumption of OP Amps [9]. Because large capacitors are intrinsically loaded on the power supply terminals, the switching speed is limited to at most 1 MHz. Therefore, this type of control circuit is not suitable for video signal processing ICs, which are required to operate at switching frequencies above 10 MHz.
Thus, SC filters based on a dynamic switching bias folded-cascode (DSBFC) OP Amp, suitable for processing video signals, have not been developed yet.
In this paper, a configuration of an SC Butterworth low-pass filter (LPF) with DSBFC OP Amps [11] is proposed as an example application of the DSBFC OP Amp; it enables low power consumption and is suitable for achieving wide bandwidths and operation at low power supply voltages. Its performance is evaluated in terms of frequency response and power dissipation, and the effect of the OP Amp load capacitances on the frequency response is also examined. This paper is an extension of work originally presented in LASCAS 2017 [11].
SC Filter Theory
The discrete-time transfer function of a second-order SC infinite impulse response (IIR) LPF is expressed using the z-transform as

H(z) = K (1 + z^-1)^2 / (1 + b1·z^-1 + b2·z^-2).   (1)

Here, K, b_k, and z^-1 represent the gain constant, the filter coefficients in the recursive loop, and the one-step delay operation, respectively. All of the operation circuits are composed of active sampled-data processing circuits comprising a sampling circuit, switching circuits and capacitors.
SC Filter Circuit Design
A second-order IIR LPF with the Butterworth frequency characteristic was designed because it is easy to design owing to its flat gain characteristic in the passband. The Butterworth LPF is also superior to a Chebyshev filter for processing video signals owing to its ripple-free characteristic within the passband. A filter order of two was selected to achieve a gain of -30 dB in the stop-band above 6 MHz. The other design conditions were set as follows: a sampling frequency fs = 14.3 MHz, equal to four times the NTSC color sub-carrier frequency of 3.58 MHz, and a cutoff frequency fc = 2 MHz, which enable the LPF to process video signals. Under these conditions, the inverted discrete-time transfer function is given by

H(z) = -0.1174 (1 + z^-1)^2 / (1 - 0.8252·z^-1 + 0.2946·z^-2).   (2)

The circuit configuration realizing this transfer function is shown in Figure 1. To make it easy to determine the capacitance value of each capacitor, the coefficient of A is set equal to that of B. The capacitors can be divided into two groups: in one group (C, D, E, and G), charges are supplied to OP Amp 1; in the other group (A, B, and I), charges are supplied to OP Amp 2. Even if the capacitances in a capacitor group are all multiplied by a constant factor, the transfer function does not change. Therefore, the integrating capacitors B and D are selected as the reference capacitance in each group, and the coefficients of B and D are normalized to 1. At this point, every normalized coefficient of A, B, and D equals 1, because A and B have the same coefficient. In Figure 1, when the coefficients of A, B, and D are normalized to 1, the other coefficients are determined accordingly. Here, b1 = -0.8252 and b2 = 0.2946. When the smallest coefficient, I = 0.1174, is mapped to a reference capacitance of 0.5 pF, each capacitance in the SC IIR LPF IC is set in proportion to the above coefficients, as shown in Figure 1. Because the input signal should be held by a sample/hold circuit for stabilization, this sampling circuit is also included in the SC LPF. The transfer function is then multiplied by the following zero-order-hold function due to the sample-hold effect:

H0(jω) = (1 - e^(-jωTs)) / (jωTs).   (3)
Here, Ts represents the cycle period of the sampling and switching pulses. Therefore, when the function of (2) is evaluated using z = e^(jωTs), the magnitude of the transfer function of the second-order SC LPF, including the sample-hold effect, is given by (4):

|H(jω)| = |H(z)| at z = e^(jωTs), multiplied by |sin(ωTs/2) / (ωTs/2)|.   (4)
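As a cross-check of the quoted design values, the following Python/SciPy sketch (not the authors' code) derives the bilinear-transform Butterworth coefficients for fs = 14.3 MHz and fc = 2 MHz and evaluates the magnitude response of Eq. (4); the coefficients it returns agree closely with the quoted b1 = -0.8252, b2 = 0.2946 and K = 0.1174.

import numpy as np
from scipy import signal

fs = 14.3e6   # sampling frequency (four times the NTSC colour sub-carrier)
fc = 2.0e6    # cut-off frequency

# Second-order Butterworth design via the bilinear transform:
# num = K*[1, 2, 1], den = [1, b1, b2]
num, den = signal.butter(2, fc, btype="low", fs=fs)
print("K  =", num[0])
print("b1 =", den[1], " b2 =", den[2])

# Discrete-time response multiplied by the sample-hold factor |sin(wTs/2)/(wTs/2)| of Eq. (4)
f, h = signal.freqz(num, den, worN=2048, fs=fs)
mag_db = 20 * np.log10(np.abs(h) * np.abs(np.sinc(f / fs)) + 1e-12)
print("gain at 6 MHz ~ %.1f dB" % np.interp(6e6, f, mag_db))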
The theoretical frequency response including the sample-hold effect is shown in Figure 2. The SC LPF configuration was designed with reference to an SC biquad circuit with integrators. The configuration of the DSBFC OP Amp enabling low power consumption, which differs from conventional OP Amps [10], is shown in Figure 3. For the DSBFC OP Amps, the same CMOS channel width / length as in the DSBFC OP Amp of reference [10] was employed. The sampling switch was designed with a channel width / channel length W/L = 35/2.5 (μm/μm) for each of the p-MOSFET and n-MOSFET. The holding capacitor C1 has a small capacitance of 1 pF. CMOS switches with W/L = 25/2.5 (μm/μm) are turned on and off by non-overlapping two-phase clock pulses φ1 and φ2, swinging from -2.5 V to 2.5 V. These sampling and CMOS switches are designed with a balanced structure, with equal lengths and widths of the p-channel and n-channel MOSFETs composing the switches, to suppress the feed-through caused by the gate clock pulses through capacitive coupling between the gate and the CMOS-switch output terminals. The major CMOS process parameters are a gate insulating film thickness tox = 50 nm, an n-MOSFET threshold voltage VTn = 0.6 V, and a p-MOSFET threshold voltage VTp = -0.6 V.
The operation principle of this LPF is briefly described in the following. The output signal Vo1 of OP Amp 1 is obtained as the sum of an integrated version of Vin produced by a negative integrator (the D and G SC circuit with OP Amp 1), an integrated version of Vout produced by a negative integrator (the D and C SC circuit with OP Amp 1), and Vout multiplied by E/D. The output signal Vout is the sum of an integrated version of Vo1 produced by a positive integrator (the A SC circuit, B, and OP Amp 2) and Vin multiplied by I/B. In this way, Vout is fed back to the input of OP Amp 1. Vin is also integrated twice and added after being scaled down by an appropriate capacitance ratio. Through these integrations by the positive and negative integrators, additions, and feedback operations, the LPF function is achieved.
The operation waveforms of the SC LPF are shown in Figure 4 (operation waveforms of the 2nd-order SC LPF). In this SC LPF, charge transfer operations through the clock pulses φ1 and φ2 are performed during the on-state period of the DSBFC OP Amps. The off-state period TB (the remainder of the one cycle period Ts) is provided separately to realize low power dissipation for the SC LPF. An input signal is sampled during the sampling phase φSH (10 ns) and the first part of clock phase φ1, while its corresponding charge is stored on the holding capacitor C1 and transferred to the output terminal Vout, charging all capacitors. The voltage at the off transition of φSH is held on C1 during the remaining period of clock phase φ1. During the subsequent clock phase φ2, the charges of the two capacitors C and G are discharged and the charges of the remaining capacitors are redistributed. These charge transfer operations are performed during the on-state period of the OP Amps. During this period, the OP Amps are turned on by setting the bias voltage VB at a level that allows M3 and M4 to operate in the saturation region, so that they operate normally as operational amplifiers. φB is set low just before φ1 goes high.
Subsequently, φB becomes 2.5 V at the off-state transition of the OP Amps, at the same time as φ2 is switched off. During this off period TB, the OP Amps are turned off by setting VB near -2.5 V, which puts M1 in a low-impedance state and M3 in a high-impedance state. Therefore, during this off period, the OP Amps dissipate no power at all. When TB is relatively long compared to the one cycle period Ts, the power dissipation is expected to become lower than that observed in ordinary static operation of an SC LPF using conventional OP Amps. If OP Amps with half the GB (gain-bandwidth product) were used for the SC LPF in static operation (without DSB operation), the rise and fall times to the stable states of the filter output signals would increase to much more than twofold, because a slowly changing transition occurs at the end of the settling. Therefore, the expected filter performance could not be obtained with such OP Amps.
Simulation Results
The performance of the SC LPF was investigated by simulation using the SPICE (Simulation Program with Integrated Circuit Emphasis) program package. Operation waveforms for an input signal of 1 MHz with an amplitude of 0.3 V and an output load capacitance of 1 pF are shown in Figure 5. For this passband-frequency signal, an output signal of the same level as the input was obtained. The frequency response of the SC LPF is shown in Figure 6, (a) in the dynamic switching operation of the DSBFC OP Amps and (b) in the on-state (static operation) of the DSBFC OP Amps; the response obtained depends on the phase between the input signal and the sampling pulse. The response was near the theoretical one from 100 kHz up to near 5 MHz; in the high-frequency range over 6 MHz it deteriorated due to a sampling-phase effect. A gain below -33 dB was obtained above 6 MHz within the stop-band. This was superior to that in the static operation. Therefore, there is almost no gain deterioration caused by employing the DSB operation of the OP Amps. Although the stop-band gain of this second-order LPF is not low enough, as shown above, the roll-off in the frequency response is expected to become steeper by increasing the filter order, so that the stop-band gain will decrease greatly. Therefore, a wide stop-band with high attenuation will become achievable.
Power dissipation versus OP-Amp switching duty ratio with φ1 = φ2 = 15 ns is shown in Figure 7. The power dissipation of the SC LPF itself, excluding that of the external drive circuits for φSH, φ1, φ2, and φB, decreased in proportion to the off period TB of the OP Amps, as expected. In the operation mode of TB = 35 ns (a 50 % switching duty ratio) and φ1 = φ2 = 15 ns, the power dissipation of the SC LPF (32.9 mW) was reduced to 68 % of that in the static operation of the OP Amps (48.5 mW). In the full charge transfer operation without separation of the one cycle period Ts (φ1 = φ2 = 30 ns), the power dissipation of the LPF was 48.5 mW, the same as that in the static operation of the OP Amps with the above separated charge transfer mode. Most of the power dissipation of the SC LPF corresponds to the total power consumed in the OP Amps themselves. The power consumption of the external drive circuits was nearly 13~14 mW. Thus, even when two DSBFC OP Amps are applied to the SC LPF, the dynamic operation of the DSBFC OP Amps, which enables low power dissipation compared to their static operation, is also useful for reducing the power dissipation of the SC LPF. When the SC LPF is operated at a lower dynamic switching rate, TB/Ts can become larger than 50 %, so the power dissipation of the SC LPF is expected to decrease further in proportion to TB. This means that the SC LPF with high-speed DSB-operation OP Amps is advantageous compared to an SC LPF using static-operation OP Amps with a lower GB. When the filter order is increased, an SC LPF with DSB OP Amps uses a number of OP Amps equal to the filter order. Therefore, the power dissipation of this SC LPF is expected to increase in proportion to its filter order. The power dissipation of the reported fifth-order SC LPF employing conventional CMOS OP Amps with a 5-V power supply and a 0.35-μm CMOS technology was 125 mW, as shown in the performance comparison of Table 1 [13]. If a fifth-order SC LPF using the DSB OP Amps with the sampling frequency fs = 14.3 MHz is realized, its power dissipation is estimated to be 82.3 mW. Considering that the power consumption of the OP Amp parts in each LPF is dominant, a comparison between these values is possible. Obviously, the estimated power dissipation of a revised version of the proposed SC LPF with DSB OP Amps is much less than that of the above conventional fifth-order SC LPF.
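The quoted power figures can be checked with simple arithmetic; the short Python sketch below merely reproduces the ratios stated in the text and is not simulation code.

p_static  = 48.5e-3   # W, static operation of the OP Amps
p_dynamic = 32.9e-3   # W, TB = 35 ns (50 % switching duty ratio)
print("dynamic / static = %.0f %%" % (100 * p_dynamic / p_static))        # ~68 %

# Rough fifth-order scaling, assuming OP-Amp power dominates and grows with filter order (x 5/2)
print("estimated 5th-order power = %.2f mW" % (p_dynamic * 5 / 2 * 1e3))  # ~82 mW, cf. the 82.3 mW quoted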
Effect of Load Capacitances
Because the DSBFC OP Amp switches dynamically, its output enters a quasi-floating state during the off-state period. In the off-state period of the OP Amp, MOSFETs M5, M8, and M15 turn off completely, while MOSFETs M11 and M12 are strongly turned on because the voltage between each gate and source exceeds the threshold; MOSFETs M6, M7, M9, M10, M13, and M14 are weakly turned on. The output terminal Vo of the OP Amp is then set to a voltage that depends on the load capacitance, through the capacitive coupling between the drain and the gate of MOSFET M13. Therefore, when a large output swing in Vo occurs at the off-state transition, there is a risk that the output voltage during the subsequent on-state period of the OP Amp is influenced by this transition. Accordingly, the dynamic offset voltage Voff (the difference between the on-state and off-state output voltages of the OP Amp) at the off-state transition of the OP Amp was examined as a function of the load capacitance CL, as shown in Figure 8. The dynamic offset voltage clearly depends on the load capacitance and decreases as the load capacitance becomes large, because CL becomes large compared to the drain-gate capacitance of M13. The change of Voff with CL for the SC LPF resembles that of the OP Amp (Figure 9). This means that Voff of the SC LPF is mainly determined by the OP Amp's dynamic off-state transition. Gain versus OP Amp load capacitance (of the two OP Amps) for the SC LPF is shown in Figure 10. In this case, the phase between the input signal Vin and the sampling signal φSH was fixed to a constant value for each input signal to avoid the sampling-phase effect. The gain reached its minimum at a load capacitance of nearly 4~5 pF for an input signal frequency of 5 MHz. The reason is as follows. For small load capacitances, Voff is not only large, but its variation with the on-state output voltage is also not negligible. This phenomenon slightly disturbs the on-state output voltage, bringing about a deterioration of the SC LPF gain. For large load capacitances, it also becomes hard for the output to reach its steady state within 15 ns (causing gain deterioration) owing to the reduced OP Amp bandwidth, although Voff and its variation are then negligible. However, at the optimum capacitance of nearly 4 pF, the gain deterioration is slight because the offset transition is small, its variation is slight, and the transition to the steady state is fast. For the other, lower-frequency input signals (1 and 2 MHz), the variation of Voff has a negligible effect because of the large output signal (hardly any change of gain is seen), even though its variation with the output signal voltage is relatively large at small load capacitances. Of course, since Voff and its variation for these low frequencies are negligibly small at large load capacitances, the change of gain with CL is then dominated only by the OP Amp bandwidth deterioration. For the 6.5 MHz input signal, although the gain is basically determined by the sampling-phase effect, it changes slightly with CL owing to the OP Amp bandwidth deterioration.
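The dependence of Voff on CL can be pictured as simple charge sharing between the drain-gate capacitance of M13 and the load. The Python sketch below is only a back-of-the-envelope illustration; the drain-gate capacitance and the coupled gate swing are assumed, hypothetical values, not figures taken from the design.

c_gd    = 0.05e-12   # F, assumed drain-gate capacitance of M13 (hypothetical)
dv_gate = 2.5        # V, assumed internal swing coupled onto the output at turn-off (hypothetical)

for cl in (1e-12, 4e-12, 10e-12):
    v_off = dv_gate * c_gd / (c_gd + cl)   # capacitive-divider estimate
    print("CL = %4.0f pF -> Voff ~ %5.1f mV" % (cl * 1e12, v_off * 1e3))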
Under the optimized load capacitance of 4 pF, the frequency characteristic of the SC LPF gain was examined. As shown in Figure 11, the high-frequency attenuation near 5 MHz improved drastically (by over 4.3 dB compared to the gain at a load capacitance CL of 1 pF). Although the gain at 6.5 MHz increased slightly (up to -31 dB), the amount of the increase is small. On the contrary, when CL is increased to 10 pF, larger than the optimized value, the frequency response of the SC LPF deteriorates at high frequencies above 5 MHz (Figure 12). Thus, the SC LPF gain characteristics can be improved by optimizing the load capacitances of the DSB OP Amps. Typical characteristics are listed in Table 2.
Conclusions
A switched-capacitor low-pass filter employing folded-cascode CMOS OP Amps with a dynamic switching bias circuit, capable of processing video signals and enabling low power consumption and operation at wide bandwidths and low power supply voltages, was proposed and its performance was evaluated. In this SC LPF, the charge transfer operations through two-phase clock pulses during the on-state period of the OP Amps and the non-charge-transfer operation during the remaining off-state period of the OP Amps were separated. Through SPICE simulations, it was shown that the SC LPF is able to operate at a 14.3 MHz high-speed dynamic switching rate, allowing processing of video signals, with a dissipated power of 68 % of that observed in the static operation of the OP Amps and the full charge transfer mode without separation of the one cycle period. The power consumption of the SC LPF body, excluding the external drive circuits, was essentially that of the OP Amps. Summarizing these results, it became clear that an SC LPF employing DSB folded-cascode CMOS OP Amps with lower dissipated power than conventional SC LPFs with static-operation OP Amps can be realized. A gain below -31 dB in the frequency response, which is suitable, was also obtained above 6 MHz within the stop-band. Especially high attenuation at 5 MHz was achieved under the optimized condition of the OP Amp load capacitances (4 pF).
Thus, the dynamic charge transfer operation during the on-state period of the OP Amps and the non-charge-transfer operation during their off-state period are useful for high-speed operation and for reducing the power dissipation of the SC LPF. This circuit should be useful for realizing low-power wide-band signal processing ICs that include one or more multi-order low-pass, high-pass and band-pass filters. The DSB circuit achieving such operation can be applied not only to folded-cascode but also to telescopic, two-stage, and rail-to-rail OP Amps.
| 4,979.8 | 2017-11-01T00:00:00.000 | ["Engineering"] |
Routing in the brain
As mapping the genome was the great biological challenge a generation ago, so today is mapping brain network dynamics, thanks in part to President Obama's BRAIN initiative (Insel et al., 2013). Factors influencing the emergence of network dynamics, both in the brain and in other networks, can be roughly divided into three classes: those pertaining to node dynamics; those pertaining to topology (connectivity); and those pertaining to routing (how signals are passed across the network). But while single neuron dynamics are reasonably well understood, and while researchers have begun to elucidate key aspects of network topology in brains, very little work has been devoted to possible routing schemes in the brain (Graham and Rockmore, 2011). Indeed, brain networks must possess a systematic routing scheme, but current methods and models often make implicit assumptions about routing—or ignore it altogether.
Routing involves the control of the paths that information can take across a network. Given that physical networks have finite limits on links, bandwidth, and memory, the role of routing is to allocate paths such that one or more communication goals are met (e.g., speed, fidelity, fault-tolerance, cost, etc.). Routing is of clear importance for brains: interpreting sensory information, memory access, decision making, and many other core brain functions require that messages can be flexibly sent and received by many nodes at widely separated locations on the network, in response to changing demands. Now, a paper by Mišić et al. (2014) has simulated communication across a comprehensive macaque cortex anatomical model (CoCoMac: Stephan et al., 2001; Kötter, 2004). Importantly, this study makes explicit assumptions about routing, something that has not previously been done with respect to such detailed connectivity data. The intriguing results of the paper, and the questions regarding routing that the paper raises, deserve attention. Mišić et al. (2014) compare simulated activity on the CoCoMac network with activity on two surrogate network topologies: a generic small world (where any node can communicate with any other over a few "hops") and a "rich club" (a variety of small world wherein hub nodes have disproportionately dense interconnection and high numbers of shortest paths). Small-world structure is recognized as a crucial feature of neural networks, but given evidence of rich club-like topology in cortex (Zamora-López et al., 2010; Van Den Heuvel and Sporns, 2011; Harriger et al., 2012), determining the degree to which cortex shows dynamics characteristic of rich clubs is an important question. However, dynamics depend on topology and routing, so the simulation necessarily involves a routing model. Mišić et al.'s (2014) surrogate networks were matched to CoCoMac in terms of relevant parameters (nodes, edges, degree, etc.). Sending and receiving nodes, as well as paths between them, were randomly chosen, with new signals introduced according to a Poisson process. Randomized and latticized versions of the networks served as controls.
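For readers who want to experiment with this kind of simulation, the following Python sketch (an assumption-laden toy, not the Mišić et al. model) injects messages as a Poisson process onto a small-world surrogate network and forwards them whole along shortest paths, with finite per-node buffers, in the spirit of message switching.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=71, k=6, p=0.1)   # node count loosely matched to CoCoMac areas
BUFFER_SIZE = 10
queues = {v: [] for v in G.nodes}               # each entry: the remaining path of one message
dropped = delivered = 0

for step in range(1000):
    # Poisson arrivals: each new message gets a random source, target and shortest path
    for _ in range(rng.poisson(2)):
        src, dst = (int(v) for v in rng.choice(G.number_of_nodes(), size=2, replace=False))
        path = nx.shortest_path(G, src, dst)
        if len(queues[src]) < BUFFER_SIZE:
            queues[src].append(path)
        else:
            dropped += 1                         # congestion: buffer overflow at the source
    # Store-and-forward: each node passes on at most one whole message per time step
    moves = [(v, queues[v].pop(0)) for v in G.nodes if queues[v]]
    for v, path in moves:
        if len(path) == 1:
            delivered += 1                       # message has reached its destination
        elif len(queues[path[1]]) < BUFFER_SIZE:
            queues[path[1]].append(path[1:])
        else:
            queues[v].insert(0, path)            # next hop full: wait in place

print(f"delivered={delivered}, dropped={dropped}")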
The results provide evidence that the anatomical network comes closest to the synthetic rich club network in performance, but also shares properties with the small world network (See Figure 1). Mišić et al. (2014) further show that posterior cingulate cortex/precuneus and medial temporal cortex demonstrate congestion characteristic of rich club hub nodes, which matches these regions' proposed roles in integrative functions. The authors also note that "under-congested nodes are areas associated with making eye movements, tracking and acting toward objects in space and fusing visual and proprioceptive information" (Mišić et al., 2014). Thus, there are tantalizing hints of regional or sub-graph variation in network dynamics that correspond to functional demands (albeit in the absence of natural inputs to the system).
While these findings are compelling, the assumptions made in Mišić et al.'s (2014) routing model are also important. Their model employs message-switched routing, meaning that each signal or "message" is passed along in its entirety from node to node. This scheme is akin to traditional postal systems. But because each node can receive inputs from many other nodes at the same time, messages must "wait their turn" to be passed along. Thus, in message-switched electronic networks, nodes have finite memory buffers to store messages in the queue. There is a high danger of congestion across such networks. Mišić et al.'s (2014) model includes buffers, and the authors provide evidence that buffer size is not a critical parameter: results were not qualitatively different when buffer size was varied over two orders of magnitude. However, the system appears globally inefficient, which may be due to the choice of routing system: transit times and throughput in the simulations declined in tandem with increased load (see Figure 1). This behavior suggests that message switching is not a good match for system demands.
The inefficiency of message-switched architectures is inconsistent with the comparatively rapid and efficient communication typical of real neural networks. Nor is it clear how message switching could be accomplished in real neurons: membrane potential could conceivably store some information about messages in the queue, but it could hardly possess the precision necessary to buffer many "bursty" spike trains, let alone spike timing information.
What alternative routing schemes are plausible in cortex? A circuit-switched network, which is typified by telephone switchboard systems, is another possibility, and is perhaps the default assumption for many modelers and experimentalists. Here, an exclusive path is established between sender and receiver. This system has the advantage of high throughput even under heavy load, and it is this quality that led to its historical dominance in communication systems.
However, such systems are unlikely in brains for four principal reasons: (1) Establishing a path is slow. Sending nodes must first ask the switchboard to provide the connections, and then receiving nodes must send a return signal to acknowledge the connection has been made. (2) The system is inefficient when communication is sparse or intermittent because bandwidth along the path is retained whether or not information is sent along this path. (3) Reorganizing the network is difficult.
Because a central operator generally controls the allocation of paths, blockage or destruction of switchboards can lead to network-wide slowdown or blackout. (4) There is neither enough space nor resources in the cranium to support the all-to-all connectivity that would be required to allow exclusive paths between each sender and each receiver.
Thus, it is telling that Mišić et al. (2014) did not countenance the possibility of circuit switching in their simulations. Nevertheless, prominent large-scale cortical models today, despite great power and sophistication, still employ fundamental aspects of circuit-switching networks, such as static, centralized routing control (e.g., the DARPA-supported model of Cassidy et al., 2013). The decades-long dominance of the "computer metaphor" in neuroscience may be the inspiration for such models (and they may indeed be appropriate for some local circuits), but it should be clear today that new modes of thinking are necessary to understand brain networks more generally, and cortical networks in particular. A more promising model for routing in cortex is packet switching, the scheme used on the Internet. Here, messages are chopped into small packets, each labeled with the recipient's address and with what portion of the message that packet contains. The message is reassembled once all constituent packets arrive. Crucially, each packet can take a different route to the destination, allowing the system to dynamically reroute traffic around congested parts of the network. Because activity is distributed in this fashion across a topologically distributed network (and because activity is sparse and bursty), the system functions with high efficiency and without the need for substantial memory buffers at each node. It is thus a more realistic scheme given the properties of real neurons and neural networks. As described in greater detail by Graham and Rockmore (2011), packet switching has appealing parallels with cortical signaling, for example in (1) its ability to dynamically reroute traffic, as cortex does following lesion; (2) its capacity for different "applications" (e.g., email, http, etc.) to run concurrently on the same system, as distinct modalities and signaling systems do in cortex; and (3) the inherent hierarchy of the network protocol stack, which mirrors hierarchical organization within and across cortex.
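To make the contrast concrete, the toy Python sketch below (an analogy only, not a neural model) splits a message into addressed, numbered packets, lets them traverse different short paths, and reassembles them at the receiver regardless of arrival order.

from itertools import islice
import networkx as nx

G = nx.watts_strogatz_graph(20, 4, 0.2, seed=1)
src, dst = 0, 10
message = "spike train pattern"

# Each packet carries a sequence number and a payload chunk, plus the destination address
packets = [(i, message[i:i + 4]) for i in range(0, len(message), 4)]

# Route each packet independently, alternating among a few short paths
paths = list(islice(nx.shortest_simple_paths(G, src, dst), 3))
routes = [paths[i % len(paths)] for i, _ in enumerate(packets)]
print(len(set(map(tuple, routes))), "distinct routes used")

# Reassembly at the destination uses the sequence numbers, whatever the arrival order
received = sorted(packets, key=lambda p: p[0])
print("".join(chunk for _, chunk in received) == message)   # True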
In addition, our evolving understanding of communication in the brain has intriguing parallels with the notion of packet switching. For example, as we begin to unravel the role of glia in neural signaling, there are hints that these cells could act as the routers (Möller et al., 2007). Of course, the "Internet metaphor" is inexact and it remains to be seen how aspects of this technology could be realized in the brain. For example, addressing would be costly given the relatively small amount of information carried by spikes. However, if most messages travel short distances on the network, addresses may require only a few extra bits. In this case, addresses could be carried by spike timing, while message "content" could be carried by spike rate (Graham and Rockmore, 2011).
Or consider the problem of how a given node can "sense" downstream congestion and reroute signals appropriately. The Internet achieves this in part because a given node (router) receives lists of short paths to popular destinations, which are updated and propagated largely by hub servers (e.g., ISPs). The brain does not appear capable of this. However, the Internet metaphor offers other potential solutions. To detect congestion, the Internet concurrently uses a feedback system involving "acks": recipient nodes send small feedback messages to the sender "acknowledging" receipt of a tranche of packets. If the sending node does not receive timely acks, it resends lost packets. Analogously in the brain, corticothalamic feedback could conceivably return information about congestion to nodes lower in the hierarchy, which could in turn modify their signaling to compensate if necessary.
Interestingly, Mišić et al. (2014) acknowledge that packet switching is "physiologically plausible" and is a better match to the sparse communication typical of cortex. One therefore hopes these authors and others will investigate packet switching on CoCoMac. In any case, despite the limitations of message-switched architectures, Mišić et al. (2014) provide a useful reference point and inspiration for future studies of routing in the brain.
But there is some degree of irony that, in the absence of large-scale shifts among neuroscientists away from the computer metaphor, computer engineers are themselves beginning to imagine the brain as a packet-switched network rather than an array of transistors. Steve Furber and colleagues (Khan et al., 2008) have built massive processing architectures for neural network simulation that are fundamentally organized around packet-switched routing, which, in addition to granting the advantages described above, can be run with low energy costs (Sharp et al., 2012). Therefore, the time is right for neuroscientists to revisit their assumptions, to take seriously the problem of routing in the brain, and to investigate the possibility that the brain may be more like the Internet than it is like a postal system, a telephone switchboard, or a computer.
| 2,690.8 | 2014-03-23T00:00:00.000 | ["Biology"] |
Defects reduction of Ge epitaxial film in a germanium-on-insulator wafer by annealing in oxygen ambient
A method to remove the misfit dislocations and reduce the threading dislocation density (TDD) in a germanium (Ge) epilayer grown on a silicon (Si) substrate is presented. The Ge epitaxial film is grown directly on the Si (001) donor wafer using a "three-step growth" approach in a reduced-pressure chemical vapour deposition system. The Ge epilayer is then bonded and transferred to another Si (001) handle wafer to form a germanium-on-insulator (GOI) substrate. The misfit dislocations, which are initially hidden along the Ge/Si interface, are now accessible from the top surface. These misfit dislocations are then removed by annealing the GOI substrate. After the annealing, the TDD of the Ge epilayer can be reduced by at least two orders of magnitude to <5 × 10^6 cm^-2. © 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4905487]
High-quality germanium (Ge) has been studied extensively since the late 1980s. Ge is suitable for photo-detector applications at wavelengths >1.2 µm as well as for laser applications. In addition, Ge has higher carrier mobility than silicon (Si), making it a suitable candidate to augment Si. Furthermore, Ge has a lattice constant that is closely matched to that of gallium arsenide (GaAs) (0.07% mismatch at 300 K), so it can be used as a buffer layer for the integration of GaAs-based devices on Si substrates. [1][2][3][4] One of the important parameters in determining the device worthiness of epitaxially deposited layers is the epilayers' threading dislocation density (TDD). Due to the large lattice mismatch between Ge and Si, a large number of misfit dislocations (MDs) and threading dislocations, on the order of 10^10 cm^-2, may be generated in the heterostructure when Ge is grown directly on a Si substrate.
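The mismatch figures quoted here follow directly from room-temperature lattice constants; the short Python check below uses commonly tabulated values (the exact Ge/GaAs figure depends slightly on which constants are adopted).

a_si, a_ge, a_gaas = 5.431, 5.658, 5.653   # angstrom, ~300 K (typical tabulated values)
print("Ge/Si mismatch   : %.2f %%" % ((a_ge - a_si) / a_si * 100))      # ~4.2 %
print("Ge/GaAs mismatch : %.2f %%" % ((a_ge - a_gaas) / a_gaas * 100))  # ~0.09 %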
There are reports of high-quality Ge epitaxial layers grown on Si with TDD >5 × 10^6 cm^-2. [5,6] Subsequently, the same group combined chemical mechanical planarization (CMP) and re-growth to annihilate the dislocations and obtain a lower TDD of ∼2 × 10^6 cm^-2. [7] However, both of these methods require a thick (∼10 µm) graded SiGe buffer layer prior to Ge deposition. [7] A thin buffer layer has also been reported, in which a thin Si0.5Ge0.5 buffer layer (∼10 nm) is grown followed by two-step Ge growth and annealing; however, the reported TDD is <1 × 10^7 cm^-2. [8] Another approach is to deposit Ge directly on the Si substrate and then introduce annealing steps during and/or after the Ge growth to reduce the TDD. [9][10][11][12][13][14][15] However, this approach results in a much higher TDD of >10^7 cm^-2 [16][17][18] and severe Si/Ge intermixing at the growth interface. In this letter, a method combining direct Ge-on-Si growth, wafer bonding, and layer transfer for germanium-on-insulator (GOI) substrate fabrication is discussed. This method has resulted in a
significant improvement in the TDD of the Ge film after proper heat treatment. The GOI platform can be used not only as a "passive" buffer layer for III-V materials integration on Si but also as an active layer in advanced complementary metal-oxide-semiconductor (CMOS) circuits and silicon-based photonics. A three-step Ge growth was used to grow the Ge epilayer directly on a Si donor wafer. The three steps in the growth sequence were: (i) low-temperature growth at 400 °C to obtain a rather smooth and continuous Ge seed layer as a growth template; (ii) temperature ramping from 400 °C to 600 °C at a rate of 6.5 °C/min; (iii) high-temperature growth at 600 °C. The details of the buffer-less Ge-on-Si growth can be found in Refs. [16][17][18]. After that, aluminum oxide (Al2O3) with a thickness of ∼10 nm was deposited on both the Ge/Si (donor) wafer and the Si (001) handle wafer by atomic layer deposition (ALD). Al2O3 was chosen due to its higher thermal conductivity compared with SiO2 (30 W m^-1 K^-1 vs. 1.4 W m^-1 K^-1). Prior to bonding, both wafers were subjected to O2 plasma exposure for 15 s, rinsed with deionized water, and then spin-dried. The Al2O3-coated surfaces of the two wafers were then brought into contact. After bonding, the wafer pair was annealed at 300 °C in an atmospheric N2 ambient for 3 h to further enhance the bond strength.
The bonded wafer pair was sent for grinding to thin the donor Si wafer down to ∼50 µm. ProTEK® B3-25 was spin-coated on the backside of the handle wafer to act as a protection layer during the tetramethyl-ammonium hydroxide (TMAH) etching of the Si. The remaining 50 µm of Si was then removed by submerging the bonded wafer pair in the TMAH solution at 80 °C, with the etch stopping on the Ge layer. The ProTEK B3-25 protective coating was removed in O2 plasma at a power of 800 W. The details of the fabrication process can be found in Ref. 19.
The GOI substrate was then annealed at 850 °C in an O2 environment for 4 h. After that, the sample was etched in HF solution (49% HF:H2O = 1:20, by volume) for 30 s to remove the oxidized Ge layer.
The quality of the Ge epitaxial film on the GOI substrate after annealing was characterized by various techniques. Transmission electron microscopy (TEM; Philips CM200) at an operating voltage of 200 kV was used to study the dislocations along the Ge/Si interface as well as the threading dislocations at the Ge surface. The strain and quality of the Ge film were measured by Raman spectroscopy using a WITec alpha 300 confocal Raman microscope. An excitation laser wavelength of 532 nm was used, with the focal length and objective lens magnification at 80 cm and 100×, respectively. To further confirm the crystallinity and strain level of the Ge epilayer, high-resolution x-ray diffraction (HRXRD) data were collected using a PANalytical X'Pert PRO. A rocking curve based on Si (004) was used in the HRXRD measurement.
The cross-sectional bright-field TEM images in Fig. 1 show the cross-sectional view of the GOI before and after annealing. Fig. 1(a) shows the GOI substrate after layer transfer. As can be seen, the misfit dislocations, which are previously confined along the Ge/Si interface, are now accessible from the top surface. This makes it easy to remove the exposed misfit dislocations by chemical mechanical polishing (CMP) or annealing. In this study, O2 annealing is chosen because it serves two purposes: (i) oxidation of the Si/Ge intermixed layer and the Ge layer to remove the misfit dislocations and (ii) removal of the threading dislocations once the misfit dislocations are eliminated. Hence, Fig. 1(b) shows that the misfit dislocations are removed and the threading dislocations are reduced, as predicted. In addition, the thickness of the Ge film is reduced by ∼300 nm after O2 annealing due to the oxide formation on the Si/Ge intermixed layer and the top Ge layer.
The TDD can be determined from plan-view TEM by counting the dislocations in a given area at a number of locations across the samples, as shown in Figs. 2(a) and 2(b). The estimated TDD is (7.69 ± 0.583) × 10^8 cm^-2 and (6.41 ± 0.548) × 10^7 cm^-2 for the samples before and after annealing, respectively. The TDD value is estimated from an average of 20 plan-view TEM images for accuracy. Due to the limitation of the TEM, images at lower magnification are not possible; hence, the TDD values are overestimated.
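The density and its uncertainty follow from simple counting statistics over the imaged area. The Python sketch below uses hypothetical per-image counts and an assumed field of view purely to illustrate the calculation.

import numpy as np

counts = np.array([9, 7, 8, 10, 6, 8, 7, 9, 8, 7])   # dislocations per image (hypothetical)
image_area_cm2 = 1.0e-8                               # assumed 1 um x 1 um field of view

tdd_per_image = counts / image_area_cm2               # per-image densities (cm^-2)
mean = tdd_per_image.mean()
sem = tdd_per_image.std(ddof=1) / np.sqrt(len(tdd_per_image))   # standard error of the mean
print("TDD = %.2e +/- %.2e cm^-2" % (mean, sem))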
To quantify the TDD with lower-magnification images, a field-emission scanning electron microscope (FESEM) is used. The samples are etched in iodine solution for 1 s to delineate the threading dislocations. Since the dislocations are etched much faster in the etchant, etch pits can be observed. Before annealing, the density of threading dislocations is so high that etch pits are observed across the entire sample, and some of the etch pits merge together and form larger pits, as shown in Fig. 2(c). After annealing, the sample exhibits a much lower etch pit density, with most pits having a square shape, as shown in Fig. 2(d). Within some of these square etch regions, circular pits can be observed. The estimated etch pit density (EPD) of the GOI sample before and after O2 annealing is (5.2 ± 0.45) × 10^8 cm^-2 and (2.5 ± 0.4) × 10^6 cm^-2, respectively. It is clearly shown that the EPD is reduced by two orders of magnitude after O2 annealing compared to the unannealed sample. The defected regions have a square shape because the circular pits are located on crosshatch lines, which are often oriented along the two orthogonal ⟨110⟩ directions. 20 The crosshatch pattern is often observed in low-misfit systems (below 2%) or in samples with low TDD (10^6 cm^-2), which is applicable in this case. To reduce the total TDD, fusion and annihilation reactions are important. When two threading dislocations (TDs) fuse (e.g., two nearly coplanar 60° dislocations interact to form an edge dislocation, (a/2)[110] + (a/2)[-101] → (a/2)[011]), they produce a single resultant TD. Although the formation of this type of edge threading segment reduces the threading dislocation density by 50%, the resulting sessile edge dislocation has little chance of ever exiting the system; thus, it is a permanent threading dislocation. Annihilation, however, is only possible for TDs with opposite sign. These reactions can take place when the distance between interacting dislocations becomes smaller than the characteristic cross-section of the specific reaction. 21 Both the motion and the reactions (fusion and annihilation) of the TDs can be assisted by external and internal factors such as temperature, film growth geometry, internal and externally imposed stress, and point defects.
For every misfit dislocation, there are always two corresponding TDs at the end of the misfit, which must thread to a free surface. Hence, it is also possible to reduce the TDD by spreading the misfits out so that when the threads glide, they can easily thread to the edge of the wafer and not interact with any other dislocations.
In our method, since the top surface that contains most of the misfit dislocations is consumed during oxidation, the threading dislocations do not necessarily form loops. In addition, the remaining TDs inside the film are not sessile and are able to move readily (as they are no longer constrained by the misfits) toward other TDs, so that annihilation can occur under thermal treatment.
Raman spectroscopy is used to determine the strain level of the Ge epitaxial film. In Fig. 3, no signal originating from the Si-Ge vibration mode is observed after O2 annealing, indicating that the Si from the Si/Ge intermixed layer is removed. From the inset of Fig. 3, a blue shift of the Ge-Ge vibration peak position is clearly observed, from 295.58 cm^-1 (without annealing) to 301.72 cm^-1 (after O2 annealing). The Ge-Ge peak for the GOI after O2 annealing is very close to that of the bulk Ge reference (peak at 301.09 cm^-1), indicating that the Ge film of the GOI is nearly stress-free.
HRXRD is also used to determine the strain level of the Ge epitaxial film. The XRD analysis in Fig. 4 shows that the Ge epilayer of the GOI before annealing is shifted to the right with respect to the bulk Ge substrate as a result of a tensile strain (∼0.35%). In addition, the Ge signal curve is asymmetric and shows a clear shoulder towards higher incidence angles. This is due to Ge/Si intermixing at the interface during thermal processing, which perturbs the abrupt interface and results in an intermediate Si1-xGex layer. However, the Ge epilayer of the GOI after annealing is shifted to the left, closer to the bulk Ge substrate, indicating a much smaller tensile strain (0.07%) in the Ge epilayer. This observation is consistent with the Raman analysis. Furthermore, the Ge signal curve is symmetrical, which suggests that the intermediate Si1-xGex layer is removed after annealing.
FIG. 3. Raman spectroscopy illustrating (a) the alloy composition and strain of the Ge epilayers in the GOI sample before and after annealing, with reference to bulk Ge, and (b) an enlarged view of the Ge-Ge vibration peak.
The linear coefficients of thermal expansion (CTEs) of Si, Ge, and Al2O3 are 2.6, 5.9, and 8.1 ppm/°C, respectively. 22,23 When Ge is grown on the Si substrate at high temperature, the Ge film is essentially stress-free. During cooling down to room temperature, the Ge layer tends to shrink more than Si because Ge has a higher CTE than Si. Since the Ge layer is constrained by the Si substrate, it experiences tensile strain.
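A rough order-of-magnitude check of the tensile strain can be made from the CTE mismatch alone. The Python sketch below assumes the strain is frozen in on cooling from the 600 °C growth temperature and uses the linear CTE values quoted above; the measured value also depends on the temperature dependence of the CTEs and on the full thermal history.

alpha_ge = 5.9e-6    # per degC
alpha_si = 2.6e-6    # per degC
t_growth, t_room = 600.0, 25.0   # degC (growth temperature from the three-step process)

strain = (alpha_ge - alpha_si) * (t_growth - t_room)
print("thermal-mismatch tensile strain ~ %.2f %%" % (strain * 100))   # ~0.19 %, same order as the ~0.35 % measured by XRD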
Two root causes can explain the stress-free state of the Ge film on the GOI wafer after annealing. (i) The amorphous nature of the Al2O3, deposited by ALD at low temperature, allows it to act as a stress buffer layer that accommodates the stress generated by the CTE mismatch between the materials. 24 COMSOL modeling is used to further verify this explanation; the fitting parameters are shown in Table I and the simulation results in Fig. 5. When the Ge epilayer on the Si substrate is cooled from high temperature, the Ge epilayer experiences a higher tensile stress than the Si substrate, as shown in Figs. 5(a) and 5(b). After annealing, the Ge film in the GOI sample has a similar or lower stress level than the Si substrate, indicating that the Ge epilayer is not constrained by Si and is almost stress-free, as shown in Figs. 5(c) and 5(d). In addition, the Al2O3 layer has the highest stress among the various layers after annealing. This confirms that the Al2O3 layer acts as a stress buffer to accommodate the stress induced by the CTE mismatch between the materials. (ii) Since the Si donor wafer is removed and the Ge/Si intermixed layer is subsequently consumed during annealing in O2, the Ge layer is no longer constrained by the Si and is therefore able to assume a nearly stress-free state. In summary, annealing at 850 °C in an O2 environment improves the overall quality of the Ge epitaxial film on the GOI substrate. The TDD is reduced by at least two orders of magnitude, to <5 × 10^6 cm^-2, due to the removal of the misfit dislocations. In addition, the Ge film on the GOI substrate is nearly stress-free after annealing. Hence, the good quality of the Ge epilayer could be useful for subsequent III-V materials integration and device fabrication. Moreover, the GOI platform will benefit future semiconductor technology because of the reduction in parasitic and short-channel effects.
| 3,760.4 | 2015-01-07T00:00:00.000 | ["Materials Science", "Engineering", "Physics"] |
High‐Efficiency Broadband Achromatic Metadevice for Spin‐to‐Orbital Angular Momentum Conversion of Light in the Near‐Infrared
Spin-orbital angular momentum conversion (SOC) of light has found applications in classical and quantum optics. However, the existing SOC elements suffer severe restrictions on broadband integrated applications at miniature scales, due to bulky configurations, single function, and failure to control the dispersion. Herein, a high-efficiency broadband achromatic method for independently and elaborately engineering the dispersion and the SOC of light based on a cascaded metasurface device is proposed. The metadevice is capable of efficiently decoupling the SOC from the modulation of dispersion, with a high broadband focusing efficiency of up to 75%. As a proof of concept, the generation of high-efficiency achromatically focused and spin-controlled optical vortices with switchable topological charge (l^(σ=+1) = 1 and l^(σ=−1) = 2) is successfully demonstrated. The presence of achromatically and highly concentrated optical vortices with tunable photonic angular momentum, using spin as an optical knob, makes the proposed ultracompact and multifunctional metadevice a promising platform for optical micromanipulation at nanoscale dimensions.
Introduction
Angular momentum (AM) of light includes spin angular momentum (SAM) and orbital angular momentum (OAM). It is well known that circularly polarized (CP) light carries SAM of ±ℏ per photon depending on its handedness (ℏ is the reduced Planck constant). [1,2] Optical vortex (OV) beams, whose helical wavefronts evolve as exp(ilφ) around the azimuth, where the topological charge l can take any integer value, carry OAM of lℏ per photon. [3,4] The twisted phase profile results in a doughnut-shaped intensity profile. Such hollow beams carrying OAM have attracted much attention due to numerous applications in classical and quantum optics, including optical micromanipulation, [5,6] optical communication, [7,8] super-resolution and edge-enhanced detection, [9,10] and quantum entanglement. [11,12] Typically, in order to imprint the twisted helical mode onto the wavefront of a paraxial beam for the generation of OV beams, spiral phase plates, [13] spatial light modulators, [14] pitch-fork holograms, [15] or laser mode conversion [16] are used. Furthermore, to achieve circularly polarized OV beams carrying a total angular momentum (TAM) of L = (σ + l)ℏ per photon (σ = ±1), [17] geometric-phase-based q-plates are much more desirable elements because they offer a direct connection between the SAM and OAM via SAM-to-OAM conversion (SOC). However, the output OAM is constrained to be a conjugate value of ±2qℏ per photon. [18,19] The chromatic aberration, single function, and bulky configurations restrict their working bandwidth and use in emerging integrated optics operating at nanoscale dimensions. [20,39] The initial endeavors toward miniature OV generators, producing a helical beam with one specific topological charge for linearly and circularly polarized light, are based on geometric-phase metasurface devices (metadevices). [21,40] Remarkable monochromatic metadevices for control of the SOC have been proposed by engineering asymmetric birefringent meta-atoms. [41] However, such metadevices generally yield a propagating OV instead of a focused OV with photons concentrated at a predefined focal plane, which is highly desired for optical trapping and manipulation [42] and optical detection. [43] Therefore, their applications are seriously constrained when highly focused OV beams carrying different AMs are needed.
Recently, polarization-controlled broadband achromatic metadevices (BAMs) have been well demonstrated, [44,45,55-59] but they fail to connect the SAM and OAM for on-demand control of the SOC of light. Up to now, broadband achromatic focusing of CP OV beams with switchable AM driven by the SOC has remained elusive. The major hurdle is the lack of a general methodology to efficiently decouple the SOC from the modulation of phase dispersion, so as to flexibly and independently engineer the different SOC states for multiplexing.
In this work, we propose a general and high-efficiency broadband achromatic method to independently engineer the dispersion and the SOC of light for the first time. [62] The design concept stems from a general approach in which cascaded metasurfaces simultaneously implement achromatic focusing and spin-multiplexing functionality. As a proof of concept, we accomplish a BAM for control of the SOC by elaborately cascading a spin-controlled metasurface and an irregular achromatic metasurface with a focusing efficiency of up to 90%. The efficient decoupling between the SOC and the control of dispersion ensures the realization of broadband achromatic-focused OV beams with tunable topological charge number. The broadband efficiency of the BAM reaches 60%. The simulated results robustly confirm our approach. Since the proposed BAM is made of all-dielectric silicon materials, its fabrication is compatible with the existing complementary metal-oxide-semiconductor platform and may find applications in ultracompact and chip-scale miniature devices, such as optical tweezers operating at nanoscale dimensions. [48,63,64]

Results and Discussion

Principle of Broadband Achromatic Modulation of the SOC with Cascaded Metasurfaces

Figure 1a schematically illustrates the broadband achromatic focusing of the SOC of light via cascaded metasurfaces. The metadevice is capable of converting different CP beams into focused OV beams with distinct, spin-controlled topological charge numbers on the predesigned focal plane. To introduce the chromatic aberration correction into broadband OV beams carrying spin-controlled AMs (see Note S1, Supporting Information), the desired phase spectra for the SOC should be functions of the spatial coordinate (x, y) and the angular frequency ω.
φ_{|σ,l⟩}(r, θ, ω), where F_0, r_0, and l^σ are the focal length, the reference position, and the topological charge number for each spin eigenstate, respectively; r = √(x² + y²) and θ = arctan(y/x) are the radial and azimuthal coordinates; ω and c are the angular frequency and the speed of light, respectively; and |σ, l⟩ represents a CP OV beam with a given AM. A single hybrid-phase metasurface can realize monochromatic spin-multiplexed focusing, but it generally fails to modulate broadband dispersion because of low polarization-conversion efficiency. [43,59] The chromatic dispersion mainly arises from the resonant phase dispersion of the building meta-atoms and the intrinsic dispersion of the materials used to construct them. To achieve the desired broadband focused-OV phase spectra in Equation (1) and to focus broadband incident beams achromatically, the phase-dispersion term used for achromatic focusing should be decoupled from the dispersionless spin-multiplexed spiral profiles for the SOC in Equation (1). By separation of variables, Equation (1) can be rewritten as Equation (2). Herein, a high-efficiency broadband achromatic polarization-insensitive metalens is proposed by elaborately engineering the phase-dispersion spectrum φ(r, ω) in Equation (2) with an irregular metasurface (denoted M1, shown in Figure 1b). The dispersionless spin-multiplexed spiral profiles φ_{|σ=1,l=1⟩}(θ) and φ_{|σ=−1,l=2⟩}(θ) can be realized via a single spin-multiplexing metasurface (denoted M2) with high broadband polarization-conversion efficiency. By cascading M1 and M2 into a compact BAM with the desired focused spiral profiles, as shown in Figure 1b, the high polarization-conversion and focusing efficiencies ensure the successful realization of broadband achromatic-focused OV beams with switchable AMs driven by the SOC of light. More details are discussed in Sections 2.2 and 2.3, respectively.
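To make the separation in Equations (1) and (2) concrete, the short Python sketch below evaluates a candidate target phase of the form φ_{|σ,l⟩}(r, θ, ω) = −(ω/c)(√(r² + F_0²) − √(r_0² + F_0²)) + l^σ θ and splits it into the frequency-dependent radial part φ(r, ω) and the dispersionless spiral part φ_{|σ,l⟩}(θ). The functional form, the numerical values of F_0 and r_0, and the helper names are illustrative assumptions, since Equations (1) and (2) are not reproduced in full here.

import numpy as np

C = 3e8  # speed of light (m/s)

def focusing_phase(r, omega, F0=40e-6, r0=0.0):
    # Frequency-dependent radial phase phi(r, omega) used for achromatic focusing (assumed form).
    return -(omega / C) * (np.sqrt(r**2 + F0**2) - np.sqrt(r0**2 + F0**2))

def spiral_phase(theta, l_sigma):
    # Dispersionless spin-multiplexed spiral phase phi_{|sigma,l>}(theta).
    return l_sigma * theta

def target_phase(x, y, omega, l_sigma, F0=40e-6):
    # Total phase phi_{|sigma,l>}(r, theta, omega) = phi(r, omega) + phi_{|sigma,l>}(theta).
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    return focusing_phase(r, omega, F0) + spiral_phase(theta, l_sigma)

# Example: phase required at (x, y) = (5 um, 2 um) for lambda = 1.5 um and the two SOC states.
omega = 2 * np.pi * C / 1.5e-6
print(target_phase(5e-6, 2e-6, omega, l_sigma=1))   # |sigma = +1, l = 1>
print(target_phase(5e-6, 2e-6, omega, l_sigma=2))   # |sigma = -1, l = 2>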
High-Efficiency Broadband Achromatism with Irregular Metasurfaces
As shown in Figure 2a, the broadband achromatic irregular cylindrical metalens is composed of circular silicon nanopillars with a fixed radius of r = 200 nm but variable lattice constants P_x (shown in the inset of Figure 2b). Compared with regular metasurfaces (generally constructed from nanopillar phase shifters with varying dimensions and an identical lattice constant), the designed irregular metasurface avoids resonance effects, guaranteeing high broadband transmission efficiency and linear phase dispersion. First, we perform a parameter sweep of the nanopillars by varying the lattice constant P_x for x-polarized incidence over a continuous wavelength range from 1.35 to 1.65 μm using the finite-difference time-domain (FDTD) method (simulation details are given in the Experimental Section). Figure 2b,c shows the extracted transmittance and phase spectra as a function of P_x ranging from 200 to 1200 nm, from which the phase dispersion can be obtained for the broadband achromatic focusing shown in Figure 2a. For comparison, the phase and transmission spectra for regular meta-atoms (with the same height of H = 1400 nm and an optimal lattice constant of 615 nm) are provided in Figure S1, Supporting Information. The near-unity broadband transmission and the full 2π phase coverage confirm the high broadband performance of the proposed irregular metasurface compared with the regular one.
By combining Equations (1) and (2), one can obtain the mapping between the phase dispersion imposed by a subwavelength meta-atom with a specified lattice constant P_x and the required phase dispersion φ(r, ω) at each pixel coordinate of the irregular metasurface. Subsequently, a cylindrical broadband achromatic metalens with a diameter of D = 20 μm and a numerical aperture (NA) of 0.24 is constructed for x-polarized incidence. As shown in Figure 2d,e, optimal meta-atoms with the specified phase compensation are chosen to construct the broadband achromatic metasurface by means of an optimized filtering strategy (see Note S2, Supporting Information). The strong guided-mode confinement in the nanopillars and the linear phase dispersion (shown in the inset of Figure 2e) indicate that the wavefront manipulation with the irregular meta-atoms is a localized and nonresonant effect. As shown in Figure 2f, the near-unity broadband transmission of the selected meta-atoms further confirms the validity of the high-efficiency achromatic methodology based on the irregular metasurface configuration.
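The optimized filtering strategy itself is described in Note S2 and is not reproduced here. As a stand-in, the sketch below shows one common way to choose, for each radial position, the library meta-atom whose simulated phase dispersion best matches the required φ(r, ω) over the sampled frequencies, scoring the wrapped phase error up to a constant offset. The array shapes and the offset handling are assumptions for illustration, not the paper's procedure.

import numpy as np

def pick_meta_atoms(required, library):
    # required: (n_pixels, n_freq) target phase phi(r, omega) at each pixel and sampled frequency.
    # library:  (n_atoms, n_freq) simulated phase of each candidate meta-atom (e.g., vs. lattice constant).
    # Returns the library index chosen for each pixel, allowing a constant phase offset per pixel.
    choices = []
    for req in required:
        diff = library - req                            # (n_atoms, n_freq)
        diff = diff - diff.mean(axis=1, keepdims=True)  # remove a constant offset before scoring
        err = np.angle(np.exp(1j * diff))               # wrap to (-pi, pi]
        choices.append(int(np.argmin((err**2).sum(axis=1))))
    return np.array(choices)

# Tiny synthetic example: 3 pixels, 4 sampled frequencies, 5 candidate meta-atoms.
rng = np.random.default_rng(0)
library = rng.uniform(-np.pi, np.pi, size=(5, 4))
required = library[[2, 0, 4]] + 0.3                     # targets close to known atoms, up to an offset
print(pick_meta_atoms(required, library))               # expected: [2 0 4]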
Figure 2g shows the simulated intensity distributions along the axial plane for the metalens with NA = 0.24, in which the dashed line marks the position of the focal plane. The simulations confirm the broadband achromatic focusing behavior of the irregular metalens. In addition, we designed an achromatic metalens with NA = 0.16 to further confirm the validity of our achromatic approach (more simulated results are provided in Figure S2, Supporting Information). By analyzing the intensity distributions of the x-z cross sections at each sampled wavelength, Figure 2h summarizes the focal lengths and indicates small deviations, from −2.5% to 2.5%, relative to the mean focal length. These results further confirm that the irregular metalens successfully implements achromatic focusing over a continuous wavelength range from 1.35 to 1.65 μm. We also characterized the quality of the focal spots for each sampled wavelength (the x-cut intensities across the focal spots can be seen in Figure S2, Supporting Information). The extracted full widths at half maximum (FWHMs) for all the sampled wavelengths are summarized in Figure 2i. The results show that nearly diffraction-limited focal spots are achieved. The focusing efficiency is shown in Figure 2j. The focusing efficiencies of the irregular metalens, above 85% across the entire designed bandwidth, confirm the realization of high-efficiency achromatic focusing compared with the regular metalens constructed from the meta-atoms of the library shown in Figure S1, Supporting Information. Here, the focusing efficiency of the metalens is defined as the ratio of the optical power passing through a circular aperture (with a radius of 2-3 times the FWHM, centered on the focal spot) to the incident power.
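As a numerical companion to the efficiency definition above (the power inside a circular aperture of radius 2-3 times the FWHM around the focal spot, divided by the incident power), the snippet below evaluates it for a simulated focal-plane intensity map. The Gaussian test spot, the grid, and the aperture factor of 3 are illustrative assumptions.

import numpy as np

def focusing_efficiency(intensity, x, y, fwhm, incident_power, aperture_factor=3.0):
    # Power inside a circle of radius aperture_factor*FWHM centred on the intensity maximum,
    # divided by the incident power; `intensity` is sampled on the meshgrid (x, y).
    dx, dy = x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]
    iy, ix = np.unravel_index(np.argmax(intensity), intensity.shape)
    r2 = (x - x[iy, ix])**2 + (y - y[iy, ix])**2
    focal_power = intensity[r2 <= (aperture_factor * fwhm)**2].sum() * dx * dy
    return focal_power / incident_power

# Illustrative check: a Gaussian focal spot of FWHM = 3 um on a 40 um x 40 um window,
# assuming 80% of the incident power reaches the focal spot.
x, y = np.meshgrid(np.linspace(-20e-6, 20e-6, 401), np.linspace(-20e-6, 20e-6, 401))
fwhm = 3e-6
spot = np.exp(-4 * np.log(2) * (x**2 + y**2) / fwhm**2)
incident = spot.sum() * (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0]) / 0.8
print(round(focusing_efficiency(spot, x, y, fwhm, incident), 2))   # ~0.8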
Following the proposed design method for phase-dispersion engineering, we designed a circular, broadband achromatic, polarization-insensitive metalens (D = 20 μm, NA = 0.24) with rotational symmetry for achromatically focusing OV beams with different spin-orbit-conversion states. Figure 3a shows the schematic of the polarization-insensitive broadband achromatic metalens and the simulated intensity distribution profiles along the x-z cross sections at each sampled wavelength. The corresponding intensity distributions of the x-y cross sections along the focal plane (the black dashed lines in Figure 3a) at each sampled wavelength are shown in Figure 3b. The results indicate achromatic and nearly diffraction-limited focusing with highly symmetric focal spots at each sampled wavelength. The simulated focal lengths for different polarizations are plotted in Figure 3c, which shows a small deviation (maximum variation of 2% relative to the mean focal length) across the entire designed bandwidth. Figure 3d shows the extracted FWHMs for all the sampled wavelengths. As shown in Figure 3e, the focusing efficiencies above 75% across the entire designed bandwidth further confirm the high-efficiency achromatic focusing, albeit with a reduction due to relatively strong side lobes compared with the cylindrical metalens. The nearly identical focal lengths and working efficiencies for different polarizations robustly indicate the realization of high-efficiency, polarization-insensitive, broadband achromatic focusing via the proposed irregular metasurface configuration (shown in the inset of Figure 1b).
High-Efficiency BAM for SAM-to-OAM Conversion with Switchable AM
To achieve the SOC of light, the key issue is the design of meta-atoms capable of independently controlling the two output spin eigenstates (σ = ±1) and accurately imparting the helical-phase profiles φ_{|σ,l⟩}(θ) defined by Equation (2). In this work, a spin-multiplexing metasurface consisting of birefringent meta-atoms is employed to engineer the SOC. In order to impart independent phase profiles on the two output spin eigenstates, the Jones matrix J(x, y) that describes the light-metasurface interaction is expressed in the spin basis, where |σ = ±1⟩ ∝ (1, ±i)ᵀ represent the Jones vectors of the spin eigenstates.
For the spin-multiplexing metasurface M2, consisting of the silicon elliptical nanopillars shown in the inset of Figure 4b, the optical response of the birefringent meta-atoms can be described by the Jones matrix

J(x, y) = R(−ϕ(x, y)) · diag(e^{iΦ_x(x, y)}, e^{iΦ_y(x, y)}) · R(ϕ(x, y)),

where Φ_x(x, y) and Φ_y(x, y) denote the propagation phases of the birefringent meta-atom at (x, y) under x- and y-polarization along its two symmetry axes, ϕ(x, y) denotes the orientation angle of the meta-atom, which determines the geometric phase shift, and R is the 2 × 2 rotation matrix R(ϕ) = [[cos ϕ, sin ϕ], [−sin ϕ, cos ϕ]]. Based on FDTD simulations, we calculate the phase maps of the meta-atoms across the entire designed bandwidth ranging from 1450 to 1650 nm (simulation details are given in the Experimental Section). Figure 4a,b shows the simulated phase shifts Φ_x and Φ_y of the elliptical meta-atoms as a function of the semimajor axis (R_x) and the semiminor axis (R_y) under x- and y-polarized incidence at a wavelength of 1500 nm, respectively. The high polarization-conversion efficiencies (average value of 85%) across the entire designed bandwidth are shown in Figure 4c, which ensures the realization of a high-performance broadband SOC metadevice with switchable topological charge number. By combining Equations (1)-(4), spin-decoupled broadband phase control can be achieved (detailed derivations are given in Note S3, Supporting Information). The most appropriate set (R_x, R_y, ϕ) of the meta-atom at each pixel position is selected from the meta-atom library (shown in Figure 4a,b) by minimizing an error function defined as the maximum error between the required phase and the simulated phase profile, based on Equation (S10) in Note S3, Supporting Information. Based on this strategy of broadband spin-decoupled dispersionless phase control, we designed the broadband spin-multiplexing metasurface M2 (shown in Figure 1b). Figure 4d shows the phase profiles realized by M2 for the two SOC states, |σ = +1, l = 1⟩ and |σ = −1, l = 2⟩. They agree with the theoretically required ones (shown at the bottom of Figure 4d). The results indicate that the spin-multiplexing metasurface can perform the SOC operation on circularly polarized incidence. To satisfy the half-wave-plate condition of Equation (S10), meta-atoms with a phase difference close to π between the x- and y-polarizations are selected, as shown in Figure 4e.
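A compact way to see how a birefringent meta-atom imparts two independent phases on the two spin eigenstates is sketched below: choosing Φ_x = (φ_+ + φ_−)/2, Φ_y = Φ_x − π, and orientation ϕ = (φ_+ − φ_−)/4 makes J = R(−ϕ)·diag(e^{iΦ_x}, e^{iΦ_y})·R(ϕ) map |σ = +1⟩ to e^{iφ_+}|σ = −1⟩ and |σ = −1⟩ to e^{iφ_−}|σ = +1⟩. This closed-form half-wave-plate solution is a commonly used one and is offered here as an assumption; the paper's own derivation is given in Note S3, Supporting Information.

import numpy as np

def rotation(phi):
    return np.array([[np.cos(phi), np.sin(phi)], [-np.sin(phi), np.cos(phi)]])

def jones(phi_x, phi_y, orient):
    # J = R(-orient) @ diag(exp(i*phi_x), exp(i*phi_y)) @ R(orient)
    return rotation(-orient) @ np.diag([np.exp(1j * phi_x), np.exp(1j * phi_y)]) @ rotation(orient)

def meta_atom_for(phase_plus, phase_minus):
    # Closed-form (Phi_x, Phi_y, orientation) imparting phase_plus on |sigma=+1> and phase_minus on |sigma=-1>.
    phi_x = 0.5 * (phase_plus + phase_minus)
    return phi_x, phi_x - np.pi, 0.25 * (phase_plus - phase_minus)

# Verify: the meta-atom converts |sigma=+1> -> e^{i*phase_plus}|sigma=-1> and vice versa.
plus = np.array([1, 1j]) / np.sqrt(2)
minus = np.array([1, -1j]) / np.sqrt(2)
J = jones(*meta_atom_for(phase_plus=1.2, phase_minus=-0.7))
print(np.allclose(J @ plus, np.exp(1.2j) * minus), np.allclose(J @ minus, np.exp(-0.7j) * plus))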
Focused OV beams with tunable properties in multiple dimensions are highly desirable in modern photonics, particularly with a wide operating bandwidth and tunable topological charges. As schematically depicted in Figure 5a, the ultracompact BAM with a diameter of D = 20 μm is accomplished by elaborately cascading the proposed M1 and M2 (the thickness of the intermediate spacer between the two layers is 2 μm). The spiral phase distributions realized by M2 and the focused OV phase profiles are shown in Figure S4, Supporting Information. The high achromatic focusing performance and efficiency ensure the efficient decoupling between the focused phase dispersion and the SOC defined by Equation (1). As a result, the BAM is capable of generating broadband achromatic-focused OV beams with switchable AM driven by the SOC of light. It converts broadband near-infrared beams into OV beams with photonic spin-controlled topological charge number, and the photons are achromatically concentrated at the predesigned focus.
We perform 3D FDTD simulations to characterize the performance of the designed BAM. Figure 5b,d shows the simulated intensity distributions of the x-z cross sections at each sampled wavelength for the two SOC states, |σ = +1, l = 1⟩ and |σ = −1, l = 2⟩, respectively. Hollow-shaped OV beams centered at almost the same focal length are generated in both cases. The simulated focal lengths are almost constant (maximum variation of 1.5% relative to the mean focal length) across the designed bandwidth ranging from 1450 to 1650 nm, as shown in Figure 5f. For comparison, we also simulated the focal lengths for different thicknesses of the intermediate spacer in Figure S4, Supporting Information. The results confirm the design of the BAM in our work. To better illustrate the focused OV beams driven by the SOC of light, Figure 5c,e shows the corresponding focal-plane intensity profiles along the dotted lines in Figure 5b,d. The doughnut-shaped focal spots with clearly SOC-dependent sizes reveal that the proposed BAM successfully implements the achromatic focusing and tunable SOC functionality (more simulations can be seen in Figure S3, Supporting Information). Moreover, the focused helical phase profiles in the central region of the focus further validate the achromatic-focused OV beams with on-demand switchable AM driven by the SOC of light (shown in the insets of Figure 5c,e). As shown in Figure 5f, the broadband focusing efficiency of up to 60% robustly confirms the realization of a broadband high-efficiency BAM, which promises potential applications in integrated optical trapping at nanoscale dimensions. [63,64] Here, the efficiency of the BAM is defined as the ratio of the optical power of the focused CP beam to that of the incident beam with opposite helicity.
Conclusion
In summary, we demonstrated a general high-efficiency broadband achromatic methodology to implement a spin-controlled multifunctional achromatic metadevice with switchable spin-orbit-conversion states based on a cascaded metasurface platform. By efficiently decoupling the SOC from the modulation of phase dispersion, the metadevice simultaneously implements achromatic focusing and SOC-multiplexing functionality. With this platform, we successfully generate broadband achromatic-focused OV beams with switchable SOC state and on-demand topological charge number. The broadband efficiency of up to 60% further validates our achromatic approach for controlling the dispersion and the SOC of light. We believe that the achromatic-focused and tunable structured light driven by the photonic SOC proposed here will pave the way for AM-based classical and quantum optical applications. It may also provide interesting research directions for understanding light-matter interaction and light control at nanoscale dimensions, benefiting from its ultracompact nature, high focusing efficiency, and ease of integration.
Experimental Section
All numerical simulations were conducted using 3D FDTD simulations. To obtain the database of the building blocks used in the metadevice design, periodic boundary conditions were applied along the x- and y-axes and perfectly matched layers (PMLs) were applied along the z-axis (the direction of light propagation). To obtain the phase maps Φ_x and Φ_y in Section 2.3, the periodic arrays were illuminated with x- and y-polarized plane waves within the desired bandwidth, respectively. We adopted PML boundary conditions for all boundaries in the 3D FDTD simulations of the BAMs.
Figure 1. Schematic illustration of broadband achromatic modulation of the SOC with cascaded metasurfaces. a) Schematic of the BAM operating in transmission mode. Near-infrared beams with different spin states are normally incident on the cascaded metasurfaces. The transmitted beam is converted into an OV beam with a different AM and achromatically focused at the prescribed focal plane. b) The broadband spin-multiplexing metasurface performing the SOC is designed using elliptical birefringent nanopillars with different dimensions (D_x and D_y) and orientation angles (ϕ). The broadband achromatic polarization-insensitive metalens is composed of nanopillars with identical dimensions but variable lattice constant (P_x), which we refer to as an irregular metasurface.
Figure 2. High-efficiency broadband achromatic modulation with irregular metasurfaces. a) Schematic of the irregular broadband achromatic metasurface operating in transmission mode. The designed irregular metasurface can efficiently engineer the dispersion and achromatically focus a broadband near-infrared beam at the predesigned focus. b,c) Phase and transmittance databases of the irregular meta-atoms. The meta-atom is a circular amorphous silicon nanopillar (with fixed radius r = 200 nm and height H = 1400 nm) deposited on a silica rectangular lattice (with varying P_x and fixed P_y = 250 nm), shown in the inset of c). d) The realized and required phase distributions for focusing. e) The realized and required group-delay phase spectra for controlling the dispersion. The inset shows the guided-mode profiles and the corresponding phase spectra of the selected meta-atoms. f) The transmission spectra of the selected meta-atoms used in the design of the irregular achromatic metasurface. g) Simulated intensity distributions along the x-z cross sections for each sampled wavelength. h) The extracted focal length versus wavelength at different NAs. i) The FWHMs of the focal spots versus wavelength at different NAs. j) The focusing efficiency of the irregular metalens compared with that of the regular metalens.
Figure 3. Polarization-insensitive broadband achromatic focusing with the irregular metasurface. a) Schematic of the irregular broadband achromatic metalens and the simulated intensity distributions along the x-z cross sections at each sampled wavelength. The inset shows the top view of the 2D circular metalens. The black dashed lines indicate the position of the predesigned focal plane. b) The intensity distribution profiles of the focal spots for each sampled wavelength. c) Focal lengths for different polarization incidences at each sampled wavelength. d) The FWHMs of the focal spots. e) Broadband transmission and achromatic focusing efficiency for different polarizations.
Figure 4. SAM-to-OAM conversion with tunable topological charge number. a,b) Phase maps of the nanopillars as a function of the major axis R_x and minor axis R_y for x- and y-polarizations, respectively. c) Polarization-conversion efficiencies across the entire designed bandwidth for the selected meta-atoms. The inset shows the structural configuration of the meta-atom. d) The realized and required phase profiles for the tunable SOC states with different topological charge numbers. e) Phase shifts for x- and y-polarized incidence of the selected, optimal meta-atoms used in the design of the SOC metadevice with switchable topological charge number.
Figure 5. BAM for engineering the SOC with tunable topological charge number. a) Schematic principle of the broadband achromatic SOC metadevice. It converts different broadband CP beams into achromatic-focused OV beams with distinct, spin-controlled topological charge numbers on the predesigned focal plane. b,d) Simulated intensity distribution profiles along the x-z cross sections for the SOC states |σ = +1, l = 1⟩ and |σ = −1, l = 2⟩, respectively. c,e) The corresponding focal-plane intensity profiles for the different SOC states. The insets show the spin-controlled spiral phase distributions with tunable topological charge number for the achromatic-focused OV. f) The broadband efficiency and focal-length shifts over the designed bandwidth. | 5,556.6 | 2024-02-13T00:00:00.000 | [
"Physics",
"Engineering"
] |
Strong quantum nonlocality and unextendibility without entanglement in N-partite systems with odd N
A set of orthogonal product states is strongly nonlocal if it is locally irreducible in every bipartition, which exhibits the phenomenon of strong quantum nonlocality without entanglement [Phys. Rev. Lett. 122, 040403 (2019)]. Although this phenomenon has been shown in any three-, four-, and five-partite system, the existence of strongly nonlocal orthogonal product sets in general N-partite systems remains unknown. In this paper, by using a general decomposition of the N-dimensional hypercube, we present strongly nonlocal orthogonal product sets in N-partite systems for all odd N ≥ 3. Based on this decomposition, we give explicit constructions of unextendible product bases in N-partite systems for odd N ≥ 3. Furthermore, we apply our results to quantum secret sharing, uncompletable product bases, and PPT entangled states.
Introduction
Quantum nonlocality is one of the most fundamental properties of the quantum world. Entangled states show Bell nonlocality by violating Bell-type inequalities [1,2]. However, besides Bell-type nonlocality, there is a different kind of nonlocality which arises from local indistinguishability. A set of orthogonal states is locally indistinguishable if it is impossible to distinguish the states under local operations and classical communication (LOCC). Bennett et al. showed the phenomenon of quantum nonlocality without entanglement by presenting a locally indistinguishable orthogonal product basis (OPB) in C^3 ⊗ C^3 [3]. Since then, quantum nonlocality based on local indistinguishability has received much attention.
Recently, a stronger version of local indistinguishability, local irreducibility, was introduced by Halder et al. [33]. A set of orthogonal states is locally irreducible if it is not possible to eliminate one or more states from the set by orthogonality-preserving local measurements. Moreover, a set of orthogonal states is strongly nonlocal if it is locally irreducible in every bipartition. They also showed the phenomenon of strong quantum nonlocality without entanglement by presenting two strongly nonlocal OPBs, in C^3 ⊗ C^3 ⊗ C^3 and C^4 ⊗ C^4 ⊗ C^4, respectively. Strongly nonlocal orthogonal product sets (OPSs) and orthogonal entangled sets (OESs) were subsequently widely investigated [31,34-44]. However, the phenomenon of strong quantum nonlocality without entanglement has so far been limited to three-, four-, and five-partite systems [45]. It is difficult to show this phenomenon in N-partite systems because the main construction of strongly nonlocal OPSs relies on the decomposition of N-dimensional hypercubes [45], and when N is large the decomposition becomes more complex. In this paper, we give a general decomposition of the N-dimensional hypercube for odd N ≥ 3 and construct strongly nonlocal OPSs in N-partite systems for odd N ≥ 3.
An unextendible product basis (UPB) is a set of orthonormal product states whose complementary space contains no product state [46]. UPBs can be used to construct bound entangled states [7,46], Bell-type inequalities without quantum violation [47,48], and fermionic systems [49]. UPBs are also connected to quantum nonlocality and strong quantum nonlocality [8,37,38]. By using tile structures, Shi et al. gave explicit constructions of UPBs in C^m ⊗ C^n [50]. Then, based on the decomposition of three- and four-dimensional hypercubes, the authors of [37,51] showed some UPBs in three- and four-partite systems. In this paper, based on the decomposition of N-dimensional hypercubes for odd N ≥ 3, we give explicit constructions of UPBs in N-partite systems for odd N ≥ 3.
Here we briefly outline the underlying motivations of our work. Strong quantum nonlocality can be used for quantum secret sharing. Suppose that information is encoded into a strongly nonlocal OPS in an N-partite system and sent to N players, where the N players can only communicate classically and perform orthogonality-preserving local measurements. Then the original information cannot be perfectly recovered, even if k (k < N) players collude with each other. Thus it is important to construct strongly nonlocal OPSs in N-partite systems, which is the first motivation of this work. An OPS is uncompletable if it cannot be extended to a full OPB. In 2003, DiVincenzo et al. proposed an open question: does there exist a UPB which is uncompletable in every bipartition [7]? Recently, Shi et al. showed that such a UPB exists in arbitrary three- and four-partite systems [52]. However, the existence of such a UPB in arbitrary N-partite systems remains unknown, because there are few explicit constructions of UPBs in N-partite systems. We show that our UPBs will hopefully solve this problem for odd N ≥ 3. This is the second motivation of this work. A mixed state is a PPT state if it is positive under partial transpose (PPT). PPT entangled states correspond to bound entangled states, from which no pure entanglement can be distilled [53]. The normalized projector onto the orthogonal complement of the subspace spanned by a UPB is a PPT entangled state [46]. If a mixed state is a PPT entangled state across every bipartition, then it shows that the set of states separable across every bipartition is a proper subset of the set of states PPT across every bipartition [54]. We will show that our UPBs in N-partite systems with N = 5 can be used to construct mixed states which are PPT entangled states in every bipartition. This is the third motivation of this work.
This paper is organized as follows. In Sec. 2, we introduce strong quantum nonlocality and UPBs. In Sec. 3, we give a decomposition of the hypercube Z_3^N and construct an OPS from this decomposition. We also give a construction of a strongly nonlocal OPS from the decomposition of the hypercube. In Sec. 4, we introduce another main result, a construction of UPBs in (C^3)^{⊗N}. More general results for N-partite systems with odd N ≥ 3 can be found in Sec. 5. In Sec. 6, we give some applications of our results. Finally, we conclude in Sec. 7.
Preliminaries
In this section, we introduce the concepts of strong quantum nonlocality and UPBs. To simplify the notation, we do not normalize states and operators. A positive operator-valued measure (POVM) is a set of positive semidefinite operators {E_m = M_m^† M_m} acting on a Hilbert space H with the property that Σ_m M_m^† M_m = I_H, where I_H is the identity operator on H. If the set is a POVM, we call each E_m a POVM element. We focus on this type of measurement in this paper, and we regard a POVM measurement as trivial if all the POVM elements E_m are proportional to the identity operator.
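The two notions just introduced, a valid POVM and a trivial measurement, are easy to test numerically; the small sketch below does so for a toy two-outcome qubit measurement. The example operators are illustrative only.

import numpy as np

def is_povm(elements, tol=1e-10):
    # Positive semidefinite elements summing to the identity.
    dim = elements[0].shape[0]
    psd = all(np.linalg.eigvalsh((E + E.conj().T) / 2).min() >= -tol for E in elements)
    return psd and np.allclose(sum(elements), np.eye(dim), atol=tol)

def is_trivial(elements, tol=1e-10):
    # A measurement is trivial if every element is proportional to the identity.
    return all(np.allclose(E, E[0, 0] * np.eye(E.shape[0]), atol=tol) for E in elements)

# Toy two-outcome qubit POVM: E0 = 0.3*I + 0.2*Z, E1 = I - E0 (not trivial).
Z = np.diag([1.0, -1.0])
E0 = 0.3 * np.eye(2) + 0.2 * Z
E1 = np.eye(2) - E0
print(is_povm([E0, E1]), is_trivial([E0, E1]))   # True False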
In particular, we consider a specific local measurement called an orthogonality-preserving local measurement (OPLM). It is performed to distinguish multipartite orthogonal states, and it is defined by requiring that the post-measurement states remain orthogonal.
Definition 1 [33] A set of multipartite orthogonal states is locally irreducible if it is not possible to eliminate one or more states from the set by OPLMs.Furthermore, a set of multipartite orthogonal states is strongly nonlocal if it is locally irreducible for every bipartition of the subsystems.
For example, the Bell basis {|Ψ_i⟩}_{i=1}^4 is locally irreducible, where |Ψ_1⟩ = |00⟩ + |11⟩, |Ψ_2⟩ = |00⟩ − |11⟩, |Ψ_3⟩ = |01⟩ + |10⟩, and |Ψ_4⟩ = |01⟩ − |10⟩ (unnormalized). If A_1 performs an OPLM {E = M^† M}, where E can be written as a 2 × 2 matrix (a_{i,j})_{i,j∈Z_2} in the basis {|0⟩, |1⟩}, then the orthogonality-preserving conditions give a_{0,1} = a_{1,0} = 0 and a_{0,0} = a_{1,1}. It means that an arbitrary OPLM performed by A_1 is trivial. By the symmetry of the Bell basis, an arbitrary OPLM performed by A_2 is also trivial. Thus, the Bell basis {|Ψ_i⟩}_{i=1}^4 is locally irreducible. There is a simple method for showing strong quantum nonlocality.
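The triviality claim for the Bell basis can be verified directly: imposing ⟨Ψ_i|(E ⊗ I)|Ψ_j⟩ = 0 for all i ≠ j on a generic Hermitian E forces its off-diagonal entries to vanish and its diagonal entries to coincide. The symbolic sketch below carries out this check; it is an illustration of the argument, not the proof used later in the paper.

import sympy as sp

a, d, br, bi = sp.symbols('a d b_r b_i', real=True)
b = br + sp.I * bi
E = sp.Matrix([[a, b], [sp.conjugate(b), d]])   # generic Hermitian POVM element of A_1
I2 = sp.eye(2)

# Build E tensor I on the two-qubit space.
EI = sp.zeros(4, 4)
for i in range(2):
    for j in range(2):
        EI[2*i:2*i+2, 2*j:2*j+2] = E[i, j] * I2

# Unnormalized Bell states |00>+|11>, |00>-|11>, |01>+|10>, |01>-|10>.
bell = [sp.Matrix(v) for v in ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]

constraints = [(bell[i].H * EI * bell[j])[0] for i in range(4) for j in range(4) if i != j]
print(sp.solve(constraints, [br, bi, d], dict=True))   # [{b_r: 0, b_i: 0, d: a}], i.e. E is proportional to I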
Lemma 1 [38] Let S := {|ψ_j⟩} be a set of orthogonal states in a multipartite system ⊗_{i=1}^N H_{A_i}. For each i = 1, 2, ..., N, define B_i = {A_1 A_2 ... A_N} \ {A_i} to be the joint party of all but the i-th party. Then the set S is strongly nonlocal if the following condition holds for every 1 ≤ i ≤ N: if party B_i performs any OPLM, then the OPLM is trivial.
Next, we introduce the concept of UPBs.
Definition 2 [46] A set of orthogonal product states {|ψ i ⟩} is an unextendible product basis (UPB) if the orthogonal complement of Span{|ψ i ⟩} has non-zero dimension and contains no product state.
Let Z_d := {0, 1, 2, ..., d − 1}, and let Z_d^N := Z_d × Z_d × ... × Z_d (N times) denote the N-dimensional hypercube. We assume that {|j⟩}_{j∈Z_d} is the computational basis of C^d; then the computational basis of (C^d)^{⊗N} is {|j_1⟩|j_2⟩...|j_N⟩ : (j_1, j_2, ..., j_N) ∈ Z_d^N}. In the space Z_d^N, when there is no ambiguity, we denote the vector (j_1, j_2, ..., j_N), together with the singleton {(j_1, j_2, ..., j_N)}, simply as j_1 j_2 ... j_N. There are d^N vectors in Z_d^N. For a general product state |ψ⟩ ∈ (C^d)^{⊗N}, there exists a unique subset E ⊆ Z_d^N, its support, and we denote by |E| its cardinality. The authors of Ref. [51] decomposed the 3-dimensional hypercube Z_3^3 into subcubes and, using these subcubes, constructed an OPB ⋃_{i=1}^9 B_i in (C^3)^{⊗3}. The authors of Ref. [45] showed that the OPS ⋃_{i=1}^7 |B_i⟩ ∪ |B_9⟩ is strongly nonlocal, which corresponds to the outermost layer of Fig. 1. They also gave a similar decomposition of the 5-dimensional hypercube and showed that the OPS from the outermost layer is strongly nonlocal [45].
Before giving the rigorous mathematical construction of the generalized case, one can look back at Figure 1: what makes this decomposition special? There are plenty of different ways to decompose a cube, but not all of them provide an OPS with the nonlocality or UPB property. The crucial feature of the construction in Figure 1 is that we cannot obtain a subcube other than the whole cube itself by combining some of the subcubes in this partition (a toy check of this property is sketched below). We will explain this via the high-dimensional construction and the generalized theorems.
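As a toy illustration of the two checks involved, the sketch below verifies that a family of subcubes (each given as a tuple of per-coordinate subsets) partitions the hypercube, and tests whether the union of any two cells is again a subcube. The example partition of Z_3^2 is deliberately simple and is not the paper's decomposition; it even fails the "no union is a subcube" property for one pair of cells, which is exactly the kind of degeneracy the construction of Figure 1 avoids.

from itertools import product, combinations

def expand(subcube):
    # A subcube is a tuple of per-coordinate subsets; expand it to its set of vectors.
    return set(product(*subcube))

def is_partition(subcubes, N, d=3):
    # The subcubes must be pairwise disjoint and cover all of Z_d^N.
    cells = [expand(c) for c in subcubes]
    union = set().union(*cells)
    return sum(len(c) for c in cells) == len(union) and union == set(product(range(d), repeat=N))

def is_subcube(vectors, N):
    # A set of vectors is a subcube iff it equals the product of its coordinate projections.
    projections = [{v[i] for v in vectors} for i in range(N)]
    return vectors == set(product(*projections))

# Toy partition of Z_3^2 (illustrative only).
toy = [({0}, {0, 1, 2}), ({1, 2}, {0}), ({1, 2}, {1, 2})]
print(is_partition(toy, N=2))                                   # True
for pair in combinations(toy, 2):
    print(is_subcube(expand(pair[0]) | expand(pair[1]), N=2))   # False, False, True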
Next, we generalize the decomposition of Fig. 1 to the N-dimensional hypercube Z_3^N for any odd N ≥ 3.
Construction. Let B
Next, assume that K ⊆ {A_1, A_2, ..., A_N} and |K| is even. In particular, |K| = 0 means that K = ∅. For any fixed K, we construct two subcubes C_K and D_K, whose components are defined recursively as in Table 1.
We denote by Λ the collection of all subsets K ⊆ {A_1, A_2, ..., A_N} of even size; then |Λ| = 2^{N−1}. We also denote B := {B_0} ∪ {P_K : K ∈ Λ, P ∈ {C, D}}, where |B| = 2^N + 1. To show that the set B of 2^N + 1 subcubes is a decomposition of Z_3^N, we need to establish some important properties of this set. These properties are also useful for the comprehension and further discussion of the nonlocal OPS and the UPB. The proofs of the following two lemmas can be found in Appendices B and C.
Lemma 2
The set B of 2^N + 1 subcubes is invariant under cyclic permutations of the N parties.
By the cyclic property, we always assume that
In other words, if we regard the set {0, 2}^N as the "vectors at the corners", the lemma above states that these corner vectors are distributed evenly in our construction.
For better understanding, we give the decomposition of Z_3^3 in Table 2. We also give the decomposition of Z_3^5 in Table 3 in Appendix A.
Remark 3
We can check the behaviour of Lemma 2 and Lemma 3 in the case of Z_3^3. From Table 2, one can see that the construction contains all of ξη0, η0ξ, 0ξη and ηξ2, ξ2η, 2ηξ in the chart, which satisfies Lemma 2. And from Figure 1, we can see that each B_i occupies a corner vector of the cube, which explains Lemma 3. Roughly speaking, we can think of this abstract construction as a delicate partition, in which each subcube of the partition grows from a corner and the whole pattern is symmetric under a certain rotation.
Next we associate the partition with the tensor product space. For each subcube, we construct an OPS in (C^3)^{⊗N}. More specifically, each subcube P_K gives rise to an OPS of size 2^{|K|} in (C^3)^{⊗N}, built from tensor products of the corresponding single-party states. Collecting these sets over the whole decomposition and using Lemma 3, we can construct an OPB in (C^3)^{⊗N}.
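The generic mechanism behind this association can be made concrete: taking, for each coordinate, the discrete-Fourier-transform vectors supported on that coordinate's subset and tensoring them together yields mutually orthogonal product states whose joint support is exactly the subcube. The sketch below demonstrates this on a toy subcube of Z_3^3; the specific roots of unity and subsets used in the paper's Table 1 are not reproduced here, so this is an illustration rather than the paper's exact construction.

import numpy as np
from itertools import product

def fourier_vectors(subset, d=3):
    # len(subset) mutually orthogonal vectors in C^d supported on `subset` (a DFT over the subset).
    s = len(subset)
    vecs = []
    for k in range(s):
        v = np.zeros(d, dtype=complex)
        for i, m in enumerate(subset):
            v[m] = np.exp(2j * np.pi * i * k / s)
        vecs.append(v)
    return vecs

def subcube_ops(subcube, d=3):
    # Product states spanning the subcube: one tensor factor per coordinate subset.
    factor_lists = [fourier_vectors(S, d) for S in subcube]
    states = []
    for choice in product(*factor_lists):
        psi = np.array([1.0 + 0j])
        for v in choice:
            psi = np.kron(psi, v)
        states.append(psi)
    return states

# Toy subcube {0,1} x {1,2} x {0} of Z_3^3: four mutually orthogonal product states.
states = subcube_ops([(0, 1), (1, 2), (0,)])
gram = np.array([[abs(np.vdot(u, v)) for v in states] for u in states])
print(len(states), np.allclose(gram - np.diag(np.diag(gram)), 0))   # 4 True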
Lemma 4 For odd N, the set of product states associated with the decomposition B is an OPB in (C^3)^{⊗N}. Then we are ready to introduce the first main result of this paper. The proof of the theorem is given in Appendix D. It mainly uses two important lemmas from Ref. [38], which allow us to analyze the POVM matrix directly from the construction of the OPS. Comparing Theorem 1 and Lemma 4, we deduce that the OPB in Lemma 4 becomes a strongly nonlocal OPS after removing the state |B_0⟩. A more general result can be found in Sec. 5.
UPB in (C^3)^{⊗N}
In this section, we introduce another main result of this paper: we construct a UPB in (C^3)^{⊗N} for odd N. First, we consider the simple case N = 3. The authors of Ref. [51] showed that ⋃_{i=1}^6 (B_i \ {|ψ_i(0, 0)⟩}) ∪ {|S⟩}, given by Eqs. (6) and (17), is a UPB in (C^3)^{⊗3}. Inspired by this idea, we next show a UPB in (C^3)^{⊗N}.
For any subcube P_K = P in the decomposition, we define the set given in Eq. (18) and the associated product state |P_K^+⟩. We also define the "stopper" state |S⟩. Now we can give a UPB in (C^3)^{⊗N}.
Theorem 2 In (C^3)^{⊗N}, where N is odd, the given set of orthogonal product states is a UPB. The proof proceeds by contradiction. We sketch the argument for the space Z_3^3, whose partition is given in Figure 1 and Table 2. The proof is arranged as follows: (i) If |ψ⟩ is a product state, its support set E ⊆ Z_3^3 must be a subcube. (ii) If |ψ⟩ is orthogonal to U, then E must be a union of some subcubes of Table 2.
(iii) It is easy to check from Figure 1 that (i) and (ii) cannot both be satisfied unless E = Z_3^3. (iv) Show that |ψ⟩ is proportional to |S⟩, which means that all coefficients in the natural linear expansion of |ψ⟩ have the same value.
The detailed proof can be found in Appendix E. We note that the general proofs of (iii) and (iv) are delicate because of the complexity of the partition, and step (iv) shows that the stopper state |S⟩ is indispensable. When d is larger than 3, the construction needs to be generalized. A layer-by-layer generalization of this UPB can be found in Sec. 5.
The construction for general systems
We have shown strongly nonlocal OPSs and UPBs in (C^3)^{⊗N} for odd N. In this section, we give general results in ⊗_{i=1}^N C^{d_i}. First, we give a decomposition of the corresponding hypercube; then we exhibit a strongly nonlocal OPS in ⊗_{i=1}^N C^{d_i}. Without loss of generality, we always assume that 3 ≤ d_1 ≤ d_2 ≤ ... ≤ d_N.
The decomposition of a general cube
We define δ = ⌊(d_1 − 1)/2⌋ layers L_1, L_2, ..., L_δ, where each layer is defined as the difference set of two nested subcubes. We also denote the central block by B_0. Then it is easy to see that the layers L_1, ..., L_δ together with B_0 decompose the whole hypercube. Next, we give a further decomposition of each layer L_k. The decomposition of L_k is similar to the decomposition of Z_3^N; see Eq. (11).
Therefore, we have the following lemma.
Strongly nonlocal OPS in ⊗_{i=1}^N C^{d_i}
where |η_k⟩_{A_j} and |ξ_k⟩_{A_j} are Fourier-basis states spanned by the |m⟩_{A_j}, with κ_j = e^{2π√−1/(d_j − 2k + 1)} a root of unity. We also introduce states |β_j⟩_{A_j}, which are Fourier-basis states spanned by the |m⟩_{A_j} with τ_j = e^{2π√−1/(d_j − 2δ)} another root of unity. Now, we have the following lemma.
We can also give a strongly nonlocal OPS in ⊗_{i=1}^N C^{d_i} (Theorem 3). The proof of Theorem 3 is similar to that of Theorem 1.
UPB in ⊗_{i=1}^N C^{d_i}
Also, for each element P_{k,K} (and also B_0) of the decomposition, we define an index set and the associated product state |P_{k,K}^+⟩ (respectively |B_0^+⟩). We also define a stopper state. Then we are ready to generalize Theorem 2.
is odd (which corresponds to the center vector of the cube), and hence the term |B_0⟩ \ {|B_0^+⟩} does not appear in the result of Section 4. The proof is similar to that of Theorem 2. A subspace of ⊗_{i=1}^N C^{d_i} is a completely entangled subspace if it contains no non-zero product state, and the maximum dimension of a completely entangled subspace is known. The orthogonal complement of the subspace spanned by a UPB is a completely entangled subspace. Hence our UPB (for odd N ≥ 3) can be used to construct a completely entangled subspace of dimension 2^N ⌊(d_1 − 1)/2⌋, which is smaller than this maximum dimension.
Applications
In this section, we consider some applications of our results in quantum secret sharing, uncompletable product bases, and PPT entangled states.
Quantum secret sharing.
It is known that non-orthogonal states cannot be perfectly distinguished [56]. Therefore, to perfectly distinguish a set of multipartite orthogonal states, it is necessary to use OPLMs. Suppose that information is encoded into a strongly nonlocal orthogonal product set (OPS) in an N-partite system and then sent to N players. If the N players want to recover the original information, they must perform OPLMs. Note that it is impossible to eliminate one or more states from the strongly nonlocal OPS by OPLMs for every bipartition of the subsystems. If k (k < N) players collude, that is, perform joint OPLMs, the original information still cannot be perfectly recovered. To perfectly recover the original information, the N players can perform a global OPLM.
Uncompletable product bases in every bipartition.
An OPS in ⊗_{i=1}^N H_i is an uncompletable product basis (UCPB) if it cannot be extended to an OPB in ⊗_{i=1}^N H_i [7]. It is known that a UPB must be a UCPB, while the converse is not true in general. In 2003, DiVincenzo et al. proposed the open question of whether there exists a UPB which is a UCPB for every bipartition of the subsystems. Recently, Shi et al. answered this open question by exhibiting such a UPB in arbitrary three- and four-partite systems [52].
Since U is invariant under cyclic permutations of the N parties (Lemma 2), there is only one kind of bipartition when N = 3, namely A_1|A_2A_3. When N = 5, there are in total three kinds of bipartitions up to this symmetry. As the number of parties N grows, there are many more cases.
Despite the complexity of the bipartition types, we believe that our construction U gives a UCPB for every bipartition of the subsystems for any odd N ≥ 3. For every bipartition of the subsystems, since U is a UPB, it suffices to prove that the orthogonal complement H_U^⊥ has no basis of orthogonal product states in that bipartition. By the definition of U in Theorem 2, we have Dim H_U^⊥ = 2^N. In other words, H_U^⊥ consists of the linear combinations of |B_0^+⟩ and the |P_K^+⟩ (2^N + 1 vectors in total) that are perpendicular to |S⟩. Thus the problem becomes to prove that there is no basis of orthogonal product states in this space for every bipartition, with the help of the properties of U.
We present the conjecture and a possible proof procedure as follows.
Conjecture 4 For any odd N ≥ 3, the construction U defined in Theorem 2 is a UCPB for every bipartition of the subsystems.
Approach. For any fixed bipartition of the N parties, we propose a possible proof idea with the following steps. (i) Suppose d = 3 and that there is an orthogonal product basis {|ψ_i⟩}_{i=1}^{2^N} of H_U^⊥ under the bipartition. Then at least one |ψ_j⟩ contains |B_0^+⟩ with a non-zero coefficient in its expansion in |B_0^+⟩ and the |P_K^+⟩. Otherwise, for the complete orthogonal basis U ∪ {|ψ_i⟩}_{i=1}^{2^N}, the state |B_0^+⟩ would be perpendicular to every vector of this basis except |S⟩, which implies |B_0^+⟩ ∝ |S⟩, a contradiction.
(ii) Consider the |ψ_j⟩ containing |B_0^+⟩. Similarly to the proof for the UPB, we expect to be able to prove that the support set of |ψ_j⟩ must be Z_3^N.
(iii) Because |ψ_j⟩ should be a product state under the given bipartition, and given that the support set of |ψ_j⟩ is Z_3^N, we expect to be able to prove that all the coefficients of |ψ_j⟩ (in its expansion in |B_0^+⟩ and the |P_K^+⟩) are equal. This implies |ψ_j⟩ ∝ |S⟩, which contradicts ⟨S|ψ_j⟩ = 0.
(iv) Finally, similar to the proof in Section 5, we can generalize the result to larger d by recursion.
Although we can apply the above approach to check the result for small numbers of parties such as N = 5, it is much more challenging to establish the general result because of the large number of bipartition types and the different shapes appearing in the cube-partition construction.
PPT entangled states in every bipartition.
A mixed state is called a PPT state if it is positive under partial transposition (PPT). PPT entangled states correspond to bound entangled states, from which no pure entanglement can be distilled [53]. UPBs can be used to construct PPT entangled states: the normalized projector ρ_{H_U^⊥} onto the orthogonal complement H_U^⊥ of the subspace spanned by a UPB U is a PPT entangled state [46]. We now show that, for our normalized UPB U, the state ρ_{H_U^⊥} is in fact a PPT entangled state in every bipartition. Let S be a proper subset of {A_1, A_2, ..., A_N}; under the partial transpose on the parties in S, the identity I is invariant and U is transformed into another OPS U′. It follows that the partial transpose of ρ_{H_U^⊥} is the mixed state ρ_{H_{U′}^⊥}, which must be positive. Thus ρ_{H_U^⊥} is a PPT state in every bipartition. Furthermore, from the above discussion, no biproduct state lies in H_U^⊥ for any bipartition, and this implies that ρ_{H_U^⊥} is a PPT entangled state in every bipartition. Note that the existence of such states shows that the set of mixed states separable in every bipartition is a proper subset of the set of mixed states PPT in every bipartition [54].
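The mechanism can be checked numerically in the simplest bipartite setting. The sketch below takes the well-known Tiles UPB in C^3 ⊗ C^3 from Ref. [46] (not the N-partite UPB constructed in this paper), forms the normalized projector onto the orthogonal complement of its span, and verifies that the resulting state is PPT; its entanglement then follows from the fact that its range contains no product state.

import numpy as np

def ket(*amps):
    v = np.array(amps, dtype=complex)
    return v / np.linalg.norm(v)

# Tiles UPB in C^3 x C^3 (Bennett et al. [46]); np.kron(a, b) encodes |a>|b>.
e = np.eye(3)
tiles = [
    np.kron(e[0], ket(1, -1, 0)),
    np.kron(e[2], ket(0, 1, -1)),
    np.kron(ket(1, -1, 0), e[2]),
    np.kron(ket(0, 1, -1), e[0]),
    np.kron(ket(1, 1, 1), ket(1, 1, 1)),
]
P = sum(np.outer(v, v.conj()) for v in tiles)      # projector onto the span of the UPB
rho = (np.eye(9) - P) / (9 - len(tiles))           # normalized projector onto the complement

# Partial transpose on the second subsystem and PPT check.
rho_pt = rho.reshape(3, 3, 3, 3).transpose(0, 3, 2, 1).reshape(9, 9)
print(np.isclose(np.trace(rho).real, 1.0))             # valid state
print(np.linalg.eigvalsh(rho_pt).min() >= -1e-12)      # PPT in the bipartition A_1|A_2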
Conclusion and Discussion
In this paper, based on the decomposition of N-dimensional hypercubes for odd N ≥ 3, we constructed strongly nonlocal orthogonal product sets and unextendible product bases in ⊗_{i=1}^N C^{d_i}. As applications, we showed that our strongly nonlocal orthogonal product sets can be used for quantum secret sharing, and that our UPBs may be used to construct uncompletable product bases in every bipartition as well as PPT entangled states in every bipartition.
During the reviewing process of this paper, strongly nonlocal orthogonal product sets in N-partite systems with even N were found in [57]. Combined with our results, this shows that the phenomenon of strong quantum nonlocality without entanglement exists in arbitrary N-partite systems. Some interesting problems remain. First, can one show that our unextendible product bases are strongly nonlocal for N ≥ 5? Note that, for N = 3, strongly nonlocal unextendible product bases have been shown in [37]. What is the minimum size of a strongly nonlocal orthogonal product set? And how can our Conjecture 4 be proved?
where k_1 < k_2 < ... < k_{2i}. By Eq. (9) and Table 1, we know the components of P_K, and thus {η} and {ξ} appear alternately in P_K. There are four cases. 1. If A_1 ∉ K and A_N ∉ K, by Eq. (9) and Table 1, we obtain the corresponding components. 2. If A_1 ∈ K and A_N ∉ K, by Eq. (9) and Table 1, we obtain the corresponding components. 3. If A_1 ∉ K and A_N ∈ K, the analogous expressions follow from Eq. (9) and Table 1. 4. If A_1 ∈ K and A_N ∈ K, the analogous expressions follow from Eq. (9) and Table 1, after a cyclic permutation of the parties {A_1, A_2, ..., A_N}. Then, to prove that B is invariant under cyclic permutations, it suffices to prove that P′_{K′} ∈ B, where K′ is defined as the image of K under the permutation. By the cyclic property of P_K in Fig. 2, the components of P′_{K′} satisfy Table 1. Thus the set B of 2^N + 1 subcubes is invariant under cyclic permutations of the N parties.
C The proof of Lemma 3
Proof.
First, we need to show that B contains 3^N vectors. For each subcube P_K = P, we count its vectors. Next, for any vector (j_1, j_2, ..., j_N) ∈ Z_3^N, there are two possible cases.
Without loss of generality, we assume that j k = 0. Then we can find P
and determine whether
In this way, we can find the subcube P_K containing the given vector. Thus the 2^N + 1 subcubes of B form a decomposition of Z_3^N. From the above proof, we know that each P_K contains exactly one vector in {0, 2}^N.
D The proof of Theorem 1
Before proving the theorem, we need to introduce some notation and two useful lemmas given in [38]. Given an n-dimensional Hilbert space H_n, we can take an orthonormal basis {|0⟩, |1⟩, ..., |n − 1⟩}. Then consider an operator E on H_n. It has a matrix representation in this basis, which we also denote by E. When there is no ambiguity, we do not distinguish between the operator and its matrix. In this notation, _S E_T is the sub-matrix of E with row coordinates S and column coordinates T. In particular, when S = T, we abbreviate E_S := _S E_S. Moreover, we say that an orthogonal set {|ψ_i⟩}_{i∈Z_s} is spanned by S ⊆ {|0⟩, |1⟩, ..., |n − 1⟩} if every |ψ_i⟩ is a linear combination of states from S.

Lemma 7 (Block Zeros Lemma [38]) Let the n × n matrix E = (a_{i,j})_{i,j∈Z_n} be the matrix representation of an operator E = M^† M in the basis B := {|0⟩, |1⟩, ..., |n − 1⟩}. Given two nonempty disjoint subsets S and T of B, assume that {|ψ_i⟩}_{i=0}^{s−1} and {|ϕ_j⟩}_{j=0}^{t−1} are two orthogonal sets spanned by S and T, respectively, where s = |S| and t = |T|. If ⟨ψ_i|E|ϕ_j⟩ = 0 for all i ∈ Z_s and j ∈ Z_t (we call these the zero conditions), then _S E_T = 0 and _T E_S = 0.

Lemma 8 (Block Trivial Lemma [38]) Let the n × n matrix E = (a_{i,j})_{i,j∈Z_n} be the matrix representation of an operator E = M^† M in the basis B := {|0⟩, |1⟩, ..., |n − 1⟩}. Given a nonempty subset S := {|u_0⟩, |u_1⟩, ..., |u_{s−1}⟩} of B, let {|ψ_j⟩}_{j=0}^{s−1} be an orthogonal set spanned by S. Assume that ⟨ψ_i|E|ψ_j⟩ = 0 for all i ≠ j ∈ Z_s. If there exists a state |u_t⟩ ∈ S such that _{{|u_t⟩}} E_{S\{|u_t⟩}} = 0 and ⟨u_t|ψ_j⟩ ≠ 0 for all j ∈ Z_s, then E_S ∝ I_S. (Note that if we take {|ψ_j⟩}_{j=0}^{s−1} to be the Fourier basis, i.e., |ψ_j⟩ = Σ_{i=0}^{s−1} ω_s^{ij} |u_i⟩ for j ∈ Z_s, then automatically ⟨u_t|ψ_j⟩ ≠ 0 for all j ∈ Z_s.)
Proof.
By Lemmas 1 and 2, we only need to show that A_2, ..., A_N together can only perform a trivial OPLM. Assume that A_2, ..., A_N come together to perform an OPLM {E = M^† M}, where E = (a_{i_2 i_3 ... i_N, j_2 j_3 ... j_N}).
For any vector Then for two different |ψ⟩, |ϕ⟩ ∈ O, we have
(11). For the set ⋃_{K∈Λ} ⋃_{P∈{C,D}} {P_K} given in Eq. (11), we can obtain a decomposition ⋃_{K∈Λ} ⋃_{P∈{C,D}} {P_{k,K}} of L_k by the following replacements:
Table 1 .
The cyclic property of P_K for P ∈ {C, D}. By the four cases and Eq. (33), we know how P_K transforms through Table 1 for P ∈ {C, D}. It means that P_K forms a cycle for P ∈ {C, D}, where P_K^{(A_i)} transforms through Table 1 for 1 ≤ i ≤ N − 1.
Table 1 .
See Fig. 2 for this phenomenon. For any P_K, by Table 1 we can determine that P = {ξ}_{A_{k+1}} and A_{k+1} ∈ K. By Fig. 2, we can repeat this process N times through Table 1, and in this way determine P_K. | 6,972 | 2022-03-28T00:00:00.000 | [
"Mathematics",
"Physics"
] |
New Multicomponent Reaction for the Direct Synthesis of β-Aryl-γ-nitroesters Promoted by Hydrotalcite-Derived Mixed Oxides as Heterogeneous Catalyst
A new approach based on combined multicomponent/domino reactions for the synthesis of γ-nitroesters, promoted by mixed aluminium-magnesium oxides derived from a hydrotalcite-like material, was developed. Different γ-nitroesters were synthesized in 15-95% yield using Meldrum's acid, aromatic aldehydes, nitromethane, and different alcohols as reagents and solvents. The γ-aminobutyric acid derivatives Phenibut and Baclofen were prepared in 63% and 61% overall yield, respectively, through a two-step synthetic strategy. A mechanistic pathway was proposed based on gas chromatography-mass spectrometry (GC-MS) and electrospray ionization mass spectrometry (ESI-MS) experiments.
Introduction
Gamma-aminobutyric acid (GABA) and L-glutamic acid are the two major neurotransmitters that regulate neuronal activity in the brain. While L-glutamic acid is a neurotransmitter that induces an excitatory effect, GABA acts as the major inhibitory neurotransmitter. 1 The widespread presence of GABA and L-glutamic acid in the brain is related to various functions of the central nervous system (CNS), including chemical abuse disorders, making them two of the most promising targets for the development of neuropsychiatric drugs. 2 Due to their fundamental role in neurotransmission, these systems are targeted by a range of commercially available drugs, such as Phenibut, Baclofen, Gabapentin, and Pregabalin. 3 Multicomponent reactions (MCRs) have been used as a versatile synthetic method for the preparation of complex molecules from available starting materials via a single pathway. 4-10 In addition, the design and development of environment-friendly catalysts have been the subject of intense research. 11-15 The hydrotalcite-like (HT-like) compounds exhibit dual basic/acid properties 16,17 and can be useful as bifunctional catalysts in different organic transformations. 18,19 As part of our ongoing efforts in the field of MCRs, 20-24 herein we disclose our studies on the development of a new combined multicomponent/domino approach for the direct synthesis of γ-nitroesters (5) from the reaction of Meldrum's acid (1), aromatic aldehydes 2, nitromethane (3), and an alcohol 4 as the solvent, promoted by calcined hydrotalcite-like compounds.
Thus, the development of multicomponent methodologies permitting rapid access to this class of compounds is of great interest. To the best of our knowledge, this tetracomponent synthesis of γ-nitroesters has not been reported in the literature.
Results and Discussion
The multicomponent/domino combined approach

Previous studies from our laboratory demonstrated the ability of HT to promote the direct Michael addition of 1,3-dicarbonyl compounds to nitrostyrenes, resulting in the formation of β-aryl-γ-nitrocarbonyl compounds. We also observed that HT combined with Meldrum's acid resulted directly in the formation of γ-nitroesters. 25 The activity of HT was attributed to the increased basicity of this material upon formation of mixed magnesium and aluminium oxides after thermal treatment, which we call HT [Calc.] . 26 Encouraged by the fact that HT is able to promote Henry reactions, 27 we hypothesized that HT [Calc.] could promote the in situ preparation of nitrostyrenes through a Knoevenagel-type reaction from aldehydes and nitromethane. Once the nitrostyrene is present in the reaction medium, the 1,4-addition of Meldrum's acid takes place, accomplishing a one-pot multicomponent synthesis of γ-nitroesters.
According to this assumption, our approach was based on a new MCR/domino combined reaction of Meldrum's acid (1), aromatic aldehydes 2, nitromethane (3) and an alcohol 4 as the solvent.In this process, the calcined hydrotalcite (HT [Calc.]), used as catalyst, proved to be essential to afford the γ-nitroesters 5 (Scheme 1).
To our satisfaction, the reactions carried out in EtOH afforded the γ-nitroesters 5a-j in reasonable to good yields.The results are shown in Table 1.
The results show that the multicomponent process occurred in a single step promoted by HT [Calc.] and was effective for both electron-withdrawing and electron-donating groups attached to the aromatic ring (Table 1). The behavior of the multicomponent reaction in the presence of ethyl malonate instead of Meldrum's acid was also investigated. In this case, the main product was identified as a γ-nitro-dicarbonylic compound formed in only 30% yield. This result indicates that the combination of Meldrum's acid and HT is essential for the preparation of γ-nitroesters in one step. As only ethyl esters were produced, we speculated that this was due to the use of ethanol as the solvent in the multicomponent process. To prove this assumption, a set of reactions was carried out in the presence of different alcohols, with the goal of producing different alkyl esters. The MCRs were performed in methanol (4b), isopropanol (4c), n-butanol (4d), tert-butanol (4e), benzyl alcohol (4f), allyl alcohol (4g), and propargyl alcohol (4h), as shown in Scheme 1. The results are shown in Table 2.
In all cases, the γ-nitroesters 5m-u were formed in reasonable to good yields. These reactions proved the direct participation of the alcoholic solvent, which also acted as a reagent. The influence of each type of alcohol is not totally clear. For example, low yields of γ-nitroesters were observed in n-BuOH and tert-BuOH (Table 2, entries 5 and 6, respectively). The chemical structures of the γ-nitroesters 5a-u prepared from different aromatic aldehydes are depicted in Figure 1. Speculating about the reaction mechanism, we presumed that the alcoholic solvent is able to promote the opening and transesterification of the Meldrum's acid moiety in the initially formed Michael adduct. 25 The adduct 6a was not isolated during the multicomponent process; thus, we focused our efforts on its isolation and characterization. Assuming that the nitrostyrene 7a is formed during the multicomponent process, we performed two sets of specific reactions using Meldrum's acid and nitrostyrene (Scheme 2).
The first set was performed in the presence of HT [Calc.] in non-alcoholic solvents, because the γ-nitroester 5a is produced in the presence of EtOH. In the second set, an amine base-catalyzed conjugate addition was used with ethanol as the solvent. Under both of these new conditions, we were able to isolate the intermediate 6a in good yields. Unfortunately, attempts to purify 6a by column chromatography were unsuccessful due to degradation of the product on silica.
Thus, the yields of these reactions were estimated by ¹H nuclear magnetic resonance (NMR) spectroscopy of the crude mixture after careful removal of HT [Calc.] from the crude organic mixture. The results are shown in Table 3.
As seen in Table 3, HT [Calc.] was able to promote the Michael addition of Meldrum's acid to the nitrostyrene in different non-protic solvents, such as CH2Cl2, CH3CN, and tetrahydrofuran (THF), to afford compound 6a in good yields (entries 1, 2, and 3, respectively). The use of tertiary amines as basic catalysts also furnished 6a in both non-protic (entry 4) and protic solvents (entries 5-10). It is important to note that even in refluxing EtOH (entries 7 and 9, respectively) no degradation of 6a was observed, revealing that the transformation of 6a into 5a does not occur in basic media containing simple amines. To confirm that this transformation could be carried out independently, 6a was reacted with EtOH in the presence of HT [Calc.] for 24 h under reflux conditions. After this time, HT [Calc.] was filtered off and the volatiles were removed under vacuum. The crude product was purified by column chromatography to afford 5a in 90% yield, confirming the pivotal role of HT [Calc.] in this step (Scheme 3).
We believe that the metal centers of HT [Calc.] could act as weak Lewis acid sites, coordinating to the Lewis basic oxygen of the Meldrum's acid moiety and thereby helping to initiate the transformation of 6a into 5a. This would evidence the dual acid/base character of HT [Calc.] . This idea is supported by the ESI-MS studies and will be discussed later.
A tentative mechanistic pathway by GC-MS studies
Once the formation of nitrostyrene 7a during the course of the reaction was suggested, we decided to investigate the ability of HT to promote the formation of 7a via a Henry reaction, followed by a direct dehydration step. The reaction was performed from aldehyde 2a and nitromethane (3), in the presence of HT [Calc.] and EtOH under reflux, as shown in Scheme 4.
The reaction was carefully monitored by gas chromatography-mass spectrometry (GC-MS) analysis. After 30 min, the chromatogram showed signals corresponding to benzaldehyde (Rt = 4.29 min, m/z 106), nitrostyrene (7a) (Rt = 10.49 min, m/z 149), and nitroaldol 8a (Rt = 11.02 min, m/z 167) (Figure 2a). Analysis of aliquots after 1.0 and 1.5 h of reaction time showed a gradual increase in the formation of 7a, confirming its formation under the conditions of the MCR protocol.
The nitroester 5a was the main product. Because the GC-MS analysis was useful in monitoring this step, we decided to extend the method to follow the course of the MCR process and identify other possible intermediates. A new experiment was started, and the first analysis was made 10 min after all of the components had been mixed. At this time, the chromatogram did not reveal the presence of 7a. Instead, a signal at Rt = 16.08 min (m/z 232) was observed. This signal was characterized as the benzylidene adduct 9a, most likely formed from the Knoevenagel-type condensation of Meldrum's acid (1) and benzaldehyde (Figure 2B). After 1.0 h, the nitrostyrene (7a) at Rt = 10.50 min (m/z 149) and nitroaldol 8a at Rt = 11.20 min (m/z 167) were detected. The desired nitroester 5a was also identified at Rt = 14.65 min (m/z 237), as well as the benzylidene adduct 9a at Rt = 16.14 min (m/z 232) (Figure 2C). Other aliquots were analyzed after 3.0, 5.0, 8.0, and 11.0 h, respectively. From these chromatograms, we observed gradual decreases in the signals of 7a and 9a and an increase in the signal of the γ-nitroester 5a.
After 24 h, the GC-MS analysis revealed that the γ-nitroester 5a was the principal component as well as the almost complete consumption of all of the reagents (Figure 2D).
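The nominal masses implied by these GC-MS assignments can be cross-checked from the molecular formulas of the proposed species. The short script below sketches such a check; the formulas assumed for 7a, 8a, 5a and 9a (β-nitrostyrene, 2-nitro-1-phenylethanol, the ethyl γ-nitroester and the benzylidene Meldrum's adduct, respectively) are our reading of the structures named in the text, not data taken from the original work.

```python
# Cross-check of the nominal (integer) masses quoted in the GC-MS traces.
# Assumed molecular formulas (inferred from the compound names in the text).
NOMINAL = {"C": 12, "H": 1, "N": 14, "O": 16}

def nominal_mass(formula):
    """Nominal mass from a dict such as {'C': 7, 'H': 6, 'O': 1}."""
    return sum(NOMINAL[el] * n for el, n in formula.items())

species = {
    "benzaldehyde (2a)":       {"C": 7,  "H": 6,  "O": 1},              # reported m/z 106
    "nitrostyrene (7a)":       {"C": 8,  "H": 7,  "N": 1, "O": 2},      # reported m/z 149
    "nitroaldol (8a)":         {"C": 8,  "H": 9,  "N": 1, "O": 3},      # reported m/z 167
    "benzylidene adduct (9a)": {"C": 13, "H": 12, "O": 4},              # reported m/z 232
    "ethyl nitroester (5a)":   {"C": 12, "H": 15, "N": 1, "O": 4},      # reported m/z 237
}
for name, formula in species.items():
    print(f"{name}: M = {nominal_mass(formula)}")
# Prints 106, 149, 167, 232 and 237, matching the reported m/z values.
```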
As we postulated that the formation of adduct 6a occurred via the Michael addition of Meldrum's acid to nitrostyrene, we decided to confirm that the benzylidene derivative was also involved in the formation of the γ-nitroesters. Thus, 9a was prepared independently by the condensation of Meldrum's acid and benzaldehyde in the presence of HT [Calc.] in refluxing EtOH. The benzylidene adduct was isolated in a 98% yield. Next, nitromethane was added to the benzylidene in the presence of HT [Calc.] in EtOH as the solvent. After 6 hours of reflux, we obtained the respective γ-nitroester 5a in a 95% yield after purification by column chromatography (Scheme 5).
Based on these results, we postulated a combined multicomponent/domino reaction sequence to explain the formation of the γ-nitroesters 5a-u. However, it seems that two different competitive MCRs can explain the formation of the common intermediate 6a. One of them occurs via Knoevenagel-type condensation of Meldrum's acid and benzaldehyde to afford the benzylidene intermediate 9a. The second one starts with a Knoevenagel-type condensation of nitromethane and benzaldehyde to form the nitrostyrene 7a. The conversion of the common intermediate 6a into the γ-nitroester occurs through a one-pot domino process, with the loss of acetone and CO2 along with an esterification step (Scheme 6).
Investigation of the mechanistic pathway by ESI-MS/MS studies
In this study, we describe the results of the mechanistic investigation of the MCR synthesis of γ-nitroesters by ESI-MS/MS. To accomplish this task, model experiments were performed using Meldrum's acid (1, 1 mmol), benzaldehyde (2a, 1 mmol) and nitromethane (3, 5 mmol) under the conditions shown in Table 1 for the synthesis of nitroesters. The reaction was monitored by removing aliquots (100 µL) every 5 minutes.
The aliquots were diluted in 1.5 mL of the solvent used in the reaction medium (MeOH) and filtered for direct injection into the ion source of the mass spectrometer using a syringe pump at a flow rate of 15 µL min -1 .
The monitoring by ESI-MS/MS showed one major pathway in the multicomponent process. In the first event, the Knoevenagel condensation of benzaldehyde and Meldrum's acid leads to the formation of the benzylidene intermediate 9a of m/z 255 ([M + Na]+) (Figure 3A). After 30 minutes of reaction, the benzylidene acts as a Michael acceptor for the addition of nitromethane (3), affording the Michael adduct 6a of m/z 316 ([M + Na]+) (Figure 3B). The spectrum obtained after 60 minutes in the presence of HT [Calc.] (50 mg) and MeOH (3 mL) under reflux already shows the presence of the nitroester 5a of m/z 246 ([M + Na]+), formed from a domino transformation of 6a into nitroester 5a (Figure 3C).
To prove the statement above, the Michael adduct 6a was submitted to an opening reaction. At t1 = 5 min, it was possible to identify the presence of two species of m/z 214 ([M + Na]+) and m/z 258 ([M + Na]+), corresponding to ketene derivatives 10 and 11, respectively, both as sodiated species (Figure 4A).39
Scheme 5. Preparation of 5a from benzylidene intermediate 9a.
Scheme 6. A rational mechanism for the multicomponent synthesis of γ-nitroesters.
After 20 minutes, the presence of a new species of m/z 290 ([M + Na] + ) was observed, which was identified as the malonic acid methyl half-ester intermediate 12 (Figure 4B).After 1 hour, the main product observed was 5a of m/z 246 ([M + Na] + ), formed through a decarboxylation reaction (Figure 4C).
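The [M + Na]+ values reported in this ESI-MS experiment are internally consistent with the proposed domino sequence (loss of acetone from 6a, then CO2 loss and/or MeOH addition). A hedged check is sketched below; the molecular formulas are our assumptions based on the structures described in the text (with 5a taken as the methyl ester, since MeOH was the solvent here), not values from the original paper.

```python
# Sodiated-ion check for the ESI-MS intermediates (nominal masses, [M + Na]+ = M + 23).
NOMINAL = {"C": 12, "H": 1, "N": 14, "O": 16, "Na": 23}

def m_plus_na(formula):
    return sum(NOMINAL[el] * n for el, n in formula.items()) + NOMINAL["Na"]

assumed = {
    "benzylidene 9a":                  {"C": 13, "H": 12, "O": 4},            # reported m/z 255
    "Michael adduct 6a":               {"C": 14, "H": 15, "N": 1, "O": 6},    # reported m/z 316
    "ketene 11 (6a - acetone)":        {"C": 11, "H": 9,  "N": 1, "O": 5},    # reported m/z 258
    "ketene 10 (11 - CO2)":            {"C": 10, "H": 9,  "N": 1, "O": 3},    # reported m/z 214
    "half-ester 12 (11 + MeOH)":       {"C": 12, "H": 13, "N": 1, "O": 6},    # reported m/z 290
    "methyl nitroester 5a (12 - CO2)": {"C": 11, "H": 13, "N": 1, "O": 4},    # reported m/z 246
}
for name, formula in assumed.items():
    print(f"{name}: [M + Na]+ = {m_plus_na(formula)}")
# Prints 255, 316, 258, 214, 290 and 246, matching the reported sodiated ions.
```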
According to the experiments carried out, we rationalized that the reaction occurs initially by Knoevenagel condensation to form 9a, followed by Michael addition of nitromethane to produce the adduct 6a. These steps were related to the multicomponent process. The opening of the Meldrum's acid moiety of 6a can occur by two possible pathways, forming the half-ester 12 directly or via the ketene intermediate 11. Both intermediates 11 and 12 can be transformed into the γ-nitroester 5a through a further simple loss of CO2 and/or MeOH addition. The opening of the Meldrum's acid moiety of 6a and its subsequent transformation into the γ-nitroester 5a were related to a domino process (Scheme 7).
To demonstrate the potential application of the γ-nitroesters in the synthesis of GABA derivatives, Phenibut (16) and Baclofen (17) were prepared in two steps from the γ-nitroesters 5a and 5b, respectively. The reduction of the nitro group in the presence of NaBH4/NiCl2·6H2O in ethanol led directly to the isolation of the lactams 14 and 15 in yields of 82 and 87%, respectively.42 The transformation of the respective lactams into the GABA derivatives was achieved in the presence of HCl 6 mol L-1, under reflux for 12 hours.43 Afterwards, Phenibut was isolated in an 88% yield, and Baclofen was produced in an 89% yield, both as the hydrochloride salts. Thus, Phenibut and Baclofen were expeditiously synthesized from Meldrum's acid in only 3 steps, in overall yields of 63 and 61%, respectively (Scheme 8).
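For a linear sequence, the overall yield is simply the product of the individual step yields, which lets the reported numbers be checked against each other. The snippet below is such a check; the first-step (MCR) yields are back-calculated here from the overall yields and are therefore only implied values, not figures taken from the paper.

```python
# Overall yield = product of step yields for a linear sequence.
def overall(*step_yields):
    result = 1.0
    for y in step_yields:
        result *= y
    return result

# Reported: lactam 14 in 82%, hydrolysis to Phenibut in 88%, overall 63% over 3 steps.
implied_mcr_phenibut = 0.63 / overall(0.82, 0.88)   # ~0.87
# Reported: lactam 15 in 87%, hydrolysis to Baclofen in 89%, overall 61% over 3 steps.
implied_mcr_baclofen = 0.61 / overall(0.87, 0.89)   # ~0.79
print(f"implied MCR yield toward Phenibut: {implied_mcr_phenibut:.0%}")
print(f"implied MCR yield toward Baclofen: {implied_mcr_baclofen:.0%}")
```

Both implied first-step yields (roughly 87% and 79%) fall within the 15-95% range quoted for the MCR in the Conclusions.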
Conclusions
In the present work, a new multicomponent reaction between Meldrum's acid, aromatic aldehydes, nitromethane and alcoholic solvents was developed to afford the direct synthesis of β-aryl-γ-nitroesters 5a-u in yields ranging between 15 and 95%. It was demonstrated that the hydrotalcite-derived metal oxides, acting as a heterogeneous catalyst, play a role in promoting the one-pot, single-step process. The use of GC-MS and ESI-MS analyses for monitoring the course of the reactions revealed convergent mechanistic pathways towards the formation of the common intermediate 6 in a multicomponent process, while its transformation into the γ-nitroesters occurs through a domino process. Thus, the γ-nitroesters 5a and 5b produced in this way were easily converted into the lipophilic GABA derivatives Phenibut (16) and Baclofen (17) in 3 steps, in 63 and 61% overall yields, respectively.
Experimental
All solvents for the routine isolation of products and chromatography were of reagent grade. Flash chromatography was performed using silica gel (230-400 mesh). All reactions were monitored by thin-layer chromatography on 0.25 mm silica plates (60F-254) and visualized with UV light or iodine. 1H NMR and 13C NMR spectra were recorded on 300/75 or 400/100 MHz spectrometers, respectively. Chemical shifts (δ) are reported in ppm relative to tetramethylsilane (TMS). The multiplicity of signals is expressed as: s (singlet); d (doublet); dd (double doublet); t (triplet); q (quartet) and m (multiplet), and the coupling constant 3J is expressed in hertz (Hz). Infrared (IR) spectra were recorded on a Varian 640-IR spectrometer and are expressed in cm-1 in the range of 4000-400 cm-1. Melting points were measured on an Olympus BX41 microscope equipped with a Mettler-Toledo FP82HT hotplate. ESI-MS and ESI-MS/MS experiments in the positive ion mode were performed on a high-resolution hybrid quadrupole (Q) and orthogonal time-of-flight (TOF) mass spectrometer (Q-TOF Micro, Waters-Micromass, UK) with a constant nebulizer temperature of 100 °C and a capillary voltage of 3.0 V. The cone and extractor potentials were set to 15 and 4.5 V, respectively.
General procedure for the preparation of the HT and HT [Calc.] catalyst44
Hydrotalcite (HT, Mg/Al ratio 3:1) was synthesized by a co-precipitation method at ambient conditions under variable pH values. An aqueous solution (50 mL) containing Mg(NO3)2·6H2O (0.09 mol) and Al(NO3)3·9H2O (0.03 mol) was slowly added (2 h) to a second solution (100 mL) containing NaHCO3 (0.25 mol) under vigorous stirring at 80 °C, and stirring was continued for an additional 2 h at the same temperature. The precipitate formed was filtered and washed with deionized water until the pH of the filtrate was 7. The precipitate was then dried in an oven at 105 °C for 12 h and finally macerated to produce a white powder. The obtained HT powder was calcined in a conventional oven at 450 °C for 4 h to afford a new white powder, referred to as calcined hydrotalcite (HT [Calc.]). The specific surface area (BET multipoint technique) of HT [Calc.] was obtained from samples previously degassed at 120 °C under vacuum for 10 h, by N2 adsorption-desorption isotherms using a Tristar 3020 Kr Micromeritics instrument. The specific surface area was estimated as 130 ± 5 m2 g-1. The X-ray diffraction (XRD) measurements were carried out using a Siemens D-500 powder diffractometer. Data were collected with Cu Kα radiation.
General procedure for the multicomponent synthesis of γ-nitroesters 5a-r
A mixture of aromatic aldehydes 2a-j (1 mmol), Meldrum's acid (1, 1.0 mmol), nitromethane (3, 5.0 mmol) and hydrotalcite (0.05 g) was stirred at reflux for 24 h in an alcoholic solvent (1.0 mL). Afterwards, the resulting mixture was filtered through Celite using CH2Cl2 as the eluent. The filtrate was concentrated and purified by column chromatography on silica gel using a gradient of hexanes and ethyl acetate as the eluent to give the γ-nitroesters 5a-r.
Ethyl 4-nitro-3-phenylbutanoate (5a)36
The mixture was filtered through Celite using CH2Cl2 as the eluent. The filtrate was concentrated and purified by column chromatography on silica gel using a gradient of hexanes and ethyl acetate as the eluent to give the γ-nitroesters 5s-u.
General procedure for the synthesis of Michael adduct 6a
Nitrostyrene (1 mmol), Meldrum's acid (1.1 mmol) and Et3N (1.0 mmol) in CH2Cl2 were added to a round-bottom flask. The mixture was stirred magnetically at room temperature overnight. The crude product was diluted in 5.0 mL of CH2Cl2 and washed with HCl (5%, 3 × 5.0 mL). The organic phases were dried over MgSO4, filtered and evaporated under vacuum to afford adduct 6a.
Figure 2 .
Figure 2. (A) GC-MS after 30 min of the Henry reaction in the presence of HT [Calc.]. Detection of intermediates 7a and 8a; (B) GC-MS after 10 min of the MCR process. Detection of the benzylidene adduct 9a; (C) GC-MS after 1.0 hour of the MCR process. Detection of intermediates 7a, 8a and 9a, as well as the final product, nitroester 5a; (D) GC-MS after 24 hours of the MCR process.
Table 2 .
Synthesis of 5m-u in the presence of different alcohols. a See Table 1, entry 1; b synthesized by the opening reaction of 6a. Figure 1. Structurally diverse γ-nitroesters prepared via the new tetracomponent reaction. | 4,593.4 | 2016-01-01T00:00:00.000 | [
"Chemistry"
] |
Molecular Editing of Pyrroles via a Skeletal Recasting Strategy
Heterocyclic scaffolds are commonly found in numerous biologically active molecules, therapeutic agents, and agrochemicals. To probe chemical space around heterocycles, many powerful molecular editing strategies have been devised. Versatile C–H functionalization strategies allow for peripheral modifications of heterocyclic motifs, often being specific and taking place at multiple sites. The past few years have seen the quick emergence of exciting “single-atom skeletal editing” strategies, through one-atom deletion or addition, enabling ring contraction/expansion and structural diversification, as well as scaffold hopping. The construction of heterocycles via deconstruction of simple heterocycles is unknown. Herein, we disclose a new molecular editing method which we name the skeletal recasting strategy. Specifically, by tapping on the 1,3-dipolar property of azoalkenes, we recast simple pyrroles to fully substituted pyrroles, through a simple phosphoric acid-promoted one-pot reaction consisting of dearomative deconstruction and rearomative reconstruction steps. The reaction allows for easy access to synthetically challenging tetra-substituted pyrroles which are otherwise difficult to synthesize. Furthermore, we construct N–N axial chirality on our pyrrole products, as well as accomplish a facile synthesis of the anticancer drug, Sutent. The potential application of this method to other heterocycles has also been demonstrated.
■ INTRODUCTION
Heterocyclic compounds are among the most significant structural scaffolds in medicinal chemistry and drug discovery;1−4 consequently, their selective functionalization is of crucial importance.−11 For structural editing of heterocyclic compounds at different peripheral sites, multistep synthetic manipulations are usually required (Figure 1A).−26 In this tactic, the structures of heterocycles are synthetically "edited" through one-atom deletion or addition within the core molecular framework to achieve desired transformations, thus offering powerful tools in drug discovery. In medicinal chemistry, maintaining the molecular core/skeleton of a lead compound would be ideal in the lead optimization process, as it enables structural interrogation/modification in an efficient and productive manner. Where heterocyclic structures are concerned, the ability to maintain the ring size while allowing synthetic manipulations to take place at various ring sites represents an ideal approach to molecular editing. Toward this end, we wondered if we could disrupt a simple heterocycle with a carefully chosen molecular perturbator, resulting in ring deconstruction to form an advanced intermediate, which would then incorporate the perturbator moieties and be recast back to form the same type of heterocyclic ring with more structural complexity. We term this the skeletal recasting strategy (Figure 1C).
In our proof-of-concept study, we decided to focus on pyrrole and its derivatives.−33 In fact, pyrrole synthesis dates back to the 19th century, and the most well-known methods include the Knorr, 34−36 Paal−Knorr, 37,38 and Hantzsch 39 reactions, which are still being commonly practiced nowadays.−43 On the other hand, synthesis of such structurally encumbered scaffolds represents a major synthetic challenge, presenting a bottleneck in the development of pyrrole-containing agrochemicals and pharmaceuticals.There are some reports on the synthesis of multisubstituted pyrroles; 44−53 nevertheless, the reported methods often require specific prefunctionalized substrates, being somewhat less general (Figure 2B).Therefore, we became interested in devising an efficient approach to access fully substituted pyrroles, aiming to use our proposed skeletal recasting strategy.
Where the synthesis of complex pyrroles is concerned, simple pyrroles are arguably ideal starting materials, as they are cheap and readily accessible. The presence of a nitrogen atom makes multiple positions of pyrrole nucleophilic, which, in combination with the installation of different substituents on the pyrrole core, makes synthetic manipulations more versatile. We reckon the key in our proposed skeletal recasting strategy is to introduce a suitable molecular perturbator, which will first trigger the pyrrole deconstruction via dearomatization and then reconstruct the pyrrole ring through rearomatization at a later stage. Consequently, we set the following criteria: (1) a dipole molecule that enables the deconstruction/reconstruction sequence and (2) contains (at least) a nitrogen atom to facilitate reforming the pyrrole ring. We reasoned that azoalkenes54−59 may serve as a suitable molecular perturbator. In a recent study, we showed that azoalkenes could serve as a valuable CCN 1,3-dipole due to a facile hydrazine-enamine tautomerization.58 In a projected reaction, we hypothesize that an acid-promoted nucleophilic attack of pyrrole on the azoalkene leads to dearomatization of the pyrrole ring. Subsequently, with the participation of the azoalkene nitrogen and an acid-promoted C−N bond cleavage, a rearomatization reaction may take place. Lastly, hydrolysis and elimination would yield a fully substituted pyrrole (Figure 2C). Herein, we introduce a new molecular editing strategy, termed skeletal recasting, for the one-step facile synthesis of fully substituted pyrroles from simple pyrroles.
■ RESULTS AND DISCUSSION
To start our investigation, we chose 2,5-dimethyl-1H-pyrrole 1a and azoalkene 2a as model substrates and examined the potential skeletal recasting reaction (Table 1). To our delight, the projected reaction proceeded smoothly to yield the recast pyrrole in the presence of acid catalysts (entries 1−5). Among all the acid catalysts examined, Cat. 5 gave the best results. The catalyst is essential for the reaction; without it, no reaction was observed (entry 6). Varying the equivalents of azoalkene 2a had no influence on the reaction (entries 7 and 8). A quick solvent screening revealed that chloroform was the solvent of choice (entries 9−13). When the catalyst loading was further lowered to 10 mol %, comparable results were obtained. Under the optimized reaction conditions, the recast tetra-substituted pyrrole 3a was obtained in 90% yield (entry 14).
With the optimized reaction conditions in hand, we explored the scope of azoalkene substrates (Figure 3). The tolerance of the reaction to the ester moieties appended to the C=C double bond of the azoalkenes was first evaluated. A broad range of esters, such as methyl (3b), ethyl (3a), i-propyl (3c), t-butyl (3d), i-butyl (3e), benzyl (3f), and allyl (3g) esters, were all found to be suitable, and the tetra-substituted pyrroles were obtained in good yields. Subsequently, the ester groups at the azoalkene N-terminus were varied, and benzyl ester (3h), p-methoxybenzyl ester (3i), and t-butyl ester (3j) all worked well. Both ester moieties in the azoalkene structure can be changed at the same time, and the results remained excellent (3k and 3l). In the reaction, the R2 group of the azoalkene substrates ends up at the C5-position of the pyrrole products, and modification of R2 offers great flexibility in accessing diverse 5-substituted pyrrole scaffolds. Indeed, the alkyl chain lengths could be varied from methyl, ethyl and n-propyl to n-butyl; meanwhile, different esters could be installed at the two ester sites of the azoalkene structures, and decent yields were constantly obtained (3m−3r). Interestingly, when azoalkenes with an alkyl substituent bearing a terminal C=C bond or a phenyl substituent were employed, the corresponding pyrroles (3s and 3t) were obtained; such modifications not only add great structural diversity to the pyrrole products but also make synthetic manipulations of the products more feasible.
Next, the applicability of this method to different pyrrole starting materials was evaluated (Figure 4). Our strategy starts with simple pyrrole substrates, which is highly practical, as these pyrroles are either commercially available or readily accessible synthetically. An untouched siloxy moiety was tolerated (3ah), and it seems that the unusual stability of the imine is due to the formation of an intramolecular hydrogen-bonding network. When unsymmetric pyrroles bearing an alkyl and an aryl group were reacted with azoalkene 2a, the recasting reaction took place smoothly to form the desired products. Notably, the more sterically hindered aryl moieties of the pyrrole substrates ended up at the 2-position of the pyrrole products. The employment of aryl groups in the pyrrole substrates is versatile, from simple phenyl to various substituted phenyls, regardless of the substitution pattern and electronic nature (3ai−3aq). In addition, styrene- and biphenyl-containing pyrrole substrates were also found to be suitable (3ar and 3as). Moreover, the reaction was applicable to pyrrole starting materials containing a (substituted) naphthyl, benzofuran, benzothiophene, or thiophene, and the tetra-substituted pyrroles were constantly obtained in good yields (3at−3ax). Interestingly, replacement of the hydrogen atom of the pyrrole NH moiety with a cyclopropyl (1y) or phenyl (1y′) group had little influence, and the same pyrrole product 3a was obtained. At last, when unsymmetric N-phenyl pyrroles bearing two different C2- and C5-alkyl substituents were utilized, the corresponding pyrrole products were obtained in good yields; it is notable that a highly sterically hindered t-butyl group could be employed (3az).
To further investigate the reaction scope, we turned our attention to the application of this methodology to biologically active molecules, to demonstrate the potential of our method for the modification of drug-like molecules (Figure 5A). 2,5-Disubstituted pyrroles derived from naproxen (anti-inflammatory), indomethacin (anti-inflammatory), and bezafibrate (hyperlipidaemia treatment) were subjected to the standard reaction conditions; skeletal recasting took place smoothly without touching the ester and amide groups, and the corresponding tetra-substituted pyrrole derivatives (3ba−3bc) were formed in good yields. It is noteworthy that the enantiomeric excess of 3ba remained >99% after the skeletal recasting process, indicating no enantiomeric erosion under our reaction conditions. Furthermore, the reaction scope with regard to trisubstituted and monosubstituted pyrroles, as well as pyrrole itself, was also examined. The use of a trisubstituted pyrrole formed the desired recast product in good yield (3bd), while the reactions with a monosubstituted pyrrole or pyrrole itself only led to the formation of 1,4-addition products (3be′ and 3bf′), not the recast products (Figure 5B).
A series of experiments was conducted to shed light on the mechanism of this skeletal recasting reaction of simple pyrroles with azoalkenes (Figure 7). In our hypothesis, the deconstruction of a simple pyrrole substrate entails a dearomatization process to form an advanced intermediate, the detection of which would provide strong evidence for our mechanistic proposal. Accordingly, we mixed pyrrole 1a and azoalkene 2a in anhydrous chloroform with the introduction of 10 molar equivalents of 18O-labeled H2O and monitored the reaction progress by liquid chromatography−mass spectrometry (LC−MS). Within 5 min, two peaks with retention times of 9.177 and 9.374 min were observed, corresponding to the advanced intermediate 3a′ (MS = 296.10 observed) and the final pyrrole product 3a″ (MS = 299.05 observed). After the overnight reaction, intermediate 3a′ disappeared and only 3a″ was observed, and high-resolution mass spectrometry (HRMS) of the latter was taken, confirming its presence unambiguously (Figure 7A). Another key point to be clarified in the mechanism is the fate of the pyrrole nitrogen atom. The fact that the newly recast pyrrole contains a hydrazine moiety clearly suggests the incorporation of the azoalkene into the product and the departure of the nitrogen atom from the pyrrole starting material. Consequently, we performed the reaction using 15N-labeled pyrrole (1d′) as the starting material. Indeed, the recast pyrrole product (3ad) did not contain the labeled nitrogen (Figure 7B). With the above mechanistic studies, a plausible mechanism for the reaction is proposed (Figure 7C). Phosphoric acid promotes a Friedel−Crafts-type 1,4-addition of pyrrole to the azoalkene, forming a dearomatized hydrazine intermediate (I). Subsequently, a tautomerization to the enamine takes place, yielding intermediate II. Under catalysis by the phosphoric acid catalyst, another 1,4-addition occurs to provide the bicyclic intermediate III, which undergoes a crucial rearomatization through cleavage of the C−N bond at the original pyrrole nitrogen site, restoring aromaticity and furnishing a new pyrrole ring (V). Finally, an enamine-imine tautomerization, followed by hydrolysis, leads to the formation of the final product 3a.
The tetra-substituted pyrroles prepared using the skeletal recasting strategy are useful and interesting.−69 As an illustration, we carried out the asymmetric N-alkylation reaction of pyrrole 3t, using the Morita−Baylis−Hillman (MBH) carbonate 5a as the alkylating agent (Table 2). Cinchona alkaloids turned out to be good catalysts, promoting the reaction in an enantioselective manner (entries 1−3). Among all the alkaloids examined, quinidine was found to be the best, forming the desired alkylation product 6 in 70% yield with 70% ee. Subsequently, a quick solvent screening showed that dichloromethane was the most suitable solvent (entries 4−6). Lowering the reaction temperature further enhanced the enantioselectivity of the reaction. When the reaction was performed in dichloromethane at −20 °C, the desired product 6 bearing an N−N axial chirality was obtained in 80% yield and with 91% ee (entry 9). We next proceeded to perform the N−N bond cleavage and form the N-unprotected pyrrole products.70 When the N-protected pyrrole products 3 were treated with azoalkene 2a, the N−N bond was cleaved, and the corresponding N-unprotected pyrroles 7 were obtained in high yields (7a−7d). It is noteworthy that the modified drug-like 3bc tolerated the deprotection conditions well, and the complex N-unprotected pyrrole 7e was obtained in good yield. The proposed mechanism of the N−N bond cleavage is also illustrated. The reaction commences with a nucleophilic attack by the exocyclic nitrogen of pyrroles 3′ on the electrophilic carbon of azoalkene 2a, leading to the formation of intermediate I, which undergoes N−N bond cleavage through an E1cB process to afford the N-unprotected pyrrole 7 (Figure 8A).
To highlight the synthetic value of our tetra-substituted pyrrole products, we conducted a concise synthesis of Sutent, one of the best-selling anticancer drugs.40 As depicted in Figure 8B, the N-unprotected pyrrole 7a was subjected to a few trivial reactions, yielding aldehyde 8. At last, a simple amination, followed by a condensation, completed the synthesis of Sutent (10).
From a conceptual viewpoint, the skeletal recasting strategy we introduce herein for heterocycle editing should be generally applicable to other heterocyclic structures, provided these heterocycles can: (1) chemically interact with a judiciously selected/designed molecular perturbator (an azoalkene in the current study) and (2) incorporate extra structural moieties from the perturbator to form a more complex heterocyclic structure. Indeed, when indolizine was treated with azoalkene 2a in the presence of phosphoric acid, a novel heterocycle 14 containing both pyrrole and pyridine moieties was formed. Similarly, when a pyrroloquinoline was subjected to our standard reaction conditions, the deconstruction and reconstruction processes occurred, and the recast product 15 bearing both pyrrole and quinoline substructures was obtained in good yield (Figure 8C).
■ CONCLUSIONS
In summary, we have developed an efficient synthesis of fully substituted pyrroles from simple pyrroles, enabled by the skeletal recasting strategy. By introducing an azoalkene as a molecular perturbator, a dearomatization of the pyrrole starting materials takes place, which is followed by a rearomatization process incorporating structural moieties of the perturbator to form more complex pyrrole motifs. A broad range of tetra-substituted pyrroles are conveniently prepared; we also introduce N−N axial chirality to the products, as well as completing a facile synthesis of the anticancer drug, Sutent. Conceptually, the skeletal recasting strategy has broad applicability to other heterocycles and thus may offer a powerful tool for molecular editing of heterocyclic structures. Currently, we are working toward extending this concept to heterocycle editing in a broader context, targeting the synthesis of complex heterocycles. Such strategies should find wide applications in medicinal chemistry, agrochemistry, and materials sciences, and we anticipate that more exciting discoveries in the science of molecular editing are forthcoming.
Relevant data include reaction optimization, reaction procedure, product characterization, and NMR spectra (PDF) X-ray structure of compound 9 (CIF)
Figure 2 .
Figure 2. Reaction design.(A) Biologically active fully substituted pyrroles.(B) Known methods for the synthesis of fully substituted pyrroles.(C) Reaction design: skeletal recasting of simple pyrroles.
Figure 6 .
Figure 6. Skeletal recasting strategy for the synthesis of 13C-labeled pyrroles. | 3,381 | 2023-08-15T00:00:00.000 | [
"Chemistry"
] |
Fejér and Hermite-Hadamard Type Inequalities for Harmonically Convex Functions
Feixiang Chen and Shanhe Wu 1 School of Mathematics and Statistics, Chongqing Three Gorges University, Wanzhou, Chongqing 404000, China 2Department of Mathematics and Computer Science, Longyan University, Longyan, Fujian 364012, China Correspondence should be addressed to Shanhe Wu<EMAIL_ADDRESS>Received 21 June 2014; Accepted 23 July 2014; Published 6 August 2014 Academic Editor: Yu-Ming Chu Copyright © 2014 F. Chen and S. Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Introduction
Let f : I ⊆ R → R be a convex function and a, b ∈ I with a < b; then
f((a + b)/2) ≤ (1/(b − a)) ∫_a^b f(x) dx ≤ (f(a) + f(b))/2. (1)
Inequality (1) is known in the literature as the Hermite-Hadamard inequality. Fejér [1] established the following weighted generalization of inequality (1). Theorem 1. If f : [a, b] → R is a convex function, then the following inequality holds:
f((a + b)/2) ∫_a^b w(x) dx ≤ ∫_a^b f(x) w(x) dx ≤ ((f(a) + f(b))/2) ∫_a^b w(x) dx, (2)
where w : [a, b] → R is positive, integrable, and symmetric with respect to x = (a + b)/2.
In [6], Dragomir proposed an interesting Hermite-Hadamard type inequality which refines the left hand side of inequality of (1) as follows.
Theorem 2 (see [6]). Let f be a convex function defined on [a, b]. Then H is convex, increasing on [0, 1], and for all t ∈ [0, 1], one has
f((a + b)/2) = H(0) ≤ H(t) ≤ H(1) = (1/(b − a)) ∫_a^b f(x) dx, (3)
where
H(t) = (1/(b − a)) ∫_a^b f(t x + (1 − t)(a + b)/2) dx. (4)
An analogous result for convex functions which refines the right hand side of inequality (1) was obtained by Yang and Hong in [7] as follows.
Theorem 3 (see [7]). Let f be a convex function defined on [a, b]. Then F is convex, increasing on [0, 1], and for all t ∈ [0, 1], one has
Yang and Tseng in [8] established the following Fejér type inequalities, which are a generalization of inequalities (3) and (5) as well as a refinement of the Fejér inequality (2). Theorem 4 (see [8])
where
In [9, 10], İşcan and Wu gave the definition of harmonic convexity as follows. A function f : I ⊆ R \ {0} → R is said to be harmonically convex if
f(xy/(t x + (1 − t) y)) ≤ t f(y) + (1 − t) f(x) (10)
for all x, y ∈ I and t ∈ [0, 1]. If the inequality in (10) is reversed, then f is said to be harmonically concave.
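As a quick sanity check on the definition (in the standard form assumed above for inequality (10)), the identity function is harmonically convex on (0, ∞), since a weighted harmonic mean never exceeds the corresponding weighted arithmetic mean:

```latex
% Worked example: f(x) = x is harmonically convex on (0, \infty).
% For x, y > 0 and t \in [0, 1],
f\!\left(\frac{xy}{tx+(1-t)y}\right)
  = \frac{xy}{tx+(1-t)y}
  = \left(\frac{t}{y}+\frac{1-t}{x}\right)^{-1}
  \le t\,y+(1-t)\,x
  = t f(y) + (1-t) f(x),
% the inequality being the weighted harmonic--arithmetic mean inequality.
```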
The following Hermite-Hadamard inequality for harmonically convex functions holds true.
In this paper, we establish a Fejér type inequality for harmonically convex functions; our main result includes, as special cases, the inequalities given by Theorems 6 and 7. Moreover, we investigate some properties of the mappings in connection to Hermite-Hadamard and Fejér type inequalities for harmonically convex functions.
Fejér Type Inequality for Harmonically Convex Functions
The following Fejér inequality for harmonically convex functions holds true.
Some Mappings in connection with Hermite-Hadamard and Fejér Inequalities for Harmonically Convex Functions
Lemma 12. Let f : I ⊆ R \ {0} → R be a harmonically convex function and a, b ∈ I with a < b, and let
and hence h is convex on [0, b − a].
Theorem 16. Let f : I ⊆ R \ {0} → R be a harmonically convex function and a, b ∈ I with a < b. If c ∈ (a, b) and the mapping is defined by
where w : [a, b] → R is nonnegative and integrable and satisfies the condition of (15), then the mapping is convex and increasing on [0, 1], and
This completes the proof of Theorem 16. | 743.4 | 2014-08-06T00:00:00.000 | [
"Mathematics"
] |
Discovery of a novel allosteric inhibitor scaffold for polyadenosine-diphosphate-ribose polymerase 14 (PARP14) macrodomain 2
Graphical abstract
Introduction
Poly-(ADP-ribose) polymerases (PARPs) are ADP-ribosyl transferase enzymes which post-translationally modify substrate proteins. 1 Of the at least 17 human PARP family members, a sub-set, referred to as mono(ADP-ribose)transferases (mARTs), is capable of transferring only a single ADP-ribose unit to a given substrate. 2 PARP14 (ARTD8) is the largest of the mARTs and contains multiple domains including an ADP-ribose transferase domain (ART), a WWE domain, two (RNA binding) RRM repeats and three (ADP-ribose binding) macrodomains. 3 PARP14 was found to be highly expressed in B-cell lymphoma and hepatocellular carcinoma and has been associated with poor patient prognosis. 4 Furthermore, PARP14 has been linked to inhibition of the pro-apoptotic kinase JNK1, which activates the pyruvate kinase M2 isoform (PKM2), which in turn promotes a higher rate of glycolysis in cancer (Warburg effect), 5 shown in some contexts to be regulated by high MYC expression. 6 Despite links with cancer pathogenesis 5,7 and inflammatory diseases, 1b,c,7,8 only a few small molecule PARP14 inhibitors have been reported, and many have suffered from a lack of selectivity. 9 Most examples of PARP inhibitors have targeted the catalytic domain (ART), 10 such as a recent example by Upton and coworkers who identified moderately selective PARP14 inhibitors; 10e however, until recently no PARP14 modulators targeting other domains such as the macrodomains had been reported. 11 PARP14 contains three macrodomain modules (MD1, MD2 and MD3); biophysical characterisation of macrodomain:ADP-ribose peptide binding was carried out, revealing MD2 as the most potent ADP-ribosyl peptide binding domain and therefore the most likely to deliver a functional effect through small molecule inhibition (PARP14 MD1/ADP-ribose peptide K D 137 ± 7 µM, PARP14 MD2/ADP-ribose peptide K D 6.8 ± 0.1 µM, PARP14 MD3/ADP-ribose peptide K D 15 ± 0.9 µM, Supp. Info Fig. 1, Fig. 1). An initial medium throughput screen (50 k compounds) revealed compound GeA-69 (1) as a sub-micromolar inhibitor of PARP14 MD2 ADP-ribose binding, as measured by AlphaScreen™, ITC and BLI. 11a A co-crystal structure of the closely related sulfonamide derivative 2 with PARP14 MD2, which was obtained in the course of the project, revealed a unique allosteric binding mode for this inhibitor (PDB ID 5O2D). Overlay of this structure with bound ADP-ribose from a previously published co-crystal structure of PARP14 MD2 (PDB ID 3Q71) 12 showed that compound 2 occupied a novel pocket adjacent to the binding site for ADP-ribose (Fig. 2A). 11a Carbazole 2 engages PARP14 MD2 in a pocket adjacent to the ADP-ribose binding site, and the interaction is characterised by an H-bond between the carbazole N-H and the backbone carbonyl of Pro1130 (N-O distance 2.8 Å), an H-bond between one sulfonamide oxygen and the backbone N-H of Ile1132 (O-N distance 2.8 Å), and an H-bond from the sulfonamide N to a water molecule in the binding pocket. A comparison of the two structures rationalises the inhibitory activity, as carbazole 2 induces a shift in the loop region adjacent to Pro1130, which consequently moves into the ADP-ribose binding site (Fig. 2A). 11a Evaluation of the co-crystal structure of carbazole 2 with PARP14 MD2 also revealed the possibility of extending the methanesulfonamide motif into larger substituents exploring peripheral regions of this newly identified allosteric site.
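From the quoted K_D values, the fold-preference of the ADP-ribosyl peptide for MD2 over the other two macrodomains can be read off directly; the short calculation below is only a restatement of those numbers.

```python
# Fold-selectivity of ADP-ribosyl peptide binding for PARP14 MD2 (K_D values in micromolar).
kd = {"MD1": 137.0, "MD2": 6.8, "MD3": 15.0}
for domain in ("MD1", "MD3"):
    ratio = kd[domain] / kd["MD2"]
    print(f"{domain} binds the peptide {ratio:.1f}-fold more weakly than MD2")
# ~20-fold (MD1) and ~2.2-fold (MD3) weaker binding than MD2.
```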
Systematic SAR studies of screening hit GeA-69 (1)
The screening hit GeA-69 (1) was part of a focused library from the Bracher lab, originally designed for the improvement of kinase inhibitors derived from the 1-(aminopyrimidyl)-β-carboline alkaloid annomontine. 13 The SAR studies on screening hit GeA-69 (1) are described for the following compound library generated as potential PARP14 MD2 inhibitors (Fig. 3). In this library, the β-carboline ring system was replaced by its deaza analogue carbazole, and a number of aromatic and heteroaromatic rings were attached to position 1 (Scheme 1) using Suzuki-Miyaura cross-coupling reactions of the known 1-bromocarbazole 14 with commercially available or synthesised boronic acids and esters to give compounds 3-12 (Scheme 1).
Unfortunately, none of these analogues (compounds 3-14) showed any inhibition of PARP14 MD2. Only a few further modifications of the 1-aryl substituent were performed, whereby all new compounds contained the acetylamino moiety, which was recognised as important for activity at this early stage of the project.
The aza analogue 15 was obtained from N-SEM-protected 1-bromocarbazole by Masuda borylation at C-1, directly followed by Suzuki-Miyaura cross-coupling with 4-amino-3-bromopyridine, subsequent N-acetylation and SEM deprotection, as previously described. 11a This compound has a virtually identical size to the active compound 1 but, interestingly, was found to be completely inactive at inhibiting PARP14 MD2, presumably due to the differences in electronics between the two molecules. Consequently, this compound could serve as a useful negative control in biochemical experiments. The pyridyl isomers 16 and 17 were obtained in the same manner using 3-amino-2-chloro- and 3-amino-4-chloropyridine in the cross-coupling reaction (Fig. 5). Furthermore, using Suzuki-Miyaura cross-coupling reactions, the acetylaminophenyl residue was attached to position 1 (Scheme 1) of the β-carboline ring system 15 in order to obtain the ring A aza-analogue 18, and to the canthin-4-one and desazacanthin-4-one 16 ring systems in order to give analogues 19 and 20 bearing tetracyclic core structures (Fig. 5).
An analogue of GeA-69 (1) with the acetamido group shifted from the ortho to the meta position of the phenyl ring (21) was prepared by Suzuki-Miyaura cross-coupling of 1-bromocarbazole with 3-aminophenylboronic acid, followed by N-acetylation. Additionally, the complete acetylaminophenyl residue was shifted from C-1 to N-9, whereby in one example a rigid isomer 22 was obtained and, in the other, by means of a methylene spacer, a product 23 in which, by appropriate rotation, both the phenyl and the acetamido group can adopt positions very similar to those these groups occupy in the lead structure GeA-69 (1). Compound 22 was obtained by N-arylation of carbazole with 2-fluoro-1-nitrobenzene, 17 subsequent reduction of the nitro group, and N-acetylation.
As modifications of the central pyrrole ring (ring B) of GeA-69 (1), the N-methyl and N-benzyl analogues 24 and 25 were prepared starting from the corresponding N-substituted 1-bromocarbazoles via Suzuki-Miyaura cross-coupling with 2-aminophenylboronic acid and subsequent N-acetylation. The dibenzofuran analogue 26 and dibenzothiophene analogue 27 were obtained in a similar manner from commercially available 4-bromodibenzofuran and the known 4-iododibenzothiophene (Fig. 7). 18 These experiments were performed before we obtained the crystal structure of PARP14 MD2 with inhibitor 2, which demonstrated the relevance of the pyrrole NH group (Fig. 2).
In order to replace the NH group of ring B with either an alternative hydrogen bond donor (hydroxy group) or a hydrogen bond acceptor (carbonyl group), known 1-iodofluorenone 19 was coupled in the established manner to give the 1-arylfluorenone 28 which was easily reduced to the racemic fluorenol 29 with sodium borohydride (Fig. 7).
Controlled mono-acetylation of 2,2′-diaminobiphenyl with equimolar amounts of acetic anhydride gave monoamide 30 in moderate yield. Monoamide 30 was then used to access the seco analogue 31 and the acridone analogue 33. Buchwald-Hartwig arylation of the unsubstituted anilino group with iodobenzene to give biaryl 31, and with methyl 2-iodobenzoate to give biaryl 32, respectively, was accomplished with the BINAP/Pd 2 (dba) 3 catalyst system. Ester 32 was hydrolysed to give the corresponding carboxylic acid, which was converted into the acridone 33 by polyphosphoric acid-mediated intramolecular acylation (Scheme 2). 20 Further, a series of modifications of ring A was performed. Ring-substituted analogues 37-39 were obtained from the readily available 1,2,3,4-tetrahydrocarbazol-1-ones 21 34-36 in two steps. Treatment of the ketones with POBr 3 in anisole gave the corresponding 1-bromocarbazoles under bromination/dehydrogenation conditions in moderate to poor yields. Subsequent standard Suzuki-Miyaura cross-coupling gave the desired arylcarbazoles 37-39 (Scheme 3).
Analogue 44 bearing a partially hydrogenated A-ring was obtained from the corresponding brominated tetrahydrocarbazole 23 via Suzuki-Miyaura cross-coupling. A truncated analogue, the 7-aryl-3-isopropylindole 45, in which ring C is replaced by an isopropyl group, was obtained by Suzuki-Miyaura cross-coupling of the respective 7-bromoindole. The 6-aza-5,6,7,8-tetrahydro analogue 47 was prepared in a similar manner from known intermediate 46. 24 Improved yields were obtained, if the secondary amine was protected with the Boc group prior to the cross-coupling reaction (Scheme 5).
A screening of the above presented compounds against PARP14 MD2 clearly demonstrated that lead structure GeA-69 (1) is very sensitive to structural modifications. Carbazoles bearing (hetero)aromatic residues different from the acetylaminophenyl residue of GeA-69 (1) (Figure 4) were found to be inactive. Analogues with almost identical shape albeit very different electronics (aza analogues in rings A, C and D) are completely or virtually (β-carboline 18, IC 50 30 µM) inactive. Any changes to the central pyrrole ring (ring B) eliminated inhibitory activity as well. The NH group was found to be essential; it cannot be replaced by another hydrogen bond donor, as demonstrated by the inactive fluorenol analogue 29. Surprisingly, the dibenzothiophene analogue 27 showed considerable inhibition (IC 50 2.5 µM), whereas the dibenzofuran 26 and the acridone 33 were inactive. The same holds for the (deaza)compounds having tetracyclic canthin-4-one backbones (canthin-4-one 19, deazacanthin-4-one 20). The seco analogue of GeA-69 (1), biaryl 31, was completely inactive, demonstrating that not only the presence of the functional groups of the lead structure but also their fixation by the carbazole backbone is most important.
The tetrahydro analogue 44 showed only a slight loss in activity (IC 50 1.1 µM) compared to GeA-69 (1), whereas its 6-aza analogue 47, bearing a polar aliphatic amino group in ring A, was inactive. Lipophilic chlorine substituents at ring A (compounds 37-38) were fairly well tolerated (IC 50 1.4 and 3.0 µM), but the 6-methoxy analogue 39 was inactive. These observations can be rationalised by the hydrophobic environment in the binding region of ring A, consisting of residues V1032, V1092, M1108, I111, I1112, F1129 and I1132 (Fig. 2). Removal of the N-acetyl residue from GeA-69 (1), conversion of the acetamide into a tertiary amide 54 or into the proposed trifluoroalkyl bioisostere 51, as well as reduction of the amide moiety to an amine 53, resulted in complete loss of activity; the thioamide 52 was an order of magnitude less active (IC 50 10.5 µM) than GeA-69 (1).
In conclusion, these data confirm a very narrow structure-activity relationship for rings A-C (Fig. 3), and for further optimisation of the screening hit GeA-69 (1) only modifications of either the N-acyl residue or ring D were deemed promising.
SAR studies of ring D and N-acyl residues
Initial construction of the carbazole series was performed using 1-bromo-9H-carbazole and a series of pinacol boronic esters, which were coupled under standard Suzuki-Miyaura conditions, furnishing the biaryl products in moderate to good yields (Scheme 1). A number of these compounds were then converted to the corresponding acetamides or methanesulfonamides and profiled for their binding activity with PARP14 MD2. Whilst binding activity was not improved, additional substituents on ring D such as methyl, fluoro and cyano were tolerated, maintaining single digit µM activity (compounds 55-57, Table 1). As previously observed, a comparison of these compounds with the inactive non-acetylated and non-sulfonylated anilines (e.g. compounds 59-61, Table 1) showed the requirement of this group for binding activity.
Further modification of biaryl-amine 48 to the corresponding amides or sulfonamides (Scheme 7) was carried out. The corresponding amides and sulfonamides 62-108 were then profiled for their PARP14 MD2 binding affinity (Tables 1 and 2).
Compounds were profiled for binding activity with PARP14 MD2 through a competitive (AlphaScreen™) binding assay measuring the displacement of ADP-ribose peptide from PARP14 MD2. 11a Promising compounds were additionally profiled by biophysical assays such as Bio-Layer Interferometry or Isothermal Titration Calorimetry, as previously described. 11a As previously described, the parent carbazole GeA-69 (1) was profiled for its broader selectivity over 12 other human macrodomains, showing exquisite selectivity for MD2 of PARP14. 11a Furthermore, a representative selectivity screen of 46 kinases in a Differential Scanning Calorimetry assay did not reveal any significant activity of carbazole GeA-69 (1) at 10 µM. 11a
Discussion
The binding activities of the synthesised PARP14 MD2 inhibitors are summarised in Tables 1 and 2. Although comprehensive SAR studies of the A-C rings of this carbazole series revealed no points for the development of more potent ligands, a number of derivatives functionalising ring D were synthesised (Figure 3). Only small additional substituents on the ring were tolerated (e.g. compounds 55-57, Table 1). Interestingly, elaboration of the sulfonamide in compound 2 into the homologated ethane-, propane- and butane-sulfonamide analogues (compounds 62-64, Table 1) furnished equipotent compounds. Further elaboration of the acetamide in GeA-69 (1) mostly retained single digit µM binding activity (e.g. compounds 66, 67). Interestingly, the n-pentanoyl analogue 68 was seemingly inactive, which may be due to the entropic penalty associated with longer alkyl substituents or a steric clash with the protein. However, guided by the apparent tolerance of some larger substituents in place of the acetamide in GeA-69 (1) and the methanesulfonamide in compound 2, the 2-phenylacetamide and phenylmethanesulfonamide of compounds 78 and 79 (IC 50 7.6 ± 0.3 and 3.6 ± 0.3 µM, respectively, Table 1) were chosen for further development, as they enabled rapid access to diversity and provide a suitable vector for binding pocket exploration. A number of hetero- and substituted-aromatics were appended onto the biaryl core (examples 83-108, Table 2). Moderately flat SAR was observed for both 2- and 4-substituted phenylacetyl and phenylmethanesulfonamide groups. It was found that introduction of a 3-cyano substituent in the phenylmethanesulfonamide series provided a slight improvement in binding activity compared with GeA-69 (1). Carbazole 108 displays sub-micromolar activity for PARP14 MD2 (IC 50 660 ± 30 nM).
a Data was not successfully obtained due to solubility issues in the AlphaScreen assay with this example.
Scheme 7. Synthesis of amide and sulfonamide derivatives of aniline 48.
Notably, by comparison the corresponding 3-cyanophenylacetamide 107 displays diminished binding activity relative to sulfonamide 108, potentially due to the greater tolerance of the sulfonamide in maintaining H-bond acceptor interactions, as shown in the PARP14 MD2:compound 2 co-crystal structure (Fig. 2B). The 3-cyanobenzyl group of compound 108 may make interactions with the adjacent hydrophobic residues M1108, L1137 and F1144. Although we were unable to obtain a co-crystal structure of compound 108 to confirm these interactions, we performed docking studies to examine possible binding modes of the larger compound compared to compound 2. Simple minimisation of compound 108 in PARP14 MD2 is unable to find a binding pose due to clashes between the larger 3-cyanophenyl group and the protein. To account for potential side chain rotations that would be necessary to accommodate this group, we performed SCARE docking (SCan Alanines and Refine) using ICM. 27 The optimised pose for compound 108 shows a rotation of the side chain of F1144 to open up space so that the 3-cyanophenyl group can make interactions with M1108 and L1137 in addition to a π-stacking interaction with F1144 (Fig. 8). However, it is not obvious from this docking study why the 3-cyanophenyl group would be preferred over other hydrophobic groups such as those in compounds 79 and 83-107. Sub-micromolar PARP14 MD2 affinity of carbazole 108 was also confirmed by BioLayer Interferometry (BLI), providing a calculated K D of 550 ± 220 nM. Whilst lead compound 108 is a larger and less ligand efficient inhibitor of PARP14 MD2 than the original hit compound 1, owing to the more tolerant SAR around it, it represents an attractive chemical starting point for future development. Additional examples similar to compound 108 (see SI, compounds 109-116) have been explored, and work to improve the binding activity and physicochemical properties of this lead molecule will be reported in due course.
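For orientation, the AlphaScreen IC50 and BLI K_D quoted for carbazole 108 can be converted to the usual log and free-energy scales. The conversion below is only a rough sketch: it assumes 298 K and treats the BLI K_D as a simple 1:1 dissociation constant, neither of which is stated explicitly in the text.

```python
# Convert the reported potency of carbazole 108 to pIC50 and an approximate binding free energy.
import math

ic50_m = 660e-9          # AlphaScreen IC50 in mol/L (660 nM)
kd_m = 550e-9            # BLI K_D in mol/L (550 nM), 1:1 binding assumed
R = 1.987e-3             # gas constant, kcal mol^-1 K^-1
T = 298.0                # temperature in K (assumed)

pic50 = -math.log10(ic50_m)
dG = R * T * math.log(kd_m)       # kcal/mol; more negative = more favourable
print(f"pIC50 = {pic50:.2f}")              # ~6.18
print(f"dG(binding) = {dG:.1f} kcal/mol")  # ~-8.5 kcal/mol
```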
Summary
We herein report the development of a novel class of allosteric modulators of the second macrodomain of PARP14. Initial identification of carbazole GeA-69 (1) as a submicromolar inhibitor of PARP14 MD2 was made following a medium throughput screen. 11a Inhibitory activity can be rationalised through a PARP14 MD2 co-crystal structure of a similar derivative, sulfonamide 2 (PDB ID 5O2D). Investigation into this carbazole series was then made, revealing new opportunities for ligand elaboration. Systematic analysis of SAR demonstrated a very narrow structure-activity relationship for rings A-C (carbazole scaffold), and for further optimisation of screening hit 1 only modifications of either the N-acyl residue or ring D showed promise. A number of carbazole-containing compounds were tolerated in this newly identified allosteric site of PARP14 MD2, including the 3-cyano-substituted phenylmethanesulfonamide 108. Carbazole 108 displays submicromolar binding activity to PARP14 MD2 by AlphaScreen (IC 50 0.66 µM), which was also confirmed by BLI (K D 0.55 µM).
This lead molecule, along with others in this series, is a useful chemical starting point in the development of chemical probes for this poorly understood epigenetic target. | 3,847.2 | 2018-07-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Toward an ensemble of object detectors.
The field of object detection has witnessed great strides in recent years. With the wave of deep neural networks (DNN), many breakthroughs have been achieved for problems of object detection which were previously thought to be difficult. However, there exists a limitation with DNN-based approaches, as some architectures are only suitable for particular types of object. Thus, it would be desirable to combine the strengths of different methods to handle objects in different contexts. In this study, we propose an ensemble of object detectors in which individual detectors are adaptively combined for a collaborated decision. The combination is conducted on the outputs of the detectors, including the predicted label and location for each object. We propose a detector selection method to select the suitable detectors and a weight-based combining method to combine the predicted locations of the selected detectors. The parameters of these methods are optimized by using Particle Swarm Optimization in order to maximize the mean Average Precision (mAP) metric. Experiments conducted on the VOC2007 dataset with six object detectors show that our ensemble method is better than each single detector.
Introduction
Object detection is a problem in which a learning machine has to locate the presence of objects with a bounding box and determine the types or classes of the located objects in an image. Before the rise of Deep Neural Networks (DNN), traditional machine learning methods using handcrafted features [13,22] were used with only modest success, since these extracted features are not representative enough to describe many kinds of diverse objects and backgrounds. With the successes of DNN in image classification [11], researchers began to incorporate insights gained from Convolutional Neural Networks (CNN) into object detection. Some notable results in this direction include Faster RCNN [7] and You Only Look Once (YOLO) [16]. However, some object detectors are only suitable for specific types of objects. For example, YOLO struggles with small objects due to the strong spatial constraints imposed on bounding box predictions [15]. In this study, we propose to combine several object detectors into an ensemble system. By combining multiple learners for a collaborated decision, we can obtain better results than using a single learner [20]. The key challenge of building ensembles of object detectors is to handle the multiple outputs so that the final output can determine what objects are in a given image and where they are located.
The paper is organized as follows. In section 2, we briefly review the existing approaches relating to object detection and ensemble learning. In section 3, we propose a novel weight-based ensemble method to combine the bounding box predictions of the selected base detectors. The bounding boxes for combination are found by a greedy process in which boxes having Intersection-over-Union (IoU) values with each other higher than a predetermined threshold are grouped together. We consider an optimisation problem of maximizing the mean Average Precision (mAP) metric of the detection task. The parameters of the combining method are found by using an evolutionary computation-based algorithm to solve this optimisation problem. The details of experimental studies on the VOC2007 dataset [6] are described in section 4. Finally, the conclusion is given in section 5.
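To make the greedy IoU-based grouping and weight-based combination concrete before the formal description in section 3, one possible implementation is sketched below. It is only illustrative: the exact normalisation of the weighted average, the handling of confidence scores, and the data layout are assumptions made here, not the paper's specification.

```python
# Illustrative sketch of IoU-based greedy grouping and weight-based box combination.
# A detection is a dict: {"detector": i, "box": (x, y, w, h), "label": l, "conf": c}.

def iou(a, b):
    """Intersection-over-Union of two (x, y, w, h) boxes with top-left origin."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def group_detections(dets, theta):
    """Greedily group detections of the same label from different detectors
    whose IoU with an existing group member exceeds theta."""
    groups = []
    for d in dets:
        for g in groups:
            if (g[0]["label"] == d["label"]
                    and all(m["detector"] != d["detector"] for m in g)
                    and any(iou(m["box"], d["box"]) > theta for m in g)):
                g.append(d)
                break
        else:
            groups.append([d])
    return groups

def combine_group(group, weights):
    """Weight-based combination; weights[i] = (W_x, W_y, W_w, W_h) for detector i.
    A weight-normalised average is assumed for each coordinate."""
    box = []
    for k in range(4):  # x, y, w, h
        num = sum(weights[m["detector"]][k] * m["box"][k] for m in group)
        den = sum(weights[m["detector"]][k] for m in group)
        box.append(num / den if den > 0 else group[0]["box"][k])
    # Assumption: keep the highest confidence among the grouped detections.
    return tuple(box), group[0]["label"], max(m["conf"] for m in group)
```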
Object Detectors
Most early object detection systems were based on extracting handcrafted features from given images and then applying a conventional learning algorithm such as Support Vector Machines (SVM) or Decision Trees [13,22] to those features. The most notable handcrafted methods were the Viola-Jones detector [21] and Histograms of Oriented Gradients (HOG) [5]. However, these methods only managed to achieve modest accuracy while requiring great expertise in handcrafting the feature extraction. With the rise of deep learning, in 2014 Girshick et al. proposed Regions with Convolutional Neural Network (CNN) features (called RCNN), the first DNN-based approach for the object detection problem [8]. This architecture extracts a number of object proposals by using a selective search method, and then each proposal is fed to a CNN to extract relevant features before being classified by a linear SVM classifier. Since then, object detection methods have developed rapidly and fall into two groups: two-stage detection and one-stage detection. Two-stage detection methods such as Fast-RCNN [7] and Faster-RCNN [17] follow the traditional object detection pipeline, generating region proposals first and then classifying each proposal into one of the different object categories. Even though these networks give promising results, they still struggle with objects which have a broad range of scales, less prototypical images, and cases that require more precise localization. One-stage detection algorithms such as YOLO [15] and SSD [12] regard object detection as a regression or classification problem and adopt a unified architecture for both bounding box localization and classification.
Ensemble methods and optimization
Ensemble methods refer to learning models that combine multiple learners to make a collaborated decision [18,20]. The main premise of ensemble learning is that, by combining multiple models, the errors of a single learner are likely to be compensated by the other learners, resulting in better overall predictive performance. Many ensemble methods have been introduced, and they are categorized into two main groups, namely homogeneous ensembles and heterogeneous ensembles [20]. The first group includes ensembles generated by training one learning algorithm on many schemes of the original training set; the second group includes ensembles generated by training several different learning algorithms on the original training set.
Research on ensemble methods focuses on two stages of building an ensemble, namely generation and integration. For the generation stage, approaches focus on designing novel architectures for the ensemble system. Nguyen et al. [19] designed a deep ensemble method that involves multiple layers of ensembles of classifiers (EoC); a feature selection method works on the output of one layer to obtain the selected features used as the input of the next layer. For the integration stage, besides simple combining algorithms like the Sum Rule and Majority Vote [10] (sketched below), Nguyen et al. [20] represented the predictions of the classifiers in the form of vectors of intervals called granule prototypes by using information granules; the combining algorithm then measures the distance between the predictions for a test sample and the granule prototypes to obtain the predicted label. Optimization methods have also been applied to improve the performance of existing ensemble systems, for example in ensemble selection (ES), which aims to search for a suitable EoC that performs better than using the whole ensemble. Chen et al. [2] used ACO to find the optimal EoC and the optimal combining algorithm.
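For illustration, the following is a minimal sketch of the Sum Rule and Majority Vote combiners mentioned above; the function names and the toy posterior values are assumptions for the example, not taken from the cited works.

```python
import numpy as np

def sum_rule(probabilities):
    # probabilities: array of shape (n_classifiers, n_classes).
    # Sum (equivalently, average) the class posteriors and pick the largest.
    return int(np.argmax(probabilities.sum(axis=0)))

def majority_vote(predictions):
    # predictions: iterable of predicted class indices, one per classifier.
    values, counts = np.unique(np.asarray(predictions), return_counts=True)
    return int(values[np.argmax(counts)])

# Toy example with three classifiers and three classes.
probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.4, 0.4, 0.2]])
print(sum_rule(probs))            # class with the largest summed posterior
print(majority_vote([0, 1, 0]))   # most frequently predicted class
```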
General Description
In this study, we introduce a novel ensemble of object detectors to obtain higher performance than using single detectors. Assume that we have T base object detectors, denoted by OD_i (i = 1, ..., T). Each detector works on an image to identify the location and class label of objects in the form of prediction results R_i = {R_{i,1}, ..., R_{i,n_i}}, where n_i is the number of objects detected by OD_i. Each element R_{i,j} consists of:
• a bounding box BB_{i,j} = (x_{i,j}, y_{i,j}, w_{i,j}, h_{i,j}), where x_{i,j} and y_{i,j} are the top coordinates and w_{i,j} and h_{i,j} are the width and height of the bounding box;
• a prediction (l_{i,j}, conf_{i,j}), where l_{i,j} is the predicted label and conf_{i,j} is the confidence value, defined as the probability for the prediction of this label.
Our proposed ensemble algorithm deals with the selection of suitable detectors among all given ones, as well as with combining the bounding boxes of the selected detectors. In order to select suitable detectors, we introduce selection variables α_j ∈ {0, 1}, j = 1, ..., T, with each binary variable α_j representing whether detector OD_j is selected or not. The combining process is conducted after the selection process. To combine the bounding boxes made by the selected detectors, we need to know which bounding boxes from the different detectors predict the same object. Our proposed method consists of two steps:
- Step 1: Measure the similarity between pairs of bounding boxes from the detection results of different detectors to create groups of similar bounding boxes.
- Step 2: For each group, combine the bounding boxes.
The similarity between bounding boxes is measured using the Intersection over Union (IoU), which is very popular in object detection research [22]. For two bounding boxes BB_{i,j} and BB_{p,q}, the IoU between them is the area of their intersection divided by the area of their union:
IoU(BB_{i,j}, BB_{p,q}) = Area(BB_{i,j} ∩ BB_{p,q}) / Area(BB_{i,j} ∪ BB_{p,q}).
This measure is compared to a threshold θ (0 ≤ θ ≤ 1). If the IoU > θ, the boxes are grouped together, eventually forming a number of box groups G = (g_1, g_2, ..., g_K), where K is the number of groups. Note that we do not consider the IoU between boxes made by the same detector (i = p), since we combine bounding boxes of different detectors, and we only group bounding boxes that have the same predicted label. For each group, we perform a combination of the bounding boxes. Let W^x_i, W^y_i, W^w_i, W^h_i ∈ [0, 1] be the weights of detector OD_i (i = 1, ..., T). The combined bounding box for group g_k is BB_k = (x_k, y_k, w_k, h_k), in which each coordinate coord_k ∈ {x_k, y_k, w_k, h_k} is a weighted average of the corresponding coordinates of the boxes in g_k, with the contribution of each box weighted by the corresponding weight of the detector that produced it and with the indicator function I[.] selecting the boxes belonging to the group (Eq. 2). Therefore, our ensemble is completely determined by the following parameters: the weights (W^x_i, W^y_i, W^w_i, W^h_i) for i = 1, ..., T, the selection variables α_j, and the IoU threshold θ.
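Below is a minimal Python sketch of the IoU measure and of one plausible reading of the weighted combination rule described above; the helper names, the (x, y, w, h) box convention, and the per-detector weight layout are assumptions rather than the authors' code.

```python
def iou(box_a, box_b):
    # Boxes are (x, y, w, h) with (x, y) the top-left corner.
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def combine_group(boxes, detector_ids, weights):
    # boxes: list of (x, y, w, h) in one group; detector_ids[i] is the detector
    # that produced boxes[i]; weights[d] = (Wx, Wy, Ww, Wh) for detector d.
    # Each coordinate of the combined box is a weight-normalised average of
    # the corresponding coordinates of the boxes in the group.
    combined = []
    for c in range(4):
        num = sum(weights[d][c] * b[c] for b, d in zip(boxes, detector_ids))
        den = sum(weights[d][c] for d in detector_ids)
        combined.append(num / den if den > 0 else sum(b[c] for b in boxes) / len(boxes))
    return tuple(combined)
```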
Optimisation
The question that arises from the proposed method is how to search for the best values of these parameters, where W^x_i, W^y_i, W^w_i, W^h_i are the bounding box weights, α_j are the selection variables, and θ is the IoU threshold. We formulate an optimisation problem which we can solve to find the optimal values for these parameters. The fitness function is chosen to be the mean Average Precision (mAP), which is defined as the average of the Average Precision AP_c over all classes c. In order to calculate AP_c, we need to calculate the precision and recall, defined as follows:

Precision = TP / (TP + FP),   Recall = TP / (TP + FN),

where TP (True Positive) is the number of correct cases, FP (False Positive) is the number of cases where a predicted object does not exist, and FN (False Negative) is the number of cases where an object is not predicted. The IoU measure between a predicted bounding box and a ground truth box determines whether the ground truth box is predicted by the algorithm. The AP summarises the shape of the precision/recall curve and is evaluated by firstly computing a version of the measured precision/recall curve with precision monotonically decreasing, by setting the precision for recall r to the maximum precision obtained for any recall r' ≥ r. Then the AP is calculated as the area under this curve by numerical integration, sampling at all unique recall values at which the maximum precision drops. Let p_interp be the interpolated precision values; the average precision is then the sum, over these recall values, of the recall increments multiplied by p_interp. Thus, with T detectors, the optimisation problem is to maximise the mAP of the ensemble over W^x_i, W^y_i, W^w_i, W^h_i ∈ [0, 1], α_j ∈ {0, 1}, and θ ∈ [0, 1].

We use PSO [3,9] to find the optimal values for (W^x_i, W^y_i, W^w_i, W^h_i, α_j, θ). Compared to other optimisation algorithms, PSO offers some advantages. Firstly, as a member of the family of evolutionary computation methods, it is well suited to handle non-linear, non-convex spaces with non-differentiable, discontinuous objective functions. Secondly, PSO is a highly efficient solver of continuous optimisation problems in a range of applications, typically requiring low numbers of function evaluations in comparison to other approaches while still maintaining quality of results [14]. Finally, PSO can be efficiently parallelized to reduce computational cost. To work with continuous variables in PSO, we convert each α_j into a continuous variable belonging to [0, 1]; if α_j is higher than 0.5, the corresponding detector is added to the ensemble. The average mAP value in a 5-fold cross-validation procedure is used as the fitness value.

The combining and training procedures are described in Algorithm 1. Algorithm 1 receives inputs including the bounding boxes made by the detectors (BB_i), confidence values (conf_i), prediction labels (l_i), and the parameters. Each bounding box BB_i also has an associated variable det_i which records the index of the detector responsible for BB_i; for example, if BB_i is predicted by detector OD_j then det_i = j. Line 1 sorts the selected bounding boxes in decreasing order of confidence value. Lines 3-10 assign each bounding box to a group.

Algorithm 1:
1:  sort the bounding boxes of the selected detectors in decreasing order of confidence value
2:  group_idx ← 1; for i ← 1 to n_bb do
3:    if assign_i ≠ 0 then
4:      continue
5:    assign_i ← group_idx
6:    for j ← i + 1 to n_bb do
7:      if assign_j ≠ 0 or det_i == det_j or l_i ≠ l_j then
8:        continue
9:      if IoU(BB_i, BB_j) > θ then
10:       assign_j ← group_idx
11:   group_idx ← group_idx + 1
12: K ← group_idx − 1
13: G ← {g_1, g_2, ..., g_K} where g_k = {BB_i} such that assign_i == k
14: for k ← 1 to K do
15:   combine boxes in g_k to get BB_k = (x_k, y_k, w_k, h_k) by using Eq. 2
16:   E.insert(BB_k)
17: return E
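The following is a minimal Python sketch of the grouping loop of Algorithm 1, reusing the iou helper from the earlier sketch; the function name group_boxes and the list-based bookkeeping are illustrative assumptions rather than the authors' implementation.

```python
def group_boxes(boxes, detector_ids, labels, theta):
    # Greedy grouping following Algorithm 1: boxes are assumed to be pre-sorted
    # by decreasing confidence. assign[i] == 0 means "not yet grouped".
    n = len(boxes)
    assign = [0] * n
    group_idx = 1
    for i in range(n):
        if assign[i] != 0:
            continue
        assign[i] = group_idx
        for j in range(i + 1, n):
            # Skip boxes that are already grouped, come from the same detector,
            # or carry a different predicted label.
            if assign[j] != 0 or detector_ids[i] == detector_ids[j] or labels[i] != labels[j]:
                continue
            if iou(boxes[i], boxes[j]) > theta:
                assign[j] = group_idx
        group_idx += 1
    k = group_idx - 1
    return [[idx for idx in range(n) if assign[idx] == g] for g in range(1, k + 1)]
```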
For each bounding box BB_i, we first check whether it has already been assigned to one of the existing groups before assigning it to the new group group_idx (lines 3-5). Then, for each unassigned bounding box BB_j that is not made by the same detector as BB_i and has the same predicted label, we add BB_j to group group_idx if its IoU value with BB_i is greater than θ (lines 6-10). After all boxes are grouped, lines 12 to 17 combine the boxes in each group and return the combined bounding boxes.
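For completeness, a small sketch of the all-point interpolated AP described in the Optimisation section is given below, assuming recall and precision arrays computed along the ranked detections; the function name average_precision is an illustrative assumption.

```python
import numpy as np

def average_precision(recall, precision):
    # recall is non-decreasing along the ranked detections. First make the
    # precision curve monotonically decreasing (interpolated precision),
    # then integrate it over the recall axis.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])          # p_interp(r) = max over r' >= r of p(r')
    idx = np.where(r[1:] != r[:-1])[0]      # recall values where the curve moves
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```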
Experimental Setup
In the experiments, we used a number of popular object detection algorithms as base detectors for our ensemble method. The base detectors are SSD Resnet50, SSD InceptionV2, SSD MobilenetV1 [12], FRCNN InceptionV2, FRCNN Resnet50 [17], and RFCN Resnet101 [4]. We used the default configuration for all of these methods, and the training process was run for 50000 iterations. For the PSO algorithm, the inertia weight was set to 0.9 while the two acceleration coefficients C_1 and C_2 were set to 1.494; the number of iterations was set to 100 and the population size to 50. The VOC2007 dataset was used in this paper, containing 5011 images for training and validation and 4952 images for testing. The evaluation metric used in the paper is mAP (mean Average Precision). Among the 9963 images in the VOC2007 dataset, there are 2715 images having at least one
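To make the PSO configuration above concrete, the sketch below shows the standard velocity and position update with the quoted inertia weight and acceleration coefficients; it is a generic PSO step, not the authors' exact implementation, and the clipping to [0, 1] reflects the fact that all ensemble parameters are encoded in that range.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c1=1.494, c2=1.494, rng=np.random.default_rng(0)):
    # x, v, pbest: arrays of shape (n_particles, n_dims); gbest: shape (n_dims,).
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)   # weights, alphas, and theta all live in [0, 1]
    return x, v
```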
(Figure note: RFCN-Resnet101; the red or blue color means better or poorer performance on an object.)
Result and discussion
Table 1 (left) shows the mAP result of the proposed method and the base detectors. The proposed method has an mAP of 67.23%, which outperforms the best base detector, RFCN-Resnet101, by 2.56%. Figure 1 shows a detailed comparison of the AP values of the two methods for each class. It can be seen that the proposed method achieves a remarkable increase for the "dining table" object, from 35.04% to 56.17%. This is followed by "sofa" with an increase of 9.08%, from 54.19% to 63.27%. Other objects such as "dog" or "train" also saw a modest increase. On the other hand, "bicycle" and "bottle" saw a decrease, from 72.73% to 70.31% and from 49.26% to 45.98%, respectively. It should be noted that ensemble methods ensure that the overall result is better, even though some cases might be worse than the base learners; in total, the proposed method improves on 14 object types. Figure 2 provides a comparison between the selected base detectors (those with α_i ≥ 0.5 after optimisation) and the proposed method. It can be seen that RFCN-Resnet101, SSD-Resnet50, and FRCNN-Resnet50 correctly identify two bicycles, but wrongly predict another bicycle that spans the two real bicycles. In addition, FRCNN-Resnet50 wrongly predicts three person objects in the image. Due to the combination procedure, the redundant bicycle and person objects have been removed. Also, the bounding box for the left person predicted by SSD-InceptionV2 is slightly skewed to the right, but after applying the weighted sum of the bounding boxes of the base detectors, the combined box is positioned more accurately.
Conclusion
In this paper, we presented a novel method for combining a number of base object detectors into an ensemble that achieves better results. The combining method is constructed using the PSO algorithm to search for the parameter set that optimises mAP. The parameters include selection indicators that show whether each detector is selected or not; the bounding boxes of the selected detectors are then combined with a weight-based combining method. Our results on a benchmark dataset show that the proposed ensemble method is able to combine the strengths and mitigate the drawbacks of the base detectors, resulting in an improvement over each individual detector. | 3,962 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Selective Content Removal for Egocentric Wearable Camera in Nutritional Studies
Automatic Ingestion Monitor v2 (AIM-2) is an egocentric camera and sensor that aids monitoring of individual diet and eating behavior by capturing still images throughout the day and using sensor data to detect eating. The images may be used to recognize the foods being eaten, the eating environment, and other behaviors and daily activities. At the same time, the captured images may carry privacy-concerning content such as (1) people in social eating and/or bystanders (i.e., bystander privacy); (2) sensitive documents that may appear on a computer screen in the view of AIM-2 (i.e., context privacy). In this paper, we propose a novel approach to privacy protection based on automatic image redaction, which selectively removes content by semantic segmentation using a deep learning neural network. The proposed method achieved bystander privacy removal with a precision of 0.87 and a recall of 0.94, and context privacy removal with a precision of 0.97 and a recall of 0.98. The results of the study show that selective content removal using a deep learning neural network is a much more desirable approach to address privacy concerns for an egocentric wearable camera in nutritional studies.
I. INTRODUCTION
Wearable sensor technology is an exponentially growing field, largely focused on health and wellness. Currently, the wearable sensor technology sector reports a global market value of 24.2 billion USD, projected to grow to 100 billion USD by 2024 [1]. Consumer wearable sensors range from smartwatches and wristbands to jewelry, glasses, and clothing, each providing the consumer with a unique aid, as illustrated in Fig. 1 and Fig. 2. A wearable sensor commonly contains one or more of the following components: accelerometers, gyroscopes, GPS, physiological sensors, and cameras.
These components acquire personal information related to the motion, location, vitals, and images of/from the wearer. The obtained data is used to provide aid in applications such as lifelogging, health monitoring, person tracking, leisure, and games/sports. However, continuous acquisition of personal information raises concerns about the privacy of the individual. Surveys [2]-[4] have reported user privacy concerns such as social implications, criminal abuse, facial recognition, access control, surveillance and sousveillance (recording by wearable cameras), and speech disclosure. In the context of information technology professionals, these privacy concerns are broadly classified into three groups: context privacy, bystander privacy, and external data sharing privacy [2], [5].
Context privacy is related to access control, location disclosure, and speech disclosure. For example, a wearable lifelogging sensor may acquire the wearer's speech, images of activities being performed, and location, which the wearer might not want to share with service providers. Bystander privacy is the issue of protecting the privacy of third parties in the surroundings of the wearer. Wearable sensors with a microphone and camera might capture speech and images of bystanders in the surroundings, leading to privacy concerns related to speech disclosure, facial recognition, surveillance, and sousveillance. External data sharing privacy is related to the security of the data forwarded to cloud services that store, analyze, and provide feedback to the wearer. In this paper, we further discuss privacy issues and solutions related to head-mounted, first-view egocentric cameras. Head-mounted wearable sensors (see Fig. 1) are often worn as eyewear. Google Glass [6], SenseCam [7], Epson Moverio [8], AIM [9], Vuzix [10], and Snap Spectacles [11] may be used for health and wellness, lifelogging, leisure, and games. These wearable sensors comprise either one or a combination of several sensors (e.g., accelerometer, gyroscope, and microphone) and an egocentric (first-person-view) camera. The camera is the most commonly used component of a head-mounted wearable sensor. Images captured by the egocentric camera are the primary source of information since the camera carries the wearer's visual perspective.
FIGURE 2. Schematic review of wearable sensor technology, primary sensors, and applications [2]. Acc-Accelerometer, Gyro-Gyroscope, Phy-Physiological sensor, Cam-Camera. Citations: Google Glass [6], AIM [12].
Automatic Ingestion Monitor v2 (AIM-2) is a sensor for monitoring food intake. Eating episodes are recognized from the signal of the embedded accelerometer [12], prompting the egocentric camera to capture images of the food being eaten. However, individuals often eat in groups or socially, and studies [13] show that individuals are likely to use their mobile phone/computer while eating a meal (see Fig. 3).
Furthermore, in some studies AIM may be configured to capture images continuously throughout the day. Regardless of the mode of image collection, these egocentric images may contain (1) the surrounding environment, including people around the wearer; (2) sensitive information such as the content of computer screens. The collected images may be a source of privacy leakage, raising issues of bystander privacy and context privacy. Specifically, in the United States, such images are recognized as information protected under the Health Insurance Portability and Accountability Act (HIPAA), Title II, Privacy Rule [14].
The images may contain HIPAA-protected information, such as account numbers on a computer screen, and full-face images of people. While the images are stored in a HIPAA-compliant repository and reviewed by certified nutritionists, who are not considered an explicit adversary, the risk of accidental privacy leakage still remains. Furthermore, the protected information cannot be shared. The HIPAA rule suggests deidentification of the protected information through removal of specific identifiers, such as names, phone and account numbers, and full-face photos.
Traditionally, HIPAA compliance may be addressed by discarding the images with privacy concerns [3] or scaling the images down to extremely low resolution [15]. However, in nutritional studies, discarding or scaling down the images would very likely result in complete or partial loss of information about food intake. Therefore, we opted to perform image redaction-based privacy protection by selectively removing content related to bystander privacy and context privacy [16] while preserving information related to food intake. The immediate goal is to eliminate the major sources of potential privacy leakage, namely screens and persons, as the first step toward the eventual goal of achieving HIPAA-compliant de-identification of the images acquired in nutritional studies.
Therefore, we propose an automatic, selective content removal method to exclude bystander and context privacy content. The rest of the paper is organized as follows. Section II reviews current privacy content removal methods used for head-mounted wearable sensors. Section III describes the proposed approach for removing privacy content from images of a head-mounted wearable sensor. Section IV discusses the results and analysis of the proposed approach. Sections V and VI present the conclusions of the study, its limitations, and future work.
II. PRIVACY CONTENT REMOVAL METHODS
Privacy content removal fundamentally prevents information that an individual wants to keep private from becoming public [17]. Researchers refer to data privacy protection in the context of images and videos as visual privacy protection. Visual privacy protection is widely performed as (i) intervention, (ii) blind vision, (iii) secure processing, (iv) redaction, and (v) data hiding. An intuitive review of these privacy protection approaches can be found in [18]. We note that the majority of the wearable sensor community has adopted a redaction-based privacy protection approach, in which private information is concealed by modifying or removing sensitive image regions such as faces, bodies, and screens in the images being reviewed by nutritionists.
Traditionally, researchers in the wearable sensor community have opted for privacy content removal that relies either on manual review or on automatic image classification. In the manual review approach, the wearer reviews the images and discards those with private content [19]. This is a tedious approach, since the wearer may have to manually review hundreds to thousands of images daily to maintain their privacy. The automatic image classification-based approach comes as a solution to the manual review approach.
At present, DeepAI [20] and Sightengine [21] perform automatic removal of content such as nudity, weapons, alcohol, drugs, offensive content, and hate signs by using deep convolutional neural networks and discarding the flagged images. However, image removal is highly undesirable in food intake monitoring, since it may lead to a complete loss of information in many situations of social eating and eating in the presence of computer screens.
Researchers from the wearable sensor community have attempted to address these privacy issues. Thomaz et al. [22] proposed to address the bystander privacy issue by using a privacy saliency matrix approach. The authors used a combination of manual review by the wearer and image tagging to categorize the data into four quadrants based on saliency versus non-saliency and privacy concern versus non-concern. The bystanders' faces were blurred using a Haar cascade classifier. The privacy saliency matrix approach was mainly used to cluster the images prior to privacy content removal. The main limitation of this method is its reliance on manual review and image tagging, which is a tedious and time-consuming task. A privacy behavior study [23] on wearable head-mounted sensors reported that wearers prefer to manually control the image acquisition of the wearable camera to address privacy-related issues. This comes as the most straightforward solution; however, this approach brings in the human factor and the possibility of information loss.
Zarepour et al. [24] proposed a context privacy-preserving framework for a lifelogging sensor. The authors used a combination of human activity recognition, ambient environment detection, and sensitive subject detection modules to remove context privacy concerns. The human activity recognition module classified accelerometer data to understand human activity (i.e., walking, sitting, running).
The ambient environment detection was performed to detect the environment (i.e., indoor/outdoor). The human activity recognition and ambient environment detection modules were used to detect contextual cues. Based on these contextual cues, the authors performed sensitive subject detection to remove context privacy concerns by using an array of best-fitting deep learning and computer vision algorithms such as AlexNet, FastRCNN, and the Viola-Jones detector. The authors reported an accuracy of 70% during testing on 300 images.
Muchen et al. [25] proposed a privacy-preserving approach for the head-mounted wearable sensor using multi-sensor information from a mobile phone and a smartwatch. Here the authors used the multi-sensor information to identify privacy-concern scenarios and trigger the wearable camera.
III. METHOD
Existing privacy content removal methods use semi-automated approaches, combining manual review, image tagging, and multi-sensor fusion. The semi-automatic approach brings in human factors, whereas the multi-sensor approach depends on the availability of multiple sensors. Automated privacy content removal methods based solely on images would be much more desirable. Scene understanding using semantic segmentation is a widely used approach to identify the contents of an image. Selectively identifying specific content within an image improves the privacy content recognition and removal process.
Our proposed method performs image redaction-based privacy protection by selectively removing privacy content (objects and people) from free-living image data captured by AIM-2. We use a semantic segmentation-based approach to selectively remove bystander and context privacy content from images. Semantic segmentation performs a pixel-level classification of the content, in contrast to object detection approaches that use bounding boxes. Object detection methods may capture food content and background information within the bounding box used for content removal, which would lead to the loss of information related to the eating episode.
Multiple semantic segmentation models, such as Unet [26], FCN [27], DeconvNet [28], and SegNet [29], were studied. We chose to use SegNet due to its ability to provide accurate semantic segmentation while operating with a simple architecture, high computational efficiency, and low memory usage. Computational efficiency is an essential aspect of the practical usage of this method since AIM-2 is meant to operate throughout the day. The participants in the study showed an active usage period between 12.5 and 18.5 hours, capturing about 3000 to 4500 images per day.
A. SegNet
SegNet comprises 109 layers, following an encoder-decoder architecture. The primary emphasis of SegNet is to use the encoder layers for feature extraction and to perform semantic segmentation at the decoder layers. SegNet therefore incorporates the architecture and the weights of the convolutional base of VGG16 [30]. VGG16 has already been trained on the ImageNet database, comprising 1000 classes and over 14 million images [31]. As mentioned in the previous sections, wearable sensors capture images from various environments with a great diversity of content (see Fig. 5). Hence, using pre-trained weights is highly advantageous for the feature extraction.
SegNet therefore inherits 13 pre-trained convolution layers, branched among five max-pooling operators. Each convolution and pooling stage comprises a combination of convolution filters, a batch normalization layer, and a ReLU activation layer to extract feature maps at each encoder stage. The 2 × 2 max-pooling operators are an essential aspect of the network, as the max-pooling indices capture the locations of the maximum feature value in each pooling window and forward this information to the decoder layers to improve the boundary delineation of the semantic segmentation.
The decoder network upsamples the input feature map at each decoder stage using the max-pooling indices from the corresponding encoder stage, resulting in 13 trainable convolution layers. The convolution layers are branched among a combination of max-unpooling layers, batch normalization layers, and ReLU activation layers. The final encoded feature map of the input image is passed to the decoder max-unpooling layer along with the max-pooling indices to produce a dense feature map. This decoding process is repeated across the 5 max-unpooling layers.
Finally, a multi-channel feature map, similar to the initial feature map of the encoder stage, is generated. A trainable SoftMax layer followed by a pixel classification layer is used to classify the pixels of the image into the corresponding classes: person/bystander privacy, screen/context privacy, and background.
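As a minimal illustration of the max-pooling-indices mechanism described above, the PyTorch sketch below shows a single encoder/decoder stage with max unpooling; it is a toy stand-in, not the 109-layer SegNet, and the class name and channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class TinySegNetStage(nn.Module):
    # One encoder stage + one decoder stage illustrating how SegNet reuses
    # max-pooling indices for upsampling (boundary delineation).
    def __init__(self, in_ch=3, mid_ch=16, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=1),
                                 nn.BatchNorm2d(mid_ch), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 3, padding=1),
                                 nn.BatchNorm2d(mid_ch), nn.ReLU(),
                                 nn.Conv2d(mid_ch, n_classes, 3, padding=1))

    def forward(self, x):
        f = self.enc(x)
        pooled, indices = self.pool(f)     # remember where the maxima were
        up = self.unpool(pooled, indices)  # place features back at those locations
        return self.dec(up)                # per-pixel class scores (softmax applied in the loss)
```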
B. SENSOR SYSTEM
The sensor system used for the method development is the Automatic Ingestion Monitor, version 2 (AIM-2), a second-generation egocentric wearable sensor used for monitoring of diet and eating behavior (Fig. 4). AIM-2 may capture periodic images only during food intake or throughout the whole day at one image per 15 seconds, i.e., about 5760 images per day. AIM-2 comprises five main components: a 5-megapixel CMOS image sensor and a 3D accelerometer as inputs, an STM32 processing unit, an FPGA-based frame buffer, and a micro SD-based storage unit. The camera sensor is a 5-megapixel CMOS image sensor with a wide-angle lens to capture a broader field of view.
C. DATA
We used data from fifteen volunteers (9 males and 6 females) aged 18 to 33 years old. The study was approved by the Institutional review board of the University of Alabama. The participants signed an informed consent form prior to the experiments and were compensated for their participation. The experiment was conducted in two parts: a controlled laboratory experiment (1 day) and a free-living experiment (1 day).
To represent a variety of real-world eating scenarios, we only used the data from the free-living experiment. The participants were asked to wear the AIM-2 egocentric wearable sensor for the entire day; the actual wear time was between 12.5 and 18.5 hours. The participants were asked to follow their daily routine activities and have at least one meal. Furthermore, the free-living experiment did not restrict the participants from any social/personal interactions and activities. At the end of the experiment, participants reviewed the images and removed any that they did not want included in the dataset. Even though some images were removed manually, the free-living experiment generated a dynamic dataset containing social/personal interactions and activities in public/personal spaces, including bystanders and the use of screens (see Fig. 5).
We randomly selected 400 images from each participant to generate our dataset of 6000 images. Images that were distorted by motion blur, overexposure, or an idle view (i.e., the view when the participant is not wearing the glasses or has removed them and placed them somewhere else) were excluded. The randomly selected images were annotated with 3 classes: person, screen, and background. The persons and screens were annotated, and the remaining (non-person/screen) pixels of the image were grouped as background pixels.
The annotation was performed by pixel-wise labeling using the MATLAB annotation tool known as the Image Labeler. Annotation for the person class was defined as an entire human or any human body part, occluded or non-occluded, within the frame of the image. The definition for screens covered any digital screens: mobile phones, desktop monitors, and televisions. We annotated 3781 images with screen labels containing entire or partially occluded mobile phones, desktop monitors, or televisions, and 4378 images with labels for a single person, multiple persons, or a partially occluded person.
We supplemented our dataset with publicly available datasets. We used the pixel annotated data from the CamVid database [29], SUN RGB-D database [32], and ADE20K database [33]. Here we relabeled the annotations of these public databases to use classes: person, screen and relabeled all the remaining pixels to the background.
AIM-2 is built with a wide-angle lens to capture a broader field of view and captures images in portrait orientation. The wide-angle lens introduces barrel distortion into the captured image. The barrel distortion and portrait orientation would reduce the efficiency of SegNet, as it is unlikely that the encoder base was previously trained on distorted images. Therefore, we performed a −90° image rotation and a barrel distortion correction. We calculated the focal length, camera optical center, and radial distortion coefficients of the lens to rectify the barrel distortion [34]. Furthermore, we carried out an image enhancement step, performing histogram normalization to improve the intensity distribution of the image.
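A sketch of this preprocessing chain using OpenCV is shown below; the rotation direction, the camera intrinsics (fx, fy, cx, cy), and the radial distortion coefficients (k1, k2) are placeholders for the calibrated values, and the input is assumed to be an 8-bit BGR image.

```python
import cv2
import numpy as np

def preprocess(image, fx, fy, cx, cy, k1, k2):
    # Rotate the portrait image (assumed here to correspond to the -90 degree
    # rotation), undo the wide-angle (barrel) distortion, then normalise the
    # intensity distribution via histogram equalisation on the luminance channel.
    rotated = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.array([k1, k2, 0.0, 0.0], dtype=np.float64)  # radial terms only
    undistorted = cv2.undistort(rotated, camera_matrix, dist_coeffs)
    ycrcb = cv2.cvtColor(undistorted, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```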
1) TRAINING AND VALIDATION
A transfer learning strategy was adopted to train the SegNet semantic segmentation model. We froze the encoder base and trained the 13 convolution layers in the decoder base along with the SoftMax layer and pixel classification layer (see Fig. 5). Stochastic gradient descent with momentum was used as the optimization algorithm to train the decoder layers, with a momentum of 0.4 and a fixed learning rate of 0.01. A class-weighted cross-entropy loss was used to train the pixel classification layer to overcome the unbalanced class distribution in the training dataset.
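The sketch below illustrates this training setup in PyTorch, reusing the TinySegNetStage stand-in from the earlier sketch; the pixel frequencies, the batch shape, and the single-batch loader are placeholders, and only the decoder parameters are passed to the optimizer to mimic the frozen encoder.

```python
import torch
import torch.nn as nn

# Stand-in network and data; in the real pipeline the model is SegNet with a
# frozen encoder and the loader yields AIM-2 images with per-pixel labels.
model = TinySegNetStage()
loader = [(torch.randn(2, 3, 64, 64), torch.randint(0, 3, (2, 64, 64)))]

# Class weights inversely related to pixel frequency (placeholder frequencies)
# to counteract the dominance of background pixels.
pixel_freq = torch.tensor([0.05, 0.03, 0.92])      # person, screen, background
class_weights = pixel_freq.median() / pixel_freq

criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.SGD(model.dec.parameters(), lr=0.01, momentum=0.4)  # decoder only

for images, targets in loader:
    optimizer.zero_grad()
    logits = model(images)                          # (batch, classes, H, W)
    loss = criterion(logits, targets)
    loss.backward()
    optimizer.step()
```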
We used 60% of our self-collected AIM-2 images for training, along with the complete data from the CamVid, SUN RGB-D, and ADE20K databases, amounting to a total of 36846 training images. We further enhanced the training dataset to 331,614 images by augmenting the data using 7 augmentation techniques: image rotation by ±25 degrees, image shifts along the horizontal and vertical axes by ±30 pixels, horizontal image flips, image scaling by factors of 1.25 and 1.5, and brightness shifts by a factor of 0.5 to 1.5. The training was carried out on a GeForce RTX 2070 GPU using a mini-batch size of 8 images for 80 epochs, and the data was shuffled at each epoch.
During the validation phase, we adopted a holdout validation approach: the remaining 40% of the self-collected AIM-2 images were held out and used for validation.
The results on the validated images were evaluated using measures of accuracy, MeanBFScore, and IoU for each class [29]. The accuracy (see Eq. 1) shows the percentage of pixels that are correctly assigned to the respective class. The MeanBFScore (see Eq. 4), also known as the boundary F1 contour matching score, shows how well the boundaries of each class are aligned with the ground truth boundary. The intersection over union (IoU) (see Eq. 5) measures the convergence between the pixel area of each class and the ground truth.
In these measures, TP denotes true positives, TN true negatives, FP false positives, and FN false negatives. We evaluate the performance of the removal of privacy-concerning content (i.e., bystander and context privacy) using the precision (Eq. 2, Precision = TP/(TP + FP)) and recall (Eq. 3, Recall = TP/(TP + FN)) metrics. Here we considered bystander and context privacy content as objects by inserting a bounding box around the object silhouette in each image channel corresponding to the classes person/bystander privacy and screen/context privacy. Hence, the classification of the bounding boxes was used to estimate the precision and recall.
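For illustration, a sketch of per-class pixel-level metrics computed from predicted and ground-truth label maps is given below; the function name and the convention that label maps hold integer class indices are assumptions, and the object-level precision/recall over bounding boxes is computed analogously from box-level TP/FP/FN counts.

```python
import numpy as np

def pixel_metrics(pred, target, cls):
    # pred, target: integer label maps of the same shape; cls: class index of interest.
    tp = np.sum((pred == cls) & (target == cls))
    fp = np.sum((pred == cls) & (target != cls))
    fn = np.sum((pred != cls) & (target == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, iou
```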
D. SELECTIVE REMOVAL OF PRIVACY CONTENT
The images were processed with the trained SegNet semantic segmentation model (see Fig. 6). SegNet generates a semantically segmented tri-channel pixel classification map highlighting the pixels related to person, screen, and background: pixels in the first channel are classified as person, pixels in the second channel as screen, and pixels in the third channel as background. We perform bystander and context privacy removal by nullifying the pixels in the first and second channels and preserving the pixels in the third channel. The updated pixel classification map is superimposed onto the original image to generate the image with the privacy content removed (see Fig. 7(d)).
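A minimal sketch of the nullification step is shown below, assuming the segmentation output has already been collapsed into a per-pixel label map with class indices 0 (person), 1 (screen), and 2 (background); the function name redact and the index convention are assumptions.

```python
import numpy as np

def redact(image, label_map, private_classes=(0, 1)):
    # image: HxWx3 array; label_map: HxW array of class indices.
    # Pixels belonging to the private classes are nullified;
    # background pixels are kept unchanged.
    redacted = image.copy()
    mask = np.isin(label_map, private_classes)
    redacted[mask] = 0
    return redacted
```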
IV. RESULTS AND DISCUSSION
The results for the bystander privacy removal are tabulated in Table 1, showing a high precision of 0.87 and recall of 0.94 averaged over all test subjects. The high recall suggests that our method is sensitive in detecting bystanders in this diverse dataset containing bystanders in social/individual interactions and activities in public/personal spaces. Furthermore, our method was also able to remove body parts that were occluded. This is important since the bystanders' and the wearer's privacy could be violated either by the full figure or by a part of the body. An accuracy of 0.87 was obtained for pixel-wise classification and removal of the bystander. However, our method reported lower IoU and MeanBF scores; this is mainly due to the imperfect convergence of the classified region and boundary with the ground truth region and boundary (Fig. 7(c)), a consequence of the boundary delineation of the semantic segmentation approach. SegNet captures the locations of the maximum feature value in each pooling window and passes this information to the decoder layers to improve boundary delineation. Nevertheless, we noticed that SegNet did not always segment within the boundary of the bystander and often smeared at the edges.
The results related to context privacy removal are also tabulated in Table 1. Context privacy removal showed a high precision of 0.97 and recall of 0.98 averaged over all test subjects. The high recall suggests that our method is sensitive in detecting digital screens, including mobile phone screens, laptop screens, and television screens in private and public spaces. Partially occluded screens were also detected and removed to preserve context privacy.
The overall accuracy of pixel classification and removal for context privacy was 0.90. The method also reported a reasonable convergence of the classified region with the ground truth, with an IoU of 0.77. The improvement in IoU for context privacy is likely due to the rigid structure of the objects (see Fig. 8(c)).
However, context privacy removal reported a lower MeanBF of 0.55. The boundary misalignment between the classified region and the actual region is the leading cause of this lower MeanBF. Furthermore, it was noticed that the ReLU layers activated largely in response to the brightness intensity in comparison to other features (see Fig. 8(b)).
Therefore, it could have contributed to smearing the boundary of the screen (see Fig. 8(d)). We also noticed an odd case in which a mobile screen was classified as a bystander.
Here a picture of a person was displayed on the screen, and our method removed the region as a bystander instead of as content (see Fig. 9(b)). This is mainly due to training examples in which the wearer's mirror reflection was labeled as the bystander (see Fig. 9(a)).
As mentioned earlier, AIM-2 is intended to monitor individual eating habits. Therefore, selectively removing privacy content was essential to retain information related to food intake. Dietitians and nutritionists often use the images of the food intake to determine food content and the eating environment. Therefore, we estimated the background pixel classification accuracy.
Our proposed method removed privacy content while accurately preserving 0.99 of the background pixels. Considering the application of ingestion monitoring, the critical process of privacy content removal is performed as a preprocessing operation; therefore, we also estimated the computational efficiency of the proposed method. We tested the method on the test dataset containing data from 6 participants. The proposed method ran at a computational speed of 8.93 images per second.
V. CONCLUSION
This paper addressed an important problem of privacy concerns for an egocentric camera sensor. Image information related to bystander privacy and context privacy was identified as the most significant issue for imaging-based wearable sensors. A fully automatic method was developed to selectively remove privacy content from an egocentric wearable sensor. A semantic segmentation-based deep convolutional neural network was adopted to remove privacy content accurately while preserving background information. We used about 331,614 images, comprising augmented data from 9 participants of the self-collected dataset and publicly available data, for training. The proposed method was validated on data from 6 participants. The validation results showed that the proposed method removed bystander privacy with a precision and recall of 0.87 and 0.94 and context privacy with a precision and recall of 0.97 and 0.98.
VI. LIMITATIONS
Our proposed method was able to protect sensitive information of individuals participating in the study by selectively removing sensitive content while preserving information related to food intake. This would potentially enable nutritionists to analyze diet and ingestive behavior. However, the sensitive information in the images from these ingestion monitoring studies is not entirely protected and could be compromised by adversarial attacks [35]-[37], where the individuals and their behavioral patterns may potentially be identified based on the object silhouettes in the repeated food images.
While this is a limitation, the main goal of the proposed method is to improve the acceptance and compliance with device wear in nutritional studies, where the participants may be concerned that the device captures private information during meals. The proposed privacy protection method should alleviate these concerns, which we plan to test in future studies. A limitation of the proposed method is that it provides privacy protection for off-line, post-data-collection review of the images of nutritional studies. Ongoing development of embedded hardware accelerators for deep neural networks, such as Google's Edge TPU [38], may enable eventual deployment of these algorithms directly on the wearable device for on-line execution.
Although the proposed method operated satisfactorily on unseen data from ongoing studies, we noticed instances of inaccurate segmentation that erased the food images due to the person sitting next to the food. This is mainly caused by the generalization problem of the deep learning neural network. However, this is not a major concern for the related nutritional analysis, as many images are captured during any given meal and instances of improper segmentation are sufficiently rare. We propose to overcome this limitation by fine-tuning the model with the negative results from future studies that will provide additional data for retraining the models.
His research interests include applying computer vision and linear and non-linear machine learning-based methods to address real-world problems in video analytics, wearable sensors, and interdisciplinary research, with a particular focus on remote health monitoring applications. He serves as a reviewer for several journals, including IEEE, Elsevier, and other publications. EDWARD SAZONOV (Senior Member, IEEE) received the Diploma degree in systems engineering from the Khabarovsk State University of Technology, Russia, in 1993, and the Ph.D. degree in computer engineering from West Virginia University, Morgantown, WV, in 2002. He is currently a Professor with the Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, and the Head of the Computer Laboratory of Ambient and Wearable Systems. His research interests span wearable devices, sensor-based behavioral informatics, and methods of biomedical signal processing and pattern recognition. Devices developed in his laboratory include a wearable sensor for objective detection and characterization of food intake (AIM - Automatic Ingestion Monitor); a highly accurate physical activity and gait monitor integrated into a shoe insole (SmartStep); a wearable sensor system for monitoring of cigarette smoking (PACT); and others. His research has been supported by the National Institutes of Health, the National Science Foundation, the National Academies of Science, as well as by state agencies, private industry, and foundations. He serves as an Associate Editor for several journals, including IEEE, Frontiers, and other publications. | 6,601 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Comment on tc-2021-394 for the role of anthropogenic climate change in glacier
The overarching goal of this study is to provide a framework to attribute glacier retreat to anthropogenic climate change. The authors seek to do this by performing ensemble simulations of glacier retreat in idealized geometries driven by quasi-random climate variability. Overall, I think the study is very well written and illustrated. The figures were easy to read and interpret and the text provided sufficient motivation and narrative structure to follow the thread. I have some overarching comments, some boring if highly technical comments and some minor comments about wording, but overall my comments are minor.
past this implies glaciers should all be in their retreated position(s) and we wouldn't observe any glaciers in their more advanced positions. This isn't really a problem for the study because the climate has not been stationary stochastic and has variability on a range of time scales. This raises two questions.
1. As the authors point out, modern glaciers are responding to both past and present climate forcing. We know that glaciers have advanced during colder periods. For example, some glaciers may have advanced during the Little Ice Age and then retreated and, depending on glacier size, glacier response to the Little Ice Age and other climate anomalies would overlap with the anthropogenic climate interval. However, the model clearly shows rapid retreat with little advance. This raises the question of whether the model (or perhaps geometry?) is capable of simulating prior glacier advance. If advance is not possible, then it seems possible that the model is overestimating the probability of glacier retreat in response to climate forcing (?), potentially biasing the statistical inference. To put this another way, in the authors' model the glaciers will eventually retreat irrespective of anthropogenic climate forcing and the only thing that warming does is increase the probability that this occurs sooner. In this scenario, anthropogenic climate change only affects rates and not states (i.e., the time of retreat can be accelerated by warming, but retreat is ultimately going to happen regardless). It would be more satisfying intellectually if the authors could turn around and also attribute glacier advance to periods when the climate was colder, as we have observed in the historical and paleo records.
2. The authors break the probability distributions into a component related to (random) natural variability of the climate forcing and a component related to parameter uncertainty. This is fairly standard, at least in the glaciological literature, and it follows from numerous studies in engineering. However, it makes a potentially large assumption: that we understand the system well enough that the model uncertainty largely derives from a handful of parameters that are imperfectly known. There is another possibility that also has to be considered, which is that the underlying parameterizations are either not complete or fail in different climate scenarios. This seems especially relevant when dealing with submarine melt, iceberg calving, shear margin weakening, subglacial hydrology, etc., none of which are especially well understood. To be clear, my understanding is that the entire formalism presented here can be applied to any model irrespective of the model's fidelity. For example, ca 2000 one could apply this same method using Shallow Ice Approximation models that don't account for longitudinal stresses or marine ice sheet instability. These models would require much more oomph from the climate variability to drive retreat because they lack crucial physics. But the same formalism would allow "attribution". Hence, a crucial point made by Shepherd (2021) is that we also have to consider all the alternative hypotheses that could also account for the observations. Shepherd (2021) described how to do this using Bayesian analysis through the use of the "complement". The trick is that one can formally include how much confidence we have in the model vs alternative models/explanations. I don't propose that the authors utilize this approach here, but I would like to make sure that they are aware of it and urge them to consider the possibility that their model might not be as physically robust as one might assume from the discussion in the text. I will note that this is gently hinted at near line 215, but it does seem important to emphasize that the attribution is very sensitive to model assumptions and, ultimately, this effect needs to be quantified.
Technical comments:
1. How do the authors define noise and what does it mean for the forcing to be ``random''? My understanding is that the authors assume mass balance has a secular component with zero-mean fluctuations superimposed. But I'm not entirely sure how the fluctuations are defined and I would encourage the authors to add additional details and equations about how the noise is created in the supplementary materials. More concretely, I take it that noise is added to the surface mass balance? For a zero-mean Gaussian process, the noise is not smooth and differentiable, so we would then need to integrate it in the form: $h(t+\Delta t) = h(t) + \Delta t\,(f(t)+S(t)) + \sqrt{\Delta t}\,\sigma(t,h)\,\xi$, where $\xi$ is a standard normal draw, $\sigma(t,h)$ is the standard deviation of the Gaussian process, $S(t)$ is the secular component, and I have defined $f(t)$ as the divergence term in the mass balance (or other terms in the equation). Note that the random noise term is multiplied by the square root of the time step in a Brownian process. There is a literature on integrating stochastic differential equations using colored (as opposed to white) noise, but that far exceeds my mathematical acumen. It would be helpful to me to see more details summarizing how the noise is created and how the stochastic differential equation is then integrated, along with demonstrations of numerical convergence using both varying time step size and grid resolution. I don't request a host of convergence studies added to the paper, but a few sentences explaining that they were done and the results of the convergence experiments.
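For illustration, a minimal sketch of the forward (Euler-Maruyama) update described above is given below; f, S, and sigma are placeholder callables for the divergence term, the secular mass balance component, and the noise standard deviation.

```python
import numpy as np

def integrate(h0, f, S, sigma, dt, n_steps, rng=np.random.default_rng(0)):
    # Euler-Maruyama update: deterministic terms scale with dt, while the
    # zero-mean Gaussian forcing scales with sqrt(dt).
    h = np.empty(n_steps + 1)
    h[0] = h0
    for n in range(n_steps):
        t = n * dt
        h[n + 1] = (h[n] + dt * (f(t, h[n]) + S(t))
                    + np.sqrt(dt) * sigma(t, h[n]) * rng.standard_normal())
    return h
```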
2. The frontal ablation parameterization is intriguing, but I have some questions and comments about this. As I understand it, this approach involves applying a large, negative surface mass balance localized at the last grid point at the grounding line and labeling this a "flux" or frontal ablation term. This seems intuitive at first: the flux term is removing ice at the terminus. The large frontal ablation causes a surface slope between the last two grid points in the (discretized) model. That this is in fact a surface ablation parameter can be seen by moving the frontal ablation to the right hand side of the ice thickness equation. For example, writing the flux as q, an upwind finite difference has the form: $h(x,t+\Delta t) = h(x,t) + \frac{q(x) - q(x-\Delta x)}{\Delta x}\Delta t + \frac{h \dot m}{\Delta x}\Delta t + S(t)\Delta t$. Note that the second to last term is the frontal ablation term. In the limit that $\Delta x$ becomes small, the surface ablation term becomes large, leading to an increased effective surface mass balance at that point. I have messed around with this type of parameterization a lot in the past (e.g., Bassis et al., 2017) and could not convince it to converge numerically under grid refinement. Instead, this type of parameterization created an unphysical singularity in the slope/thickness of the glacier that became larger and larger as the grid spacing became finer and finer. To cure the singularity, I had to regularize the frontal ablation term, recognizing that the surface ablation needs to be spread out over a characteristic length scale (I used 1 ice thickness). Doing this cured the lack of convergence and provided more physical surface slopes when using small grid spacing. But the results will depend modestly on the regularization scheme. Because of my experience, I would recommend considering a numerical convergence study to assess if the results are independent of grid resolution, time stepping, etc. This is not to say that the authors' scheme is problematic, but it would be reassuring to provide some additional tests. To be honest, the entire attribution framework would still work even if the model does depend on the grid spacing. It would just emphasize my previous point that a real attribution requires some estimate of our confidence in model physics and numerics.
Bassis, J., Petersen, S. & Mac Cathles, L. Heinrich events triggered by ocean forcing and modulated by isostatic adjustment. Nature 542, 332-334 (2017).
3. I think my biggest recommendation is that the authors conduct some numerical convergence studies. In my opinion, numerical convergence studies are like brushing your teeth: unglamorous, but essential hygiene that needs to be regularly performed to avoid unpleasant surprises. This is often done and then forgotten about. Please tell readers what you have done even if you don't show it.
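A generic skeleton for such a convergence study might look like the sketch below, where run_model(dx, dt) is a placeholder assumed to return, for example, the terminus position sampled at fixed times so that runs at different resolutions can be compared directly.

```python
import numpy as np

def convergence_study(run_model, dxs, dt):
    # Compare successive grid refinements against the finest resolution;
    # the differences should shrink as dx decreases if the scheme converges.
    reference = run_model(dxs[-1], dt)
    for dx in dxs[:-1]:
        result = run_model(dx, dt)
        err = np.max(np.abs(result - reference))
        print(f"dx = {dx:8.1f} m   max difference from finest grid = {err:.3e}")

# Example usage: halve the grid spacing repeatedly.
# convergence_study(run_model, dxs=[2000.0, 1000.0, 500.0, 250.0], dt=0.01)
```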
Minor comments:
Line 275 and elsewhere: While->Although. While is technically supposed to refer to time.
Introduction: Grounding line retreat in WAIS may be related to the MISI (although also ocean forcing!), which is tied to retrograde bed slope. But there are other types of retreat. For example, the disintegration of ice shelves in the Antarctic Peninsula is not tied to bed slope (because the ice shelves are freely floating). Similarly, the retreat of Petermann Ice Tongue is also not tied to bed slope. I think the discussion here is mainly focused on Greenland and grounded glaciers. This might be something worth emphasizing.
Bed topography in Figure 1 looks like it is piecewise continuous, but not differentiable. This can create numerical issues and problems with numerical convergence in models that assume the ice thickness is smooth and differentiable.
Line 135: Out of curiosity, why not use a one-sided probability distribution (e.g., lognormal) that naturally avoids unphysically adding mass to the terminus?
Figure 2 makes a key point: in a system that is close to an instability, retreat always occurs and the only question is how long it takes for retreat to initiate. I don't know that there is strong evidence for this type of behavior for glaciers. This might be partially because the climate is not stationary stochastic over this type of time scale.
Equation (4) appears to be missing an ice thickness on the right hand side? Please check. | 2,370.2 | 2022-02-10T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Effective Mass of Quasiparticles in Armchair Graphene Nanoribbons
Armchair graphene nanoribbons (AGNRs) may present intrinsic semiconducting bandgaps, being of potential interest for developing new organic-based optoelectronic devices. The induction of a bandgap in AGNRs results from quantum confinement effects, which reduce charge mobility. In this sense, the quasiparticles' effective mass becomes relevant for the understanding of charge transport in these systems. In the present work, we theoretically investigate the drift of different quasiparticle species in AGNRs employing a 2D generalization of the Su-Schrieffer-Heeger Hamiltonian. Remarkably, our findings reveal that the effective mass strongly depends on the nanoribbon width and that its value can reach 60 times the mass of one electron for narrow lattices. Such an underlying property of quasiparticles, within the framework of gap tuning engineering in AGNRs, impacts the design of their electronic devices.
, and charge ± e associated with a local lattice distortion 28. Bipolarons, in turn, present two narrower intragap energy levels and stronger lattice distortion than polarons, have charge ± 2e, and are spinless quasiparticles 28. The lattice distortions produced by either quasiparticle result in the observed larger effective masses responsible for reduced charge mobility. In this sense, the interplay between the carrier's effective mass and the different properties of AGNRs is a crucial aspect that should be understood to promote the enhancement of graphene-based devices' figures of merit.
Herein, we study the drift of charge carriers in AGNRs to phenomenologically characterize their effective masses (m_eff). By means of a 2D generalization of the Su-Schrieffer-Heeger (SSH) model 29,30, along with a Stokes dissipation model, we numerically investigate the dynamics of polarons and bipolarons in these systems. Within the scope of our approach, we determine terminal velocities and effective masses of the charge carriers for AGNRs of different widths. Our findings show that the effective mass strongly depends on the carrier type and ribbon width, varying by up to two orders of magnitude. Importantly, different carbon-based systems (or even inorganic-based nanomaterials 31) have different electronic structures and may present distinct responses when it comes to transport properties, in the sense that their symmetry and doping level may alter these properties substantially 32. Zigzag GNRs do not present the energy gap with which polarons and bipolarons are usually associated 3,4, and for this reason they are not considered here.
Results
The quasiparticles present local lattice distortions that accumulate charge. Figure 1(a,b) depicts the time evolution of the charge density in a 6-AGNR, where hot colors represent such charge accumulation. In Fig. 1(a), the charge density profile represents a polaron moving under the influence of an external electric field applied in the vertical direction. Similarly, Fig. 1(b) shows a bipolaron subject to the same electric field strength. Both quasiparticles respond to the applied field; however, the polaron experiences a stronger acceleration. The slower response of the bipolaron to the electric field occurs in spite of its charge (2e), which results in twice as much force applied on bipolarons when compared to polarons. Such behavior shows that the extra force is not enough to balance the increased inertia of a bipolaron. Indeed, bipolarons carry along a stronger distortion of the nanoribbon lattice. Comparison between the two charge density distributions demonstrates that the polaron's charge is distributed over 40 sites in the vertical direction, whereas the bipolaron's charge spreads over fewer than 30 sites. As such, the polaron is more delocalized than the bipolaron. The combination of more charge confined in a shorter region results in a more significant lattice deformation, which is responsible for the increased inertia observed for bipolarons. Therefore, Fig. 1 illustrates that increased localization takes a toll on charge mobility.
By taking the center of the charge density distribution and registering its position, it is possible to obtain the time evolution of the charge carrier's position in the nanoribbon. Figure 2(a) shows the polaron displacement, in the vertical direction, as a function of time. Each color represents the behavior observed in a different AGNR family. Red curves refer to 4-AGNR, green to 5-AGNR, and blue to 6-AGNR. In Fig. 2(a) it can be seen that the displacement curves approach linear behavior, denoting that the charge carriers reach a terminal velocity. Similar results are seen in Fig. 2(b) for bipolarons. Note that a curve representing the 3p + 2 family is absent in Fig. 2(b) since bipolarons are not stable in lattices belonging to this family 33. Regarding polarons, it is worth mentioning that these quasiparticles are present only in thinner AGNRs of the 3p + 2 family 33; because of that, we consider the ribbons 5-AGNR and 8-AGNR, in which polarons can be formed, as representative systems 33. Despite the similar qualitative behavior, the two quasiparticles show different characteristic times for reaching terminal velocity. For polarons, this time corresponds roughly to 1 ps, around half of the simulation time. In the case of bipolarons, an extra 0.5 ps is necessary. A simple Stokes dissipation model is used to describe the observed carrier motion. This model considers a dissipation term proportional to the first power of the velocity: F_d = −bv. Therefore, the terminal velocity v_t is reached when the dissipation term equals the force produced by the electric field, v_t = qE/b, where q stands for the carrier's charge and E the electric field. Under these conditions, the displacement of the charge center as a function of time (t) is given by x(t) = v_t [t + (m_eff/b)(exp(−bt/m_eff) − 1)], where m_eff stands for the effective mass of the carrier and b is the drag coefficient. By fitting the displacement curves, varying the values of m_eff and b, we can evaluate both the polarons' and bipolarons' effective masses as well as the drag coefficients. Note that m_eff is a fitting parameter; in this sense, we are able to evaluate both the effective masses and the drag parameters from its fitted value. Importantly, the main goal of the present work is to propose a route to evaluate the effective masses of quasiparticles in graphene nanoribbons. This quantity is fundamental to experimental approaches for the evaluation of charge mobilities, as well as to the understanding of the underlying properties of charge transport in graphene systems. Regarding the model used for the determination of the effective mass, as mentioned above, we chose Stokes' model for its simplicity and its phenomenological aspect. A vital feature of this model comes from the observation of a terminal velocity and thus the necessity of a dissipation term.
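As an illustration of this fitting step, the following Python sketch (assuming SI units, a known carrier charge q and field E, and synthetic displacement data in place of the simulated trajectory) fits the Stokes-model expression with scipy.optimize.curve_fit to recover m_eff and b; the numerical values are placeholders, not results from the paper.

# Minimal sketch: fit x(t) = (qE/b) * [ t + (m_eff/b) * (exp(-b*t/m_eff) - 1) ]
# to a charge-center trajectory to extract the effective mass and drag coefficient.
import numpy as np
from scipy.optimize import curve_fit

q = 1.602e-19          # carrier charge (use 2e for a bipolaron); assumed value
E = 1.0e5              # electric field magnitude in V/m; assumed value

def stokes_displacement(t, m_eff, b):
    v_t = q * E / b                                    # terminal velocity
    return v_t * (t + (m_eff / b) * np.expm1(-b * t / m_eff))

# t_data, x_data would come from the simulation; here we synthesize a consistent example.
m_e = 9.109e-31
t_data = np.linspace(0.0, 2e-12, 200)                            # 0 to 2 ps
x_data = stokes_displacement(t_data, 10 * m_e, 2e-17)            # "true" m_eff = 10 m_e
x_data = x_data + 1e-12 * np.random.randn(t_data.size)           # small synthetic noise

popt, _ = curve_fit(stokes_displacement, t_data, x_data, p0=(5 * m_e, 1e-17))
m_eff_fit, b_fit = popt
print(f"m_eff = {m_eff_fit / m_e:.1f} m_e, b = {b_fit:.2e} kg/s")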
Through the procedure described above, it is possible to understand how the ribbon width affects the charge carriers' inertia. Figure 3(a) shows the polaron effective mass in units of the electron mass (m_e) as a function of the nanoribbon width. The blue, red, and green lines correspond to the 3p, 3p + 1, and 3p + 2 families, respectively. In all cases, the effective mass correlates inversely with the ribbon width, ranging from 0.2 to 14 m_e. As expected, effective masses are lower for the 3p + 2 family, given its quasi-metallic nature. The remaining two families, however, show no clear ordering, with both curves intersecting each other. Analogous conclusions hold for bipolarons, as shown in Fig. 3(b). In contrast to polarons, bipolaron effective masses are considerably larger, ranging from around 10 to nearly 60 m_e in the case of the 3p + 1 family and from 5 to 15 m_e in the case of the 3p family. These results reflect the above-mentioned larger inertia observed in the bipolaron's response to the electric field, responsible for the longer times needed to reach terminal velocity. Due to the differences in effective mass, the electric field intensity required to move the two charge carriers differs considerably. The insets of Fig. 3 show representative results of the evaluation of the effective mass under different electric fields. One can note that, in the case of polarons, the obtained masses are practically field independent. The same holds for bipolarons up to fields of around 3000 V/cm. Above this critical field strength, the effective masses increase and the fits to the model are no longer appropriate, indicating that the model may not be valid in high electric field regimes. This behavior takes place due to the strong electron-phonon interaction in GNRs. Below a critical field strength, electrons and phonons are strongly coupled. Above this critical limit, electrons decouple from the lattice and assume supersonic velocities. Therefore, the lattice distortions and the electron start to move disconnectedly, and the kinematic model is no longer valid.
Finally, the interplay between effective masses and ribbon width can be understood from a microscopic perspective by analyzing the charge distribution of quasiparticles in different AGNRs. Figure 4(a,b) shows these distributions for 4-AGNR and 6-AGNR, considering a lattice containing a polaron (left panels) and a bipolaron (right panels). It is clear that as the ribbon width increases, the polaron becomes more delocalized, as can be seen by the hotter colors at smaller widths. As the ribbon width increases, however, the charge also tends to concentrate laterally. The same qualitative behavior takes place for lattices containing a bipolaron. The combination of these two underlying effects on the net charge localization leads to an increase in the local lattice deformations associated with the presence of charge for narrower AGNRs. Conversely, for wider AGNRs, the interplay of these two effects decreases the local distortions interacting with the charge. Moreover, in Fig. 4 it is possible to note that lattices containing a bipolaron present a higher degree of charge localization (represented by the signatures in red). The bipolaron quasiparticle has an extension similar to that of the polaron, approximately 30 Å. Since polarons and bipolarons are composite quasiparticles in which the local lattice distortions are coupled to an additional charge, both evolve in time together during transport. Therefore, the higher the degree of distortion, the more lattice energy must be transferred between neighboring sites to accomplish the polaron/bipolaron transport. Consequently, this mechanism for charge transport increases the effective mass of more localized charge carriers.
Methodology
The model Hamiltonian employed here is given by H = H_latt + H_elec, where the first and second terms govern the lattice and electronic degrees of freedom, respectively. By employing a harmonic approximation 30, we treat the lattice dynamics classically. In this sense, the lattice Hamiltonian assumes the form H_latt = Σ_i P_i²/(2M) + (K/2) Σ_⟨i,j⟩ η_{i,j}², where P_i is the momentum of the i-th site with mass M, and K is the force constant associated with the σ bond 30. The electronic Hamiltonian, in turn, describes the π-electron dynamics according to H_elec = −Σ_⟨i,j⟩,s [ t_{i,j} exp(−iγ A(t)·r̂_{i,j}) C†_{i,s} C_{j,s} + h.c. ]. The summation runs over π-electrons on neighboring sites i and j with spin s (see Fig. 5). C†_{i,s} and C_{i,s} denote the creation and annihilation operators for an electron in the states denoted by their subscript indices. To consider an external electric field E, we use a vector potential according to E(t) = −(1/c) ∂A(t)/∂t. The exponentials come from the Peierls substitution method 34. The unit vector r̂_{i,j} points from site j to site i. Finally, the parameter γ in H_elec is defined as γ ≡ ea/c, where a is the lattice parameter, e the fundamental charge, and c the speed of light. The term t_{i,j} is the hopping integral, which couples the π-electrons to the lattice according to t_{i,j} = t_0 − α η_{i,j} (Eq. 4). In Eq. 4, α is the electron-phonon coupling constant and η_{i,j} is the relative displacement of the lattice sites from their equilibrium positions.
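The sketch below illustrates, in schematic Python, how a single-spin-channel Hamiltonian of this form could be assembled for an arbitrary bond list; the hopping t0, the coupling alpha, and the precomputed Peierls phases are assumptions for the example and are not the authors' parameter values.

# Illustrative sketch: build H[i, j] = -(t0 - alpha*eta_ij) * exp(-i * phase_ij)
# for one spin channel of an SSH-like tight-binding lattice.
import numpy as np

def electronic_hamiltonian(n_sites, bonds, eta, peierls_phase):
    """bonds: list of (i, j) pairs; eta[(i, j)]: bond-length deviation;
    peierls_phase[(i, j)]: the phase gamma * A(t) . r_hat_ij for that bond."""
    t0, alpha = 2.7, 4.1                      # typical SSH-type values for carbon (assumed)
    H = np.zeros((n_sites, n_sites), dtype=complex)
    for (i, j) in bonds:
        t_ij = t0 - alpha * eta[(i, j)]       # Eq. (4): lattice-modulated hopping
        phase = np.exp(-1j * peierls_phase[(i, j)])
        H[i, j] += -t_ij * phase              # C_i^dagger C_j term
        H[j, i] += -t_ij * np.conj(phase)     # Hermitian conjugate
    return H

# Minimal usage: a 4-site chain with undistorted bonds and zero field.
bonds = [(0, 1), (1, 2), (2, 3)]
eta = {b: 0.0 for b in bonds}
phases = {b: 0.0 for b in bonds}
print(np.linalg.eigvalsh(electronic_hamiltonian(4, bonds, eta, phases)))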
The dynamics calculation starts from an arbitrary initial set of coordinates {η_{i,j}}, which is necessary to initially solve the electronic part of our model Hamiltonian. This procedure leads to an eigenvalue-eigenvector equation for the electronic component of the system, where the eigenvalues are E_k and the eigenvectors are ψ_{k,s}(i, t = 0). These quantities are related through the tight-binding eigenvalue equation E_k ψ_{k,s}(i) = −t_{i,j} ψ_{k,s}(j) − t_{i,j′} ψ_{k,s}(j′) − t_{i,j″} ψ_{k,s}(j″), where j, j′ and j″ are the sites neighboring site i.
To solve the classical component of our model, which describes the lattice structure, we turn to the Euler-Lagrange equation 23. From the solution of the electronic part, we evaluate the expectation value of the Lagrangian, ⟨Ψ|L|Ψ⟩. The resulting equation expresses the bond coordinates {η_{i,j}} in terms of the quantity B_{i,j} = Σ′_{k,s} ψ*_{k,s}(i) ψ_{k,s}(j), which couples the electronic and lattice degrees of freedom. The primed sum means that only the occupied states are considered.
The solution of the Euler-Lagrange equation with P_i = 0 leads to a new set of coordinates {η_{i,j}} that is used to recalculate the electronic Hamiltonian. This process is repeated iteratively until the convergence criteria are reached. As a result, this self-consistent procedure yields a ground-state geometry that accounts for the interdependence between charge and lattice.
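A schematic version of this self-consistent loop is sketched below in Python; the update rule for the bond coordinates is a simplified stand-in (it omits, for instance, any global length constraint), and build_H is assumed to construct the electronic Hamiltonian for a given set of {η_{i,j}}.

# Schematic self-consistent ground-state procedure (simplified update rule assumed).
import numpy as np

def ground_state(bonds, n_occupied, build_H, alpha=4.1, K=21.0, tol=1e-8, max_iter=500):
    eta = {b: 0.0 for b in bonds}                     # arbitrary initial coordinates
    for _ in range(max_iter):
        H = build_H(eta)                              # electronic part for the current lattice
        energies, psi = np.linalg.eigh(H)             # eigenvalues E_k, eigenvectors psi_k
        occ = psi[:, :n_occupied]                     # occupied states only (the primed sum)
        new_eta = {}
        for (i, j) in bonds:
            B_ij = np.vdot(occ[i, :], occ[j, :])      # couples electronic and lattice parts
            new_eta[(i, j)] = (2.0 * alpha / K) * B_ij.real   # simplified update step
        change = max(abs(new_eta[b] - eta[b]) for b in bonds)
        eta = new_eta
        if change < tol:                              # convergence criterion on the lattice
            break
    return eta, energies, psi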
After the convergence criteria are achieved, the time evolution of the initial state can be carried out using the full Euler-Lagrange equation 23. The time evolution of the electronic part is governed by the time-dependent Schrödinger equation. To do so, we expand the wave function ψ_{k,s}(t) in the basis of instantaneous eigenstates {φ_l} of the electronic Hamiltonian at a given time t. Therefore, the wave function at time t + dt can be expressed as ψ_{k,s}(t + dt) = Σ_l ⟨φ_l|ψ_{k,s}(t)⟩ exp(−iε_l dt/ħ) φ_l (Eq. 8). The dynamics of the electronic structure is carried out using Eq. 8, which is evaluated numerically and then employed in the calculation of the expectation value of a new Lagrangian 23. The Euler-Lagrange equation leads to a Newtonian-type expression for the site coordinates that takes into account the neighboring bonds. The applied electric field is turned on adiabatically, to avoid numerical error, through a smooth ramp in which t_f is the total simulation time and τ is the time needed for the electric field to reach its full strength.
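The time-stepping scheme can be summarized by the short sketch below, which advances the occupied wave functions by diagonalizing H(t) and applying the phase factors exp(−iε_l dt/ħ); it is a generic illustration of the expansion in the instantaneous eigenbasis, not the authors' code.

# One electronic time step via expansion in the instantaneous eigenbasis of H(t).
import numpy as np

HBAR = 0.6582  # eV*fs, so dt is in fs when energies are in eV (assumed unit choice)

def evolve_step(H_t, psi_occ, dt):
    """H_t: (n, n) Hermitian matrix at time t; psi_occ: (n, n_occ) occupied wave functions."""
    eps, phi = np.linalg.eigh(H_t)                    # instantaneous eigenbasis of H(t)
    coeff = phi.conj().T @ psi_occ                    # <phi_l | psi_k(t)> for each state k
    coeff *= np.exp(-1j * eps[:, None] * dt / HBAR)   # phase factor exp(-i*eps_l*dt/hbar)
    return phi @ coeff                                # psi_k(t + dt) back in the site basis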
To avoid edge effects, we consider periodic boundary conditions in the vertical direction of the nanoribbon, along which the field is applied. Here, we use the notation NxM-AGNR, where N and M represent the number of sites in the horizontal and vertical directions of the nanoribbon, respectively. As all systems considered have M = 70, for the sake of simplicity, we use the notation N-AGNR. In the studied cases, N varies within the interval 4-12.
Conclusions
In summary, the charge carrier dynamics in AGNRs under the influence of an external electric field were analyzed employing a 2D generalization of the SSH Hamiltonian. AGNRs with widths ranging from N = 4 to N = 12 were studied. The results point to polaron and bipolaron formation in such systems, with the two quasiparticles responding differently to the external electric field, the inertia of bipolarons being larger. Eventually, however, both quasiparticles stop accelerating under the electric fields, moving afterward with constant velocity. Making use of a Stokes dissipation model, we were able to determine the effective mass of the charge carriers for several ribbon widths. It is shown that the effective mass of these quasiparticles varies drastically depending on two aspects: the system's width and the particular kind of quasiparticle present in the system. The effective mass for polarons took values from 0.31 m_e to 14.7 m_e. In the case of bipolarons, the effective mass had values between 4 m_e and 60 m_e. | 3,565.8 | 2019-11-13T00:00:00.000 | [
"Physics"
] |
A method for inspecting near-right-angle V-groove surfaces based on dual-probe wavelength scanning interferometry
High-angled structured surfaces such as micropyramidal arrays, V-grooves and lenslet arrays are widely used in industry. However, there is currently no effective way to inspect these microstructures, resulting in very high scrap rates. This paper presents a proof-of-principle demonstration of an optical system capable of measuring V-groove structures in a single measurement acquisition. The dual-probe wavelength scanning interferometry (DPWSI) system comprises dual probes with orthogonal measurement planes. The calibration of the DPWSI system is the key to registering the relative locations of the dual measurement planes and allowing the surface topography to be correctly reconstructed. In order to achieve this, a custom calibration artefact was manufactured comprising focused ion beam etched features on two faces of a precision cube. The procedures for the characterisation of the artefact to generate a reference topography, and the subsequent calibration of the DPWSI, are described in full. A measurement example from a metallised sawtooth sample featuring near-right-angle grooves having a peak-to-valley height of 32 μm and nominal pitch of 25 μm is presented and compared with a result obtained using stylus profilometry. DPWSI is shown to obtain an areal dataset in a single acquisition and is able to better resolve peak/valley points compared with the stylus, which is limited by a 2-μm tip radius. Some lateral scale error is apparent in the final DPWSI results and a discussion of this in terms of the limitations surrounding the current calibration artefact is presented.
Introduction
An important and developing need in surface metrology is the ability to measure structured and freeform surfaces [1]. Micro-fabricated structured surfaces with multiple high-angle facets are widely applied in many application areas [2][3][4]. For example, brightness enhancement film has a prismatic structure and is frequently used in liquid crystal displays to enable power saving and thermal management. Optical gratings comprising periodic structures find widespread application in optical instruments such as spectrometers, lasers, wavelength division multiplexing devices, etc. [5][6][7][8][9]. However, in general, the manufacturing processes used to create such surfaces are heavily reliant on the experience of fabrication workers adopting an expensive trial-and-error approach, resulting in reported scrap rates as high as 50-70% [10]. In this context, it is clear that overcoming the challenges of providing effective measurement of these surface types will have a meaningful impact in terms of improving processes and reducing product costs.
Currently, the contact stylus profilometer and optical instruments are predominantly applied to inspect these types of structures. Stylus profilometry offers the required vertical resolution but is potentially destructive, especially given that many structured surfaces may be produced in polymers, e.g. prismatic films. Additionally, areal measurement is relatively time-consuming with a stylus because the sample must be scanned line-by-line. Finally, the size of the stylus tip can induce large errors when measuring microstructures with high aspect ratios due to mechanical filtering [11]. Optical measurement techniques, such as confocal microscopy, focus variation and interferometry, are commonly limited by the numerical aperture (NA) of the objective lens, which sets a limit on the maximum measurable slope angle [12]. Light reflected from facets whose slopes exceed the acceptance angle associated with the objective lens is not collected and cannot be analyzed, leading to missing data or reduced precision. Additionally, multiple reflections can occur in V-grooves and other similar structures, which results in severe measurement errors in optical instruments, particularly in coherence scanning interferometers (CSI) [9]. Other types of imaging techniques are also very restricted in this area; for instance, scanning electron microscopy (SEM) provides high lateral resolution and depth of focus but cannot acquire height information directly, and additionally, sample preparation can be non-trivial [13].
The measurement system reported in this paper aims to solve one commonly encountered high-angled structured surface feature, namely the V-groove. The method relies on wavelength scanning interferometry (WSI) [14][15][16] which can achieve measurement with axial resolutions approaching the nanometre [17] without the requirement for the mechanical scanning of either the sample or optics, unlike comparable techniques such as CSI. This lack of mechanical scanning opens up the possibility of using a dual-probe optics system to provide simultaneous measurement with two adjacent fields of view. This technique, which we term dual-probe wavelength scanning interferometry (DPWSI) is described in the following section.
Methodology
The DPWSI system, as illustrated in Fig. 1, comprises three modules: light source, interferometer and console. The light source consists of a tungsten-halogen filament source which is coupled to an acousto-optical tunable filter (AOTF). The AOTF filters the white light from the tungsten-halogen bulb into a narrowband wavelength (linewidth of ~2 nm, selectable across a range between 590.98 and 683.42 nm) which is then coupled into a multimode fiber before subsequently illuminating the interferometer module. The interferometer module is composed of two cameras and two identical ×4 magnification Michelson interference objectives (OBJ1, OBJ2). The two objectives are of a long working distance design (30 mm) and are placed such that their optical axes are orthogonal and their focal planes intersect. A beamsplitter (BS3) is used to split the collimated light from the multimode fiber and illuminate both objectives. The console acts as the controller and interface for the system as a whole. During the measurement process, the wavelength scanning of the illumination light is controlled and synchronised by software running on the PC via the AOTF driver. A total of 256 spectral interferograms, each at a discrete wavelength, are captured from each of the Michelson objectives by their corresponding cameras. The two interferogram sets are then analyzed to acquire the surface topography under each respective probe using a suitable fringe analysis algorithm [15][16][17][18].
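For illustration only, a simple single-pixel fringe analysis is sketched below: assuming the recorded intensity varies as I(k) ≈ A + B cos(2k·OPD) with k = 2π/λ, a Fourier transform over a uniformly resampled wavenumber axis locates the dominant fringe frequency and hence the optical path difference. This is a generic peak-finding approach, not the specific algorithm of refs. [15-18], and its bin-limited resolution would normally be refined by phase analysis.

# Illustrative single-pixel OPD estimate from a wavelength-scanned intensity series.
import numpy as np

def opd_from_scan(intensity, wavelengths_nm):
    """intensity: (n_frames,) samples at one pixel; wavelengths_nm: (n_frames,)."""
    k = 2.0 * np.pi / (np.asarray(wavelengths_nm) * 1e-9)    # wavenumber in rad/m
    order = np.argsort(k)
    k, I = k[order], np.asarray(intensity, dtype=float)[order]
    k_uniform = np.linspace(k[0], k[-1], k.size)             # resample on a uniform k grid
    I_uniform = np.interp(k_uniform, k, I)
    I_uniform -= I_uniform.mean()                            # remove the DC term A
    spectrum = np.abs(np.fft.rfft(I_uniform))
    freqs = np.fft.rfftfreq(k_uniform.size, d=k_uniform[1] - k_uniform[0])
    peak = freqs[np.argmax(spectrum[1:]) + 1]                # skip the zero-frequency bin
    return np.pi * peak                                      # fringe phase 2*k*OPD -> OPD (m)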
The measurable axial range for a WSI system is determined by the lesser of either (a) the coherence length of the illuminating light or (b) the depth of field (DoF) of the Michelson objective. In the apparatus described, the limit is found to be due to the objective DoF and is approximately 110 μm. The lateral measurement range is determined by the field of view (FOV) of the optical system. The Rayleigh criterion yields a lateral resolution for the system of 4.1 μm, determined by the objective numerical aperture (NA = 0.1) and the longest illumination wavelength.
For each objective, interference is generated between the light reflected from a reference mirror (REF1, REF2) and the light reflected from the measurand. In WSI, a virtual reference plane exists in the measurement path at the point at which the interferometer is balanced. Results obtained from each interferometer yield the optical path difference (OPD) from the measurand surface to the virtual reference planes (VREF1, VREF2). Because there is no mechanical movement during the measurement process, all the optics are stationary and thus the virtual reference planes VREF1 and VREF2 are nominally static. Therefore, if the relative location of the VREF1 and VREF2 can be established through calibration, then the topography of the sample as measured by the combination of both interferometers can be recovered.
The positional relationship between the two probes is established through spatial coordinate calibration (Fig. 2). Because there is no overlapping measurement area between the two probes, traditional calibration methods are not applicable; thus, a custom calibration artefact has been designed and manufactured. Figure 3 shows the calibration artefact, which comprises a precision cube with two adjoining faces (labelled plane 1/2 in Fig. 3) certified to be perpendicular within 2 arcseconds (the actual deviation measured by autocollimator is 1.2 arcseconds). The two faces of the cube have flatness better than 50 nm and Sa roughness of less than 30 nm. Upon each face, an array of 50 μm square wells, each having a depth of approximately 200 nm, is etched using a focused ion beam (FIB). To our knowledge from both the literature and experiment, no existing instruments are able to directly measure the topography of the whole artefact with sufficient resolution to detect the features from a single orientation. As such, the acquisition of the reference topography was accomplished by combining the results obtained from CSI (Taylor Hobson CCI 3000) and SEM (FEI Quanta 200 3D SEM-FIB) instruments. CSI was adopted to measure the areas containing the etched features on each cube face separately. SEM was then applied to acquire the relative locations between the features (f1, f2) on the two faces, namely the distances d1, d2 and d3 as shown in Fig. 3. SEM has a large enough depth of field to image all the features on both faces, while also providing the ability to determine distances with submicron resolution. Since the dihedral angle between the two faces is already known, the reference topography can be reconstructed by binding the datasets together, based on the coordinate system shown in Fig. 3c, by satisfying Eq. (1).
Here, P n signifies the fitted plane of face n = 1 or n = 2, f m refers to the specific feature on the artefact. This can be accomplished by maintaining one dataset, while rotating and translating the other dataset to satisfy these restrictions using only rigid transformations.
The features in the measurement results are extracted by an image segmentation algorithm such as the Sobel operator and the watershed transformation [19,20]. By matching the features to the corresponding features in the reference result, the relationship between the coordinate systems of the two probes and the coordinate system of the reference topography can be determined. The relative orientation of the coordinate systems of the two probes can then be calculated. Let P_n and P′_n be the quaternions of the feature points, such as corners or centers of the features, in the reference topography, and let Q_n and Q′_n represent the quaternions of the corresponding feature points in the measurement results from the two probes. The following equations must be satisfied: [R_1 t_1; 0 1] Q_n = P_n (2) and [R_2 t_2; 0 1] Q′_n = P′_n (3), where R_1, R_2, t_1 and t_2 represent the rotation matrices and translation vectors from the coordinate systems of the two probes to the reference topography, respectively. These matrices are functions of α_n, β_n and γ_n, the respective rotation angles, and t_nx, t_ny and t_nz, the respective translation components along the x, y and z axes. As such, there are 12 unknown independent variables in total. Theoretically, if there are a sufficient number of feature point pairs, it is possible to determine the matrices precisely. However, since there are inevitable errors when extracting the feature point pairs in a practical measurement, a 3D registration algorithm based on the iterative closest point (ICP) method is utilised to increase the matching accuracy [21]. The datasets obtained by the two probes, X and Y, are then transformed into the common reference coordinate system and bound together to acquire the whole topography of the structured surface (Eqs. (4) and (5)). To minimise the computation, the result can be rotated and translated as a whole in the same coordinate system, as shown in Eq. (6), with no change to the result.
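To make the registration step concrete, the sketch below shows a closed-form estimate of one probe-to-reference rigid transform from matched feature points (the SVD-based Kabsch solution commonly used to initialise ICP) and how the two probe datasets could then be mapped into the common frame; the variable names and the commented usage are illustrative assumptions, not the implementation used here.

# Estimate a rigid transform from matched 3D feature points (Kabsch/SVD solution).
import numpy as np

def rigid_transform(Q, P):
    """Find R (3x3 rotation) and t (3,) such that R @ Q[k] + t approximates P[k]."""
    Qc, Pc = Q.mean(axis=0), P.mean(axis=0)
    H = (Q - Qc).T @ (P - Pc)                              # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = Pc - R @ Qc
    return R, t

# Binding both probes into the reference frame (X, Y: point sets from probe 1 and probe 2):
# R1, t1 = rigid_transform(Q1_features, P1_features)      # probe 1 -> reference
# R2, t2 = rigid_transform(Q2_features, P2_features)      # probe 2 -> reference
# combined = np.vstack([(R1 @ X.T).T + t1, (R2 @ Y.T).T + t2])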
Experimental results and discussion
The two probes of the DPWSI were first individually calibrated using a set of standard step-height specimens, after which the calibration artefact was measured. The stitched reference topography is shown in Fig. 4a, which was then used to calibrate the DPWSI system, yielding the result shown in Fig. 4b. To verify the presented approach to V-groove measurement, a sawtooth metallised prismatic film manufactured by Microsharp Co. Ltd. was measured using the described DPWSI system. Figure 5 shows the individual data obtained from each probe and the combined result. The height of the measured step features is ~32 μm and the nominal pitch is 25 μm. It is apparent that in high-curvature areas (peak tops/valley bottoms) some measurement data loss is experienced, which is due to the previously mentioned NA-dependent acceptance angle limitation of the optics [12]. Nonetheless, it is clear that a large proportion of the artefact has been successfully measured in a single acquisition, with the instrument in a single orientation. Figure 6a compares a profile taken from the DPWSI measurement result with one obtained across a similar region with a stylus profilometer (Taylor Hobson PGI Form Talysurf Series 2) using a 2-μm tip radius stylus. The stylus profile was optimised using a deconvolution filter, which provides some limited improvement in the obvious low-pass mechanical filtering exhibited by the tip radius in the peak/valley regions of the profile. For the DPWSI profile, the small amount of missing data apparent in Fig. 5 is also identifiable at the peaks/valleys of the profile; we have chosen not to interpolate this data; however, these regions were confirmed as being physically sharp using SEM imaging. The two profiles were registered using an ICP method to enable a stable comparison.
These results demonstrate that DPWSI can successfully recover a V-groove profile in a single acquisition from a single measurement orientation, representing what we believe is a first for an optical measurement system. (Fig. 4: a, the reference topography of the calibration artefact acquired using the combined CSI and SEM measurements described in the preceding section; b, the calibration artefact measured by the DPWSI system; both plotted with CloudCompare.) Careful evaluation of Fig. 6a shows that there is some lateral scale error associated with the result from the DPWSI as compared with the profile obtained using stylus profilometry. Looking at the middle of the three obtained ridges, the left-hand facet (measured by probe 1) exhibits a shortened scale (amplification coefficient ~0.96). Conversely, the scale for the right-hand facet (measured by probe 2) is lengthened (amplification coefficient ~1.04) relative to the stylus result. Figure 6b shows the resulting residual error along the profile as obtained from the ICP algorithm. The error is seen to peak at approximately 820 nm but remains substantially less than this across most of the profile (the average value is 292.1 nm). The residual error is contributed predominantly by the scale error previously discussed; however, there is an additional contribution from the mismatched sampling intervals of the two datasets. These scale errors are due to a systematic error introduced during the calibration process, in which there are two primary error sources resulting from the image processing. Segmentation error resulting from inaccurate edge extraction is in part due to imperfections in the fabricated features on the calibration artefact. Residual registration error derives not only from limitations in the 3D registration algorithm used, but also from mismatches between the obtained reference dataset and the measurement dataset. Considering all of the error sources, the residual error is quite reasonable since it is far less than the lateral resolution of the probes, which is about 4 μm.
Conclusion
This paper demonstrates the principle and operation of an optical system capable of acute-angled V-groove measurement based upon a wavelength scanning interferometer coupled with a probe that provides two orthogonally located fields of view. A calibration procedure involving a custom designed calibration artefact was used to provide registration of the two fields. Experimental results demonstrate the measurement of a sawtooth profile from metallised film featuring a near-right-angled V-groove structure. For a given probe, a range of V-groove angles are measurable, limited by the acceptance angle of the objective lenses employed. This range can be expanded further by designing equivalent probe heads with varying angular separation. Further work will examine in more detail the errors associated with the calibration procedure and aim to minimise these through a combination of improved artefact fabrication, improved reference topography construction, feature extraction and registration algorithm optimisation.
Funding information The study was funded by the UK's Engineering and Physical Sciences Research Council (EPSRC) Centre for Innovative Manufacturing in Advanced Metrology (EP/I033424/1) and the EPSRC Future Advanced Metrology Hub (EP/P006930/1). Fig. 6 a The comparison of the measured profiles of the metallised prismatic film by the DPWSI (blue) and stylus profilometry (red). b Plot of the residual error between the DPWSI and stylus profiles obtained from the ICP algorithm used to register the two datasets | 3,745 | 2018-07-12T00:00:00.000 | [
"Physics",
"Engineering"
] |
The effect of copper on the multiple carbon nanofilaments growths by the methane decomposition over the oxidized diamond-supported nickel–copper bimetallic catalyst
To clarify the parameters indispensable for multiple carbon nanofilament (CNF) growths, in other words, for the unique Octopus-like morphology of the Marimo-like carbon (MC), we systematically studied the synthesis of MC by the decomposition of methane using oxidized diamond-supported Ni-Cu bimetallic catalysts. We discovered that a Cu addition of 20 wt.% and a growth temperature in the region of 550 °C to 600 °C resulted in many CNFs growing from a single catalyst particle, that is, the "Octopus-like" morphology of CNFs. We also discovered that the growth of several CNFs from one particle might originate from the carbon dissolved in the sintered catalyst particles. We describe a model process for the formation of this unique structure. We expect that the Octopus-like CNF growth provides enough space volume in the MC for mass transfer; consequently, it should contribute to realizing a higher power generation performance of a polymer electrolyte fuel cell (PEFC), even in the higher-voltage generation region.
Introduction
Fibrous carbon nanoparticles were discovered by electron microscopy over 70 years ago and described as "an unusual form of carbon" generated by the catalytic reaction of carbon monoxide over tiny iron oxide particles included in a brick [1]. At that time, other transition metals such as cobalt and nickel were also used as catalysts for growing carbon filaments [2]. Now known as carbon nanofibers or nanofilaments, this type of carbon exhibits a unique combination of qualities, such as high surface area and high electrical conductivity. Many trials for generating vapor-grown carbon nanofibers by catalyzing hydrocarbons over small metal particles began in the 1970s and 1980s [3][4][5][6][7]. A simultaneous yield of hydrogen and fibrous carbon nanomaterials has again attracted the attention of researchers working on the catalytic decomposition of methane over Ni catalysts [8]. To realize structure-controlled fibrous carbon nanomaterial growth, the role of catalyst metal particles is essential, and this concept has been widely accepted in the research field through the pioneering studies reviewed in Ref. [9]. As a result of the potential for various applications, somewhat large-scale processes involving the catalytic decomposition of hydrocarbons over tiny metal particles have been extensively studied.
To tune the catalyst reactivity, some research groups have focused on the effect of bimetallic catalysts, especially Ni and Ni-based catalysts modified by the addition of Cu, on the production of carbon nanostructures [10][11][12][13]. Nishiyama and co-workers [10] reported that the addition of Cu to Ni promoted the catalytic activity for carbon formation, and also found that the Ni-Cu-particle diameter was larger than the fiber diameter formed on metals of higher Cu content. The SEM image of the fibrous material revealed that one metal particle generated two or three fibers. Bernardo and associates [11] observed similar SEM images of the shape of carbon generated by the decomposition of methane over a silica-supported Ni-Cu catalyst and referred to it as "octopus" carbon. Rodriguez, Baker, and colleagues conducted extensive research on the effect of Cu addition to Fe [14], Co [15], and Ni metals on carbon nanostructures and determined that Cu-Ni alloys were the most effective catalysts for the reaction. In the periodic table of the elements, Cu sits next to Ni; however, their chemical natures are different: Ni can catalyze hydrocarbon decomposition and exhibits a higher carbon solubility, while Cu cannot [2,16,17]. Although Cu itself does not catalyze hydrocarbons to form carbon nanofilaments, Cu addition to Ni enhances the activity to decompose them and form carbon nanofilaments. The Ni-based Cu bimetallic catalyst is therefore both attractive and important for studying growth mechanisms and realizing an acceptable procedure for several applications.
We have developed and studied the growth of a novel spherical carbon [18], which we named the 'Marimo-like carbon' (abbreviated as MC) because its spherical shape resembles the 'Marimo', an alga living in Lake Akan of Hokkaido. The fundamental particle of MC is a carbon nanofilament (CNF), and a large number of CNFs cluster to generate its spherical shape. Highly graphitized CNFs are obtained by the decomposition of hydrocarbons, particularly methane, over an oxidized-diamond-supported Ni catalyst. We have been interested in this higher activity of the Ni catalyst loaded on the oxidized-diamond support for CNF deposition [19].
The oxidized-diamond surface shows a specific electronic structure because it is chemisorbed with oxygen-containing functional groups, whereas the diamond bulk is an insulator in which electrons are strictly localized at the chemical bonds. The oxidized-diamond surface is expected to act as a solid carbon oxide material and to affect the electronic states of supported metal particles related to their catalytic activity. The MC includes diamond as a core; the MC is an all-carbon material, in other words, a sp2-sp3 carbon composite. We have proposed the MC as the Pt catalyst support for use in the polymer electrolyte fuel cell (PEFC) catalyst layer and proved that the membrane electrode assembly using Pt/MC (MC-MEA) needed less ionomer to maintain the form of the cathode catalyst layer [20]; consequently, the MC-MEA worked efficiently in the high current density range compared to an existing catalyst using amorphous carbon. Because the MC has a space volume between the CNFs, the reactant gas and product water can readily pass through this space volume [21]. An accelerated deterioration test (ADT) revealed that the MC was a more oxidation-resistant support than ordinary carbon black [22]. This result indicates that the sp2-crystallinity of the CNFs is much higher than that of carbon black. To realize a high-performance PEFC, the catalyst-support carbon material should have a higher-ordered fibrous structure and keep an appropriate space volume between fibers. It is essential to control both the CNF sp2-structure and the space volume between CNFs.
In this study, we investigated intensively the synthesis of MC by the decomposition of methane using oxidized diamond-supported Ni-Cu bimetallic catalysts in order to clarify the effect of Cu addition on the morphology and fine structure of CNF. We discovered that regulated Cu addition resulted in the formation of many CNFs from a single catalyst particle, specifically the "Octopus-like" morphology of CNFs. We described a model process for the development of a unique structure.
Experimental
The oxidized diamond powder (diamond powder: SANDVIK HYPERION, Worthington, OH, USA, Type RVM 0-0.5) calcined at 450 °C in air was used as the catalyst support. The MC growth catalyst was prepared by impregnating it with a mixed aqueous solution of nickel nitrate hexahydrate and copper nitrate trihydrate (special grade, FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan). The solution-impregnated diamond powder was dried before being calcined in air at 400 °C. The oxidized diamond powder loaded with 5 wt.% Ni-Cu catalyst is referred to as Ni-Cu-(5 wt.%)/O-dia. The growth conditions for the MC are shown in Table 1. We used a fixed-bed type quartz reactor (GEN-TECH, INC., Yokohama, Japan) for the growth experiments. Methane was used as the reaction gas and the flow rate was set at 30 sccm. The bimetal catalyst used for the MC growth was Ni-Cu-(5 wt.%)/O-dia. with a copper concentration in the range from 0 to 90 wt.%. The growth temperature was in the range of 400 to 725 °C to investigate the effect of temperature on the CNF morphology. To understand the initial stage of the growth process, the duration time was varied in the range from 0 to 4 h. The MC formation was carried out in three steps, as indicated in Fig. 1. Step 1 was the temperature-raising period; the catalyst was placed in the reaction tube and electrically heated to the reaction temperature in the Ar gas flow at a rate of 20 K/min. The catalytic reaction period was the second step. The reaction temperature was kept constant, and methane gas was injected before turning off the Ar gas flow. In this study, we define step 2 as the reaction period.
Step 3 is the post-reaction process, a cooling-down period. When the growth process was finished, the heating power supply was turned off and the methane gas flow was switched to the Ar gas flow to cool the MC down to room temperature. The amount of grown CNFs was calculated from the weight gain yielded by the growth process.
Scanning electron microscopy (SEM; S-4100, Hitachi High-Tech Corporation, Tokyo, Japan) was used to examine the morphology and nanostructure of the obtained MC. The variation of the earliest step of the CNF development phase was revealed using transmission electron microscopy (TEM; JEM-2100, JEOL Ltd.).
When the Ni/O-dia. catalyst was used for MC growth, the reaction temperature of 590 °C resulted in the most carbon deposition, as demonstrated by the solid circle mark in Fig. 2. There was no carbon deposition at 600 °C, which is only 10 °C higher than 590 °C. According to the X-ray photoelectron spectra [18], Nakagawa et al. showed that the supported Ni particles may diffuse into the oxidized diamond support at the higher temperature of 600 °C. At the reaction temperature of 600 °C, the supported Ni catalysts were hidden under the diamond surface; consequently, methane could not catalytically react with Ni, resulting in no carbon yield. We found that the temperature giving the maximum carbon yield shifted above 590 °C by the addition of Cu to the Ni catalyst. As indicated by the left arrow in Fig. 2, when the Ni-Cu bimetal catalyst containing 20% Cu (which we briefly denote 'Ni8-Cu2/O-dia.', indicated by an open rhombus mark) was used for the growth, the maximum carbon deposition temperature was 650 °C, which is 60 °C higher than the 590 °C observed when using the Ni/O-dia. catalyst. When Ni5-Cu5/O-dia. was used for the growth (solid rectangular mark), as indicated by the right arrow in Fig. 2, the maximum carbon yield temperature shifted further, to around 700 °C. We speculate that the presence of Cu in the bimetal catalyst could play a role in preventing the Ni diffusion into the diamond support, which was observed at the reaction temperature of 600 °C when using the Ni/O-dia. catalyst.
The Effect of Cu addition in the Ni-Cu-bimetal catalyst on the MC growth
As shown in Fig. 2, when 20 wt.% Cu was added to the Ni catalyst, the maximum carbon deposition increased relative to the 229.8 mol Ni-mol−1 produced using the Ni/O-dia. catalyst. (Fig. 1 caption: Step 1, heating-up period, in which the catalyst was brought into the reactor on a quartz boat and heated to the appropriate temperature in the Ar gas flow at 20 K/min; Step 2, catalytic reaction period, in which the Ar gas was changed to methane for the Marimo-like carbon growth; Step 3, post-reaction period, in which the heating power supply was turned off and the methane gas flow was converted to the Ar gas flow to cool the grown material to room temperature.) This behavior is similar to that reported by Nishiyama et al. [10] for benzene decomposition on Cu-Ni alloy sheets and that reported by Ashok et al. [23] for the decomposition of methane on a SiO2-supported Ni-Cu catalyst. Nishiyama et al. reported that the increase of carbon deposition by the addition of Cu would be yielded by the preventing effect of Cu on the coverage of the Ni surface with a carbon layer. They proposed that Cu atoms partially dominated the surface of the catalyst particle, implying that the Ni-Cu surface composition was not uniform. As the catalytic reaction sites, a Ni-rich patch on the catalyst particle surface might react with the gas molecules, and the presence of Cu in the catalyst particle is expected to boost the reactivity of the Ni-rich region toward the decomposition of gas molecules. Ashok et al. [23] also supported the lack of uniformity of the catalyst particle composition; it consisted of a Ni-rich area and a Cu-rich area, and the latter is known to have a higher affinity with the graphite structure. As a result, a Cu-rich area would limit the formation of a graphite layer on the Ni-rich surface, so the reactivity of the Ni-rich area could be preserved and aid the synthesis of solid carbon. Although the details of the effect of Cu addition to the Ni catalyst on the increase of the maximum carbon yield and the higher shift of its temperature are unknown, the existence of a Ni-rich area and a Cu-rich area in the Ni-Cu catalyst, that is, such an inhomogeneous structure, would be essential for increasing the reactivity of Ni toward solid carbon deposition. Figure 3 shows the amount of carbon yield per 1 mol of Ni as a function of the Cu content in the Ni-Cu bimetal catalyst under different reaction temperatures. In the case of the reaction temperature of 550 °C (solid circle mark), the carbon yield decreased with an increase of Cu content in the range from 0 to 20 wt.%. When the Cu content was above 20 wt.%, the carbon yield per 1 mol of Ni remained almost constant although the Cu content increased. Carbon yields decreased with a rise in Cu content in the range from 5 wt.% to 20 wt.% in the case of the reaction temperature of 600 °C (solid rhombus mark) and then appeared to be constant above 20 wt.% Cu, as observed in the case of the 550 °C growth. At the higher temperature of 650 °C (open rectangular mark), a catalyst containing 10% Cu resulted in no carbon deposition. In contrast, raising the Cu concentration from 15 to 50% resulted in a significant increase in carbon yields. Cu contents above 20 wt.% had a small effect on the amount of carbon deposition compared to that observed below 20 wt.% Cu. This suggests that the reactivity of the Ni-Cu catalyst proceeds through different mechanisms below and above 20 wt.% Cu content. We therefore focused on the effect of the 20 wt.% Cu content on the MC growth.
There was no carbon deposition with 100% Ni at the reaction temperature of 600 °C, and a modest amount of Cu addition greatly increased the carbon yield. Because Ni particles on the diamond support could dissolve into the diamond above 600 °C in our system, no carbon deposition occurred above 600 °C. We speculate that a small addition of 5 wt.% Cu can play a role at the bimetal catalyst particle surface in preventing Ni from dissolving into the diamond support. It is expected that the Ni-Cu catalyst particle supported on the diamond has a Cu-rich surface; this would prevent the Ni-Cu catalyst particles from dissolving into the diamond support at temperatures above 600 °C, and as a result, the Ni-Cu catalyst could still yield carbon deposition. These experimental results also indicate the existence of a Cu-rich patch on the Ni-Cu catalyst surface, that is, an inhomogeneous structure of the Ni-Cu particles.
The effect of Cu addition in the Ni-Cu-bimetal catalyst on the multiple-CNFs formation from one catalyst particle
Figure 4 shows SEM images of the MCs grown using catalysts with different Cu contents under three kinds of reaction temperatures. Fibrous carbon nanomaterials were observed, and no other forms of carbon, such as amorphous carbon, were found. As indicated by the arrowheads in the SEM images, we found that multiple CNFs grow from one catalyst particle, much like an "octopus". To clarify the growth conditions that resulted in an octopus-like morphology, 50 to over 150 catalyst particles were analyzed to count the number of CNFs generated from each particle using the SEM images. A particle from which a single CNF was formed was classified as "1". A particle from which two CNFs grew was named "2". In the same way, a particle from which three CNFs grew was "3". We counted how many particles were classified as "1", "2", "3", "4", and "5", and then made histograms of the distribution of the five types of catalyst particles, as shown in Fig. 5. In the case of the 10 wt.% Cu content, the "1" and "2" particles together accounted for over 60% of the counted particles. For the Cu content of 15 wt.%, the 550 °C growth gave a distribution peak at "2". When the temperature was raised to 600 °C, 40% of the catalyst particles produced one CNF, as shown by a peak at "1". The histogram displayed a flat distribution for the higher-temperature growth at 650 °C. Cu contents of less than 15% thus rarely produced the octopus-like shape, and 650 °C growth produced a flat distribution regardless of Cu content. In the red-framed histograms generated for the Cu concentration of 20 wt.%, two peaks were identified at "4" and "5". With an increase of Cu content to 30 wt.%, the ratio of "5" particles grown at 550 °C decreased, and the distribution for growth at 600 °C became flat. The addition of 20% Cu to the Ni-Cu catalyst therefore had a significant effect on the formation of the octopus-like shape. As illustrated in Fig. 3, we discovered that the effect of Cu on carbon yields varied depending on the boundary of the 20 wt.% Cu concentration. The 20 wt.% Cu content of the Ni-Cu catalyst would thus play a crucial role in the mechanism of multiple CNF growth.
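The counting procedure itself is straightforward; the following minimal Python sketch (with invented counts) shows how per-particle CNF counts taken from SEM images can be turned into the frequency histograms of Fig. 5.

# Minimal sketch: per-particle CNF counts -> frequency histogram of morphology classes.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example: number of CNFs counted on each analysed catalyst particle.
cnf_counts = np.array([1, 2, 5, 4, 5, 3, 4, 5, 2, 4, 5, 1, 4, 5, 3])

classes = np.arange(1, 6)                                 # morphology classes "1" to "5"
freq = np.array([(cnf_counts == c).sum() for c in classes]) / cnf_counts.size * 100

plt.bar(classes, freq)
plt.xlabel("CNFs grown from one catalyst particle")
plt.ylabel("Frequency (%)")
plt.show()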
To understand the multiple-CNF growth, the evolution of the octopus-like morphology was observed by FESEM during the catalytic reaction with methane using the Ni8Cu2/O-dia. catalyst at a temperature of 600 °C. Figure 6 shows the morphology of the CNFs grown with reaction times of 10 min, 30 min, and 60 min, respectively. As shown in Fig. 6(a), in the case of just 10 min of growth, catalyst particles with shorter CNFs, sub-micrometer in length, were observed, and two-CNF growth from one catalyst particle was occasionally seen. In the case of 30 min of growth, shown in Fig. 6(b), the majority of CNFs were formed from a single catalyst particle, and multiple-CNF growth was rarely observed. In contrast, as shown in Fig. 6(c) for 60 min of growth, we often identified four or five CNFs growing from a single catalyst particle, and such multiple CNF growths were common. The diameter of these catalyst particles appeared to be greater than 60 nm, which is larger than the diameter of catalyst particles obtained with a shorter growth time. The morphological changes observed in Fig. 6 are consistently reflected in the histograms in Fig. 7. Figure 7 shows the histograms obtained from the SEM images in Fig. 6. In both Fig. 7(a) and (b), growth of more than three CNFs from one catalyst particle was hardly observed. As shown in Fig. 7(c), in the case of a growth duration of 60 min, the frequency of four- or five-CNF growth from one catalyst particle was almost 50%. The frequency of multiple CNF growths increased as the growth time increased, and the diameter of the catalyst particles appeared to become greater at the same time. Based on the FESEM data, we hypothesize that a larger catalyst particle results in several CNF growths.
To clarify the relationship between the catalyst diameter and the multiple-CNF growth, we also observed the MC shown in Fig. 6 using TEM. Figure 8 shows TEM images of the MC grown at 600 °C using the Ni8Cu2/O-dia. catalyst with growth durations of (a) 0 min, (b) 10 min, (c) 30 min, and (d) 60 min, respectively. The 0 min reaction in Fig. 8(a) means that the heating was turned off at the end of Step 1 in Fig. 1, followed by cooling down in the Ar flow. Figure 8(a) shows that the catalyst particle diameter was less than 10 nm. In the case of Fig. 8(b), the diameter was less than 30 nm, and multiple-CNF growth was unusual. For growth durations of 30 min or more, as illustrated in Fig. 8(c) and (d), the diameter was close to 100 nm, and multiple-CNF growth was obtained. The TEM observation suggests that the catalyst diameter that gave multiple CNF growths was larger than that observed for shorter growth durations. We speculate that an increase of the catalyst diameter during MC growth in the methane flow is a necessary step for multiple CNF growths to occur from a single catalyst particle.
A histogram was generated from the TEM images presented in Fig. 8 to quantitatively indicate the link between the catalyst diameter and the growth of multiple CNFs. The histograms produced from the TEM images in Fig. 8 are shown in Fig. 9. The frequency of catalyst diameters above 60 nm increased as the growth duration increased from 0 to 60 min, as illustrated by the red rectangle area. This tendency indicates that the catalyst diameter increased with increasing growth duration; in other words, the catalytic decomposition of methane led to carbon dissolution in the Ni-Cu catalyst particles and resulted in a growing catalyst diameter. The carbon dissolved in the catalyst particles is expected to reduce the melting temperature of the Ni-Cu particles; as a result, sintering of the catalyst particles was promoted and the diameter increased. It appears plausible that four or five CNFs formed from a single catalyst particle and that such multiple-CNF growth occurred from sintered Ni-Cu catalyst particles. The existence of a Cu-rich area on the Ni-Cu catalyst surface was supported by experimental data, namely the impact of Cu addition on the protection against Ni dissolution into the diamond support, as detailed in the previous section. We speculate that a sintered particle having a Cu-rich surface, in other words, a surface partially covered with Cu, resulted in the multiple-CNF growth. In Fig. 10, we suggest a model of the multiple-CNF growth using Ni-Cu/O-dia. catalysts. Carbon dissolved in the Ni-Cu catalyst particles supported on the oxidized diamond, and the particles sintered as the growth period progressed. The presence of a Cu-rich area on the Ni-Cu catalyst particle surface would then leave several separate Ni-rich regions on a single sintered particle, from which individual CNFs can grow.
Summary
Through a systematic study, we experimentally revealed the conditions necessary for multiple-CNF growth. The Cu content in the Ni-Cu bimetal catalyst should be 20 wt.%. It has been proposed that the composition of the Ni-Cu particles is inhomogeneous, and the Ni-Cu catalyst particle may have a Cu-rich surface. A Ni-Cu catalyst particle diameter of at least 60 nm could yield multiple growths. The reaction temperature ranged from 550 °C to 600 °C. The regulated multiple-CNF growth is projected to result in the formation of a controlled space volume between the CNFs. The space volume provided in the MC is critical both for assisting gas diffusion and for removing the water produced by the redox reaction [21]. We expect that the space-controlled growth process of MC will contribute to realizing a higher power generation performance, even in the higher-voltage generation region. We are going to fabricate a membrane electrode assembly using the MC consisting of the Octopus-like CNFs as the catalyst support for the PEFC electrode.
Author contributions M. Shiraishi produced and characterized of MC then discovered the multiple CNFs growths. She also pulled together all of the raw data to make charts and graphs for considering the "Octopus-like" MC growth mechanism. K. Nakagawa designed the reactor and made suggestions to improve controllability of the parameter settings. T. Ando is a deputy supervisor of M. Shiraishi's doctoral dissertation and he gave important suggestions for promoting the research with his expertise of hydrocarbon physical chemistry. M. Nishitani-Gamo is a supervisor of M. Shiraishi's doctoral dissertation. She is also a supervisor of this research project for realizing a structure-controlled MC growth process not only to achieve higher performance of PEFC but also to explore various applications.
Funding This study was funded by the INOUE ENRYO Memorial Grant, TOYO University.
Conflict of interest
The authors have no relevant financial or nonfinancial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Fig. 9 The histograms of the Ni8Cu2/O-dia. catalyst particle diameters measured from the TEM images. These catalyst particles were generated at 600 °C for (a) 0 min (= heating off at the end of Step 1, then cooling down in Ar flow), (b) 10 min, (c) 30 min, and (d) 60 min, respectively. Fig. 10 A model of the multiple-CNF growth using the Ni-Cu/O-dia. catalyst | 5,752.6 | 2022-04-01T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Effect of Sodium Gluconate on Properties and Microstructure of Ultra-High-Performance Concrete (UHPC)
The properties of concrete can be significantly affected by sodium gluconate (SG) at very small dosages. In this paper, the effects of SG on the fluidity, setting time, heat of hydration, and strength of ultra-high-performance concrete (UHPC) were studied. The results show that (1) in the plastic stage, SG inhibited the formation of early ettringite (AFt) and delayed the hydration of tricalcium silicate (C3S) and dicalcium silicate (C2S). SG increased the initial fluidity of UHPC without any decrease within 1 h. When the SG dosage was ≥0.06%, the slumps at 30 min and 60 min increased slightly. (2) In the setting-hardening stage, the addition of SG inhibited the formation of calcium hydroxide (CH), which significantly extended the setting time of UHPC. When the dosage of SG was 0.15%, the initial and final setting times were 5.0 times and 4.5 times those of the blank group, respectively. SG had no obvious effect on the hydration rate of cement in the acceleration period, but the peak hydration temperature of UHPC increased when the SG dosage was 0.03~0.12%. (3) In the strength development stage, the 1 d and 3 d strengths of UHPC decreased significantly with the increase in the SG dosage. However, SG could promote the formation of AFt at the pores and the aggregate interface in the later stage, reduce the porosity of the cement matrix, and improve the compressive strength of UHPC at 28 d, 60 d, and 90 d. When the SG dosage was 0.12%, the 90 d strength increased by 13%.
Introduction
Ultra-high-performance concrete (UHPC) is a new cementitious composite material with ultra-high strength, ultra-high durability, and high toughness [1]. The materials used to prepare UHPC mainly include cementitious materials, quartz sand, chemical admixtures, water, etc. Cementitious materials are usually composed of cement, silica fume, fly ash, mineral powder, etc. In addition, UHPC is also mixed with high-strength steel fibers, which can significantly improve the toughness and tensile resistance of concrete [2]. The application of UHPC in engineering can effectively reduce the self-weight of a structure while improving the ductility of the structure, and UHPC with excellent performance is increasingly used in roads, bridges, and long-span projects [3,4]. However, the development and application of UHPC are also limited due to the construction difficulties caused by its rapid setting speed, serious loss of fluidity over time, and the need for higher compressive strength in special projects.
A high-performance superplasticizer is an indispensable component for preparing UHPC [5,6]. In recent years, the rapid development of polycarboxylate superplasticizers (PCEs) has provided technical support for the promotion and application of UHPC. However, the use of PCE alone often cannot meet the combined requirements of UHPC in terms of fluidity, fluidity loss, and setting time. Therefore, it is necessary to compound admixtures such as retarders, air entrainers, and plasticizers, and the application of composite admixtures in UHPC has received increasing attention from researchers [7,8].
Regarding fluidity loss, Li et al. [14] showed that the addition of SG can reduce the fluidity loss over time. When SG and SP were used together, the fluidity loss decreased when the SG dosage was less than 0.03%. A small amount of SG had a significant effect on the fluidity loss of cement at 30 min but had little effect on the fluidity loss after 60 min and 90 min of hydration [13,21].
A trace amount of SG also had a significant enhancement effect on the strength of concrete, which gradually increased with the increase in SG from 0.03% to 0.08% at the same water-cement ratio [28]. The later strength of UHPC was improved via the combined use of SG and SP, but when the SG dosage was greater than 0.1%, the strength of the concrete was seriously reduced due to excessive retardation. The optimal dosage of SG was 0.03~0.07% in cement. With the same water-cement ratio, the strength of concrete can be improved to varying degrees by adding different amounts of naphthalene-based SP and being compounded with different dosages of SG. Each amount of naphthalene-based SP corresponds to an optimal dosage of SG to achieve the highest concrete strength. Moreover, the optimal dosage of SG corresponding to different cement varieties is different, indicating that there are also issues with the suitability of SG and cement. It has been proved that the best compressive strength is obtained at an SG dosage of 0.03% [13]. However, the mechanism by which SG enhances the compressive strength of cement is still unclear. Some scholars have preliminarily analyzed the microscopic mechanism of the use of SG to significantly improve compressive strength via X-ray diffraction analysis (XRD) or scanning electron microscopy (SEM) [13,15]. Ma et al. [13] believe that the surface energy of calcium silicate hydrate (C-S-H) is changed via SG due to adsorption, thereby enhancing the cohesion between C-S-H and improving the compressive strength. Ren et al. [29] found that the formation of AFt could be promoted with small amounts of SG. The rapid formation of AFt was advantageous for the rapid setting and strength development of cement concrete [30]. However, some scholars [13] believe that excessive SG increases the pore size and porosity of cement pastes. Therefore, the influence of SG on the structure of long-aged cement needs to be further studied. UHPC is a system with an ultra-low water-to-binder ratio, large amounts of cementitious materials, and large amounts of nano-powder. Currently, the mechanism of how SG affects the various properties of UHPC in this system, the extent of its influence, the optimal dosage, and how it affects the hydration leading to a significant increase in strength has been little studied. In addition, existing studies have mostly focused on the influence of SG on one or two stages of concrete plastic, setting hardening, and strength development, while the developments of the three stages of concrete have been shown to be closely linked and interact with each other. Therefore, the systematic analysis of these three stages can comprehensively explain the impact of SG on the cement hydration process. However, a systematic analysis of the entire process of these three stages has not been reported.
In this paper, the effects of SG on the fluidity, setting time, and compressive strength of UHPC are discussed in three sections, and the mechanism of action is analyzed according to the compositions and morphologies of the hydration products. Finally, the mechanism of the effect of SG on UHPC throughout the process, from the beginning of water addition to 90 days, is discussed. The research steps are shown in Figure 1.
Research Significance
Trace amounts of SG can significantly affect the fluidity, setting time, and strength growth of concrete, which has only attracted the attention of a few researchers, and its application in UHPC has not been reported. With a high cementitious material dosage and an ultra-low water-to-binder ratio, the role of SG in UHPC may be more significant. The purpose of this study is to observe the effect of SG on the whole process of the UHPC plasticity stage, setting hardening stage, and strength development stage.
Raw Materials
The cement used in this experiment was Ordinary Portland cement 'P·O 52.5R', according to Chinese National Standards GB 175-2007. The specific surface area was 411 m²/kg, the chemical composition is shown in Table 1, and the mineral composition of the clinker is shown in Table 2. Silica fume was obtained from Sichuan Langtian Resources Comprehensive Utilization Co., Ltd. (Chengdu, China), and its chemical composition and physical properties are shown in Tables 3 and 4. The slag powder was S 95 grade slag powder, and the chemical composition of the slag powder was measured with an X-ray fluorescence (XRF) spectrometer, as shown in Table 5.
The particle size of quartz sand was 0.045~1.7 mm, and the chemical composition is shown in Table 6. The coarse aggregate was diorite with a particle size of 5~10 mm and a crushing index of 6.4. The steel fiber was the end-hook-type RS 65/20 steel fiber produced by Zhenqiang High Performance Materials Co., Ltd. (Jiangsu, China). The SP was PCE produced by Sobute New Materials Co., Ltd. (Jiangsu, China). SG was an analytical pure chemical reagent.
Mix Proportion of UHPC
The base mix proportion of UHPC is shown in Table 7; the water-to-binder ratio is 0.16, the steel fiber content is given by volume, and the SP dosage is the mass percentage of the cementitious materials.
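For readers unfamiliar with these dosage conventions, the following sketch shows how such quantities would be converted into batch masses for 1 m³ of UHPC. Since Table 7 is not reproduced here, the binder split, SP dosage, and fiber content are hypothetical placeholders; only the 0.16 water-to-binder ratio is taken from the text.

```python
# Hypothetical 1 m^3 batch illustrating how the dosages in Table 7 are expressed.
binder = {"cement": 800.0, "silica_fume": 150.0, "slag_powder": 100.0}  # kg, assumed split
binder_total = sum(binder.values())          # total cementitious materials, kg

w_b = 0.16                                   # water-to-binder ratio by mass (from the text)
water = w_b * binder_total                   # mixing water, kg

sp_dosage = 0.02                             # SP, assumed 2% of the cementitious mass
sg_dosage = 0.0006                           # SG, e.g. 0.06% of the cementitious mass
sp = sp_dosage * binder_total
sg = sg_dosage * binder_total

fiber_vol_frac = 0.02                        # steel fiber, assumed 2% by volume
fiber = fiber_vol_frac * 1000 * 7.85         # kg (1000 L per m^3 x 7.85 kg/L steel)

print(f"water {water:.1f} kg, SP {sp:.2f} kg, SG {sg:.3f} kg, steel fiber {fiber:.0f} kg")
```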
Fluidity and Setting Time
The fluidity of the UHPC mixture, including slump, expansion, and expansion loss over time, was determined in accordance with the relevant provisions of GB/T 50080. The initial and final setting times of UHPC were determined using the penetration resistance method according to the method of T/CECS 864-2021.
Heat of Hydration
The heat of hydration of UHPC was measured according to the direct method in GB/T 12959-2008, using the base mix proportion and mixing time with the coarse aggregate removed to prepare the samples. With the calorimeter kept in a constant-temperature environment, the temperature change of the specimen in the calorimeter was measured directly, and the heat of hydration of the cement within 7 d was obtained by summing the heat accumulated in and dissipated from the calorimeter. The temperature was maintained at 20 °C throughout the test, and the temperature data were recorded continuously for 7 d.
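To make the direct-method calculation concrete, the sketch below illustrates summing the heat stored in the calorimeter with the heat dissipated to the surroundings. It is only a schematic illustration: the temperature series, heat capacity, and heat-loss coefficient are hypothetical values, not data from this study or constants from GB/T 12959-2008.

```python
import numpy as np

# Hypothetical record: time (h) and specimen temperature rise above the 20 °C environment (K).
t_h = np.array([0, 6, 12, 24, 48, 96, 168], dtype=float)
dT = np.array([0.0, 1.8, 3.2, 4.1, 3.0, 1.6, 0.7])

C_cal = 1950.0   # assumed total heat capacity of calorimeter + specimen, J/K
k_loss = 85.0    # assumed heat-loss coefficient of the calorimeter, J/(K*h)

# Heat currently stored in the calorimeter at each reading
q_stored = C_cal * dT

# Heat dissipated so far = k_loss * time integral of dT (trapezoidal rule)
q_lost = k_loss * np.concatenate(([0.0], np.cumsum(0.5 * (dT[1:] + dT[:-1]) * np.diff(t_h))))

# Cumulative heat of hydration = stored + dissipated heat
q_total = q_stored + q_lost
print(f"approximate heat released after 7 d: {q_total[-1] / 1000:.1f} kJ per specimen")
```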
Mechanical Properties
The compressive strength of UHPC was determined according to the relevant provisions of GB/T 50081, using 100 mm × 100 mm × 100 mm cubic specimens.
Microscopic Testing
The hydration products of UHPC containing different doses of SG at different ages were analyzed using XRD. The microstructure of UHPC containing different doses of SG was observed via SEM using a ZEISS Sigma300 (Jena, Germany), including the morphologies of the hydration products at different ages and the morphologies of the interface areas between the aggregates, steel fibers, and matrix.
Pastes containing different dosages of SG were prepared under the same conditions as the reference mix proportion and mixing time. The prepared samples were poured into plastic cups, sealed, and stored in a standard curing chamber at a temperature of 20 ± 1 °C and a relative humidity of 95 ± 4%. Before performing XRD and SEM, small pieces were soaked in absolute ethanol to terminate hydration at 1 d, 3 d, 7 d, and 28 d. The small pieces were then dried to a constant weight in a vacuum drying oven at 45 °C and stored in sealed bags. For XRD analysis, the dried samples were further ground to pass through a 200-mesh sieve. Before SEM testing, the samples were gold-coated at 20 mA for 2 min. XRD analysis was performed with a Bruker D8 Advance diffractometer (Germany) using Cu Kα radiation in the range of 5-70° (2θ) at a scanning speed of 10°/min.
Results and Discussion
UHPC Fluidity Test Results
Fluidity is an important performance indicator that affects UHPC pouring, and it is generally evaluated according to slump and expansion. The initial, 30 min, and 60 min slumps and expansions of UHPC mixed with SG are shown in Figure 2. With the increase in the SG dosage, the fluidity of UHPC gradually increased: from Figure 2a, the initial, 30 min, and 60 min slumps increased by 16%, 26%, and 29%, respectively, and from Figure 2b, the initial, 30 min, and 60 min expansions increased by 55%, 74%, and 81%, respectively. In addition, the slump and expansion loss over time was obvious in the blank group without SG at 30 min, with an 8% reduction in slump and a 10% reduction in expansion. At 60 min, the time loss increased further, with additional reductions of 2% and 4% in slump and expansion, respectively. When the added SG dosage was 0.03%, there was almost no loss of slump at 30 min and 60 min, while there was a slight loss of expansion at 30 min but almost no loss at 60 min. For the four groups with 0.06%, 0.09%, 0.12%, and 0.15% SG, there was almost no time loss within 60 min, and the slump and expansion were almost unchanged or even slightly increased, so it can be concluded that SG significantly reduces the time loss of the UHPC slump. Moreover, UHPC can achieve self-leveling when the SG dosage is greater than 0.03%, indicating that SG has a significant auxiliary plasticizing effect and slump retention effect on UHPC. This is consistent with the findings of Yu [31] and Hu et al. [32] on cement mortar and ordinary concrete.
XRD Analysis of Hydration Products in the Plastic Stage
UHPC cementitious materials contain a large number of nanoscale particles while an ultra-low water-to-binder ratio is used, so more SP is required. In such a system, the addition of an appropriate amount of SG plays a significant auxiliary plasticizing role: it markedly improves the initial fluidity while causing almost no loss, or even a slight increase, in the 1 h fluidity. There are two reasons for this: one is the retarding effect; the second is competitive adsorption [24,25,33,34]. Tan et al. [22,26,27] believe that the retarding effect of SG reduces the consumption of PCE due to cement hydration, thus improving the dispersion capacity of the PCE-SG system [34]. The XRD patterns of the UHPC paste hydration products at 15 min and 30 min were measured, and the results are shown in Figure 3. In Figure 3, the diffraction peaks of the hydrated UHPC pastes at 15 min and 30 min are in the same positions, indicating that the addition of SG did not change the type of cement hydration products. The higher the SG dosage (0.12% and 0.15%), the higher the diffraction peaks of unhydrated C₃S and dicalcium silicate (C₂S), indicating that SG inhibited the initial hydration of C₃S and C₂S. It can also be seen in Figure 3 that a small amount of SG had little effect on the formation of AFt, while an excess of SG significantly inhibited the formation of AFt at 15 min of hydration. This is because the newly formed AFt in the solution, owing to its high positive zeta potential, adsorbs the negatively charged SG molecules, and the adsorption of SG on the newly formed AFt crystal surfaces inhibits the growth of new crystal planes on the AFt surface. Therefore, the formation rate of the initial AFt was reduced, leading to a decrease in the initial AFt generation [35].
There are obvious differences in the gypsum diffraction peaks in Figure 3, and the local XRD diffraction peaks of gypsum are shown in Figure 4. The gypsum diffraction peak of the blank sample at 30 min is significantly lower than at 15 min, which indicates that dissolution of gypsum took place. It can also be seen in Figure 4 that, at 15 min, the gypsum diffraction peak first decreases and then increases with increasing SG dosage, while at 30 min it gradually increases with increasing SG dosage. This may be because SG both promotes and inhibits gypsum dissolution: on the one hand, SG can complex with Ca²⁺ in the solution to form calcium gluconate with low solubility, which reduces the Ca²⁺ concentration in the solution and promotes the dissolution of gypsum due to the homogenic effect. On the other hand, undissolved gypsum in an aqueous solution is positively charged, and SG is an anionic polymer that adsorbs on the gypsum surface and, when the adsorbed amount is large, hinders contact between the water and gypsum, thereby inhibiting its dissolution. However, the ion reaction rate in the solution was faster than that of surface adsorption; thus, at 15 min, a small amount of SG mainly promoted the dissolution of gypsum through complexation and solubilization, while at 30 min, surface adsorption mainly inhibited dissolution, so the solubility of gypsum decreased with increasing SG dosage. The gypsum diffraction peak of the SG-doped sample was significantly enhanced at 30 min compared with 15 min, which may be because the initial AFt formation was inhibited by the combined use of PCE and SG, and the initially dissolved SO₄²⁻ was not converted to AFt [34]. As the hydration process proceeded, free water decreased, the SO₄²⁻ concentration increased, and, since the solubility of CaSO₄·2H₂O is low, CaSO₄·2H₂O crystals were re-formed [29].
The above XRD analysis shows that the addition of SG inhibited the early hydration of cement, thereby reducing the early consumption of the SP by hydration products and leaving more SP effective for dispersion, thus improving the fluidity. However, in a complex, multi-component cement hydration system, the mechanism of SG action is not singular but multifaceted. First, after the hydration of cement mixed with the SP, all mineral particles become negatively charged on their surfaces due to the adsorption of the SP, resulting in electrostatic repulsion and particle dispersion.
The positively charged Ca²⁺ in the solution forms an electric double layer with the negatively charged mineral surfaces, and the Ca²⁺ concentration is an important factor affecting the zeta potential of the diffuse electric double layer [35]. The magnitude of the zeta potential depends on the counterion concentration within the slipping plane: the more counterions enter the slipping plane, the smaller the zeta potential, and vice versa. Since SG combined with Ca²⁺ to form poorly soluble calcium gluconate, it reduced the Ca²⁺ concentration in the double layer, thus increasing the zeta potential of the diffuse double layer, which improves the electrostatic repulsion between particles and thereby increases the fluidity of the paste (see Figure 5). The second mechanism is competitive adsorption. Compared with PCE, SG is easily soluble in water, and its smaller molecules and higher mobility allow it to occupy the positively charged active adsorption sites on the surface of C₃A through competitive adsorption [24-26,34,35]. Therefore, SG reduces the early adsorption of PCE [24-26,34] and increases the remaining amount of PCE in the solution, which is beneficial to the fluidity of cement paste or concrete.
Effect of SG on UHPC Setting Time
The results in the previous Section 3.1 show that SG had an impact on the formation of hydration products in the initial stage of cement hydration. The setting and hardening process of cement are closely related to the formation rate and amount of cement hydration products. The effect of SG on the setting time of UHPC is shown in Figure 6. The initial setting time of the blank group was 8 h 14 min, and the final setting time was 10 h 18 min. As the dosage of SG increased, the initial and final setting times of UHPC were significantly prolonged. When the dosage reached 0.15%, the initial and final setting times were 41 h 58 min and 46 h 5 min, which were 5.0 times and 4.5 times those of the blank group, respectively. Compared with the retarding effect of SG in ordinary concrete obtained by other scholars [34][35][36][37], the retarding effect of SG on UHPC is more significant.
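As a quick arithmetic check of the reported ratios, converting the setting times to minutes reproduces the stated factors of roughly 5.0 and 4.5:

```python
def minutes(hours, mins):
    # Convert an "h min" setting time to minutes.
    return 60 * hours + mins

blank_initial, blank_final = minutes(8, 14), minutes(10, 18)   # blank group
sg_initial, sg_final = minutes(41, 58), minutes(46, 5)         # 0.15% SG group

print(round(sg_initial / blank_initial, 1))   # ~5.1, reported as 5.0 times
print(round(sg_final / blank_final, 1))       # ~4.5 times
```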
The setting process of cement is closely related to its hydration reaction process. The setting and hardening of cement occur after the induction period, and the initial and final setting times of cement correspond to the beginning and end of the acceleration period in the cement hydration process, respectively, as shown in Figure 7. Although the initial and final setting times were significantly prolonged after adding SG, the difference between the initial and final setting times did not change significantly with the increase in the SG dosage, which indicates that SG significantly prolonged the hydration induction period, while it had little effect on the reaction process during the hydration acceleration period.
Effect of SG on UHPC Heat of Hydration
The changes in the temperature, heat flow, and cumulative hydration heat of UHPC with different SG dosages are shown in Figure 8. The heat flow represents the hydration rate of the cement. As can be seen in Figure 8, the hydration kinetics of UHPC was significantly altered by the incorporation of SG. With the increase in the SG dosage, the hydration induction period of UHPC was significantly prolonged, indicating that SG retarded the hydration of C₃S. The acceleration phase started with a delay, which is consistent with the variation in the initial setting time in Figure 6. As also shown in Figure 8, the addition of SG significantly changed the peak value of the UHPC hydration temperature. With the increase in the SG dosage, the hydration temperature peak of UHPC increased first and then decreased, indicating that different dosages of SG had different effects on the hydration heat release of UHPC. The experimental results of this study are basically consistent with the results of SG in ordinary concrete [31,36]. The slope of the acceleration-period curve in Figure 8 was calculated via numerical fitting, and the slopes of the acceleration period were 0.12, 0.14, 0.20, 0.10, 0.09, and 0.02 for SG dosages of 0%, 0.03%, 0.06%, 0.09%, 0.12%, and 0.15%, respectively. It shows that when the SG dosage is <0.06%, the slope increases with the increase in the dosage, indicating that SG promoted the hydration acceleration period within this dosage range. When the SG dosage is >0.06%, the slope decreases with the increase in the SG dosage, which indicates that SG inhibited the hydration acceleration period within this dosage range. When the SG dosage is 0.06%, the hydration temperature peak reaches its maximum, and SG had the greatest promoting effect on the hydration acceleration period at this dosage. When the SG dosage reaches 0.15%, the hydration exothermic peak decreases significantly, indicating that a high SG dosage excessively inhibits the early hydration of UHPC [38] and is not conducive to the development of the early strength of UHPC.
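The acceleration-period slopes quoted above come from a numerical fit of the heat-flow curve. A minimal version of such a fit might look like the sketch below, where the heat-flow series and the window bounding the acceleration period are hypothetical stand-ins for the measured curves in Figure 8.

```python
import numpy as np

# Hypothetical heat-flow record around the acceleration period: time (h), heat flow (mW/g).
t = np.array([10, 12, 14, 16, 18, 20, 22], dtype=float)
q = np.array([0.4, 0.7, 1.1, 1.5, 1.9, 2.3, 2.6])

# Restrict the fit to the (assumed) acceleration window and take the slope of a linear fit.
mask = (t >= 12) & (t <= 20)
slope, intercept = np.polyfit(t[mask], q[mask], 1)
print(f"acceleration-period slope ~ {slope:.2f} mW/(g*h)")
```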
The variation in the accumulated hydration heat with hydration time is shown in Figure 8c. As the SG dosage increased, the early accumulated heat of UHPC decreased significantly. After the acceleration period, the accumulated heat of hydration of the SG-doped UHPC increased faster, with the 0.06% SG mix showing the fastest growth rate, catching up with and surpassing the blank group after 90 h. However, the cumulative hydration heat grew slowly when the SG dosage was 0.15%. This indicates that an appropriate amount of SG promotes the early hydration of UHPC, but an excessive amount significantly inhibits the early hydration reaction of UHPC [38].
Comparing Figures 6 and 8, it can be seen that there is a significant correlation between the UHPC hydration temperature change and the setting time, and the relationship between the occurrence time of the hydration temperature peak and the final setting time is shown in Figure 9. It can be seen in Figure 9 that the occurrence time of the UHPC hydration temperature peak was linearly correlated with the final setting time, and the correlation coefficient is R = 0.973. This also shows that the setting process of cement was closely related to the formation process of the hydration products of cement.
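The R = 0.973 linear relationship behind Figure 9 can be reproduced with an ordinary correlation and least-squares fit. The six data pairs below are placeholders (one per SG dosage) standing in for the measured peak times and final setting times, not the actual values.

```python
import numpy as np

# Placeholder pairs: final setting time and hydration temperature-peak time, in hours.
final_set = np.array([10.3, 13.5, 17.0, 24.0, 33.0, 46.1])
peak_time = np.array([14.0, 18.0, 22.5, 30.0, 40.5, 55.0])

r = np.corrcoef(final_set, peak_time)[0, 1]            # Pearson correlation coefficient
slope, intercept = np.polyfit(final_set, peak_time, 1) # least-squares line
print(f"R = {r:.3f}, peak_time ~ {slope:.2f} * final_set + {intercept:.1f} h")
```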
Setting and Hardening Stage XRD and SEM Test Results
The XRD patterns of the UHPC paste at 4 h, 8 h, and 1 d are shown in Figure 10. It can be seen that the CH diffraction peaks are not obvious at 4 h and 8 h, while significant CH diffraction peaks appear at 1 d for the blank sample and the samples with SG dosages of 0.03%, 0.06%, and 0.09%. In Figure 6, it can be seen that the blank sample and the samples with 0.03% and 0.06% SG additions are finalized at 1 d, and the sample with an SG dosage of 0.09% has initially set, but the samples with SG dosages of 0.12% and 0.15% have not yet reached initial setting at 1 d. This shows that obvious CH crystals were formed after the initial setting of UHPC. The SEM images in Figure 11 also confirm the significant CH formation in the blank sample and the samples with SG dosages of 0.03%, 0.06%, and 0.09% at 1 d of hydration.
An important reason why cement hydration enters the acceleration period after the induction period is that the Ca²⁺ concentration reaches supersaturation and CH can form, which promotes the large-scale formation of C-S-H gels [36]. SG is adsorbed on the surface of C₃A in cement and can inhibit the dissolution of Ca²⁺; meanwhile, the concentration of Ca²⁺ in the solution is reduced due to the formation of insoluble calcium gluconate. These two effects prolonged the time for Ca²⁺ to reach supersaturation, thus delaying the initial setting of the cement [16,31,36]. When the SG was consumed, the Ca²⁺ concentration increased continuously and reached supersaturation, which eventually promoted the crystallization of CH, the mass production of C-S-H gels, and the initial setting of the cement. Since SG mainly interacted with C₃A and had little effect on C-S-H, it did not affect the hydration rate during the acceleration period. On the other hand, the solubility of calcium gluconate is very small at room temperature but increases significantly with temperature. Therefore, during the acceleration period, as hydration proceeded and the temperature increased, calcium gluconate continued to dissolve, causing a rapid increase in the Ca²⁺ concentration in the solution, thereby promoting hydration and increasing the peak hydration temperature. However, when SG was excessive (such as at 0.15%), it adsorbed not only on the surface of C₃A but also heavily on the surface of the positively charged AFt. The growth of AFt crystals within the encapsulation layer formed during the induction period, which leads to the rupture of that layer, is an important factor governing the hydration rate of cement during the acceleration period. Therefore, the excess SG significantly affected the hydration rate during the acceleration period and markedly reduced the hydration exothermic temperature and the early accumulated heat release [16,31,36].
The speed of concrete setting and hardening significantly affects the early strength of concrete but generally has little effect on the later strength. Figure 12 shows the compressive strength of UHPC with different SG dosages under standard curing. When the SG dosage was <0.06%, the 1 d compressive strength of UHPC decreased slightly with increasing SG dosage; at 0.03% and 0.06% SG, the strength decreased by 6% and 11%, respectively. When the SG dosage was >0.06%, the compressive strength was significantly reduced, and when the dosage was ≥0.12%, UHPC had not completely set and had no strength at 1 d. At 3 d and 7 d, the compressive strength of UHPC decreased significantly with increasing SG dosage, while at 28 d, 60 d, and 90 d, the compressive strength of UHPC was higher than that of the blank group. In summary, at 28 d, 60 d, and 90 d, the compressive strength of UHPC was enhanced when the SG dosage was in the range of 0.03~0.12%. When the SG dosage was 0.12%, the maximum enhancement of the 90 d strength was 13%. In the research by Hu [32], the compressive strength of concrete was largest at 60 d, and the corresponding optimal SG dosage was 0.05~0.07%; the optimal SG dosage in this study was higher (0.12%) because the test duration was longer (90 d) and more cementitious materials were used.
Porosity and pore structure are important factors affecting the strength of concrete. The porosity and pore distribution of UHPC with different SG dosages at 90 days were tested using mercury intrusion porosimetry, and the results are shown in Figure 13. It can be seen that the porosity of the samples with SG was smaller than that of the blank sample. Moreover, when the SG dosage was 0.06%, 0.09%, 0.12%, and 0.15%, the total porosity of the samples was significantly reduced, and the volume of harmful pores greater than 50 nm was also reduced. The volume of harmful pores greater than 50 nm significantly affects the strength of concrete, and the correlation between this pore volume and the 90 d compressive strength of concrete is shown in Figure 14.
In Figure 14, it can be seen that the smaller the volume of harmful pores greater than 50 nm, the higher the compressive strength of concrete. Moreover, there is a significant linear negative correlation between the two, with a correlation coefficient of R = 0.943. This is an important reason for the improvement in the strength of UHPC doped with SG.
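The pore-size classes referred to above and in Figure 13 (<50 nm, 50-500 nm, 500-10,000 nm, >10,000 nm) can be obtained by binning mercury-intrusion data. The sketch below assumes a simple table of pore diameters and incremental intruded volumes; the numbers are illustrative, not the measured distribution.

```python
import numpy as np

# Hypothetical MIP output: pore diameter (nm) and incremental intruded volume (mL/g).
diameter = np.array([20000, 5000, 800, 300, 80, 40, 10], dtype=float)
dV = np.array([0.002, 0.003, 0.004, 0.006, 0.008, 0.010, 0.012])

bins = [(0, 50), (50, 500), (500, 10000), (10000, np.inf)]
labels = ["<50 nm", "50-500 nm", "500-10,000 nm", ">10,000 nm"]

total = dV.sum()
for (lo, hi), label in zip(bins, labels):
    # Fraction of the total intruded volume falling into each pore-size class
    frac = dV[(diameter > lo) & (diameter <= hi)].sum() / total
    print(f"{label}: {100 * frac:.1f}% of intruded volume")
```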
XRD Test Results at Different Ages
The XRD patterns of UHPC with different SG doping at 3 d, 7 d, 28 d, and 90 d are shown in Figure 15. As can be seen in Figure 15, there are obvious CH and AFt diffraction peaks in the XRD plots at 3 d, 7 d, 28 d, and 90 d. The CH peak gradually weakens with the curing age, while the AFt peak gradually strengthens, and, especially for the SG-doped samples, the increase in the AFt peaks is more obvious. Figure 16 shows the local diffraction patterns of AFt at 7 d, 28 d, and 90 d. It can be seen that the addition of SG increases the diffraction peak intensity of AFt at all three ages, indicating an increase in the production of AFt. Figure 16c also shows that the AFt diffraction peak is widened by the addition of SG, indicating that the crystalline morphology of AFt changed.
SEM Test Results at Different Ages
The reason for the ability of SG to significantly increase the strength of UHPC can be observed in the SEM images of the hydration products at different ages. Figure 17 shows the SEM images of AFt in UHPC with different SG dosages at different hydration ages. As can be seen in Figure 17a, a small amount of AFt formed at 1 day for the blank sample, which was complete and coarsely crystalline. In Figure 17b, the AFt crystals become significantly finer after adding 0.06% SG. In Figure 17c, the AFt crystals of samples doped with 0.06% became shorter, but the number increased. As hydration continued, more alumina formed in the pores of the cement paste in the form of elongated needle rods, and the number of AFt crystals increased significantly and staggered symbiotically, as shown in Figure 17d-h. Due to AFt mostly existing in the pores, aggregate interface, and fiber interface where the water-cement ratio is large, it filled the pores and significantly improved the compactness of the concrete, thus significantly increasing the strength.
Figures 18 and 19 show SEM images of C-S-H in UHPC with different SG dosages at 28 d and 90 d. It can be seen that the intertwined C-S-H gel particles formed a network structure, together with a proportion of flocculated gels, and this structure developed with increasing age. However, the microscopic morphologies of the C-S-H gels of samples with different SG dosages differed little at the different ages. This shows that SG has little effect on C-S-H gels.
Analysis of the Influence of SG on the Whole Process of Cement Hydration
The previous experimental results show that SG has a significant effect on the fluidity, hydration, setting and hardening, and mechanical properties of cement. Carbohydrates are mostly high-efficiency retarders, but unlike glucose and sucrose, whose molecular structures are mostly cyclic, SG has a ring-opening structure, as shown in Figure 20. After the Na⁺ ions dissolve in water, the negatively charged carboxylic acid group at one end of SG can adsorb onto the positively charged adsorption vacancies on the surfaces of cement particles. The alkyl backbone of SG contains five hydroxyl groups, which can form a solvated water film layer via hydrogen bonding with water molecules. In addition, SG is easily soluble in water, while the solubility of calcium gluconate is very low at room temperature but increases significantly at high temperatures. Therefore, adding SG to cement produces the combined effects of adsorption coverage (inhibiting hydration), competitive adsorption, and complexation that stabilizes Ca²⁺ [24-26,33,34].
During the plastic stage of the UHPC paste, SG inhibited the hydration of the initial C₃S and C₂S and the formation of AFt through adsorption coverage, while the competitive adsorption effect of SG resulted in the presence of more PCE in the solution. These aspects resulted in a significant increase in the initial fluidity of the UHPC paste doped with SG and a reduction in the fluidity loss [24-26,33,34].
During the setting and hardening stage of UHPC, the Ca²⁺-stabilizing effect of SG complexation, together with the continued adsorption coverage described above, significantly prolonged the time for Ca²⁺ to reach saturation, thereby greatly extending the initial setting time of UHPC. However, because the hydration temperature of UHPC rose during the acceleration period, the solubility of calcium gluconate increased and Ca²⁺ was re-released. Therefore, SG had no significant effect on the hydration rate during the acceleration period.
During the strength development stage of UHPC mixed with SG, due to the slower hydration in the early stage, the hydration products generated in the early stage were more uniformly distributed. More AFt with finer grains and staggered symbiotics were generated in the pores and interfaces, which improved the pore structure of the UHPC matrix, thus improving the compressive strength of UHPC.
Selection of SG Dosage
With the increase in the SG dosage, the initial fluidity and fluidity retention continued to increase, and the initial and final setting times continued to increase. However, the early strength (1 d and 3 d) gradually decreased with the increase in the SG dosage, while the later strength (28 d, 60 d, and 90 d) increased with the increase in the SG dosage. The greater the SG dosage, the longer the enhancement effect takes to appear. Considering the influence of fluidity, setting time, and strength, the amount of SG in UHPC should be 0.03~0.09%, and too much SG leads to severe retardation and increased cost.
Study Limitations and Recommendations
The amount of SG added to UHPC was very small, but it had a significant effect on its fluidity, setting time, and strength. The effects of SG on the composition, morphology, hydration heat, and pore structure of hydration products were studied in this paper. However, due to the complex composition of UHPC and the small amount of SG, many differences cannot be well characterized with existing research methods, especially since the existence of admixtures in hardened concrete has not been reported. In subsequent studies, new means can be used to study the existence of forms of admixtures in the hardened matrix.
SG is an organic electrolyte that exists in ionic form in cement pore solutions. When cement is mixed with various electrolytes (such as sulfates and chlorides), they significantly affect the ion concentration in the solution and may affect the effectiveness of SG; this effect needs to be further studied. In addition, UHPC is characterized by high toughness and ultra-high durability, and the effect of SG on the toughness, shrinkage performance, and durability of UHPC should be researched in the future.
Conclusions
In this paper, the influences of SG on the fluidity, setting times, hydration heat, and compressive strength of UHPC were studied, and the mechanism of SG was preliminarily analyzed through XRD, SEM, and mercury intrusion porosimetry. The main conclusions are as follows:
(1) The fluidity loss of UHPC without SG was significant within 30 min, at approximately 10%, and increased further at 60 min, with an additional loss of approximately 4%. When the SG dosage was 0.15%, the initial slump and expansion of UHPC increased by 15.6% and 55.1%, respectively, and the fluidity did not decrease, or even increased slightly, within 1 h.
(2) The addition of SG significantly prolonged the initial and final setting times of UHPC but had little effect on the interval between the initial and final setting times. SG inhibited the dissolution of gypsum in cement and delayed the formation of AFt in the early stage of hydration. SG can also complex with Ca²⁺ to generate insoluble calcium gluconate, inhibit the hydration of C₃S and C₂S, prolong the time for Ca²⁺ to reach saturation, extend the induction period, and thus delay the setting time of UHPC.
(3) SG significantly affected the hydration kinetics of UHPC but had no obvious effect on the hydration rate of cement during the acceleration period. The addition of 0.06~0.12% SG reduced the peak hydration temperature and the heat of hydration of UHPC.
(4) When the SG dosage exceeded 0.09%, the 1 d and 3 d strengths of UHPC decreased significantly, but the strength development from 7 d to the later stages was not affected and could significantly exceed that of the blank sample. When the SG dosage reached 0.12%, the compressive strength at 90 d increased by 13.0% compared with the blank group. SG causes more AFt to form in the pores in the later stage, reduces the porosity of UHPC, improves the pore structure, and thus effectively enhances the strength of UHPC. | 14,260.4 | 2023-05-01T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Exploring the role of learning goal orientation, instructor reputation, parasocial interaction, and tutor intervention in university students’ MOOC retention: A TAM-TRA based examination
While MOOC platforms allow universities to implement various strategies such as brand promotion and student recruitment, the alarmingly low retention rate suggests a need to explore the critical factors that influence students' course retention. So far, studies on MOOC platforms focus either on students' individual factors (i.e., personal factors such as perceived value) or situational factors (i.e., external influences shaping students' behavior, such as system quality), thus lacking a complete view of the determinant factors. This study integrates the TAM model with the TRA model to analyze the roles of three important antecedents (learning goal orientation, LGO; instructor reputation, IR; and parasocial interaction, PI) in university students' perceived course value (PU) and learning attitude (LA), two critical predictors of MOOC course retention (CR). Using data from an online survey of 449 Chinese university students, the hypothesized model was tested using PLS. We found that LGO, IR, and PI each positively affect PU; LGO, IR, and PI each positively affect LA; and PU and LA each positively influence course retention (CR), with each impact enhanced by tutor intervention (TI). The theoretical and practical implications of these findings are presented.
Introduction
Digital technologies have empowered the availability of massive open online course (MOOC) platforms such as Coursera, edX, and Chinese University MOOC. These platforms allow higher education institutions to reconfigure their education models for the purposes of student recruitment, brand promotion, and supplementing on-campus learning. The materials on MOOC platforms are available for users to retain, reuse, and redistribute for non-commercial purposes [1]. The open nature and variety of MOOCs have attracted a large number of interested students who seek to overcome barriers in current education by accessing various subjects and learning materials [2,3]. However, MOOCs tend to suffer from low student retention and low completion rates, even for well-known universities such as Stanford University and MIT [4][5][6]. Student retention in this study refers to the degree to which a MOOC is able to keep students actively enrolled and engaged. Such low retention is attributed to the non-credit-bearing nature of MOOCs and learners' differing motivations (e.g., academic and fulfilling agendas) [7,8]. However, universities building and introducing MOOCs often aim to enhance students' knowledge acquisition. Within this context, the low retention rate suggests that university students have not developed knowledge from MOOCs and that universities' investments in technology, professors, and funding for MOOCs are underutilized; scholars [9,10] have also warned that the missing physical presence, lack of student involvement, and low completion rates could imply financial losses, learning failures, and even reputational damage for the universities providing MOOCs.
The literature has identified a number of factors that could affect student retention in MOOCs. These factors include students' individual factors, such as their personal learning habits, motivation, MOOC learning experience, and perceptions of the MOOCs [9]. Among these individual factors, students' motivation, i.e., an individual's desire to acquire knowledge from a specific course [11], could significantly affect their retention. Former studies [12][13][14] suggest motivation to learn as an important antecedent of learning results. As an important motivator, students' learning goal orientation enables them to seek self-improvement when taking MOOCs. Thus, they are less likely to simply hoard the course materials and avoid the assignments and tests, as some scholars [14,15] have suggested. However, motivational factors alone may not fully explain students' course retention. Some researchers [16,17] suggest that students enrolled in MOOCs are neither engaged nor committed enough. As such, situational factors should be integrated with the motivational variable in explaining students' MOOC intention. Situational factors refer to the external factors that determine the quality of the learning environment, such as platform design and educators' support [10]. One important situational factor lies in MOOC instructors' reputations. Fauth et al. [18] recognize instructor popularity as an important indicator of students' learning interests and attitudes. Likewise, the Chinese criminal law professor Luo Xiang attracted over five million viewers and followers through online law teaching [19]. While the popularity of MOOC instructors has been well documented on social media, limited academic research has examined the role of instructor popularity in students' MOOC retention. Moreover, MOOCs often involve a large number of students who socially interact with the instructor not only through online learning platforms but also through external social media platforms such as LinkedIn, Twitter, and YouTube [20,21]. Such interactions may also affect students' course retention. Indeed, MOOC platforms and social media allow well-known instructors to appear on different occasions where their personal and academic development can be exposed to inspire audiences, including students. Studies on the digital environment [22,23] have identified the role of parasocial interaction, i.e., the perceived face-to-face association between audiences (e.g., students) and performers (e.g., MOOC instructors), in stimulating behavioral responses. While some studies [24,25] suggest that MOOC platforms should help instructors and students improve student retention through social media, the role of parasocial interaction in the online education context has rarely been examined. Finally, scholars argue that MOOC platform functions such as course video playing, announcement portals, and discussion forums may not effectively facilitate the development of learning communities, which are important for student retention [15]. In this case, universities adopting MOOCs often rely on tutors to implement administrative functions, such as organizing chats, encouraging student participation, and answering students' logistical questions, to ensure pleasant digital learning experiences. So far, not much has been written about students' perception of tutors' digital presence in university students' MOOC retention.
As such, the purpose of this study is to integrate the technology acceptance model with the theory of reasoned action to explore the personal and situational factors that could influence the MOOC retention of university students. As mentioned above, the increasing number of MOOCs provided by universities is confronted with a low rate of student completion. Investigating the personal and situational factors therefore not only helps universities identify the relevant factors when designing MOOCs but, more importantly, helps students complete their online learning and facilitates their knowledge acquisition. This study focuses on university students, who, unlike high school students, are given enough freedom to choose courses and constitute the largest market for MOOCs. The objectives of this study involve empirically examining how university students' learning goal orientation (i.e., a personal factor), instructors' reputation, and student-instructor parasocial interactions influence their perceived usefulness of and attitudes towards MOOCs, thereby influencing their course retention, as well as the moderating role of perceived tutor intervention.
This study makes important contributions to the literature. First, this study contributes to the growing empirical literature that primarily considers the situational antecedents (i.e., instructor reputation) of digital learning. Specifically, it integrates learners' individual factors (i.e., learning goal orientation and learning attitudes) from the TRA model to provide a comprehensive view of university students' MOOC retention. In doing so, this study complements the studies that highlight students' motivations and perceived value [26,27].
Second, this study extends the digital learning (e.g., MOOC) studies [25,[28][29][30][31]] that highlight the interface design and functionalities of platforms without recognizing the important role of student motivation. While previous studies recognized the personal and situational enablers of students' course retention, they have not combined those constructs to generate a comprehensive understanding of the differential impacts of each factor. Specifically, this study recognizes the lack of instructor-student interaction raised in the literature [15,32,33] and further suggests parasocial interactions between students and instructors on social media platforms as a possible outlet for this issue.
Third, this study examines the moderating role of tutor intervention as an important antecedent of university students' MOOC retention. Few studies have examined how tutors can be appointed to improve students' MOOC retention by facilitating student-instructor interaction, cultivating students' sense of community, providing suggestions based on instructors' feedback, and monitoring students' attendance and learning progress.
The rest of this paper is organized as follows. First, the literature on TAM and TRA, together with the key variables and their relationships with the dependent variable (i.e., students' MOOC retention), is discussed, followed by the presentation of the conceptual framework for this study. Next, the research methods are described, including the sample, the variables and their measurement, and reliability and validity. The results are then presented, followed by a discussion of the findings, important implications, and limitations. The final section concludes this study.
A TAM-TRA perspective on student retention
In developing countries such as China, the ambition for mass higher education has driven universities to improve the quality and efficiency of student learning [34]. MOOCs provide a self-paced pattern in which students can access recorded lectures and reading materials autonomously, as compared to attending on-site lectures, which are structured by time and space [35]. These courses provide complementary learning opportunities for university students to develop knowledge and professional skills that are missing from their registered programs or universities [36]. MOOCs also improve the exposure of some university professors, some of whom have enjoyed popularity and social media coverage [37]. While MOOCs are widely accepted, the low student retention rate has raised extensive concerns [14,38], thereby demanding further investigation into the antecedents of student retention in MOOCs.
Most studies on digital learning used the D&M IS Success Model [39], the technology acceptance model (TAM) [40][41][42][43][44] and the theory of reasoned action (TRA) [42] to examine the antecedents of student learning in digital platforms. The D&M IS Success Model considered information quality, system quality, and service quality [39]. However, the model has not considered learners' involvement in the learning process.
The TAM model includes factors (e.g., perceived usefulness, i.e., one's subjective perception of whether a technology-based service can bring desirable outcomes [41]) that explain the determinants of an individual's decision to adopt or forsake a specific technology-based service. The TRA model considers an individual's attitude, i.e., one's assessment of whether performing a behavior could bring positive or negative results [45]. So far, studies on individual adoption of technology-based services have included attitude as an important antecedent [45][46][47][48]. In the context of digital learning, Cheung and Vogel [48] demonstrated the impact of attitude on students' intention to adopt digital learning technology. Similar conclusions were found in several studies [49,50]. Nevertheless, Li [51] contends that general behavioral models often miss specific situations and thus require context-specific integration. As a result, this study integrates a motivational variable (learning goal orientation) with three important situational variables: instructor reputation, parasocial interaction, and tutor intervention.
Students' learning goal orientation
An individual's goal orientation could be a predictor of his or her motivation for specific achievements [52], as it can help shape perceptions and regulate behaviors related to the tasks required for a specific goal [53]. In particular, learning goal orientation (LGO) refers to an individual's efforts and perseverance in acquiring new knowledge or developing new skills, with satisfaction derived from accomplishing a specific task [54,55]. Learning goal-oriented individuals tend to perceive challenging tasks as opportunities to improve their skills and abilities [56]; in other words, they seek learning opportunities by accepting challenging tasks [31,52,57]. When taking MOOCs, students with LGO may aim to understand new subjects and develop important skills or knowledge that they deem necessary for their study [58]. Previous scholars suggest that LGO as a learning state can be developed in various learning contexts [59], such as MOOC learning [60,61]. Consumer researchers define perceived value as one's assessment of whether the functions and performance of a product can meet his or her needs [62]. In the related context of MOOC learning, learning goal-oriented students are likely to recognize the value of MOOCs, as they often focus on developing abilities and skills to obtain a sense of accomplishment [63].
Moreover, Albelbisi and Al-Adwan [39] suggested that students need social, technical, and self-management skills to benefit from MOOCs. We further argue that students with a high LGO are more likely to proactively develop those skills. According to [15], students who have completed over 20 MOOCs were driven by self-motivated learning and the ambition to accomplish challenging tasks in acquiring more knowledge. In other words, LGO allows students to appreciate and enjoy the challenges of MOOCs. As a result, the following hypothesis can be developed:
H1:
LGO has a significant positive effect on perceived course value.
Former studies have confirmed that high-LGO students are more willing to acquire new knowledge by adapting their attitudes [52]. The knowledge and career development opportunities on social media may attract arts and social science students to develop their computer literacy and digital skills. In particular, learning goal-oriented students may adapt their attitudes toward video editing, coding, and even programming MOOCs and devote time and effort to learning activities. According to [64], LGO could predict a higher level of learning objectives and superior learning outcomes. In other words, LGO can help shape students' attitudes toward dedicated learning through MOOCs, allowing them to acquire the complementary skills that are missing from the current curriculum and prepare for career development. As such, the following hypothesis can be developed:
H2:
LGO has a significant positive effect on learning attitude.
Instructor reputation
Early education scholars [65,66] have recognized the importance of instructor popularity for effective teaching. Students' favorable view of a specific instructor could influence their learning in a specific course. This relationship can be explained by the instructor's various characteristics, such as a charismatic teaching style and educational background [67]. Instructors who are known for their academic reputation and engaging, interesting teaching styles are more likely to improve the perceived value of MOOCs. In addition to MOOC platforms, university professors increasingly adopt social media to introduce their academic background and upload short clips to demonstrate various benefits [37,68]. Such 'advertisements' on social media allow students to perceive the values associated with specific MOOCs. Previous studies [32,66] have provided evidence regarding the role of instructors' popularity in course retention. In particular, several scholars have identified instructors' expressive skills and affiliations as influential to learners' perceived value [69]. Moreover, instructors who are popular on social media are more likely to shape students' attitudes towards specific MOOCs. Given the above discussion, the following hypotheses can be developed: H3: MOOC instructor reputation has a significant positive effect on perceived course value.
H4: MOOC instructor reputation has a significant positive effect on learning attitude.
Student-instructor parasocial interaction
Studies on digital learning platforms such as MOOCs have stressed the importance of instructor-student interaction for students' attitudes and behavior [15,32,70]. In fact, interactions between students and instructors have been recognized as an important determinant of student involvement and engagement in online learning [39]. These scholars suggest that student-instructor interactions and student involvement could enhance students' learning attitudes. As mentioned above, academics have increasingly adopted social media to publicize their work and research, which attracts considerable attention from audiences [71]. Meanwhile, social media platforms increasingly invite academic celebrities to give public talks [72] and comment on social events. As most universities appoint well-known academics to deliver MOOCs, those academics are more likely to be recognized by students on social media [72].
While instructors may not be accessible on MOOC platforms, students may maintain parasocial interactions with them on social media platforms. Parasocial interaction refers to the imagined interaction between audiences and social media personalities, as if they were interacting with each other in person [73,74]. Parasocial interaction between audience members and media characters such as newscasters and actors has been investigated in various research studies [75,76]. Students who notice the activities of their MOOC instructors on social media may follow those instructors on platforms such as LinkedIn, comment on their latest achievements, and even receive replies. As such parasocial interactions develop, students may come to perceive the instructors on social media as true friends [77]. The literature suggests that parasocial interactions may change students' perceptions by helping them realize the different alternatives in their lives [78]. In other words, parasocial interactions may allow students to realize the value of a specific MOOC, e.g., how the related knowledge can be applied in their careers or in society.
According to Katz et al. [79], parasocial interaction can encourage audiences to relate social media celebrities' messages to their own social experiences. As such, after observing instructors' introductions of their academic background and career development on social media, students may relate such experiences to their own learning and career plans. In doing so, students may realize that the courses mentioned by the instructors may help them realize their own potential for specific roles and jobs. Moreover, cognitive reflection on parasocial interactions may lead to an attitudinal change in the audience [80]. For instance, students' attitudes toward MOOCs could change once they develop the perception (through parasocial interactions with instructors on social media) that the MOOCs delivered by the instructor could help them realize specific abilities that lead to desired outcomes in their lives. Therefore, parasocial interactions with academic celebrities who teach on MOOC platforms may lead to students' cognitive and attitudinal changes. Drawing on the above discussion, the following hypotheses can be developed: H5: Parasocial interaction has a significant positive effect on perceived course value.
H6:
Parasocial interaction has a significant positive effect on learning attitude.
Course retention
The low student retention in MOOCs, reflected in completion rates, has attracted tremendous research attention [81]. Recent literature increasingly encourages the exploration of the factors that could influence student retention in MOOCs [32,82,83]. Although courses on MOOC platforms are often provided by leading higher education institutions such as Harvard, MIT, and Stanford, students' initial interest in obtaining a learning certificate from those institutions may decrease once they realize the effort required to accomplish a specific course. In this study, we draw on the TAM model to predict that students' perception of the value associated with a specific MOOC could influence their retention [40]. Former scholars have confirmed that perceived value can affect students' course retention or continuous use of a technology-based system [44]. As such, the following hypothesis can be developed: H7: Perceived course value has a significant positive effect on course retention.
According to the TRA model, attitude is an important predictor of one's behavioral intention; it indicates one's perception of a familiar object as enjoyable or tedious [83,84]. Former studies confirmed that students' attitude toward learning could predict their learning behavior [85]. University students who have a learning goal orientation and are inspired by instructor reputation and parasocial interaction experiences are likely to find joy when taking MOOCs. Such joy could enhance students' attitudes toward performing specific activities, such as course retention.
As such, the following hypothesis can be developed: H8: Learning attitude has a significant positive effect on course retention.
Moderating role of tutor intervention
While previous studies have stressed the importance of interaction in digital platform learning [86], many MOOC platforms provide limited interaction opportunities between instructors and students. This could leave unmet students' need to develop interpersonal relationships and a sense of community. Moreover, insufficient direct communication also prevents students from finding assistance in understanding the various concepts and their applications in a MOOC. Finally, a lack of interaction implies a lack of the discipline that ensures students' learning progress and discourages their commitment to the course. Given such problems, universities introducing MOOCs appoint tutors to facilitate student learning [87]. In this case, the tutor plays the surrogate role of an instructor in answering student queries, organizing students' learning activities and submission of assignments, explaining instructor feedback, providing suggestions, and monitoring students' progress. These tutors could help students recognize the value of MOOCs and achieve the learning objectives stated in the course descriptions [88]. Moreover, tutors could help students develop a sense of community by forming an e-learning environment with frequent communication, thereby reducing the barriers of distance learning (e.g., watching videos of lectures and answering multiple-choice questions) [15]. In short, the moderation of tutors could enhance the impact of perceived value and learning attitude on students' course retention. Given the above discussion, we predict the following hypotheses: H9: Tutor intervention positively moderates the relationship between perceived course value and course retention.
H10: Tutor intervention positively moderates the relationship between learning attitude and course retention.
The conceptual model is presented in Fig 1.
Sampling
This study aims to examine university students' MOOC retention. To achieve this objective, a survey was distributed to determine the individual and situational factors that influence university students' MOOC retention in China. We collected survey data from international universities in southern China. The universities introduced MOOCs from well-known universities into their library systems and assigned tutors from the libraries to assist students' MOOC learning. We collected data using the Chinese survey website "WenJuanXing" from October 21st, 2022, to July 21st, 2023. Due to the inaccessibility of all university students' databases, we adopted convenience sampling to collect data from five universities in Guangdong Province and Zhejiang Province in China. Pukyong National University waived the ethical approval requirement for this study, as the study posed no potential risk to the participants and complied with local legislation and institutional requirements. No one under the age of eighteen participated in this study, and none of the collected data contained sensitive personal information. Respondents were informed of the purpose of the survey and the instructions for completing it. Respondents were permitted to stop answering questions and exit the survey at any time. Respondents were asked to assess their levels of learning goal orientation, instructor reputation, parasocial interaction, perceived course value, learning attitude, tutor intervention, and course retention. The a priori sample size calculator for structural equation models was used to determine the required sample size. Using Free Statistics Calculators 4.0, with an anticipated effect size of 0.30 (medium), a power level of 0.90, seven latent variables, 33 observed variables, and a probability level of 0.05, a minimum sample size of 210 was required for the structural equation model. However, the current study included a greater number of subjects to increase statistical power and to cope with the possibility of non-response error. We received 465 questionnaires and, after excluding 16 invalid responses, retained 449 valid samples. The demographics of the respondents are as follows (Table 1): more than half of the respondents were female (N = 244, 54.34%), and 49.67% were social science and humanities majors (N = 223). In terms of education level, 57.46% of the respondents were undergraduates (N = 258) and 42.54% were graduates (N = 191).
Measures
Since the original scales were created in English, all items underwent a process of back translation [89].One English-Chinese scholar translated the items into Chinese, and then the other English-Chinese scholar translated them back into English, thereby ensuring an accurate translation.The measurements were graded on a 5-point scale.
Learning goal orientation
A 5-item scale was adapted from [90] to measure learning goal orientation.Sample items are, "I am willing to select a challenging MOOC that I can learn a lot from," "I often look for opportunities to develop new skills and knowledge," and "I enjoy challenging MOOCs where I'll learn new skills".
Instructor reputation
Instructor reputation was measured by a 2-item scale adapted from [91].Sample items are, "The reputation of the instructor in terms of teaching style was good," and "The reputation of the instructor in terms of the student's satisfaction was good".
Parasocial interaction
Parasocial interaction was measured by adapting the 4 items from [92].Sample items are, "I felt free to ask questions from this instructor on social media," "The instructor responded to my comments and queries," and "The instructor was easily accessible to me".
Perceived course value
Perceived course value was measured by the 4 items adapted from [93].Sample items are, "The MOOCs I selected would help me to develop career-related skills more quickly," "The MOOCs I selected would improve my academic performance," and "The MOOCs I selected would help me prepare for my career in my point of view".
Learning attitude
Learning attitude was measured by the 11 items adapted from [94].Sample items are, "I enjoy finding the solutions by learning from the MOOCs I selected," "The MOOCs I selected improve my motivation for learning," and "I enjoy the learning experience when taking the MOOCs I selected".
Tutor intervention
Tutor intervention was measured by 5 items adapted from [92].Sample items are, "The tutor played an essential role in facilitating my learning in this course," "The tutor organized the discussions in this course," and "The tutor was actively helpful when students had problems".
Course retention
Course retention was measured by 3 items adapted from [32]. Sample items are: "Did you complete the online courses to earn a credential signifying official completion? (Yes/No). If not, when did you drop out? (First few days, first few weeks, towards the middle, towards the end, just before the end)", "How many exercises/assessments did you complete in the online learning system?", and "How much of the online learning course content do you estimate you watched or read?".
Data analysis
Smart PLS 4.0 and SPSS 25.0 were used to analyze the data. In order to determine the subjects' basic level, we first conducted a descriptive analysis. Second, we checked common method bias and evaluated the stability and efficiency of the variables using reliability and validity analysis. Third, we used PLS-SEM to analyze the relationships between the seven variables.
Common method bias
To check for common method bias, Harman's single-factor test was conducted. The analysis returned seven factors with eigenvalues greater than 1, with the first factor explaining 38.76% of the 72.41% total variance extracted, below the 40% threshold [95]. Thus, the findings provide no serious indication of common method variance.
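For illustration, a single-factor check of this kind can be reproduced in a few lines. The sketch below is a hypothetical example (not the authors' actual analysis script), assuming the 33 survey items are held in a pandas DataFrame called items; it reports the share of variance captured by the first unrotated component and compares it with the 40% threshold cited above (the paper reports the first factor's share of the extracted variance, a common variant of the same check).

```python
# Minimal sketch of Harman's single-factor test (hypothetical DataFrame `items`
# with one column per survey item); not the authors' actual analysis code.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def harman_single_factor(items: pd.DataFrame, threshold: float = 0.40) -> bool:
    """True if the first unrotated component explains less than `threshold`
    of the total variance, i.e., no serious sign of common method bias."""
    z = StandardScaler().fit_transform(items.to_numpy())
    pca = PCA().fit(z)                       # unrotated principal components
    first_share = pca.explained_variance_ratio_[0]
    print(f"First factor explains {first_share:.2%} of the total variance")
    return first_share < threshold
```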
Reliability and validity
For each variable, a test of reliability and validity was conducted (see Table 2). The Cronbach's alphas ranged from 0.73 to 0.96, consistent with [96], who noted that alpha coefficients above 0.7 are acceptable and those above 0.8 are preferable, indicating strong internal consistency between the items and their constructs. The CR values are all above 0.7, and all AVE values are higher than the suggested 0.5 [97], indicating good convergent validity. The square roots of the factors' AVEs are higher than their correlation coefficients with other factors, which strongly supports discriminant validity [97], as presented in Table 3.
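These statistics follow standard formulas. The snippet below is a hedged sketch of how they could be computed for one construct, assuming items is a (respondents x items) DataFrame for that construct and loadings holds its standardized outer loadings; the names are illustrative only.

```python
# Sketch of Cronbach's alpha, composite reliability (CR), and AVE for one construct.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the item total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s ** 2 / (s ** 2 + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings; values above 0.5 are desired."""
    return float(np.mean(loadings ** 2))
```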
In addition, a further criterion for discriminant validity testing, the heterotrait-monotrait ratio of correlations (HTMT), is illustrated in Table 4. This criterion estimates the true correlation between two constructs and requires values below 0.9.
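The HTMT criterion can likewise be computed directly from the item correlations. A minimal sketch for one pair of constructs is given below (hypothetical DataFrames items_a and items_b holding the items of each construct); values below 0.9 indicate discriminant validity.

```python
# Sketch of the heterotrait-monotrait (HTMT) ratio for two constructs.
import numpy as np
import pandas as pd

def htmt(items_a: pd.DataFrame, items_b: pd.DataFrame) -> float:
    """Mean absolute between-construct item correlation divided by the geometric
    mean of the average within-construct item correlations."""
    corr = pd.concat([items_a, items_b], axis=1).corr().abs().to_numpy()
    ka, kb = items_a.shape[1], items_b.shape[1]
    hetero = corr[:ka, ka:].mean()
    mono_a = corr[:ka, :ka][np.triu_indices(ka, k=1)].mean()
    mono_b = corr[ka:, ka:][np.triu_indices(kb, k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)
```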
Collinearity assessment
The values of VIF were computed to check for multicollinearity in the model. The VIF values lie between 1.102 and 1.506 (less than 5), so multicollinearity is not an issue among the variables. According to the results shown in Table 5, learning goal orientation (β = 0.310, p<0.001), instructor reputation (β = 0.311, p<0.001), and parasocial interaction (β = 0.348, p<0.001) each have a significant positive effect on perceived course value; the full hypothesis-testing results are reported below.
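A collinearity check of this kind can be reproduced as follows; the sketch assumes a DataFrame X of (standardized) predictor scores and is not tied to the authors' software.

```python
# Sketch of the VIF-based multicollinearity check (values below 5 are acceptable).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    exog = sm.add_constant(X)  # include an intercept so the VIFs are centred
    vifs = {col: variance_inflation_factor(exog.to_numpy(), i)
            for i, col in enumerate(exog.columns) if col != "const"}
    return pd.Series(vifs, name="VIF")
```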
Discussion
While the educational technology literature has developed increasing knowledge on online learning [4,62], the surprisingly low completion rate of MOOCs suggests that more studies are required to understand student retention on MOOC platforms [38]. Integrating student and situational perspectives is important in unravelling how the issues raised in the literature (e.g., poor interaction, low student motivation, and hoarding learning materials rather than learning) can be addressed. Therefore, this study aims to identify the factors influencing university students' MOOC retention. Based on the TAM model and TRA model, we developed and tested a conceptual model to address the research objective. This study was conducted using a sample of students from international universities in southern China. Data were obtained via an online survey and analyzed using PLS-SEM. Our results on the positive impact of learning goal orientation on perceived course value concur with previous studies [63]; this finding suggests that learning goal orientation plays a crucial role in students' perceived value of taking a specific MOOC. In other words, students are willing to complete MOOCs embedded in their university learning platforms because of their motivation to acquire new knowledge and overcome challenges. We also found a significant positive impact of instructor reputation on perceived course value, in contrast to earlier research [65,66], which investigated this relationship in on-site teaching contexts. Students who feel inspired by academic celebrities on social media could perceive greater value in their selected MOOCs. This study also found that parasocial interactions between students and MOOC instructors had a significant positive impact on perceived course value. We further agree with [64] on the role of learning goal orientation in shaping learning attitude: learning goal-oriented students are more likely to form positive learning attitudes, accept challenges, and appreciate the opportunities for acquiring new skills and knowledge. Our empirical results also confirm the positive role of instructor reputation and parasocial interaction in students' attitudes towards selected MOOCs, which extends previous social media studies [22,98,99] that confine these two variables to non-teaching contexts. Finally, as a new finding, we proposed and empirically confirmed the important role of tutor intervention in enhancing the positive impact of perceived course value and learning attitudes on students' course retention. This illustrates how perceived course value and learning attitudes can be leveraged to improve students' MOOC course retention.
Theoretical implication
This study makes several theoretical contributions. First, this study integrates the TAM model with the TRA model, thereby complementing prior studies [42,44] that primarily focus on perceived value and situational antecedents (i.e., instructor reputation and parasocial interaction) as the determinant factors when predicting students' adoption of digital courses. Those situational antecedents include the external factors that influence and shape the conditions under which individuals adopt a specific technology-facilitated course [100]. We included an individual factor (i.e., learning goal orientation), thereby extending studies [42] that primarily focus on external factors. Moreover, we included students' learning attitudes from the TRA model, thereby tailoring the TAM model to the requirements of the research context. In doing so, we concur with other scholars' [51] suggestion to adapt the TAM and TRA models when evaluating the role of key variables in these two models. Drawing on current MOOC studies [15] that raise the issue of students hoarding materials and watching videos as entertainment rather than engaging in learning activities, we explored and examined three crucial factors (learning goal orientation, instructor reputation, and parasocial interaction) and showed that they were significant predictors of MOOC retention. This is a significant step forward in considering the unique contextual characteristics of MOOC learning not only on MOOC platforms but also on social media platforms, and it can guide course developers and instructors in developing effective strategies to raise the retention rate. Second, in view of the actual challenges of insufficient interaction with instructors, a weak sense of community, and a lack of discipline in the learning process, we examined the moderating effects of tutor intervention. The results point to a missing actor in the MOOC learning process, i.e., the course tutor. These tutors could help eliminate the aforementioned barriers, which are innate to MOOC platforms, through timely intervention. Finally, this study was conducted in China, where the COVID-19 quarantine policy lasted until the end of 2022. We thereby shed light on the digital learning literature [42,68] regarding the introduction of MOOCs during the COVID-19 pandemic to ensure students' learning.
Practical implication
Our results also suggest practical implications regarding how the pervasive issue of low student course retention could be addressed. First, given the importance of learning goal orientation, universities introducing MOOCs could first collect students' learning needs and match these needs with specific courses. Moreover, universities could elaborate on the actual value of introduced MOOCs to stimulate students' perceived value and shape their learning attitude. While MOOC content is important, students may not be able to appreciate such courses without understanding the associated benefits, course difficulty, and required commitment. Universities could improve students' learning quality by importing MOOCs based on an understanding of students' different learning styles and goals, thereby improving engagement and learning experiences.
Second, while MOOCs are based on autonomous learning, students' expectations for interactions with instructors cannot be neglected. When it is unrealistic to maintain interactions with a large number of students on MOOC platforms, instructors could consider increasing their exposure on social media. For instance, instructors could relate their public speeches and academic activities to the MOOCs they are teaching. Doing so could provide students with an additional channel to stay connected with instructors, especially popular ones, thereby increasing retention rates.
Third, the findings of this study also suggest that MOOC tutors play an important role in ensuring students' learning quality and retention on MOOC platforms. For instance, tutors could provide suggestions, based on students' prior knowledge and experience, on course suitability, objectives, structure, and the time and effort commitment, and prompt students to keep pace with the required learning progress. Moreover, tutors could enable university students taking the same courses to meet online and offline, thereby helping them develop a sense of community. In doing so, MOOC tutors could help address the low completion issue caused by a lack of interaction.
Limitations of the study and future studies
This study only provides a starting point for a better understanding of students' MOOC retention. We encourage future studies to address some of its limitations. First, we only surveyed university students from China. To better understand and generalize our findings, additional research across regions is encouraged, because respondents' perceptions and attitudes may vary by region. Second, common method bias is a possibility because this study was conducted using a self-reported survey. Even though this study passed the common method bias test, we advise future studies to use longitudinal research or experiments. Furthermore, this study only included student respondents. A comparative study is necessary to determine the differences between students, instructors, and tutors and to generalize the findings on MOOC course retention.
Conclusion
In this study, an integrated TAM-TRA model was utilized to investigate the MOOC retention of Chinese university students. All ten hypotheses were supported by the empirical data. Specifically, learning goal orientation, instructor reputation, and parasocial interaction can make Chinese university students appreciate the value of MOOCs and shape their learning attitudes toward those courses. Perceived course value and learning attitude contribute to university students' MOOC retention, and these relationships are enhanced by tutor intervention.
Hypothesis testing.
The bootstrapping method (two-tailed test) and the evaluation of T values (significance level p < 0.05 and T value > 1.96) were used to test the model (see Fig 2).
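To illustrate the bootstrap testing procedure, the sketch below resamples the data, re-estimates the path coefficients each time, and forms T values from the bootstrap standard errors. The function estimate_paths is a hypothetical stand-in for one PLS-SEM estimation run and is not part of any specific library.

```python
# Minimal sketch of the two-tailed bootstrap test of path coefficients.
import numpy as np
import pandas as pd

def bootstrap_paths(data: pd.DataFrame, estimate_paths, n_boot: int = 5000, seed: int = 0):
    """`estimate_paths(df)` is a hypothetical callable returning {path: coefficient}."""
    rng = np.random.default_rng(seed)
    point = pd.Series(estimate_paths(data))
    draws = []
    for _ in range(n_boot):
        boot = data.sample(n=len(data), replace=True,
                           random_state=int(rng.integers(2**31 - 1)))
        draws.append(estimate_paths(boot))
    draws = pd.DataFrame(draws)
    se = draws.std(ddof=1)
    t_values = point / se          # |T| > 1.96 corresponds to p < 0.05 (two-tailed)
    return pd.DataFrame({"beta": point, "SE": se, "T": t_values})
```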
As a result, H1, H3, and H5 are supported. Moreover, learning goal orientation (β = 0.402, p<0.001), instructor reputation (β = 0.346, p<0.001), and parasocial interaction (β = 0.291, p<0.001) have significant positive effects on learning attitude; as a result, H2, H4, and H6 are supported. PU (β = 0.279, p<0.001) and LA (β = 0.346, p<0.001) have significant positive effects on course retention, supporting H7 and H8. The interaction between tutor intervention and perceived course value (TI*PU) has a significant positive effect on course retention (β = 0.133, p<0.05), thereby supporting H9. The interaction between tutor intervention and learning attitude (TI*LA) has a significant positive effect on course retention (β = 0.129, p<0.05), thereby supporting H10. Simple slope plots are presented in Figs 3 and 4. According to these figures, the positive relationship between perceived course value (PU) and course retention is stronger when tutor intervention is high, and the positive relationship between learning attitude and course retention is stronger when tutor intervention is high. These results further support H9 and H10. | 7,991.2 | 2024-09-12T00:00:00.000 | [
"Education",
"Computer Science",
"Psychology"
] |
Linear Increments with Non‐monotone Missing Data and Measurement Error
Abstract Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non‐monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non‐monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non‐MAR and non‐normal) assumptions not much stronger than those of LI. Moreover, when missingness is non‐monotone, they are typically more efficient.
Proof of Theorem 1
When β̂_t^ls is fixed to equal β_t, the least-squares estimator (α̂_t^ls, γ̂_t^ls) of the remaining parameters (α_t^ls, γ_t^ls) using only those individuals with R_t = 1 is given by Therefore, assuming that equations (2) and (3) hold, that the measurement error process is independent of the other processes, and that measurement errors have mean zero, we have Hence For consistency of (α̂_t^ls, γ̂_t^ls), we see that Similarly to equation (29), it can be shown that when equation (4) holds,
Proof of Theorem 2
In this proof we omit the superscript 'ls' from α_t^ls and γ_t^ls.
In their Section 3.3, A&G describe their imputation method. Adapting their formulae for Y_t^est and ΔY_t^est to make them apply to the outcomes observed with error rather than to the underlying outcomes, we have We use induction to prove that E(Y_t^est − Y_t | X, Y_1) = 0 for all t = 1, . . . , T.
Proof of Theorem 3
Line (38) follows because of the assumption of dDTIC and the autoregressive assumption of equation (1), as we now show. and So, from equations (40) and (42), we have that for k < s ≤ t, It follows from equations (1) and (9) that So, using equations (39), (43) and (44), we have So, line (38) follows.
The following example shows that independent return does not imply strong independent return. However, it does not show that this matters for inference.
Note that dDTIC does hold in this example.
The MLE of θ obtained by fitting the MVN model to the observed data and ignoring the missingness mechanism is the value of θ for which the derivative with respect to θ of the log-likelihood function Σ_{i=1}^N L_i(θ) equals zero, where L_i(θ) is the contribution of individual i and Σ_{r_k} denotes the sum over all possible k-vectors whose elements are zero or one.
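The displayed definition of L_i(θ) did not survive extraction. As a hedged illustration in our own notation (not necessarily the authors' exact display), the contribution of individual i under the MVN model that ignores the missingness mechanism can be written as the log of the marginal normal density of that individual's observed components:

```latex
% Hedged reconstruction, not the authors' exact display.
L_i(\theta) \;=\; \sum_{r_T} I(R_i = r_T)\,
  \log \phi\!\big( G_i(r_T) \,;\, \mu_{(r_T)},\, \Sigma_{(r_T)} \big),
% where the sum runs over all response patterns r_T, G_i(r_T) collects the
% components of individual i observed under pattern r_T, \mu_{(r_T)} and
% \Sigma_{(r_T)} are the corresponding sub-vector and sub-matrix of \mu and
% \Sigma, and \phi denotes the multivariate normal density.
```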
Even if (Y_1, X, ε_2, . . . , ε_T) is not normally distributed, equation (46) still holds, provided that ε_t has mean zero and its variance does not depend on F_{t−1}. This is because, treated as a function of y_it (i = 1, . . . , N), Σ_{i=1}^N h_t(y_it | r_s, G_is(r_s); θ) depends only on (and is a linear combination of G_s(r_s)) and Var(Y_t | G_s(r_s)), and hence only depends on the distribution of (X^⊤, Y_1)^⊤ and ε_2, . . . , ε_t through their means and variances. Now, for any r_{t−1} with final element equal to one, Using equation (13), equation (47) reduces to which, by equation (46), equals zero at the true value of θ, because, as stated Similarly, using equation (14), it follows that also has expectation zero at the true value of θ.
Therefore equation (45) has expectation zero at the true value of θ. So, under standard regularity assumptions, the MLE of θ from the unstructured MVN model is consistent (Stefanski and Boos, 2000).
The proof that autoregressive MVN yields consistent estimators when independent return holds is analogous. The parameters δ_tj are removed from θ, since they are whenever the final element of r_s equals one. The proof for unstructured MVN continues to apply for autoregressive MVN, once h_t(Y_t | r_s, G_s(r_s); θ) has been replaced by h_t(Y_t | r_s, Y_s; θ) and equations (13) and (14) have been replaced by versions of those equations with (X, Y_{t−1}) and (X, Y_k) in place of G_{t−1} and G_k, as Theorem 5 allows.
Proof of Theorem 5
By equation (1) and dDTIC, we have which implies that equation (13) holds.
Next we prove that equation (14) holds.
which implies that equation (14) holds. Lines (48) and (50) follow from strong independent return. Line (49) follows from the same argument used above to prove equation (13).
The proofs are analogous when independent return, rather than strong independent return, holds.
Proof of Theorem 6
The complete-data least-squares estimators of β_t and γ_t are defined by a pair of simultaneous equations; solving these yields the estimators. The least-squares estimator of α_t then follows. If β_t is constrained to equal I, then equations (54) and (55) still hold.
Proof of Theorem 7
When data are monotone missing and the δ_tj's are constrained to equal zero, where 1_{t−1} denotes a (t − 1)-vector of ones. The maximum likelihood estimate of θ can be obtained by fitting the models defined by equations (84) and (85) with the δ_tj's omitted and estimating μ_1, μ_{T+1}, Σ_{1,1}, Σ_{1,T+1} and Σ_{T+1,T+1} by the corresponding sample means, variances and covariances. Fitting by maximum likelihood the models given by equations (84) and (85) with the δ_tj's omitted is equivalent to fitting them by least squares, which is the method proposed by A&G.
When data are monotone missing, the imputed value of Y_t obtained using autoregressive MVN imputation takes the same form as the LI imputed value. Therefore, aMVN imputation is equivalent to the iterative imputation procedure that is LI imputation.
Proof of Theorem 8
We begin by proving b). So, assume that mortal-cohort independent return and independent death hold. Then Equations (20) and (26) imply that Line (60) follows because of independent death. Line (62) follows because of mortal-cohort independent return. Line (63) follows by induction. Line (64) follows by equation (57). Hence, from equation (64), It follows from equations (56) and (65) that Hence, independent return holds in the supplemented process. Line (69) follows from mortal-cohort independent return and equation (66).
The proof of c) is analogous to that of b). The changes are as follows. Replace X by G k−1 in equations (56), (58)-(63) and (66)-(70). Replace equation (57) by and replace equation (65) by Finally, we prove a).
Proof of Theorem 9
In order also to be able to discuss the use of MVN imputation for mortal-cohort inference (below), we prove the following more general version of Theorem 9.
Theorem 10. If equation (20), mortal-cohort dDTIC, mortal-cohort independent return and independent death hold, then for k < s < t, When mortal-cohort strong independent return and strong independent death hold, (X, Y k ) on the left-hand side of these equations can be replaced by (X, G k ).
Proof
First, consider a).
Line (77) follows by mortal-cohort independent return and independent death.
Appendix S3: Proof of equation (19)
Before giving a formal proof, we provide some intuition as to why this constraint arises. The unstructured MVN model can be reparameterised as in equations (83)-(85). If it is assumed that equation (2) holds, then each δ_tj must equal zero. Since there are (T − 1)(T − 2)/2 matrices δ_tj, each of which has m^2 elements, constraining δ_tj = 0 reduces the number of free parameters by (T − 1)(T − 2)m^2/2. Returning to equation (19), we note that there are T(T − 1)/2 matrices Σ_st (s < t) and T − 1 matrices β_t, and each of these matrices has m^2 elements, so the constraint of equation (19) reduces the number of free parameters by the same number: (T − 1)(T − 2)m^2/2.
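The counting identity used here, namely that setting every δ_tj to zero removes the same number of free parameters as the constraint of equation (19), reduces to T(T−1)/2 − (T−1) = (T−1)(T−2)/2. A small numerical check (illustrative only) is:

```python
# Quick check that the two parameter counts in the argument above agree.
def removed_by_delta(T: int, m: int) -> int:
    # (T-1)(T-2)/2 matrices delta_tj, each with m^2 entries
    return (T - 1) * (T - 2) // 2 * m * m

def removed_by_constraint(T: int, m: int) -> int:
    n_sigma = T * (T - 1) // 2 * m * m   # Sigma_{st}, s < t
    n_beta = (T - 1) * m * m             # beta_2, ..., beta_T
    return n_sigma - n_beta              # Sigma_{st} determined by eq. (19) given the beta_t

for T in range(2, 8):
    for m in (1, 2, 3):
        assert removed_by_delta(T, m) == removed_by_constraint(T, m)
print("counts agree for all tested T and m")
```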
The relation between the parameters of the original and reparameterised models
is as follows. For t ≥ 2, and, for 1 ≤ s < t ≤ T , Σ s,t is given by equation (19). Conversely, for t ≥ 2, β t , γ t and α t are given by equations (15)- (17) without the hats, and We now provide a formal proof of equation (19).
For 1 ≤ s < t ≤ T, we have from equation (1) that [Throughout this proof, terms beginning Σ_{j=s+1}^{t−1} should be interpreted as being equal to zero if s = t − 1.] using equations (54) and (55). Note that equation (90) still holds when β_t is constrained to equal I.
Appendix S4: Random-walk MVN (rMVN) methods
Here we describe the LI-rMVN imputation and rMVN imputation methods introduced in Section 4.3.
The relation between the parameters of the original and reparameterised models is given by equations (86)-(88) and by equations (16) and (17). This model can be fitted by maximum likelihood to the outcomes Y_t + e_t observed with error, thus treating them as though they were the underlying outcomes Y_t, and ignoring the missingness mechanism (see Section S5 for the fitting algorithm). We call this the 'random-walk MVN (rMVN)' method.
Like Theorem 4, Theorem 11 does not require that the data actually be normally distributed. Equations (86) and (87) can then be used to obtain consistent estimates of μ_t and Σ_{t,T+1}. Note that the maximum likelihood estimates of Σ_{t,t} obtained using equation (88) are not consistent unless there is no measurement error. For example, the maximum likelihood estimator of Σ_{11} converges to Var(Y_1) + Var(e_1) as N → ∞, rather than to Σ_{11} = Var(Y_1). This is not a problem for LI imputation using the random-walk MVN estimates of α_t and γ_t ('LI-rMVN imputation'). It is also not a problem when imputation is carried out using equation (18) with the random-walk MVN estimates of μ and Σ ('rMVN imputation'), because the complete-data maximum likelihood estimator of the parameters of a linear regression of Y on t and/or X is not a function of Σ̂_{t,t} (see Appendix S6 for details).
Proof of Theorem 11
To avoid confusion in this proof, we shall denote the true values of α t and γ t in equation (2) as α 0t and γ 0t , and the true values of µ t = E(Y t ) and Σ st = Cov(Y s , Y t ) as µ 0t and Σ 0st . The model being fitted is Note that we are not assuming in this proof that the model given by equations (91) and (92) describes the true relation between the random variables.
The resulting expression equals 0. Equation (95) uses the assumptions that {e_t : t = 1, . . . , T} is independent of all other processes, that e_t is independent of e_s for all t ≠ s, and that E(e_t) = 0 for all t. Equation (96) follows from equation (14) with G_k replaced by (X, Y_k).
Appendix S5: EM algorithm for MVN methods
As explained in Section 4 of our paper, the standard (unstructured) MVN method does not respect the constraints on the variance given by equation (19). It can be fitted using the EM algorithm implemented in Schafer's norm package, with c_{s,t} (1 ≤ s, t ≤ T) denoting the sample covariance of Y_s and Y_t, c_{T+1,t} denoting the sample covariance of X and Y_t, and c_{T+1,T+1} denoting the sample variance of X.
For the random-walk MVN model, the constraints on the variance given by equation (19) with β = I need to be imposed. Again, the norm package can be used to carry out the E step of the EM algorithm, but the M step needs to be modified.
The M step can be carried out by fitting the model given by equations (83)-(85) with δ_tj = 0 and β_t = I and then calculating μ̂ and Σ̂ using equations (86)-(88). However, we did not actually do this. Instead, we used a Newton-Raphson algorithm to maximise the observed-data likelihood directly.
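For concreteness, the sketch below implements the standard E-step/M-step cycle for an unstructured MVN model with NaN-coded missing values; it is a generic illustration only and does not impose the random-walk constraints (which, as noted above, require a modified M step or direct maximisation).

```python
# Minimal EM sketch for an unstructured MVN model with missing values (NaNs).
import numpy as np

def em_mvn(Y: np.ndarray, n_iter: int = 100):
    """One possible EM scheme for an unstructured MVN fit with NaN-coded missing data."""
    n, p = Y.shape
    mu = np.nanmean(Y, axis=0)
    sigma = np.diag(np.nanvar(Y, axis=0))
    for _ in range(n_iter):
        s1 = np.zeros(p)            # accumulates E[y_i | y_obs]
        s2 = np.zeros((p, p))       # accumulates E[y_i y_i' | y_obs]
        for row in Y:
            obs = ~np.isnan(row)
            mis = ~obs
            y = row.copy()
            cov_add = np.zeros((p, p))
            if not obs.any():                       # row entirely missing
                y = mu.copy()
                cov_add = sigma.copy()
            elif mis.any():                         # partially observed row
                soo = sigma[np.ix_(obs, obs)]
                smo = sigma[np.ix_(mis, obs)]
                smm = sigma[np.ix_(mis, mis)]
                w = np.linalg.solve(soo, row[obs] - mu[obs])
                y[mis] = mu[mis] + smo @ w          # E-step: conditional mean
                cov_add[np.ix_(mis, mis)] = smm - smo @ np.linalg.solve(soo, smo.T)
            s1 += y
            s2 += np.outer(y, y) + cov_add
        mu = s1 / n                                 # M-step: closed-form updates
        sigma = s2 / n - np.outer(mu, mu)
    return mu, sigma
```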
Appendix S6: MVN imputation as a method for estimating parameters of a linear regression model
For simplicity, assume that Y is univariate (i.e. m = 1). However, in the following, Y could easily be replaced by one of the m univariate elements of a vector Y.
First, we shall show that the complete-data maximum likelihood estimates of the linear regression model are functions of the complete-data statistics X̄, Ȳ_t, c_{T+1,t} and c_{T+1,T+1}. Second, we shall show that the values of X̄, Ȳ_t, c_{T+1,t} and c_{T+1,T+1} (t = 1, . . . , T) in the imputed dataset are equal to, respectively, μ̂_{T+1}, μ̂_t, Σ̂_{T+1,t} and Σ̂_{T+1,T+1}. This implies that performing MVN imputation and then fitting the linear regression model of equation (98) to the imputed data gives the same estimates of ψ_0, ψ_1 and ψ_2 as applying the aforementioned functions to μ̂_{T+1}, μ̂_t, Σ̂_{T+1,t} and Σ̂_{T+1,T+1} directly. The complete-data estimates are given by equations (99)-(101), and therefore ψ̂ is a function of X̄, Ȳ_t, c_{T+1,t} and c_{T+1,T+1}.
At convergence of the EM algorithm for fitting the MVN model (whether unstructured, autoregressive or random-walk), the expected values of X̄, Ȳ_t, c_{T+1,t} and c_{T+1,T+1} given the observed data are equal to μ̂_{T+1}, μ̂_t, Σ̂_{T+1,t} and Σ̂_{T+1,T+1} (see Section S5). Since X is fully observed, X̄ and c_{T+1,T+1} are observed, and μ̂_{T+1} and Σ̂_{T+1,T+1} are equal to them. The values of Ȳ_t and c_{T+1,t} are not observed, but because X is fully observed their expected values given the observed data can be calculated by application of equation (18). Since this is precisely what is done in MVN imputation, the values of Ȳ_t and c_{T+1,t} calculated from the imputed data will equal μ̂_t and Σ̂_{T+1,t}. Thus, whether one applies equations (99)-(101) to the imputed data or substitutes μ̂_{T+1}, μ̂_t, Σ̂_{T+1,t} and Σ̂_{T+1,T+1} for X̄, Ȳ_t, c_{T+1,t} and c_{T+1,T+1} in equations (99)-(101), one gets the same value of ψ̂.
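As a simplified numerical illustration of this point (a univariate X and a single Y_t, not the authors' exact equation (98), which the text describes as a regression of Y on t and/or X), the least-squares estimates computed from the raw data coincide with those computed from the sample means and covariances alone:

```python
# Illustration: complete-data least-squares estimates depend on the data only
# through the first and second sample moments.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 + 1.5 * x + rng.normal(size=500)

slope_raw, intercept_raw = np.polyfit(x, y, 1)          # from the raw data

x_bar, y_bar = x.mean(), y.mean()                        # from the moments only
c_xy = np.cov(x, y)[0, 1]
c_xx = x.var(ddof=1)
slope_mom = c_xy / c_xx
intercept_mom = y_bar - slope_mom * x_bar

assert np.allclose([slope_raw, intercept_raw], [slope_mom, intercept_mom])
```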
(U, V, {R 0,t = 0}), which means the same as conditioning on (U, V )] that and so equation (102) implies that as required.
If ε_it is not normally distributed, formula (18) will not correspond, in general, to the conditional expectation of Y_t given G_T when R_l = 1 for some l > t. Nevertheless, multiple imputation using the unstructured MVN model has been found often to work well in practice when data are MAR even when not normally distributed (Schafer, 1997; Schafer and Graham, 2002; Lee and Carlin, 2010; Demirtas et al., 2008), and so there is cause to think that uMVN imputation may also work well in practice.
Appendix S8: Further results from Simulation Studies 1 and 2, and Simulation Study 3
Tables S1 and S2 show the results of fitting the linear regression model in simulation studies 1 and 2, respectively, of Section 6 of our paper.
For Simulation Study 1 we also modified the return mechanism so that the independent return assumption was violated. In particular, logit{P(R_{0,t} = 1 | R_{0,t−1} = 0, R_{0,t−2}, F_T)} = φ_t + X + (Y_{t−1} + Y_{t−2})/2, where φ_t is chosen to make P(R_{0,t} = 1 | R_{0,t−1} = 0) = 0.5. The results are shown in Tables S3 and S4. In Simulation Study 3, data were generated from the same model as in Simulation Study 1 except that β_t = 1.2 was replaced with β_t = 1, and independent measurement error e_it was added to the underlying outcomes. The errors e_it were generated from the same bimodal distribution as ε_it. The ω_t and φ_t values were again chosen so that P(R_{0,t} = 0 | R_{0,t−1} = 1) = 0.5 and P(R_{0,t} = 1 | R_{0,t−1} = 0) = 0.5. For each of 1000 simulated datasets we applied the same methods as in Section 6.1. We additionally applied these methods constraining β_t = 1. Tables S5 and S6 show the means and empirical SEs of the estimators of μ_t and (ψ_0, ψ_1, ψ_2, ψ_3), respectively. As expected, the methods that constrain β_t = 1 are approximately unbiased and the methods that do not impose this constraint are biased. LI-LS imputation is more efficient than estimating the compensator, and LI-rMVN imputation is yet more efficient. There is little gain from using rMVN imputation compared to using LI-rMVN imputation.
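A minimal sketch of the modified return mechanism described above is given below; the offsets φ_t are treated as inputs here, whereas in the simulation they were calibrated numerically so that the marginal return probability equals 0.5.

```python
# Sketch of the modified return mechanism (independent return violated):
# logit P(R_{0,t} = 1 | R_{0,t-1} = 0, history) = phi_t + X + (Y_{t-1} + Y_{t-2}) / 2.
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def return_probability(phi_t, x, y_prev1, y_prev2):
    return expit(phi_t + x + (y_prev1 + y_prev2) / 2.0)

def simulate_return(rng, phi_t, x, y_prev1, y_prev2):
    p = return_probability(phi_t, x, y_prev1, y_prev2)
    return rng.uniform(size=np.shape(p)) < p
```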
Appendix S9: Software for LI methods
The LI-LS imputation method can be applied using the FLIM package in R (Hoff, 2014).
Table S3: Means and empirical SEs of estimated μ_t in Simulation Study 1 when the return mechanism is modified to violate the independent return assumption.
Table S6: Means and empirical SEs of estimated ψ_0, ψ_1, ψ_2 and ψ_3 in Simulation Study 3. LI-LS imputation is applied both with β_t estimated and with β_t constrained to equal 1.
"Mathematics"
] |
Efficient Retrieval of Top-k Weighted Triangles on Static and Dynamic Spatial Data
Due to the proliferation of location-based services, spatial data analysis has become increasingly important. We consider graphs built from spatial points, where each point has edges to its nearby points and the weight of each edge is the distance between the corresponding points; such graphs have been receiving attention as spatial data analysis tools. We focus on triangles in these graphs and address the problem of retrieving the top-k weighted spatial triangles. This problem is computationally challenging, because the number of triangles in a graph is generally huge and enumerating all of them is not feasible. To overcome this challenge, we propose an algorithm that returns the exact result efficiently. We moreover consider two dynamic data models: (i) fully dynamic data that allow arbitrary point insertions and deletions and (ii) streaming data in a sliding-window model. Both often appear in location-based services. The results of our experiments on real datasets show the efficiency of our algorithms for static and dynamic data.
A. MOTIVATION AND CHALLENGE
Given a set P of spatial points and a distance threshold r, a spatial neighbor graph of P consists of a set of vertices that correspond to the points in P and a set of edges, where an edge is created between two points iff the distance between them is not larger than r, and the weight of the edge is that distance. Graph-based structures provide intuitive relationships between spatial points, so techniques that mine patterns (i.e., sub-graphs) from spatial neighbor graphs are often required. In graph contexts, triangles are of particular interest, as the triangle is one of the simplest yet most important primitive sub-graph patterns and has many applications [17], [22]. For instance, spatial triangles can be utilized in group search [14], co-location pattern mining [34], and urban planning [12], [13]. Note that the number of triangles in a spatial neighbor graph is generally huge, so it is not feasible to enumerate all of them, and the output size should be controllable (by a user-specified parameter k) [13], [17]. In spatial databases, given a subset of points in P, the cohesiveness of the subset is a factor in measuring its importance [14], [34].
The above applications and observations motivate us to address the problem of retrieving the top-k weighted spatial triangles. The weight of the triangle formed by points p_x, p_y, and p_z is defined as dist(p_x, p_y) + dist(p_y, p_z) + dist(p_x, p_z), where dist(·, ·) measures the Euclidean distance between two points; this weight takes the cohesiveness of the three points into account. Then, given P and k, this problem retrieves the k spatial triangles with the minimum weight among all triangles in the spatial neighbor graph of P. For example, this problem formulation yields the following observation: EXAMPLE 1. We ran the above problem on a real dataset, Places, a set of POIs in the U.S.A., by setting k = 100, and found some co-location patterns. First, we observed an intuitive pattern: industrial and precision manufacturing facilities exist near machine shops. We secondly observed that ⟨dentist, psychologist, consultant⟩ appears multiple times in the top-k triangles, suggesting that (psychological) consulting services tend to exist near clinics (or hospitals). In addition, we found that consultant services tend to exist near capital and risk management services, such as investment and stocks, in the top-k result.
As seen above, this problem helps analysts and experts mine (i) relationships between points and (ii) patterns/knowledge hidden in spatial datasets, and it also helps them decide where to open a new store (or service).
However, this problem is computationally challenging. A straightforward approach is to enumerate all triangles and then output the k triangles with the minimum weight. The number of triangles in the spatial neighbor graph can be cubic in the dataset size, suggesting the infeasibility of this approach. To alleviate this cost, DHL [17], a heuristic algorithm originally proposed for graph databases, can be used. DHL needs to sort the edges in order of weight, because it greedily accesses the edges in this order to avoid enumerating triangles with large weights. However, if we employ DHL, we face the substantial time incurred by sorting the large number of edges of the spatial neighbor graph of P.
B. CONTRIBUTION
To solve the above issues, we propose an efficient algorithm that returns the exact answer. We observe that a subset of the spatial neighbor graph, which usually contains the top-k weighted triangles, can be built offline. From this partial graph, for each point p ∈ P, we can enumerate offline a triangle having p with a small weight in O(1) time. These n triangles provide a tight threshold for the top-k result, which helps filter unnecessary points and triangles, accelerating the online computation. Thanks to these observations, our algorithm does not need to fully build the spatial neighbor graph or sort all of its edges.
We moreover consider insertions of new points and deletions of existing points, because this case often appears in location-based services [2], [10]. In this case, the top-k result may change, thus we need to efficiently update the result whenever we have an update (insertion or deletion). We show that our filtering idea for static data is still effective for dynamic data. Furthermore, we consider a sliding-window model for applications that focus only on recently generated points [2], [3], [19], [20]. We also design an efficient and exact algorithm for this case.
We summarize our main contributions below.
• We address the problem of retrieving the top-k weighted spatial triangles. To the best of our knowledge, this is the first work to tackle this problem in spatial databases.
• We propose a simple yet efficient algorithm for solving this problem exactly.
• We show how to deal with fully dynamic data to efficiently update the top-k result.
• We design an efficient and exact algorithm for monitoring the top-k result under a sliding-window model.
• We conduct experiments on real datasets, and the results show that (i) our solution for static data is up to three orders of magnitude faster than a baseline algorithm and (ii) our solutions for dynamic data can quickly update the top-k result.

This article significantly extends our conference paper [26]. Compared with that paper, this article provides
• more detailed explanations of our solution for static data, with examples and pseudo-code,
• an exact algorithm for fully dynamic data,
• an exact algorithm for streaming data in a sliding-window model,
• detailed performance statistics of our solution for static data,
• experimental results of our solutions for dynamic data, and
• a survey of related work.
C. ORGANIZATION
The rest of this article is organized as follows. Section II introduces preliminary information. We present our solutions for static data, fully dynamic data, and sliding-window data, in Sections III, IV, and V, respectively. We report our experimental results in Section VI. We review related work in Section VII. Finally, in Section VIII, we conclude this article.
II. PROBLEM DEFINITION
Let P be a set of spatial (or geo-location) points in a Euclidean space. A spatial point p ∈ P has 2-dimensional coordinates in R^2. The Euclidean distance between p and p′ is denoted by dist(p, p′). Given a distance threshold r, we can build a spatial neighbor graph of P, defined below.

DEFINITION 1 (SPATIAL NEIGHBOR GRAPH). Given a set P of points and a distance threshold r, the spatial neighbor graph of P is an undirected graph consisting of a set of vertices that correspond to the points in P and a set of edges, where an edge is created between p_i and p_j iff dist(p_i, p_j) ≤ r. The edge between p_i and p_j is denoted e_{i,j} and has weight w(e_{i,j}) = dist(p_i, p_j).
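To make Definition 1 concrete, here is a minimal Python sketch (our illustration, not the authors' implementation) that builds the spatial neighbor graph with a kd-tree; the function name and the use of scipy are our own choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_neighbor_graph(points, r):
    """Return the edge set {(i, j): weight} of the spatial neighbor graph
    of `points` (an (n, 2) array) for distance threshold r (Definition 1)."""
    tree = cKDTree(points)
    edges = {}
    for i, j in tree.query_pairs(r):               # all pairs with dist <= r
        w = float(np.linalg.norm(points[i] - points[j]))
        edges[(i, j)] = w                           # undirected edge e_{i,j}
    return edges

# Example with random points (illustration only):
pts = np.random.rand(1000, 2)
graph = spatial_neighbor_graph(pts, r=0.05)
```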
In the spatial neighbor graph, there are triangles consisting of three points fully connected to each other. We define their weight as follows.

DEFINITION 2 (WEIGHT OF A TRIANGLE). Given a triangle △_{x,y,z} consisting of three points p_x, p_y, and p_z, the weight of this triangle, w(△_{x,y,z}), is

    w(△_{x,y,z}) = dist(p_x, p_y) + dist(p_y, p_z) + dist(p_x, p_z).    (1)

Section III addresses the problem defined as follows.

DEFINITION 3 (TOP-K WEIGHTED TRIANGLE RETRIEVAL PROBLEM). Given a set P of points, an output size k, and a distance threshold r, this problem is to retrieve at most k triangles in the spatial neighbor graph of P with the minimum weight.

TABLE 1. Frequently used notation.
Notation          Meaning
p                 a 2-dimensional (geo-spatial) point
P                 a set of n points
dist(p, p′)       the Euclidean distance between p and p′
△_{x,y,z}         the triangle formed by p_x, p_y, and p_z
w(△_{x,y,z})      the weight of △_{x,y,z}
k                 a result size
r                 a distance threshold
B                 a batch size
T                 a set of triangles
τ                 a weight threshold
θ                 an edge weight threshold
R                 a top-k result set
N(p)              a set of neighbors of p
W                 a sliding window size
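As a naive reference point for Definition 3 only (this is not the algorithm proposed in this article), the following sketch enumerates every triangle of the spatial neighbor graph and keeps the k lightest ones; all identifiers are hypothetical.

```python
import heapq
import itertools
import numpy as np
from scipy.spatial import cKDTree

def topk_triangles_bruteforce(points, k, r):
    """Brute-force solution to Definition 3: enumerate all triangles of the
    spatial neighbor graph and return the k with the smallest weight."""
    tree = cKDTree(points)
    neighbors = [set() for _ in range(len(points))]
    for i, j in tree.query_pairs(r):
        neighbors[i].add(j)
        neighbors[j].add(i)
    triangles = []
    for x in range(len(points)):
        # only consider neighbors with larger index so each triangle is counted once
        for y, z in itertools.combinations(sorted(n for n in neighbors[x] if n > x), 2):
            if z in neighbors[y]:                   # (x, y, z) is fully connected
                w = (np.linalg.norm(points[x] - points[y])
                     + np.linalg.norm(points[y] - points[z])
                     + np.linalg.norm(points[x] - points[z]))
                triangles.append((float(w), (x, y, z)))
    return heapq.nsmallest(k, triangles)
```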
We assume that r is reasonably specified so that there are many triangles in the graph. When P is a dynamic set of spatial points, the top-k result must be kept up to date. This problem, which we address in Section IV, is formally defined as follows.

DEFINITION 4 (TOP-K WEIGHTED TRIANGLE MONITORING PROBLEM). Given a dynamic set P of points, an output size k, and a distance threshold r, this problem is to monitor (or update) the at most k triangles in the spatial neighbor graph of P with the minimum weight, whenever P is updated (by insertions and/or deletions of points).
Last, when P is a set of streaming points, a sliding-window model, which takes only the most recent W points into account, is usually employed [2], [3], [19], [20]. Section V assumes this case and addresses the following problem. DEFINITION 5 (TOP-K WEIGHTED TRIANGLE MONITORING PROBLEM ON A SLIDING-WINDOW MODEL). Given a set P of streaming points, an output size k, a window size W, and a distance threshold r, this problem is to monitor (or update) at most k triangles in the spatial neighbor graph of P_W with the minimum weight, where P_W contains the W most recently generated points in P. Table 1 summarises the notation used frequently in this article.
III. OUR SOLUTION FOR STATIC DATA
This section presents our proposed solution. Section III-A introduces our main idea. In Sections III-B and III-C, we detail our offline and online algorithms, respectively.
A. MAIN IDEA
To efficiently output the result, it is important to prune points that do not contribute to the top-k result. Assume that triangle △_{x,y,z} is included in the top-k result. From Equation (1) and Definition 3, it is intuitive that, for p_x, the edges e_{x,y} and e_{x,z} connect p_x to (two of) its t nearest neighbors (t-NNs), where t is a small constant. This suggests that the top-k triangles can be retrieved from the t-NN graph and that fully building the spatial neighbor graph of P is not necessary; we can obtain the result from a much sparser graph than the spatial neighbor graph.
Now assume that we have the t-NN graph of P. Then, for each p ∈ P, we can enumerate a promising triangle having p offline, namely the triangle formed by p and its 2 nearest neighbors. Even if these triangles are not included in the top-k result, they have small weights, leading to a tight threshold for the online computation that helps prune unnecessary points (and triangles). Our algorithm is designed based on the above ideas and consists of a one-time offline computation and an online computation.
B. OFFLINE PROCESSING
Algorithm 1 describes our offline algorithm. The objectives of this offline processing are to (i) build a B-NN graph of P, where B ≥ 3 is a batch size, and (ii) enumerate triangles with small weights. The batch size B is tuned empirically, and we show in Section VI-A that a small constant (e.g., 10) is enough. We use p.E to denote the set of edges held by a point p ∈ P. Given P and B, for each p_x ∈ P, we compute the B-NNs of p_x in P \ {p_x} by using a kd-tree [6]. The B-NNs are maintained in p_x.E and sorted in ascending order of weight (i.e., distance). Moreover, for each p_x ∈ P, we compute the triangle △_{x,y,z}, where p_y and p_z are respectively the NN and 2-NN of p_x. This triangle is maintained in T, so T has at most n triangles (we remove duplicate triangles). Last, we sort the triangles in T in ascending order of weight.
Remark. Our offline algorithm needs O(n^{1.5}) time [26]. Let s_avg be the average number of edges held by each point. Building the spatial neighbor graph of P incurs O(n(√n + s_avg)) time. Our offline algorithm is hence cheaper, and it is general to any k and r.

Algorithm 1 OFFLINE PROCESSING
Require: P (set of points) and B (batch size)
1: T ← ∅  // a set of triangles
2: for each p_x ∈ P do
3:   p_x.E ← the B-NNs of p_x in P \ {p_x}; edges are sorted in ascending order of weight
4:   T ← T ∪ {△_{x,y,z}}, where p_y and p_z are respectively the NN and 2-NN of p_x
5: end for
6: Sort the triangles △ ∈ T in ascending order of w(△)
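A minimal Python sketch of the offline processing, assuming a kd-tree from scipy; the function and variable names are our own, and the returned structures (sorted edge lists plus sorted seed triangles) are one possible realization of p.E and T, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def offline_processing(points, B):
    """Sketch of Algorithm 1 (assumes B >= 3, as in the text)."""
    tree = cKDTree(points)
    # query B+1 neighbors: column 0 is each point itself, so drop it
    dists, idxs = tree.query(points, k=B + 1)
    nn_edges = [list(zip(d[1:], i[1:])) for d, i in zip(dists, idxs)]  # p_x.E, sorted by weight
    triangles = {}
    for x, edges in enumerate(nn_edges):
        (w_xy, y), (w_xz, z) = edges[0], edges[1]          # NN and 2-NN of p_x
        w_yz = float(np.linalg.norm(points[y] - points[z]))
        key = tuple(sorted((x, int(y), int(z))))            # remove duplicate triangles
        triangles.setdefault(key, float(w_xy + w_xz + w_yz))
    seed = sorted((w, key) for key, w in triangles.items()) # T, sorted by weight
    return nn_edges, seed
```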
C. ONLINE PROCESSING
To efficiently retrieve the top-k weighted spatial triangles, we consider the edge access order. Let τ be an intermediate threshold of the top-k result (i.e., the weight of the intermediate k-th triangle). From τ and the triangle inequality, we can obtain a weight θ that every edge of a top-k weighted spatial triangle must satisfy. That is, any triangle that has an edge with weight larger than θ does not have to be enumerated. We exploit this observation together with the triangles in T and the B-NN graph obtained offline.
Algorithm 2 overviews our online algorithm. Let P_cand be the set of points that may form top-k triangles; P_cand = P at initialization. Our online algorithm has the following steps:
1) We first initialize the top-k result R and the threshold τ from the n triangles obtained offline, in DETERMINE-THRESHOLD(P_cand, r). Then, from τ, we compute a threshold θ for edges. As seen later, any edge with weight larger than θ cannot form a top-k triangle.
2) (If necessary, we update the B-NN graph by increasing B.) In REDUCE-CANDIDATES(P_cand, i, θ), we remove from P_cand the points that no longer have any edge satisfying θ.
3) For each point in P_cand, we additionally enumerate triangles that could be in the top-k result and update R if necessary.
4) We repeat steps 2 and 3 until P_cand = ∅, and then R is returned.
• Step 1. Recall that T is the sorted set of triangles obtained offline. Each triangle in T is formed by a point p, its NN, and its 2-NN. (We remove from T all triangles that have edges with weights larger than r.) In DETERMINE-THRESHOLD(P_cand, r), we initialize R with the first k triangles in T, and τ is the weight of the k-th triangle. Let △_{x,y,z} be this k-th triangle, so that τ = w(△_{x,y,z}). We then set the threshold θ for edges: by the triangle inequality, every edge of a triangle with weight less than τ must have weight less than τ/2, so we can use θ = τ/2. This threshold is used in the next step.
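The following sketch shows one way the threshold initialization could be realized; θ = τ/2 is the triangle-inequality bound discussed above, used here as a plausible stand-in for the paper's exact choice, and all identifiers are hypothetical.

```python
def determine_threshold(seed_triangles, k):
    """Sketch of DETERMINE-THRESHOLD. `seed_triangles` is a list of
    (weight, (x, y, z)) pairs sorted by weight, with triangles whose edges
    exceed r already removed (as described in Step 1)."""
    R = dict((key, w) for w, key in seed_triangles[:k])   # intermediate top-k result
    tau = max(R.values())                                  # weight of the k-th triangle
    theta = tau / 2.0                                      # edge bound from the triangle inequality
    return R, tau, theta
```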
Algorithm 2 ONLINE PROCESSING
Require: P (set of points), k (output size), r (distance threshold), B (batch size), and T (a set of triangles)
Ensure: R (set of k triangles with the minimum weight)
  ...
  R ← ENUMERATE-TRIANGLES(P_cand, r, i)
  ...
15: Execute lines 4-6
  ...

• Step 2. We next filter unnecessary points in P_cand by using θ. Let p_{x_j} be the j-th NN of p_x, and consider the i-th iteration of REDUCE-CANDIDATES(P_cand, i, θ). For p_x ∈ P_cand, if w(e_{x,x_{i+2}}) > θ, triangles including e_{x,x_{i+2}} can be ignored. (Recall that the NN and 2-NN were already considered in the offline processing.)

PROPOSITION 1. For a point p_x ∈ P_cand, if w(e_{x,x_{i+2}}) > θ, any triangle that has e_{x,x_{i+2}} cannot be one of the top-k weighted spatial triangles.
Proof. See [26]. □

From this observation, we see that, if w(e_{x,x_{i+2}}) > θ, all unseen triangles having p_x do not have to be enumerated, and p_x can be safely removed from P_cand. REDUCE-CANDIDATES(P_cand, i, θ) performs this point removal.
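A sketch of the point filtering of Proposition 1, assuming the sorted B-NN edge lists produced by the offline sketch above; the names are hypothetical and this is not the authors' implementation.

```python
def reduce_candidates(P_cand, nn_edges, i, theta):
    """Drop every candidate p_x whose (i+2)-th nearest-neighbor edge exceeds theta,
    since (by Proposition 1) no unseen triangle containing p_x can enter the top-k
    result. nn_edges[x] is the list of (weight, neighbor) pairs of p_x, sorted by weight."""
    kept = set()
    for x in P_cand:
        edges = nn_edges[x]
        # the (i+2)-th NN sits at 0-based position i+1 in the sorted edge list
        if len(edges) > i + 1 and edges[i + 1][0] <= theta:
            kept.add(x)
    return kept
```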
The triangles enumerated offline have small weights in practice, as they are based on the NN and 2-NN of each point. Therefore, τ and θ are tight even when i is small, and we can effectively reduce the size of P_cand in early iterations. EXAMPLE 3. We use Figure 2 to illustrate our point filtering. Assume that DETERMINE-THRESHOLD(P_cand, r) returns the triangle formed by the red edges, and that θ is obtained as depicted in the figure. Focus on p_x and p_y, depicted in green and blue, respectively. The edge between p_x (p_y) and its 3-NN is shown in the same color, and its weight appears in the right part of the figure. We have w(e_{x,x_3}) > θ and w(e_{y,y_3}) > θ, so unseen triangles that have p_x or p_y cannot be in the top-k result. Therefore, we can remove both points from P_cand.
• Step 3. After filtering unnecessary points in the above step, we enumerate triangles that may become part of the top-k result in ENUMERATE-TRIANGLES(P_cand, r, i). Consider the i-th iteration of this step. For each p_x ∈ P_cand, we enumerate the triangles formed by p_x, p_{x_{i+2}}, and p_{x_j}, where j ∈ [1, ..., i + 1], while updating the top-k result R, τ, and θ.
W.r.t. p_{x_j}, we access the neighbors in the order p_{x_1}, ..., p_{x_{i+1}}. It is then important to notice that w(e_{x,x_j}) + w(e_{x,x_{i+2}}) monotonically increases in j. Once w(e_{x,x_j}) + w(e_{x,x_{i+2}}) ≥ τ, triangles with these edges cannot be in the top-k result, so we can stop enumerating triangles without losing correctness.
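A sketch of the step-3 enumeration with the early-termination rule above; the intermediate result R is kept as a dictionary from triangle to weight, which is our own representation rather than the published one.

```python
import numpy as np

def enumerate_triangles(P_cand, nn_edges, points, i, R, k, r):
    """For each candidate p_x, combine its (i+2)-th NN with its 1st..(i+1)-th NNs,
    stopping early once the two known edge weights already reach tau."""
    tau = max(R.values()) if len(R) >= k else float("inf")
    for x in P_cand:
        edges = nn_edges[x]
        if len(edges) <= i + 1:
            continue
        w_far, far = edges[i + 1]                   # the (i+2)-th NN of p_x
        for w_near, near in edges[:i + 1]:          # the 1st .. (i+1)-th NNs of p_x
            if w_near + w_far >= tau:
                break                                # sums only grow from here on
            w_third = float(np.linalg.norm(points[int(near)] - points[int(far)]))
            if w_third > r:
                continue                             # not an edge of the neighbor graph
            w = float(w_near + w_far) + w_third
            if w < tau:
                R[tuple(sorted((x, int(near), int(far))))] = w
                if len(R) > k:                       # evict the current heaviest triangle
                    R.pop(max(R, key=R.get))
                if len(R) >= k:
                    tau = max(R.values())
    return R, tau
```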
Analysis. Let n_i be the size of P_cand at the i-th iteration of step 2, and let n'_i be the size of P_cand at the i-th iteration of step 3. The running time of our online algorithm is determined by these sizes and by I, the number of iterations of step 3 (the details appear in [26]). In Section VI-A, we show that our algorithm has small n'_i and I in practice, yielding O(n) time overall. This suggests that our algorithm practically beats any approach that builds the spatial neighbor graph of P, as such approaches need at least Ω(n^{1+ϵ}) time for some ϵ > 0.
IV. OUR SOLUTION FOR FULLY DYNAMIC DATA
We next consider the case where P is subject to updates (point insertions and deletions), and address the problem defined in Definition 4. In this case, the top-k result R may change because of the update of P. Below, we consider how to minimize the result update cost while keeping the answer correct, and show that our approach in Section III-C can deal with point insertions and deletions flexibly. Hereinafter, we assume that the top-k result R has been initialized by our algorithm in the previous section.
A. INSERTION CASE
Assume that we have a new point p_x. It is important to note that the triangles which can newly become part of the top-k result are limited to those having p_x. We use this observation to incrementally update the top-k result.
1) Given p_x, we run a range search on a kd-tree whose query point is p_x and radius is r, to update the B-NN graph. (Observe that the points whose B-NNs may be updated exist within distance r from p_x, due to the constraint of r.) For each point p_y in this range-search result, if p_x becomes a new B-NN of p_y, we add p_x to the edge set p_y.E. Also, for p_x, we build p_x.E from this range-search result.
2) We next consider the triangle △_{x,x_1,x_2}. If w(e_{x,x_1}) > θ or w(e_{x,x_2}) > θ, the weights of the new triangles having p_x are larger than τ. Hence, we terminate the update.
3) Otherwise, we run lines 7-16 of Algorithm 2 by setting P_cand = {p_x}.

The main cost of this case is incurred by the range search, which needs O(√n + s) time, where s is the size of the range-search result. The second operation needs O(1) time. The third operation also incurs a trivial cost ≪ O(√n), because it needs only a few iterations in practice.
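A sketch of insertion step 1), assuming the B-NN lists from the offline sketch; steps 2) and 3) (the θ check and re-running the online loop with P_cand = {p_x}) are omitted, and a real implementation would maintain the kd-tree incrementally rather than rebuilding it.

```python
import numpy as np
from scipy.spatial import cKDTree

def insert_point(points, new_point, r, B, nn_edges):
    """Range-search the neighbors of the new point and patch the B-NN lists
    (nn_edges) of the affected points; the new point gets its own B-NN list
    built from the same range-search result."""
    tree = cKDTree(points)                         # rebuilt here only for simplicity
    x_new = len(points)                            # index of the inserted point
    hits = tree.query_ball_point(new_point, r)     # every point whose B-NNs may change
    new_edges = []
    for y in hits:
        w = float(np.linalg.norm(points[y] - new_point))
        new_edges.append((w, y))
        # add p_x_new to p_y.E if it is closer than p_y's current B-th neighbor
        if len(nn_edges[y]) < B or w < nn_edges[y][-1][0]:
            nn_edges[y] = sorted(nn_edges[y] + [(w, x_new)])[:B]
    nn_edges.append(sorted(new_edges)[:B])
    return np.vstack([points, new_point]), nn_edges, x_new
```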
B. DELETION CASE
We next assume that a point p_x is removed from P. Two cases arise from this removal.
No triangles are removed from R. If no triangle having p_x is in the top-k result, it is trivial to see that the top-k result does not change. In this case, we simply remove the edges corresponding to p_x from the B-NN graph.
Some triangles are removed from R. In this case, we need to update the top-k result. Note that this case is essentially the same as the static case, because R has fewer than k triangles. Therefore, to update R, we update the B-NN graph, update R via DETERMINE-THRESHOLD(P, r), and then verify R through lines 4-16 of Algorithm 2.
Clearly, the former case needs O(1) time. The cost of the latter case is the same as our online algorithm in Section III-C. It is intuitively seen that the latter case rarely occurs for datasets with a large n. This implies that the amortized update cost for a deletion can come close to the former cost.
V. OUR SOLUTION FOR SLIDING-WINDOW MODEL
This section addresses the problem in Definition 5. Different from the fully dynamic case in Section IV, we need to consider insertion and deletion at the same time in the sliding-window model, because a window slide removes the oldest point and inserts a new point. Therefore, under this model, the top-k result has to be updated when 1) the weights of triangles having a new point are less than the threshold of the current top-k result and 2) the removed point has triangles included in the current top-k result. To efficiently deal with these cases, we maintain the following triangle for each point p ∈ P_W. (Recall that P_W is the set of points in the current window.)

DEFINITION 6 (△^min_p). Consider a point p ∈ P_W; △^min_p is the triangle with the minimum weight among the set of triangles that have p but are not included in the top-k result.

Algorithm 3 DEALING WITH A REMOVED POINT p_x
1: if R contains triangles having p_x then
2:   Remove all such triangles from R
3: end if
4: for each p_y ∈ N(p_x) such that p_x ∈ △^min_y do
5:   UPDATE-△^min(P_W, p_y)
6: end for

Although △^min_p may be updated when the window slides, it supports efficient top-k result updates. For case 1), we can focus only on points p having w(△^min_p) < τ (recall that τ is the threshold of the top-k result) and can ignore the other points. For case 2), by adding △^min_p into an intermediate top-k result, we can obtain a tight τ, which also supports pruning unnecessary points. Since we maintain only a single triangle △^min_p for each point p ∈ P_W, the space complexity is only O(W). Below, we show how to maintain △^min_p when a point p_x is removed from and added to the window.
A. DEALING WITH REMOVED POINT
When p_x is removed from the window, we check whether p_x ∈ △^min_y for some point p_y ∈ P_W. Let N(p_x) be the set of neighbors of p_x. If p_x ∈ △^min_y for a point p_y ∈ P_W, we have to update △^min_y. To achieve this, we need to enumerate triangles containing p_y, and this can be done by essentially the same operation as in step 3 of our algorithm for static data; see Section III-C.
Algorithm 3 describes how to deal with p_x when it is removed from the window. We first remove invalid triangles from the current top-k result R. Then, we update △^min_y for each p_y ∈ N(p_x) in the way explained above, which corresponds to UPDATE-△^min(P_W, p_y).
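A compact sketch of the Algorithm 3 logic, where tri_min[y] stands for △^min_y stored as a (weight, vertex-triple) pair and update_tri_min stands in for the step-3-style enumeration; these names are hypothetical.

```python
def handle_removed_point(x, R, tri_min, neighbors, update_tri_min):
    """When p_x leaves the window: drop from the top-k result R every triangle
    containing p_x, and recompute tri_min[y] for each neighbor p_y whose current
    minimum-weight triangle contains p_x."""
    R = {tri: w for tri, w in R.items() if x not in tri}       # drop invalid triangles
    for y in neighbors[x]:
        if tri_min.get(y) is not None and x in tri_min[y][1]:
            tri_min[y] = update_tri_min(y)                      # re-enumerate triangles with p_y
    return R, tri_min
```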
B. DEALING WITH NEW POINT
When a new point p_x is inserted into the window, we evaluate, for each p_y ∈ N(p_x), whether a triangle having both p_x and p_y can become △^min_y. (We retrieve N(p_x) through a range search.) Algorithm 4 describes this procedure. We first compute △^min_x. Then, for each p_y ∈ N(p_x), we update △^min_y if necessary. Triangles are enumerated in the same way as in Section V-A.
C. TOP-K RESULT UPDATE
Recall that we need to update the top-k result R when (i) w(△^min_x) < τ, where p_x is a new point, and (ii) triangles having p_y, where p_y is a removed point, are included in R.
Taking this fact into account, we update R in three steps, which are summarized in Algorithm 5.

Algorithm 5 UPDATE-TOP-k
Require: P_W, r, T (set of W triangles △^min), and R (the current top-k result)
Ensure: R
1: Run Algorithms 3 and 4 in order
2: Sort the triangles △^min ∈ T in ascending order of w(△^min)
3: l ← k − |R|
4: if l > 0 then R ← R ∪ {l triangles with the smallest weight in T}
5: end if
6: △_{x,y,z} ← triangle with the k-th smallest weight in R
7: τ ← w(△_{x,y,z})
8: i ← 1
9: while i ≤ l do
  ...
1) We first remove invalid triangles from R in Algorithm 3. Then, if |R| < k, we add (k − |R|) triangles △^min with the minimum w(△^min) to R to obtain an intermediate top-k result with a (probably) tight threshold.
2) If △^min_z was inserted into R in the previous step, there may exist other triangles △ having p_z such that w(△) < τ. We hence enumerate such triangles and update △^min_z and R in lines 11-12.
3) Last, due to the update of τ, there may exist other points p ∈ P_W such that w(△^min_p) < τ. If so, we perform the same operations as in the second step for p in lines 16-20.

The number of triangles enumerated in Algorithm 5 cannot be bounded (and the worst case can be similar to our static algorithm) because it depends on the data distribution. Nevertheless, it is small in practice because the top-k result does not change frequently. In Section VI-C, we show that Algorithm 5 never reaches the worst case.
D. OPTIMIZATION
Assume that, for a point p_a ∈ P_W, the triangle △^min_a is identical to △^min_b and △^min_c of other points p_b and p_c. Such duplicates lead to redundant triangle enumerations and degrade the performance of Algorithm 5. To avoid this redundancy, we employ a directed spatial neighbor graph.
DEFINITION 7 (DIRECTED SPATIAL NEIGHBOR GRAPH).
Assume that the points in P_W are maintained in generation order, and let o(p_i) ≺ o(p_j) denote that p_i was generated before p_j. Then, given P_W and r, the directed spatial neighbor graph of P_W has a directed edge e_{i,j} from p_i to p_j if and only if dist(p_i, p_j) ≤ r and o(p_i) ≺ o(p_j). From this definition, hereinafter, N(p_i) is re-defined as the set of points p_j such that dist(p_i, p_j) ≤ r and o(p_i) ≺ o(p_j). Below, we explain why this structure removes the redundancy.
When p_x is removed. Recall that the sliding-window model removes the oldest point, so o(p_x) ≺ o(p) for every p ∈ P_W. We then see that p_x ∉ N(p) and that △^min_p never contains p_x, for every p ∈ P_W. Therefore, when p_x is removed, we do not have to update △^min_p for any p ∈ P_W. EXAMPLE 4. We explain this observation by using Figure 3. Figure 3(a) illustrates a directed spatial neighbor graph consisting of P_W = {p_1, p_2, p_3, p_4, p_5, p_6}, where o(p_i) ≺ o(p_j) for i < j. Now assume that the window slides and p_1 is removed. As shown in Figure 3(b), the other points do not have directed edges to p_1, so N(p) does not change for any remaining point p. Hence, △^min_p does not change either.
When p_y is added. In this case, we update the directed spatial neighbor graph by using a range search. Then, for each p_x ∈ P_W such that dist(p_x, p_y) ≤ r, we update △^min_x (if necessary) by enumerating triangles that have both p_x and p_y. Note that △^min_y ≠ △^min_z for points p_y and p_z with o(p_y) ≺ o(p_z), since N(p_z) does not contain p_y.
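A minimal sketch of maintaining the directed neighbor lists of Definition 7 under window slides; it rebuilds a kd-tree per slide purely for clarity, which a real implementation would avoid, and the class and method names are our own.

```python
from collections import deque
import numpy as np
from scipy.spatial import cKDTree

class DirectedWindowGraph:
    """Directed spatial neighbor graph: out_neighbors[p] holds only points generated
    after p (and within distance r), so removing the oldest point never invalidates
    any stored neighbor list."""
    def __init__(self, r, W):
        self.r, self.W = r, W
        self.window = deque()          # (point_id, coordinates), oldest first
        self.out_neighbors = {}        # point_id -> list of newer neighbor ids

    def slide(self, point_id, coords):
        # 1) remove the oldest point: no other N(p) needs updating
        if len(self.window) == self.W:
            old_id, _ = self.window.popleft()
            self.out_neighbors.pop(old_id, None)
        # 2) add the new point: a range search finds the older points that gain it
        if self.window:
            ids, pts = zip(*self.window)
            tree = cKDTree(np.asarray(pts))
            for idx in tree.query_ball_point(np.asarray(coords), self.r):
                self.out_neighbors[ids[idx]].append(point_id)
        self.out_neighbors[point_id] = []
        self.window.append((point_id, np.asarray(coords)))
```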
Top-k result update. Thanks to the above optimization, there is no duplication w.r.t. △^min, and we can thus avoid unnecessary triangle enumerations. We incorporate this optimization into Algorithms 3-5.
VI. EXPERIMENT
For our experiments, we used an Ubuntu machine equipped with a 3.6GHz Intel Core i9-9900K CPU and 128GB RAM. All algorithms were compiled with g++ 9.3.0 using the -O3 flag and were run in single-thread mode.
A. EVALUATION ON STATIC DATA
This section evaluates our algorithm for static data. We compared it with DHL [17], which can compute the exact answer from the spatial neighbor graph of P. As mentioned in Sections I-A and VII, DHL is the only existing algorithm that can deal with our problem. For DHL, we used the original implementation.
Dataset. We used two real large datasets, CaStreet and Places, to investigate how efficiently our algorithm runs on large datasets. Recall that one of our objectives is to design an efficient (and exact) algorithm for the problem defined in Definition 3. CaStreet consists of the minimum bounding rectangles of road segments in the U.S.A.; we used their bottom-left and upper-right points, and its cardinality is 4,499,454. Places consists of the geo-locations of public places in the U.S.A., and its cardinality is 9,356,750.
Parameter. We set n = 1,000,000 (via random sampling), k = 100, and r = 0.01 by default.

Impact of r. Note that as r increases, the number of neighbors also increases. Table 2 shows the result of our experiment with varying r. The computation time of our algorithm is essentially the same even when we use a larger r than the default one. This result shows the robustness of our algorithm against r.

Offline time. We report the offline time of our algorithm at the default parameters. On CaStreet and Places, our offline algorithm took 21.05 and 26.54 seconds, respectively. Since our offline algorithm is general for any k and r, the offline time is reasonable. (Actually, even if our algorithm starts from the offline computation, it still returns the answer in less time than DHL.)

Impact of n. Figure 4 studies the scalability of our algorithm to the dataset cardinality n. Our algorithm has linear scalability in n, while DHL is superlinear w.r.t. n. This clarifies the advantage of our algorithm. When we used all points of CaStreet and Places, our algorithm is 2807 and 6193 times faster than DHL on CaStreet and Places, respectively. To understand the linear scalability of our algorithm, we investigated the size of P_cand and the number of triangles enumerated in each iteration. Table 3 shows the result on Places when n = 1,000,000 and n = 9,356,750. (We omit the result on CaStreet, because it is similar to the one in Table 3.) It is important to note that the numbers of iterations and triangles enumerated are both very small. This also clarifies the effectiveness of our idea.
Recall the time complexity of our online algorithm from the analysis in Section III-C. In practice, I and n'_i are sufficiently small. In addition, when i ≥ 2, n_i = n'_{i−1}, and n_i is also sufficiently small. Since n_1 = n, the overall cost is O(n), which makes the linear scalability clear.

B. EVALUATION ON FULLY DYNAMIC DATA

This section evaluates our solution for fully dynamic data. Because no existing work has addressed this problem so far, we compared our solution with our static algorithm, which computes the result from scratch whenever there is an update.
Dataset. We used the same datasets as the ones in Section VI-A, and we used 1,000,000 points for initialization.
Workload. We used 10,000 updates as a workload, consisting of (1 − α) × 10,000 insertions and α × 10,000 deletions. (When an update is a deletion, we removed a random point from P.) To investigate the result-update efficiency of our solution, we conducted experiments with varying α (i.e., the deletion rate). We set k = 100.
Result. We measured the time to complete the workload, and Figure 5 depicts the result. Due to its incremental updates, our algorithm for dynamic data, represented by "Our-Dynamic", completes the workload significantly faster than the algorithm for static data (represented by "Our-Static"). For example, in the case of CaStreet and α = 0.1, Our-Dynamic completes the workload in about 400 seconds. Its average time per update is hence about 40 milliseconds, whereas that of Our-Static is about 9000 milliseconds. The performance difference becomes even larger as α increases. We also see that, as α increases, Our-Dynamic needs less time to complete the workload. In most deletion cases, the top-k result did not change, meaning that Our-Dynamic incurs only O(1) time in each of these cases. Such cases occur more often as the deletion rate increases, so the total time becomes shorter.
C. EVALUATION ON SLIDING-WINDOW MODEL
Last, we evaluate our algorithms for the sliding-window model. No existing work addresses this problem either, so we compared our algorithms with our static algorithm, which computes the result from scratch whenever the window slides. We use "Ours", "Ours-Opt", and "Static" to respectively denote Algorithm 5 without the optimization in Section V-D, Algorithm 5 with the optimization, and the static algorithm.
Dataset. We used the same datasets as in Section VI-B.
Workload. After the first W points were contained in the window, we ran 10,000 slides. We set r = 0.1 and k = 100.
Result. We measured the total time to deal with 10,000 window slides. Figure 6 shows the result of experiments with different window sizes. We observe that larger W needs a longer time. However, Ours and Ours-Opt keep short update time. When the window size is 1,000,000, Ours is about 700 (10,000) times faster than Static on CaStreet (Places). Furthermore, even when the window size is 2 million, Ours-Opt needs only 82 [msec] and 2 [msec] on average to update the top-k result per slide on CaStreet and Places, respectively, suggesting that it scales well to large window sizes. It is also seen that Ours-Opt is always faster than Ours, thanks to the optimization.
VII. RELATED WORK
This section reviews existing works that relate to the problem of retrieving the top-k weighted spatial triangles.
Graph-based Spatial Data Mining. A graph is a simple yet effective structure for representing relationships between data. Spatial points usually have relationships if they are located close to each other, so graph-based spatial data mining has been receiving attention. (Note that our work is different from works on road networks, e.g., [8], because they assume that the graph is given and that P is constrained by the road network.) Literature [12] considers spatial pattern matching: given P and a query that is a graph pattern, it finds all subsets of P that match the query. Different from our problem, spatial pattern matching requires specifying a sub-graph of P. Clearly, the graph structure of P is not known in advance, so it is not easy to specify a concrete query. Moreover, the query result size is not controllable. Literature [13] considers a top-k version of spatial pattern matching, but it still has the former drawback. Spatial maximal cliques in the spatial neighbor graph of P are considered in [34]. Since a triangle is a 3-clique, this problem is similar to ours. The authors of [34] found that finding a spatial maximal clique corresponds to finding a maximal convex polygon. Their solution is based on this observation, and they do not consider the weight of polygons. Therefore, their technique cannot be employed for finding the top-k weighted spatial triangles.
Given a set W of location-based service providers and a set U of users with locations, [27] tackled the bipartite matching between W and U . Unlike the above works that try to "mine interesting sub-graphs", this problem focuses on "building a graph". Recently, [30] designed a system that builds spatial proximity graphs (e.g., a k-NN graph and Delaunay graph) from a given set P of points for multicore processors. It also supports other operations, such as clustering and computing minimum spanning trees on spatial proximity graphs. However, retrieving the top-k weighted spatial triangles is not supported, and we are the first to study this problem in spatial databases.
Spatial Data Analysis. Because spatial point analysis is well known to be important, much effort has been made to develop query processing techniques, machine-learning models [23], [29], and systems [32], [33]. We review some examples of analytical techniques below.
The problem of maximizing range sum queries was addressed in [11]. Given a rectangle, this problem finds the location of the rectangle that maximizes the weight of points enclosed by the rectangle. A streaming version of this problem was also considered in [2], [3]. Such location selection problems have been extensively studied, e.g., in [15]. The interaction between spatial points was addressed in [4]. Some works considered spatial data visualization. In [16], to achieve interactive visualization of spatial points, the authors proposed an efficient algorithm that incrementally updates the visualization result from the previous one. Moreover, [7] proposed an efficient bounding technique for kernel density visualization.
Triangle Enumeration/Counting. Because the problem of triangle enumeration/counting is one of the classic problems in graph databases, many works tackled it. State-of-the-art algorithms for static and dynamic graphs can be found in [1], [18], [24], [25], [31]. Unfortunately, existing works for graph databases generally assume unweighted graphs and do not consider any ranking of triangles.
Similarly to our problem, DHL addressed the problem of retrieving the top-k weighted triangles in graph databases. It was originally proposed for weighted graphs, so it can deal with our problem by building the spatial neighbor graph of P. (DHL originally retrieves the k triangles with the maximum weight, but it is straightforward to focus on triangles with the minimum weight instead.) However, because of the overhead incurred by dealing with the spatial neighbor graph, DHL is significantly outperformed by our algorithm.

TAKAHIRO HARA received the B.E., M.E., and Dr.E. degrees in Information Systems Engineering from Osaka University, Osaka, Japan, in 1995, 1997, and 2000, respectively. Currently, he is a full Professor in the Department of Multimedia Engineering, Osaka University. His research interests include distributed databases, peer-to-peer systems, mobile networks, and mobile computing systems. He is a distinguished scientist of ACM, a senior member of IEEE, and a member of three other learned societies.

| 9,620.8 | 2022-01-01T00:00:00.000 | [ "Computer Science" ] |
Computationally efficient permutation-based confidence interval estimation for tail-area FDR
Challenges of satisfying parametric assumptions in genomic settings with thousands or millions of tests have led investigators to combine powerful False Discovery Rate (FDR) approaches with computationally expensive but exact permutation testing. We describe a computationally efficient permutation-based approach that includes a tractable estimator of the proportion of true null hypotheses, the variance of the log of tail-area FDR, and a confidence interval (CI) estimator, which accounts for the number of permutations conducted and dependencies between tests. The CI estimator applies a binomial distribution and an overdispersion parameter to counts of positive tests. The approach is general with regards to the distribution of the test statistic, it performs favorably in comparison to other approaches, and reliable FDR estimates are demonstrated with as few as 10 permutations. An application of this approach to relate sleep patterns to gene expression patterns in mouse hypothalamus yielded a set of 11 transcripts associated with 24 h REM sleep [FDR = 0.15 (0.08, 0.26)]. Two of the corresponding genes, Sfrp1 and Sfrp4, are involved in wnt signaling and several others, Irf7, Ifit1, Iigp2, and Ifih1, have links to interferon signaling. These genes would have been overlooked had a typical a priori FDR threshold such as 0.05 or 0.1 been applied. The CI provides the flexibility for choosing a significance threshold based on tolerance for false discoveries and precision of the FDR estimate. That is, it frees the investigator to use a more data-driven approach to define significance, such as the minimum estimated FDR, an option that is especially useful for weak effects, often observed in studies of complex diseases.
INTRODUCTION
False Discovery Rate (FDR) methods have become a widely used multiple testing strategy that is much less conservative than family-wise error rate (FWER) methods such as the Bonferroni and Šidák corrections when multiple null hypotheses are false (Benjamini and Hochberg, 1995; Yekutieli and Benjamini, 1999; Efron and Tibshirani, 2002; Farcomeni, 2008). Storey and Tibshirani (2003; Storey, 2002) proposed an approach (denoted below as ST) in which FDR is estimated for a fixed rejection region, in contrast to the more traditional approach in which FDR is controlled, that is, the error rate is fixed and the rejection region is estimated. Their approach incorporates an estimator of the proportion of true null hypotheses, π_0, which increases power over the original Benjamini and Hochberg (1995) method when a substantial proportion of null hypotheses are false.
Permutation-based testing approaches are especially important in genomic studies because severe multiple testing conditions require parametric tests to rely exclusively on the extreme tails of the distribution, which are notoriously inaccurate models of real data. Parametric FDR methods can be implemented as nonparametric permutation-based approaches by computing empirically approximated p-values in a preliminary step (Yekutieli and Benjamini, 1999; Storey and Tibshirani, 2003; Yang and Churchill, 2007; Efron, 2010b), assuming exchangeability across tests under the null (Efron, 2007b). Ironically, it is often difficult to apply permutation approaches in the ultra-high dimensional testing settings where they would seem to be most useful, due to their intensive computational requirements. In view of this limitation, it is clearly important to address the question of the precision of the FDR estimate when just a small number of permutations have been conducted and, more generally, how precision depends on the number of permutations.
Also, the framing of FDR as an underlying quantity that can be estimated naturally leads to the question of the precision of the estimate. In the case of the ST and similar estimators, there is no explicit control of the FWER inherent in the estimate (Ge et al., 2003), and unlike a p-value, the magnitude of the estimate does not directly reflect the probability that the observed results are due to chance alone. It is therefore of paramount importance to know the precision of the FDR estimate. However, despite interest in quantifying uncertainty in the FDR estimate (Yekutieli and Benjamini, 1999; Storey, 2002; Owen, 2005; Efron, 2007b, 2010a; Schwartzman, 2008; Schwartzman and Lin, 2011), none of this work has resulted in a practical permutation-based CI estimator for FDR under large-scale testing conditions where there are dependencies between tests.
We propose a permutation-based tail-area FDR estimator that incorporates a novel tractable estimator of π_0, which is a simple function of counts of observed and permuted test outcomes. The development of a novel FDR CI estimator is then achieved by leveraging the tractability of the proposed point estimator, treating positive test counts as binomial random variables, and including a novel overdispersion parameter to account for dependencies among tests. Because the CI estimator explicitly incorporates the number of permutations conducted, indirect guidance is provided regarding whether that number is sufficient.
Evidence has been found in mice linking DNA variation to variation in 24 h REM sleep, possibly mediated by chronic differences in gene expression (Winrow et al., 2009; Millstein et al., 2011). Here we report an application of the method to identify gene expression features in the hypothalamus associated with variation in 24 h REM sleep in a segregating population of mice. Not only is FDR estimated and uncertainty quantified using the proposed approach, but a significance threshold is also selected a posteriori, in a data-driven manner.
PERMUTATION-BASED FDR POINT ESTIMATOR
Positive FDR (pFDR) is the expected proportion of tests called significant that are actually true null hypotheses, given that the number of significant tests is greater than zero. Denote by m_0 and m_1 the number of true and false null hypotheses, respectively, by S the total number of tests called significant, by F the number of rejected null hypotheses that are true (false discoveries), and by T the number of rejected null hypotheses that are false (true discoveries), so that pFDR = E[F/S | S > 0]. The goal is to estimate FDR for a fixed significance threshold; thus S, F, and T depend on that threshold. The null distribution of a test statistic can often be approximated using a permutation procedure in which the data are permuted repeatedly and a set of test statistics is generated for each replicate permuted dataset. Permuted test results are identified here with a * and a subscript; e.g., S*_i denotes the count of positive tests for the i-th of B permuted datasets. By design there are no false null hypotheses for tests of permuted data; consequently, every positive test in a permuted dataset is a false discovery, F*_i = S*_i. The principal assumption underlying most permutation testing approaches is exchangeability of observations under the null hypothesis, implying that the expected proportion of positive tests among the m_0 true null hypotheses in the observed data equals the expected proportion of positive tests in a permuted dataset, E[F]/m_0 = E[S*_i]/m. By the properties of Table 1 we can express the expected proportion of observed false positives among true null hypotheses as

    E[F]/m_0 = (E[S] − m_1 + (m_1 − E[T]))/(m − m_1),    (4)

which introduces the term (m_1 − E[T]), corresponding to the lower right cell of Table 1, the number of false null hypotheses called not significant. To facilitate the construction of a tractable estimator, we use the approximation m_1 − E[T] = 0. Below, we show in simulated data, and provide additional arguments, that this approach yields a conservative estimator relative to the ST approach yet an anti-conservative one relative to Benjamini and Hochberg (1995); moreover, when m_0/m is close to one, the bias is extremely small. Rearranging Equation 4 together with the exchangeability relation yields an expression for E[T] (Equation 5). In results from permuted data, by design, m*_1 = 0 ⇒ T*_i = 0, m*_0 = m, and F*_i = S*_i. Thus we can also express the expected number of false null hypotheses called significant (Equation 6), which agrees with Storey and Tibshirani (2003) when m is large, where the right-hand expression has been described as the "marginal" FDR (mFDR; Tsai et al., 2003; Storey et al., 2007). Using the mFDR expression and the facts above, and writing S̄* = (1/B) Σ_{i=1}^{B} S*_i for the mean count of positive tests in the permuted data, we derive the point estimator

    FDR̂ = (S̄*/S) · (m − S)/(m − S̄*).    (7)

Equation 7 can be related to the framework described by Storey and Tibshirani (2003) for a permutation-based FDR estimator. Their approach was chiefly described for a set of test results in the form of p-values, but they also proposed a permutation testing implementation that involved empirically adjusting the p-values using results from the permuted data prior to application of the proposed method. Rewriting their expression in terms of observed and permuted test results gives FDR̂ = π̂_0 S̄*/S, where π̂_0 is the estimator of the proportion of true null hypotheses, m_0/m. Equation 7 can be related to this framework by describing the factor on the far right as an estimator of the proportion of true null hypotheses, that is,

    π̂_0 = (m − S)/(m − S̄*).    (8)

A relation can also be described between the estimator in Equation 8 and the π_0 estimator proposed by Storey (2002),

    π̂_0(λ) = #{p_i > λ} / (m(1 − λ)),    (9)

where p_i is a p-value for the i-th test and λ is a tuning parameter often chosen by a smoothing algorithm (Storey and Tibshirani, 2003). A similar formula and heuristic parameter for determining π̂_0 were also proposed by Efron (2010b).
The expressions in Equations 8 and 9 are equivalent if λ, bounded by 0 and 1, is fixed at the empirically adjusted p-value significance threshold. An important advantage of fixing λ in this way is that the assumption of a uniform p-value distribution under the global null is not required, unlike the ST approach. Storey (2004) showed for the estimator in Equation 9 that, when p-values corresponding to true null hypotheses are uniformly distributed, E[π̂_0] ≥ π_0 and E[FDR̂] ≥ FDR, a potentially conservative bias. The bias occurs if there are false null hypotheses with p-values greater than λ, and it tends to increase as λ decreases, though the variance of π̂_0 decreases as λ decreases (Storey, 2004). Efron (2010b) proposed the equivalent of fixing λ = 0.5. The ST smoothing algorithm also results in a choice of λ substantially greater than the significance threshold; therefore the π̂_0, and consequently the FDR̂, proposed here are more conservative yet have smaller variance than those proposed by Storey and Tibshirani (2003). However, the FDR estimator proposed here is less conservative than the Benjamini and Hochberg (1995) approach, which implicitly assumes π̂_0 = 1 (Storey and Tibshirani, 2003). We show in Appendix A that the proposed estimator, π̂_0, is consistent in n and m.
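Under the reading of Equations 7 and 8 given above, the point estimator can be computed directly from positive-test counts; the sketch below is our illustration, not the fdrci implementation, and it assumes S > 0 and S̄* < m.

```python
import numpy as np

def fdr_estimate(S, S_perm, m):
    """S: observed count of positive tests at the chosen threshold.
    S_perm: positive-test counts from the B permuted datasets.
    m: total number of tests.  Returns (FDR_hat, pi0_hat)."""
    S_perm = np.asarray(S_perm, dtype=float)
    S_bar = S_perm.mean()                   # mean false-positive count under the null
    pi0_hat = (m - S) / (m - S_bar)         # Equation 8 (lambda fixed at the threshold)
    fdr_hat = pi0_hat * S_bar / S           # Equation 7: (S_bar / S) * (m - S) / (m - S_bar)
    return fdr_hat, pi0_hat
```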
FDR CONFIDENCE INTERVAL ESTIMATOR
The variance of FDR̂ depends not only on its magnitude but also on other factors such as the number of positive tests. Unlike a p-value, the magnitude of FDR̂ does not necessarily correspond closely to the likelihood that an observed result, i.e., an observation of FDR̂ that is less than one, is due to chance alone, and the CI estimate can be informative in this way. The FDR CI estimator is especially useful when there is substantial uncertainty in the precision of the point estimate. For instance, suppose hypothetically that a specific high-throughput experiment yielded a minimum FDR̂ = 0.5, corresponding to a set of 100 potential gene targets. It is possible that the observed value is due to chance alone (no false null hypotheses); however, if it is known that the FDR estimate is reasonably precise and follow-up validation experiments are not prohibitively expensive, then despite the high FDR these results could be quite valuable, implying that roughly 50 of the 100 tests are true discoveries (false null hypotheses). The CI estimator could be used to distinguish between the two scenarios, potentially salvaging useful results from a study that might otherwise be dismissed as not significant. That is, an investigator may occasionally be willing to tolerate a relatively large proportion of false discoveries if the estimated proportion of true discoveries is known to be reasonably precise. The closed-form structure of FDR̂ (Equation 7) permits the development of a CI estimator by treating positive test counts as binomial random variables (Appendix B) and applying the delta method after a log transformation (Appendix C). The resulting estimator has the simple form

    σ̂²[log(FDR̂)] ≈ (1/B)[1/S̄* + 1/(m − S̄*)] + 1/S + 1/(m − S)
                  = 1/Σ_{i=1}^{B} S*_i + 1/Σ_{i=1}^{B}(m − S*_i) + 1/S + 1/(m − S).    (10)

The expression for FDR̂ in Equation 7 can be recognized as having the simple form of an odds ratio between the observed and permuted test results (Appendix C), and the second form of the expression for the variance in Equation 10 can likewise be recognized as analogous to the well-known variance estimator for the log odds ratio (Woolf, 1955). Interestingly, under conditions that will often hold in large-scale testing paradigms, namely a small number of positive tests relative to the total number of tests, expression 10 simplifies to

    σ̂²[log(FDR̂)] ≈ 1/Σ_{i=1}^{B} S*_i + 1/S.    (11)

Though we recommend using expression 10 for practical applications, expression 11 provides some useful insight. By increasing the number of permutations, the contribution from the term on the left can be reduced; however, if it is already small relative to the term on the right, then the benefit of additional permutations will be minimal. It also becomes clear that when the total number of tests conducted is large relative to the number of positive tests, the variance of FDR̂ is almost strictly a function of positive test counts and does not depend on the total number of tests conducted.
A confidence interval (CI) estimator for FDR can be developed in a manner analogous to the approach commonly used for the odds ratio, that is, an exponential back-transform with a normal approximation,

    (FDR̂_L, FDR̂_U) = exp{ log(FDR̂) ∓ z_{1−α/2} σ̂[log(FDR̂)] }.    (12)

It is important to note that the variance, and thus the CI, is undefined when the number of positive test results in the permuted data is zero. When this occurs we take the conservative approach of setting this number to one for estimation of the CI. The development of the variance estimator relies on the assumption that the positive test counts follow a binomial distribution; thus, tests are assumed to be i.i.d. Bernoulli variables. This assumption has two parts: (1) the tests are independent and (2) they are identically distributed, that is, the probability of a positive result is the same for all tests.
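A sketch of the CI computation corresponding to Equations 10-12 as reconstructed above, with an optional dispersion factor anticipating the next section; the names and the exact handling of edge cases are our own choices rather than the published implementation.

```python
import numpy as np
from scipy.stats import norm

def fdr_confidence_interval(S, S_perm, m, level=0.95, dispersion=1.0):
    """Return (FDR_hat, (lower, upper)) from observed and permuted positive counts.
    The permuted positive total is floored at one so the variance is defined,
    mirroring the conservative fix described in the text."""
    S_perm = np.asarray(S_perm, dtype=float)
    B = len(S_perm)
    total_perm_pos = max(S_perm.sum(), 1.0)
    S_bar = total_perm_pos / B
    fdr_hat = (S_bar / S) * (m - S) / (m - S_bar)                  # Equation 7
    var_log = dispersion * (1.0 / total_perm_pos                   # Equation 10 (Woolf-like form),
                            + 1.0 / (B * m - total_perm_pos)       # scaled by the dispersion factor
                            + 1.0 / S + 1.0 / (m - S))
    half_width = norm.ppf(0.5 + level / 2.0) * np.sqrt(var_log)
    return fdr_hat, (fdr_hat * np.exp(-half_width), fdr_hat * np.exp(half_width))
```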
The second property can be described as exchangeability across tests in the sense that each test is assumed to yield a positive outcome with the same probability p. In theorem 1 of Appendix B, "variance inequality of a binomial sum," we show that a cryptic binomial mixture may cause an upward but not a downward bias in the variance estimate, implying that a departure from exchangeability across tests could cause the variance estimator to be more conservative but not more anti-conservative. We also found in simulations that the binomial variance estimator is highly robust to departures, and that in extreme cases where substantial departures do occur, the estimator does indeed become more conservative (data not shown).
On the other hand, the independence assumption (1) does present a major concern and is addressed here by modifying the variance estimator with an over-dispersion parameter to account for dependencies. This parameter can be estimated directly from counts of positive tests and thus does not require an additional analysis of the raw data or even the full set of test results. In contrast, Efron (2007a, 2010a) proposed a correction based on an estimator of the root mean squared correlation in an underlying dataset. However, that approach requires that dependencies among tests be represented by pairwise correlations between variables in a dataset, which is often not the case, e.g., in eQTL analysis. Also, an additional analysis must be conducted using the primary data. Our approach is more general, does not require revisiting the primary data, and is more efficient in terms of data storage requirements because it uses positive test counts only.
OVER-DISPERSION ESTIMATOR
In practice, most genomic datasets include dependencies between features that ultimately result in dependencies between tests, although the correspondence can be quite complex. For typical hypothesis tests that evaluate associations between molecular and phenotypic traits, positive or negative correlations between traits lead to positive correlations between tests causing overdispersion in the variance of positive test counts (Edwards, 1960), which in turn causes over-dispersion in the variance of FDR. We introduce an over-dispersion parameter to account for these dependencies.
The over-dispersion parameter is used to scale the variance estimate for log(FDR̂) and is not needed (it is fixed at 1) if tests are known to be independent. Replicate positive test counts in the permuted data provide a convenient opportunity to assess dependence-induced over-dispersion without the necessity of revisiting the raw data or of additional computationally expensive resampling procedures such as those proposed by Storey (2002) for FDR CI estimation. Each term in the expression for the variance of log(FDR̂) includes a component factor that is a variance estimate for positive test counts (Appendix B); thus an estimate of the over-dispersion of positive test counts can be used as a scalar multiplier for the variance of log(FDR̂). The concept is to use the permuted datasets to construct the ratio of the sample variance of the positive test counts to the variance estimated from their sample mean,

    φ̂ = s²_{S*} / [ S̄*(1 − S̄*/m) ],

and to scale the variance accordingly,

    σ̂²_a[log(FDR̂)] = φ̂ σ̂²[log(FDR̂)],    (13)

where "a" indicates adjustment for dependencies.
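A sketch of the over-dispersion ratio described above, computed only from the permuted positive-test counts; it assumes at least two permutations and a nonzero mean count, and the returned value would be passed as the dispersion factor in the CI sketch above.

```python
import numpy as np

def overdispersion(S_perm, m):
    """Ratio of the sample variance of the permuted positive-test counts to the
    binomial variance implied by their sample mean; values above one indicate
    dependence-induced over-dispersion (set to 1 if tests are known independent)."""
    S_perm = np.asarray(S_perm, dtype=float)
    sample_var = S_perm.var(ddof=1)          # requires B >= 2 permutations
    p_hat = S_perm.mean() / m
    binom_var = m * p_hat * (1.0 - p_hat)    # variance implied by the sample mean
    return sample_var / binom_var
```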
DATA ANALYSIS

BIAS AND VARIANCE OF THE PROPOSED POINT ESTIMATOR
We compared the proposed estimator with the ST and Efron (2010a) approaches to characterize differences in bias and variance over a range of conditions. Case-control data were simulated with dependencies by fixing the root mean squared correlation at three levels according to the R function "simz" (Efron, 2010b). Z-scores were simulated for 100 cases and 100 controls at 2000 "genes", with false null hypotheses created by adding a constant to the case observations, as described by Efron (2010b). The constant was fixed at 0.15 and 0.3 to reflect weak vs. strong effects, which yield differing numbers of false null hypotheses with test statistics below the detection threshold, m_1 − T > 0. P-values were generated using t-tests and, for the ST and Efron (BE) estimators, were adjusted using 10 or 100 permuted datasets.
As expected, all methods were conservatively biased in all scenarios across a range of significance thresholds (Figure 1). Also, results were very similar overall between 10 and 100 permutations (B), implying that under these conditions little improvement is achieved by the order-of-magnitude increase in B. This result is consistent with Equation 11, which shows a small contribution to the variance from permutations when the number of positive tests in permuted data is substantial. When the effects were weak (constant = 0.15), the ST estimator was more conservatively biased than the others between approximately FDR = 0.1-0.2, and this divergence increased with the increased number of permutations (Figure 1). The variance of ST was also greater over this range. However, it was less biased than the proposed (JM) and BE estimators above this range while maintaining a similar variance. JM and BE performed similarly under these conditions, with neither outperforming the other in bias or variance across the entire range.

FIGURE 1 | Performance of the proposed FDR point estimator (JM; implemented in the "fdrci" R package) as compared to the Storey and Tibshirani approach (ST), as implemented in the "q-value" R package, and the Efron approach (BE), as implemented in the "locfdr" R package. Each plot was based on 200 replicate datasets independently simulated under identical conditions using the simz software (Efron, 2010a,b), where dependencies are determined by fixing the root mean squared correlation, denoted by α, of the raw data to 0.05. From each dataset, 2000 t-tests of 100 "cases" and 100 "controls" were generated, where false null hypotheses were defined by adding a constant to the raw simulated z-scores of "cases," as described by Efron (2010b), and π_0 = 0.75. Data were simulated with 40 blocks of correlated z-scores according to α. Case-control labels were randomly permuted 10 or 100 times (B) for each scenario. Differing values of "true FDR" reflected a series of increasing significance thresholds. True FDR was computed from the simulated data as mean F/S. Bias was computed as the mean FDR̂ minus the true FDR.
In contrast, when the effects were stronger (constant = 0.30), ST was less biased than the others across the entire range, but its variance was greater over most of the range. This bias-variance tradeoff is also apparent in the difference between the JM and BE estimators, with JM substantially less biased over the approximate range FDR > 0.1 but with greater variance. From FDR = 0-0.1, JM and BE performed quite similarly, but ST bias was smaller and the variance was comparable.
PERFORMANCE OF FDR VARIANCE AND CI ESTIMATORS
We compared our proposed variance estimator for log(FDR) to the estimator proposed by Efron (2010a) both under independence between tests and when dependencies were present (Figure 2). Simulations were performed as described above except that 4000 "genes" were tested for each replicate, 400 of which corresponded to false null hypotheses, with constant = 0.3.
From Figure 2 it is clear that when tests were independent (α = 0), estimates from both estimators were close to observed values for both 10 and 100 permutations. However, when dependencies were simulated (α = 0.1), both methods were conservatively biased over most of the range. Below FDR ≈ 0.3 the JM estimator was more conservative than the BE, and above 0.3 it was less conservative. The BE estimator was anti-conservative for FDR < 0.07 when 10 permutations were conducted, but not when the number of permutations was increased to 100.
Using the BE variance estimator, we constructed CIs as proposed in Equation 12 to compare this approach to the proposed JM CI estimator. The JM 95 percent CI estimator outperformed the BE estimator in both the independent and dependent testing scenarios (Figure 2). The poor coverage of the BE estimator under independence is mostly due to upward bias that results in the lower bound exceeding the true FDR. Coverage of the JM estimator is slightly below the 95 percent target for the same reason, an upward bias. It is important to note that exact coverage is not as important when the CI width is small, as is the case in the independent scenario. The coverage problem for the BE estimator is not as severe in the dependent testing scenario; however, it is still well below 95% and the mean CI width is substantially larger than that of the proposed estimator over most of the range. The coverage of the JM CI estimator is better than that of the BE estimator in the dependent scenario as well, meeting or exceeding 95% over most of the domain even though the mean JM width tends to be smaller.
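For reference, a log-scale interval of the general form used throughout can be computed as sketched below, assuming an estimate of Var(log FDR) is already available (the estimator itself is defined in Appendix C). The function name is hypothetical, and capping the upper bound at 1 is a natural but added choice.

```python
import math
from scipy.stats import norm

def fdr_ci(fdr_hat, var_log_fdr, level=0.95):
    """CI for FDR formed on the log scale and back-transformed to the FDR scale."""
    z = norm.ppf(1 - (1 - level) / 2)          # e.g., 1.96 for a 95% interval
    half = z * math.sqrt(var_log_fdr)
    return fdr_hat * math.exp(-half), min(1.0, fdr_hat * math.exp(half))

lower, upper = fdr_ci(0.15, 0.08)              # illustrative numbers only
```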
To explore the performance of the methods under a different set of realistic genomic testing conditions, SNPs and Gaussian traits were simulated with dependencies and then tested for associations using linear additive models. The HAPSIM (Montana, 2005) R package was used to randomly generate haplotypes corresponding to specified ranges of LD, from which the SNP data were constructed. Allele frequencies were sampled from a uniform (0.2, 0.5) distribution. Data were simulated under two different proportions of false null hypotheses, each employing 10 and 100 permutations (Figure 3). For each of these four scenarios, CIs were computed using the JM and BE variance estimators under a range of significance thresholds. This study scenario presented a challenge for the BE approach because two datasets were used for testing (SNPs and Gaussian traits), both with dependencies. In contrast, the guidance given by Efron (2010a,b) dealt with just a single underlying dataset of correlated variables, yielding a one-to-one mapping from variables to tests. In lieu of a formal method to compute an overall alpha (root mean squared correlation) for the multiple-dataset scenario (required by the BE method to adjust for dependencies), we used the mean alpha across datasets. In contrast, no alteration of the JM approach was necessary, since the over-dispersion parameter is computed strictly from positive test counts.
Biases of the point estimators were small, and the JM estimator was slightly conservative where the bias was noticeable, as expected (Figure 3). Coverages of the CI estimators were generally conservative as well, hence the proposed over-dispersion parameter demonstrated an adequate ability to correct for dependencies. However, mean widths of the BE CIs were extremely wide compared to the JM widths, implying that the heuristic approach of taking the mean alpha across datasets was not adequate. This problem highlights the sensitivity of the BE variance estimator to the type of data and tests conducted, due to the computation of alpha, and in this case an appropriate method has not yet been described.

FIGURE 3 | Performance of the JM (black) and BE (gray) 95% CI estimators in the presence of dependent tests. Each plot represents 200 replicate datasets independently simulated under identical conditions. The true FDR ranged along the x-axis due to applying a variety of significance thresholds. Each dataset corresponded to 5050 tests. The number of false null hypotheses (m1) was fixed at either 15 or 50. The thin solid black line along the diagonal represents unbiasedness and the thicker solid lines denote FDR point estimates. Means for upper and lower 95 percent confidence bounds are shown as dotted lines. The target confidence interval coverage of 0.95 is displayed as a solid horizontal line at 0.95 and actual coverage by dashed lines. SNPs were generated in "LD blocks" with 5 SNPs per block and composite LD ranging from 0.4 to 0.9 within each block, and traits were generated in "modules" of correlated traits with 5 traits per module and correlations ranging from 0.4 to 0.9 within each module. Twenty LD blocks and 10 gene modules were included in each replicate dataset.
There was one small region where coverage of the JM CI was slightly low. The low coverage occurred where FDR was small, the number of false null hypotheses was small (15), and the number of permutations was 100 (bottom left panel of Figure 3). The somewhat low coverage in this region can be explained by the conservative bias of the point estimator combined with small CI widths, thus it is unlikely to be a problem in practice. When the number of false null hypotheses was increased to 50, coverage was more conservative and no longer low over this region. In general, increasing the number of false null hypotheses had a substantial decreasing effect on CI widths, as implied by Equation 11, but the effect of increasing the number of permutations from 10 to 100 was very modest. It is important that FDR CI coverage is good in the case where all null hypotheses are true, and we found that coverage of the JM estimator was conservative under these conditions (data not shown).
MOUSE GENE EXPRESSION IN HYPOTHALAMUS IS PREDICTIVE OF REM SLEEP
We investigated the relationship between rapid eye movement (REM) sleep and transcriptome-wide gene expression variation in male mice from a genetically segregating back-cross population of inbred mouse lines, C57BL/6J and BALB/cByJ, with both the breeding scheme and sleep measures described previously (Winrow et al., 2009). These datasets were downloaded from a public database hosted by Sage Bionetworks (www.synapse.org; dataset IDs for the sleep phenotypes and hypothalamus gene expression were syn113322 and syn113318, respectively). One hundred and one mice were hand scored for sleep at 11-13 weeks of age using electroencephalogram (EEG) and electromyogram (EMG) data collected over a 48 h period (Winrow et al., 2009; Brunner et al., 2011; Millstein et al., 2011; Fitzpatrick et al., 2012). Hypothalamus tissue was collected from each mouse and profiled following sleep recording to identify chronic gene expression variation associated with variation in 24 h REM sleep. After an extensive quality control process applied to the gene expression data, which included removal of probes containing SNPs and probes that were not considered to be poly-A reliable, a total of 17,404 probes remained for analysis.
For all 17,404 probes, F-tests of coefficients from linear models were used to test for associations between gene expression and mean 24 h REM sleep across the 48 h recording period, where both gene expression and REM sleep duration were coded as continuous variables with a single observation per animal. None of the resulting p-values achieved a typical Bonferroni significance level for family-wide α = 0.05 (p < 2.87e-6) or even a BH FDR = 0.05 significance level. There is very little guidance in the literature regarding what to do when this happens: publish a negative finding? The problem here is that although there may be some evidence in the data of a true biological signal, that signal may be too weak to achieve a Bonferroni or BH 0.05 significance level. However, using the proposed FDR CIs, the investigator is able to relax the significance threshold if necessary to capture and quantify evidence for relatively weak biological signals. Figure 4 shows FDR generated according to the proposed method plotted with CIs based on 1000 permutations over a range of potential p-value significance thresholds. Each permuted dataset was created by randomly permuting the individual labels corresponding to expression data. This approach preserves observed dependencies between transcripts. Ultimately, an investigator often chooses a single "significance" threshold (typically a Bonferroni adjusted 0.05 alpha level) and reports those findings that meet the criterion, considering these to be "discoveries" that are worth further investigation. Unlike FWER control, where a universal threshold such as 0.05 can function as a single interpretable criterion to define significant features and quantify uncertainties, applying an FDR estimation approach may yield a range of thresholds over which FDR is significantly less than one but the number of discoveries and the magnitude of FDR varies. There is a trade-off between the number of true discoveries and the FDR, and the final choice should reflect the objectives of the study and the costs vs. benefits of false vs. true discoveries. In these results, a minimum FDR and minimum upper confidence limit coincided approximately to define a natural threshold at p < 0.0001 [FDR = 0.15 (0.08, 0.26)], yielding 11 transcripts. At this FDR level we would expect roughly 2 of the 11 to be false discoveries. Using this threshold, the BH method also determines FDR to be 0.15, suggesting that the parametric assumptions of the test are likely to be justified in this application. It is interesting to note that a consequence of choosing a minimum FDR is that among tests that achieve the chosen significance threshold, there is no evidence that smaller p-values are more likely to be true discoveries. In view of the small differences in FDR demonstrated above between 10 and 100 permutations, we did not believe that additional permutations would substantially improve our estimate or affect our ultimate choice of a significance threshold.

FIGURE 4 | Estimated FDR and 95% CI for a series of significance thresholds applied to 17,404 tests of association between gene expression features and 24 h REM sleep. A final set of "significant" genes was identified using a threshold, shown as a vertical black dashed line, that corresponded to the minimum FDR and minimum upper confidence limit. Numbers in the field denote counts of positive tests at the specified p-value significance threshold.
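A post-hoc threshold choice of the kind described above (smallest estimated FDR, with the upper confidence limit used to break ties or confirm the choice) can be sketched as follows. The grid of thresholds, FDR estimates, and upper limits is assumed to have been computed already, and the numbers shown are illustrative only.

```python
def choose_threshold(results):
    """results: list of (p_threshold, fdr_hat, fdr_upper_ci) tuples.
    Returns the entry with the smallest estimated FDR, breaking ties by the
    smallest upper confidence limit."""
    return min(results, key=lambda r: (r[1], r[2]))

grid = [(1e-3, 0.32, 0.55), (1e-4, 0.15, 0.26), (1e-5, 0.18, 0.40)]  # illustrative values
p_star, fdr_star, upper_star = choose_threshold(grid)                # picks p < 1e-4 here
```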
Though the 11 identified transcripts (Supplementary Table S1) do not include genes well-known to regulate sleep, what is known about these genes does include some plausible links. For example, the two genes with the smallest p-values are secreted Frizzled-related proteins, Sfrp1 and Sfrp4 (p = 1.1e-5 and 3.1e-5, respectively), known to be involved in wnt signaling (Bovolenta et al., 2008) as well as dopamine neuron development (Kele et al., 2012). Wnt signaling has been linked to pathologies, mood and mental disorders, as well as neurodegenerative disease (Oliva et al., 2013), all of which commonly include sleep indications as comorbidities. Also, Irf7 and Ifit1 are involved in interferon signaling, a process found to affect both REM and non-REM sleep (Bohnet et al., 2004). Iigp2, a member of the p47 GTPase family, may also play a role in interferon signaling (Miyairi et al., 2007).
Interferon induced with helicase C domain 1 (Ifih1) is upregulated in response to beta-interferon, and genetic variation in this gene has been found to be associated with type 1 diabetes (Winkler et al., 2011), which includes sleep disturbances as part of the long-term syndrome (Van Dijk et al., 2011).
DISCUSSION
The proposed method provides an accessible and computationally efficient approach for FDR CI estimation that accounts for dependencies among tests and the number of permutations conducted. Thus, it can easily be applied to genomic data, where dependencies are pervasive and the number of permutations is often limited by computational resources. The method presents a major advance in addressing the oft-asked question, "how many permutations are required?" Even if a small number of permutations have been conducted, the investigator can be confident that this source of variance is reflected in the CI estimation, thereby adequately quantifying uncertainty in the FDR. The ability to apply this approach using only counts of tests that meet some threshold of interest is an important advantage that allows the method to be easily applied in very high dimensional testing settings such as trans eQTL, where storage of all test results or an additional analysis of raw data would be a computational burden. Also, the approach can be applied directly to statistics with uncharacterized distributions, bypassing the need for p-values entirely. Thus, there is no assumption of uniform or unbiased p-values. The main assumption is that permuted results accurately reflect the null.
The appropriateness of parametric distributions becomes a much more challenging issue in large-scale inference settings because the investigator is forced to work in the extreme tails to adjust for multiplicity. This problem is sometimes addressed by severe transformations such as quantile normalization (Becker et al., 2012), which can cause a loss in power due to a loss of information. The use of permutations in the proposed approach provides a flexible as well as powerful multiple-testing approach, which does not require loss-of-information transformations. Also, without permutations, it would be necessary to go back to raw data to account for dependencies in the quantification of FDR uncertainty. Thus, the method is useful even when all parametric assumptions are completely justified.
Simulation analysis demonstrated that variance of FDR estimators increased when there were dependencies between tests, in agreement with Schwartzman and Lin (2011). However, the proposed over-dispersion parameter adequately adjusted the CI under the conditions explored to account for this inflation. We showed both theoretically and via simulations that variance of the proposed FDR point estimator was more sensitive to the numbers of positive tests than the numbers of permutations. Indeed, there was little change in variance from 10 to 100 permutations. The proposed point estimator performed well, showing moderate and stable characteristics with regard to the bias-variance tradeoff, out-performing the BE method in bias and the ST method in variance.
Both the proposed and BE estimators for log(FDR) performed well when tests were independent but conservatively when dependencies were present (the anti-conservative behavior of the BE estimator was not present when permutations were increased to 100). Coverage of the proposed CI was mostly conservative, and it almost uniformly out-performed the CI constructed from the BE estimators.
We showed that the precision of the proposed point estimator depends primarily on the number of positive tests (and dependencies among tests), which is not directly related to the magnitude of FDR. The ability to estimate a CI for FDR allows the investigator to identify sets of positive tests that are highly enriched for true positives yet are characterized by what would often be considered unreasonably high FDR, such as 0.2 and above. Undoubtedly, there are many such datasets with true biological signals that have gone unpublished due to an inability to achieve statistical significance with conventional FWER or FDR thresholds. Conversely, results may have been published that were not justified by the strength of the evidence. The proposed CI estimator thus allows decoupling of "statistical significance" from the magnitude of the FDR estimate. However, caution should be used in treating the CI as a hypothesis test for determining whether FDR is statistically significantly smaller than one. When an investigator uses a post-hoc strategy for identifying the significance threshold (such as the threshold that yields the minimum FDR or minimum upper CI bound), the upper CI bound should be substantially below one to conclude that FDR is statistically significantly below one. Based on our experience in simulated data and permuted real data (data not shown), we suggest a rule of thumb: an upper bound below 0.7 where there are at least 5 positive tests at the chosen significance threshold (a smaller upper bound if there are fewer) is likely to be sufficiently conservative for most situations. However, a thorough treatment of this important question is beyond the scope of this report. We leave it to future studies to elucidate just how this criterion depends on factors such as the number of permutations, the number of positive tests, and dependencies among tests.
Not only were suggestive links found in the literature between REM sleep and gene expression for the set of 11 genes whose expression was significantly associated with 24 h REM sleep, but the signal-to-noise ratio was also quantified in the form of FDR, along with a measure of uncertainty in the estimate. From the sleep data analysis, it is clear that there is evidence of association between gene expression and REM sleep, and we are able to identify many of the genes likely to be involved. If a typical FWER approach or a BH FDR approach had been applied to these data, the investigator would have failed to reject the global null hypothesis of no association between gene expression and REM sleep. Though 11 genes may seem like a small number, it is important to remember that these associations reflect chronic differences in expression and sleep between individuals (all individuals were sacrificed at the same point in the light/dark cycle) as distinct from detecting genes that cycle with sleep state changes. Also, we set out to identify genes that explain normal sleep variation in individuals who are relatively healthy, unlike many differential expression studies that are conducted by comparing a diseased or perturbed population, e.g., sleep deprivation, to a healthy one.
The migration to non-parametric approaches in genomic analyses may be inevitable as investigators are faced with seemingly insurmountable challenges of satisfying parametric assumptions in the context of many thousands of sample distributions. In addition, the typically stringent significance thresholds used in multiple testing on a genomic scale result in the need to draw inferences based on the extreme tails of an assumed distribution, which are notoriously inaccurate. Permutation-based approaches are attractive in their flexibility and accuracy but are computationally expensive. We have described a method (with software freely available as an R package, "fdrci": http://cran.r-project.org/web/packages/fdrci/index.html) where permutations can be used to estimate FDR, including CIs, in a fully non-parametric approach that is computationally parsimonious and robust to dependencies among tests.
ACKNOWLEDGMENTS
This work was partially supported by Merck & Co. Inc and Sage Bionetworks, who are currently providing the sleep data freely to the public (https://www.synapse.org). Eric Schadt provided useful advice in discussions and review. Discussions with Eugene Chudin also provided useful insight. The study that generated the sleep data was funded in part by the Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO), award number DAAD 19-02-1-0038, as well as by Merck & Co., Inc (USA), and the animal procedures, sleep recording and scoring was conducted at the Northwestern University by the Fred W.
APPENDIX A FDR CONSISTENCY
If we assume that individual hypothesis tests are consistent, then as the sample size, n, goes to infinity, the power of each individual test goes to 1, so T approaches m1. By design, the permuted dataset should accurately represent a realization from the complete null. If this is the case, and assuming that π0 is fixed, the proportions involved in the estimator converge to their expected values. Due to binomial properties, the variances of these proportions go to zero as m goes to infinity. Therefore, as m and n go to infinity, π̂0 → π0. Thus π̂0 is a consistent estimator in m and n. Even if m does not go to infinity, the above shows that the bias in π̂0 will go to zero as n goes to infinity.
APPENDIX B VARIANCE OF S
The development of a variance estimator for log(FDR) depends on an estimator for the variance of S. We use the approximation that S is a binomial random variable, which has an obvious rationale under the global null but is more complicated under the alternative, where T > 0. In this case S can be thought of as a sum of two binomial variables, F ∼ Bin(m0, E[F]/m0) and T ∼ Bin(m1, E[T]/m1), where the sum, S = F + T, is not necessarily binomially distributed. However, the proposed binomial variance approximation will be a conservative estimator.
Variance inequality of a binomial sum
Theorem 1. Suppose Z = X + Y is the sum of two independent binomial random variables, X ∼ B(m0, p0) and Y ∼ B(m1, p1). Then the variance of Z is less than or equal to its variance under a binomial approximation, that is, Var(Z) ≤ E[Z](1 − E[Z]/(m0 + m1)).

Proof. The random variables X and Y are independent; therefore the variance of the sum is the sum of the variances,

Var(Z) = E[X](1 − E[X]/m0) + E[Y](1 − E[Y]/m1).

The claimed bound Var(Z) ≤ (E[X] + E[Y])(1 − (E[X] + E[Y])/(m0 + m1)) is therefore equivalent to

E[X]²/m0 + E[Y]²/m1 ≥ (E[X] + E[Y])²/(m0 + m1),

which, by the Cauchy-Schwarz inequality, clearly is true for all independent binomial distributions of X and Y. Though Theorem 1 was developed for the sum of two variables, it easily generalizes to k > 2.
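As a quick numerical sanity check of the inequality (not part of the original derivation), one can compare the exact variance of the sum of two independent binomials with the binomial-style bound over many randomly chosen parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    m0, m1 = rng.integers(1, 500, size=2)
    p0, p1 = rng.uniform(0, 1, size=2)
    exact = m0*p0*(1-p0) + m1*p1*(1-p1)        # Var(X) + Var(Y)
    ez = m0*p0 + m1*p1                          # E[Z]
    bound = ez * (1 - ez/(m0 + m1))             # binomial-style bound from Theorem 1
    assert exact <= bound + 1e-9                # the inequality holds in every draw
```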
APPENDIX C VARIANCE OF LOG(FDR)
The variance of the log FDR estimate can be described as the variance of the sum of two independent quantities, one a function of the observed positive test count S and the other a function of the permuted positive test counts S*; thus, due to independence between S and S*, the variance of the sum is the sum of the variances. Using the Delta method and the normal approximation to the binomial, we know that each term and the sum of terms converge to a normal distribution. It is true that S is actually a mixture distribution from true and false null hypotheses, but to the extent that this fact biases the variance, it will be a conservative bias. This follows from Theorem 1 (above) and the resulting expression from the Taylor approximations (below).

| 9,751 | 2013-08-03T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Research on Optimization Method of VR Task Scenario Resources Driven by User Cognitive Needs
Research was performed in order to improve the efficiency of a user's access to information and the interactive experience of task selection in a virtual reality (VR) system, reduce the level of a user's cognitive load, and improve the efficiency of designers in building a VR system. On the basis of user behavior cognition-system resource mapping, a task scenario resource optimization method for VR systems based on quality function deployment-convolution neural network (QFD-CNN) was proposed. Firstly, under the guidance of user behavior cognition, the characteristics of multi-channel information resources in a VR system were analyzed, and the correlation matrix of the VR system scenario resource characteristics was constructed based on the design criteria of human-computer interaction, cognition, and low-load demand. Secondly, the analytic hierarchy process (AHP)-QFD combined with an evaluation matrix was used to output the priority ranking of VR system resource characteristics. Then, a VR system task scenario cognitive load experiment was carried out on users, and the CNN input set and output set data were collected through the experiment in order to build a CNN model and predict user cognitive load and satisfaction during human-computer interaction in the VR system. Finally, combined with the task information interface of a VR system in a smart city, application research on the system resource feature optimization method under multi-channel cognition was carried out. The results show that the test coefficient CR value of the AHP-QFD model based on cognitive load is less than 0.1, and the MSE of the CNN prediction model network is 0.004247, which proves the effectiveness of this model. For the same design task in a VR system, a comparison of the scheme formed by the traditional design process with the scheme optimized by the method in this paper shows that the user has a lower cognitive load and a better task operation experience when interacting with the latter scheme, so the optimization method studied in this paper can provide a reference for the system construction of virtual reality.
Introduction
The research on user experience and cognitive load in human-computer interaction in virtual reality systems has attracted attention. In the virtual reality (VR) system task scenario, the analysis of the mapping relationship between the visual expression of multi-channel information resources and the user's hidden cognitive needs is an important part of studying the user's cognitive load and user experience. At the same time, it is very important for designers to predict a user's cognitive load and satisfaction during the system construction process [1][2][3]. In the field of human-computer interaction, in order to accurately grasp the level of user experience perception, many scholars have provided a valuable research basis for the analysis of user cognitive behavior and satisfaction.

In the cognitive theory of user behavior, scholars have studied the cognitive load of human-computer interaction from the perspectives of cognitive mechanisms and information coding. Cheng Shiwei [4] and others proposed a resource model based on distributed cognition. In the research of user cognition theory, Lu Lu [5] and other scholars proposed a multi-channel information cognition processing model. Li Yang [6] et al. designed badminton experimental scenes under VR conditions and added seven modal clues to study the influence of multiple channels on moving target selection performance and subjective feelings. Paquier Mathieu [7] et al. discussed the self-centered distance perception of users under the alternation of visual and auditory peaks of virtual objects in the distance dimension. Lei Xiao [8] et al. summarized the use of tactile clues interacting with other sensory stimuli to predict potential perceptual experiences in multi-sensory environments. Geitner Claudia [9] and others extended the research on multimodal warning performance. The above research shows that a user's information perception ability with multiple channels is greater than with a single channel, so this paper divides the information input in a VR system into three channels: visual, auditory, and tactile.

The quality function deployment (QFD) method plays a bridging role between user requirements and design elements in the research of user satisfaction. QFD is a process that dynamically converts user requirements into design, parts, and manufacturing. The analytic hierarchy process (AHP) is usually used to process the collected user requirements, matrix tools are combined to integrate various data, and the house of quality is used to form visual engineering feature proportions. Kathiravan [10] and others have improved the performance of QFD in the process of user-oriented product design. Shi, Yanlin and Qingjin Peng [11] et al. improved the feedback capability of QFD by distinguishing the needs of different passengers on a high-speed rail. Geng Xiuli [12] and others proposed a customer demand-driven module selection method for product service systems. Li Fei [13] et al. proposed a method for calculating and transforming the importance of user requirements based on double-layer correlation. In cognitive psychology, user cognitive load assessment is mainly divided into subjective assessment and physiological measurement. Lu Kun [14] and others carried out experimental measurement and mathematical modeling research on mental load for the prediction of user mental load in an aircraft cockpit display interface. Shengyuan Yan [15] et al. analyzed the cognitive psychology of users in the emergency operation procedures of nuclear power plants through the NASA task load index and eye movement experiments, and then optimized the layout of the operation interface. Emami [16] et al. optimized the operation interface through a brain-computer interface (BCI) to reduce visual interference and thus reduce the cognitive load of users. At the level of predicting user satisfaction, the use of the neural network method has attracted attention. Yan Bo [17] et al. used product usage data to establish a user perception evaluation model and predicted user perception satisfaction through a back propagation (BP) neural network. Diego-Mas Jose A [18] proposed a user experience modeling method based on neural network prediction.

To sum up, previous studies have not established a predictive feedback mechanism between resource elements and cognitive behaviors in the field of VR system resource optimization, and they lack a hierarchical analysis of the correlation between design resource elements and a user's cognitive behaviors in VR systems. Considering this, in order to coordinate information capacity and user cognition in human-computer interaction, and based on the research results and theories of previous scholars combined with the existing problems, this paper proposes a cognitive load forecasting model based on the mapping of user cognitive behavior and system design resource elements under VR system multi-perception channels. Taking a smart city as an example, a model is established to sort out explicit design resources in order to obtain implicit user needs.
Research Framework
(1) Building VR system cognitive resource space: extracting user behavior characteristics and corresponding design resource characteristics from the visual perception channel, auditory perception channel, and tactile perception channel, and then analyzing the mapping relationship between explicit coding and implicit cognition of information representation under multi-channel.
In the mapping relationship analysis, users receive feedback information from virtual reality software and hardware through physical channels, then generate cognitive behaviors through the feedback information, and then make decisions on tasks in the system. On this basis, the users' VR system cognitive resource space is built.

(2) Establishing the QFD design element feature transformation space: focusing on a user's cognitive low-load demand, AHP and QFD are used to analyze the relative importance of VR system visual resources, auditory resources, and tactile resources, and to obtain the importance ranking. Designers can refer to the ranking of the importance of each design resource aimed at the user's cognitive load demand when making design decisions, thus assisting designers to carry out efficient design.

(3) Neural network model predicting a user's cognitive load: according to the characteristics of the convolution neural network (CNN)'s nonlinear expression of variable relations, the cognitive load of users in VR system task scenarios is predicted and analyzed, thus assisting designers in building a VR system efficiently and accurately. In the neural prediction results, the system configuration scheme with the highest cognitive load value and the system configuration scheme with the lowest cognitive load value are retrieved, which can provide a scheme reference for designers.
The research framework is shown in Figure 1. The specific research content is presented in the following two to three sections.
Channel Theory of Cognitive Resources
The theoretical basis of user cognition is mainly the theory of limited resources and graphic perception, which expresses the explicit resources and implicit cognition of VR system information representation. Due to the limited capacity of the user's cognitive resources, it is necessary to reduce a user's cognitive load through multiple channels during information identification [6,8,9], thus improving the cognitive efficiency of a user's experience and task operation scenarios. Therefore, this paper selects visual channels, auditory channels, and tactile channels to study a user's cognitive behaviors and design resource characteristics. In system information reading and task operation, the computer perceives the user's behavior and converts it into encodable data. The operation process is a multi-channel perceived information input: firstly, the user receives information stimulation through multi-channel senses and stores it; then, short-term and long-term memory is called through operation perception to compile information and make decisions. Finally, the user executes corresponding actions according to the decision results to realize information output, and the user's cognitive information flow is shown in Figure 2. This paper analyzes user behavior through information multi-channel fusion to deconstruct VR system resource characteristics.
Construction of Cognitive Behavior Design Feature Model
Based on the user's cognitive psychology and VR resource characteristics in information transmission, the mapping relationship between the user's cognitive behavior and resource features can be analyzed, and a cognitive behavior-design element feature model framework can be established, as shown in Figure 3. Firstly, a low-load cognitive channel domain, a cognitive behavior feedback domain, and a design resource feature domain are established in order to obtain physical perception information, and then the importance of the user's cognitive low-load requirements is transferred to the importance of design resource feature elements.

1. Low-load cognitive channel domain: in VR task scenarios, explicit visual codes such as interface data pass through the visual perception channel, background music and voice reminders pass through the auditory perception channel, and VR handle vibration feedback and task operation pass through the tactile sensing channel. The reception of explicit knowledge in the three channels affects each other, with parallel, dependent, and enabling relationships, which reduces the cognitive resources required in a single channel dimension, thus reducing the cognitive load of users. The low-load cognitive channel domain is shown in Figure 4, wherein P represents the user's cognitive experience; V, A, and T respectively represent the visual channel, auditory channel, and tactile channel; {PV1, PV2, ..., PVN} represents the user's cognitive experience under the visual channel; {PA1, PA2, ..., PAN} represents the user's cognitive experience under the auditory channel; and {PT1, PT2, ..., PTN} represents the user's cognitive experience under the tactile channel.

2. Cognitive behavior feedback domain: the human-machine system based on task requirements uses the cognitive behavior-design resource feature network modeling method [19] to establish the user cognitive-behavior library. The user behavior selected by the VR task is modeled and regulated in time sequence, and the operation and information channels are organically combined to intuitively reflect the interrelation between the various behavior elements. The behavior elements are shown in Table 1 and the feedback domain is shown in Figure 4. The numbers represent the sequence of the user's actions during VR operation, and V, A, and T represent the visual channel, auditory channel, and tactile channel respectively. Through the decomposition of user behavior, the corresponding behavior element requirements are obtained, such as easy discovery, easy understanding, convenient regulation, etc.

3. Design resource feature domain: as shown in Figure 4, VR task scenario design resource features are deconstructed, where visual channel information is expressed as {FV1, FV2, ..., FVN} and includes schema shape, color, etc.; auditory channel information is represented as {FA1, FA2, ..., FAN} and includes background music, prompt tones, etc.; and tactile channel information is expressed as {FT1, FT2, ..., FTN} and includes the frequency and amplitude of the operating lever vibration.
Mapping Relationship between Domains of Cognitive Behavior-Design Feature Model
The mapping relationship between domains is required to transfer the importance of a user's cognitive load to the resource characteristics of each information channel. In the model, P represents the user's cognitive low-load targets, C represents the tacit knowledge characteristics of VR cognitive criteria under target constraints, and N represents the user's behavioral needs under each information perception channel, where α, β, and γ represent classes of cognitive channels. The user feedback behavior under the action of each cognitive channel is expressed by the user behavior set X. For the virtual reality research object, under the cognitive visual, auditory, and tactile channels, the user's behavioral needs under each information perception channel are expressed accordingly. If F is the explicit knowledge feature of the design resource feature under each perception channel, then the general model of ontology knowledge of a VR system selection task scenario is formally characterized based on the Backus-Naur form (BNF). Among them, the mapping relationship is an abstract expression of the relationship between cognitive channels, user behaviors, and design resource features. The specific implementation method is QFD transfer and the allocation of cognitive low-load user demand value elements. Mapping relationships include one-to-one, one-to-many, and many-to-many relationships, as well as interrelated influence relationships within a hierarchy, as shown in Figure 4. The P layer is the cognitive load layer, and the user requirements of the P layer are transferred to the C layer, the virtual reality sensory criterion layer. The criterion layer has intra-group association influences, and the importance of the criterion layer is transferred to the user behavior requirement layer, the N layer. The user behavior is fed back into the virtual reality resources, and the virtual reality resource layer is the F layer.
Design Resource Feature Priority Calculation Model with Cognitive Low Load
This paper uses the analytic hierarchy process (AHP) to analyze a user's cognitive load and design resource characteristics in VR task selection scenarios. The AHP is a hierarchical weight decision analysis method which integrates expert experience and theoretical data, can realize the effective and unified combination of qualitative and quantitative aspects, and more objectively transfers the importance of user cognitive load in virtual reality. Based on the research of virtual reality cognitive theory, the specific implementation steps are as follows:

Step 1: A correlation model for calculating the importance of a user's cognitive low-load demand is established. The model consists of four levels: target P, criterion level Ci (i = 1, 2, ..., n), cognitive behavior requirement level Ni1, ..., Nin, and design feature level Fi.
Step 2: Taking the target layer P as the judgment criterion, the criterion layer correlation matrix is constructed, the criterion layer elements C1, C2 … Cn are compared with C1 in turn, and the correlation comparison matrix A of user cognitive low-load demand N11 based on VR situation is established.
The matrix A is weighted to judge the correlation between each element in the criterion layer with respect to P, and the correlation degree value aij is assigned by a 1-9 scale method. The importance judgment index is shown in Table 2. The square root method is used to calculate the maximum eigenvalue λ and eigenvector W of the judgment matrix. First, we calculate the product of each row in matrix A, Mi = ∏j aij, i = 1, 2, ..., n. Then we take the n-th root, W̄i = Mi^(1/n), and normalize, Wi = W̄i / Σk W̄k; W = (W1, ..., Wn)ᵀ is the eigenvector of the judgment matrix A, and it gives the importance degrees of the criterion layer C1, ..., Cn with P as the judgment standard. Secondly, the maximum eigenvalue is calculated, λ = (1/n) Σi (A·W)i / Wi, where (A·W)i is the i-th component of the product of the judgment matrix A and the eigenvector W. Finally, the eigenvectors are carried forward into the calculation of the correlation degrees of the lower layers.
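A compact sketch of the square-root (geometric mean) calculation just described is given below, with a hypothetical 4 x 4 judgment matrix standing in for the P-C comparisons; the random index table and numeric values are standard AHP conventions rather than figures from this paper.

```python
import numpy as np

def ahp_weights(A):
    """Square-root (geometric mean) method: row geometric means normalized to give
    the priority vector W, plus the maximum eigenvalue and consistency ratio CR."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)       # n-th root of each row product M_i
    w = w / w.sum()                           # normalized eigenvector / priority weights
    lam = np.mean((A @ w) / w)                # estimate of the maximum eigenvalue
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # random index
    cr = 0.0 if ri == 0 else (lam - n) / (n - 1) / ri
    return w, lam, cr

# Hypothetical P-C judgment matrix (immersion, visualization, fluency, pleasure)
A = [[1, 2, 3, 4],
     [1/2, 1, 2, 3],
     [1/3, 1/2, 1, 2],
     [1/4, 1/3, 1/2, 1]]
W, lam_max, CR = ahp_weights(A)               # CR < 0.1 indicates acceptable consistency
```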
Table 2. Importance judgment scale.
Scale | Meaning
1 | The two elements are of equal importance compared to each other.
3 | Compared with the two elements, the former is slightly more important than the latter.
5 | Compared with the two elements, the former is obviously more important than the latter.
7 | Compared with the two elements, the former is much more important than the latter.
9 | Compared with the two elements, the former is extremely more important than the latter.
2, 4, 6, 8 | Intermediate values of the above-mentioned adjacent judgments.

Step 3: Based on the criterion layer C, the elements {N11, N12, ..., Nin} in the cognitive behavior demand layer are compared with N11 in turn, and the matrix B of N11 correlation comparison based on the cognitive criteria is constructed.
Step 4: The same method is used to obtain the maximum eigenvalue of each judgment matrix and its corresponding eigenvector, and the eigenvectors of the cognitive behavior demand layer are assembled in turn into the matrix Wij, so that Wij represents the correlation information between the user cognitive behavior demands Nin and Njn. By comparing the link relations of the cognitive behavior requirements in turn, the unweighted relation matrix Wp is obtained in order to gain the priority weight values.
Step 5: Taking the cognitive behavior demand layer N as the judgment criterion, the correlation degree of design features {F1, F2, …, Fn} and F1 is compared in pairs in turn, and a judgment matrix of F1 correlation degree comparison based on the cognitive criterion is constructed to express the link relationship of the design features {F1, F2, …, Fn}.
According to the above analysis, the design feature importance Fb (b = 1, 2, ..., a) can be obtained through the criterion importance and the correlation degree between the user's cognitive behavior requirements and the technical features, and the design feature layer eigenvector and priority ranking can be calculated according to the above steps.
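One way to realize the layer-to-layer transfer described in these steps is a chain of matrix products, in which each relationship matrix maps the weights of one layer onto the next. The sketch below uses hypothetical placeholder matrices rather than values from the paper.

```python
import numpy as np

w_C = np.array([0.40, 0.25, 0.20, 0.15])                  # criterion-layer weights (from the P-C matrix)
R_CN = np.random.default_rng(2).uniform(0, 1, (4, 8))     # criterion -> requirement relationships (placeholder)
R_NF = np.random.default_rng(3).uniform(0, 1, (8, 9))     # requirement -> design feature relationships (placeholder)

w_N = R_CN.T @ w_C                                         # requirement-layer importance
w_N /= w_N.sum()
w_F = R_NF.T @ w_N                                         # design-feature importance
w_F /= w_F.sum()
ranking = np.argsort(-w_F)                                 # priority order of design features F1..F9
```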
The above steps consider the correlation between the user's cognitive behavior requirements and design resource features in a VR task selection scenario system that requires low cognitive load. In this paper, the importance of a user's cognitive behavior needs is transformed into the importance of specific design resource features. According to the quantified priority ranking, under the scenario demand of low cognitive load, the resource features with higher importance are considered first, and the unimportant design resource features and conflicting design resource features are considered second. In this process, through objective and accurate analysis of a user's experience and cognitive needs and a clear design direction, innovative methods are used to realize the VR system task scenarios and build a specific design direction, thus ensuring the effectiveness of the design scheme.
Forecast Model Task Flow
In the prediction model, the convolution neural network is used to predict the cognitive load value of users. CNN is a kind of artificial neural network with a highly efficient recognition ability. It adopts the methods of local connectivity and weight sharing. It obtains representations from the original data by alternately applying convolution and pooling layers, automatically extracting local features of the data and establishing feature vectors. The application of the CNN method is to enter the convolution layer first and extract the spatial information between features through the convolution + pooling method. The convolution layer convolves the overall data and extracts the spatial information through the convolution kernel. The pooling layer reduces the parameter dimension of the model and improves the training efficiency of the model. CNN's training algorithm is divided into two stages: the forward propagation stage, which takes a sample from the sample set, inputs it into the network, and then calculates the corresponding actual output; and the backward propagation stage, which calculates the difference between the real result and the expected result, and then adjusts the weight matrix according to the method of minimizing the error.
In the convolution layer, different convolution kernels are used to perform convolution operations on the input set, and the corresponding feature data are obtained through the activation function. The general mathematical expression of convolution is as follows:

x_j^l = f( Σ_{i∈Mj} x_i^(l−1) * k_ij^l + b_j^l ),

where l is the layer index, b is the bias, k^l is the weight matrix (convolution kernel), x_j^l is the layer-l output, x^(l−1) is the layer-(l−1) input, Mj is the j-th convolution region of the layer-(l−1) feature map, and f(·) is the activation function. In CNN, ReLU is usually selected as the activation function, and its mathematical expression is:

f(x) = max(0, x).

After passing through the convolution layer, the number of features will increase. If multiple convolution operations are carried out, the feature dimension will explode. In order to solve this problem, the common method is to add a pooling layer after the convolution layer. Its function is to reduce the amount of data processing while maximizing the retention of effective information. Common pooling methods include mean pooling, max pooling, and stochastic pooling. The general mathematical expression for pooling is:

x_(i+1) = f( β · down(x_i) + b ),

where xi is the input, xi+1 is the output, β is the multiplicative bias, b is the additive bias, down(·) is the pooling function, and f(·) is the activation function.
The data of the input set will yield the advanced features of the input set after the convolution and pooling operations. The fully connected layer weights these advanced features and then obtains the output through the activation function. The general mathematical expression of the fully connected layer is:

y^k = f( ω^k x^(k−1) + b^k ),

where x^(k−1) is the input of the fully connected layer, y^k is the output of the fully connected layer, ω^k is the weight coefficient, b^k is the additive bias, k is the serial number of the network layer, and f(·) is the activation function. In the fully connected layer, the Softmax activation function is often used for multi-classification prediction tasks.
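The formulas above can be exercised directly. The toy forward pass below (NumPy, with illustrative sizes and random weights that are not from the paper) applies a one-dimensional convolution, the ReLU activation, mean pooling with a window of two, and a fully connected layer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(28)                 # one input feature vector (28 values)
k = rng.standard_normal(7)                  # one convolution kernel of size 7
b = 0.1

conv = np.array([x[i:i+7] @ k + b for i in range(28 - 7 + 1)])   # valid convolution -> 22 values
act = np.maximum(conv, 0.0)                                      # ReLU: f(x) = max(0, x)
pool = act.reshape(-1, 2).mean(axis=1)                           # mean pooling, window 2 -> 11 values
W, b2 = rng.standard_normal((1, pool.size)), 0.0
y = W @ pool + b2                                                # fully connected output
```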
The logical task process of the prediction model is a process of predicting and evaluating an interactive selection through the neural network according to the relationship between VR system resource characteristics and cognitive load. The specific process is shown in Figure 5.

Step 1: Take the VR system information interface scheme feature target as input, and based on it, select the design resource features of the interface scenario and the multimodal perception channels.

Step 2: Based on AHP-QFD, take the priority ranking of design resource features as a reference to aid the design of the schemes.

Step 3: Build the virtual reality task selection scenario system.

Step 4: Use the CNN to check whether the built design scheme meets the user's cognitive low-load need and the design requirement constraints, and return to Step 2 if it does not.

Step 5: If the design constraints are not violated, the scheme is saved and implemented.
Application Case
In the information interface scenario of a VR system for a smart city, the main task requirement of users is to understand the general city layout and various index information. The user behavior is distributed across reading, listening, searching, and tactile perception behavior modules. Because the information in the smart city system interface is dense, and the users who carry out the interactive experience are mostly unfamiliar with VR system operation, they are prone to confusion and obstructions in their experience. Therefore, reducing the cognitive load of users and thereby improving their operation efficiency to help them complete tasks in a VR system is an urgent problem to be solved in VR interface task scenarios.
The ontology knowledge of the virtual reality system is as follows: the VR system in this study is modeled in Rhinoceros, and the virtual reality development environment is as follows: the experimental platform is built with an AMD 1800X CPU; the GPU is an NVIDIA® GeForce® GTX 1070; 16 GB of RAM; the operating system is Windows 10; the system development platform is Unreal Engine 4.21.1. The corresponding virtual reality equipment is HTC VIVE/HTC VIVE PRO: the VIVE head-mounted device, VIVE control handle, and VIVE locator. The construction of the cognitive system is based on user cognitive theory and user cognitive behavior analysis.
Acquisition of User Cognitive Behavior Requirements in VR Task Selection System
In this paper, the principle of the analytic hierarchy process is used to carry out the stratification. The analytic hierarchy process (AHP) treats a complex multi-objective decision-making problem as a system and decomposes the target into several levels of multiple indexes. This method obtains the priority weight of each element in each level with respect to a certain element in the previous level, and finally calculates the single ranking and the weighted total ranking of the levels. The cognitive and design features of a VR system are layered, and the AHP representation is used to divide the VR system into four levels, namely, the target layer (P), criterion layer (C), cognitive behavior requirement layer (N), and design feature layer (F).
Firstly, VR system cognitive low load is set as the target layer, and secondly, the criterion layer is set. This paper collects the key words of VR system usage cognitive criteria through literature inquiry, system construction expert interviews, VR system user interviews, and other channels, and 16 subjects were invited to determine the most suitable words for expressing the virtual reality situation. According to the number of votes, 33 criterion images were preliminarily screened, as shown in Table 3, and then the structural relations of the virtual reality scene criterion words were further excavated. The subjects were invited to carry out a semantic grouping experiment. After evaluation and scoring, the subjects used manual classification to group the words they thought had similar meanings (the number of words in each group could differ). After counting the number of times words were grouped together, Matlab was used to obtain the matrix shown in Table 4, and the obtained data were then imported into the SPSS statistical software for clustering analysis to obtain the tree diagram shown in Figure 6. In this figure, the criterion words are divided into the four groups in Table 5, with each group represented by the word closest to its center point, namely "immersion", "visualization", "fluency", and "pleasure". Then, through interviews with system building experts and users, behavioral needs are used to set the cognitive behavior requirement layer. The user cognitive behavior requirements for the VR system interaction criteria are "natural interaction operation", "real scene space", "data visualization", "matching of functional scene elements", "clear information level", "timely feedback", "visual aesthetics", and "easy mastery and learning". Finally, the design features of the visual channel, auditory channel, and tactile channel in the virtual reality system are deconstructed respectively. Table 6 shows the list of relationships established according to the design objectives and criteria, including features such as vibrating tactile sensation (F9).
Recognition of Design Feature Priority Analysis
Taking the target layer's cognitive low-load P as the judgment index, the correlation between the criterion layer {C1, C2, C3, C4} and P is analyzed to establish a judgment matrix, i.e., the correlation between immersion, visualization, fluency, pleasure, and cognitive low load, and the eigenvector of the P-C judgment matrix is calculated, as shown in Table 7. In the same way, all the elements in the cognitive behavior demand layer are compared under the criterion layer, and each element in the criterion layer corresponds layer by layer to establish a judgment matrix; these are the judgment matrices of the cognitive behavior demand layer for immersion, visualization, fluency, and pleasure, namely the N-C1, N-C2, N-C3, and N-C4 judgment matrices. The N-C1 judgment matrix and its eigenvectors are shown in Table 8. The CR values of the N-C1, N-C2, N-C3, and N-C4 judgment matrices are 0.03565, 0.09576, 0.09336, and 0.09886 respectively; the CR values are all less than 0.1, which verifies the validity of the matrices. According to Table 9, in the process of building the VR system information interface, users have higher requirements for natural interactive operation, a clear information level, the matching of situational functional elements, and easy mastery and learning in their cognitive behavior requirements. The whole element set in the design feature layer and each element in the cognitive behavior demand layer correspond layer by layer to establish judgment matrices; these are, respectively, the judgment matrices of the design feature layer with natural interactive operation, real scene space, visual expression of data, matching of functional scene elements, clear information level, real-time feedback, visual aesthetics, and easy learning and mastering. The judgment matrices established are the F-N1, F-N2, F-N3, F-N4, F-N5, F-N6, F-N7, and F-N8 judgment matrices, of which the F-N1 judgment matrix and its eigenvectors are shown in Table 10; the CR values of the other judgment matrices are 0.061452, 0.06794, 0.03961, 0.046382, 0.039876, 0.08085, 0.063423, and 0.082875 respectively. CR values less than 0.1 verify the validity of the matrices. Based on the above research and Table 11, a QFD model for the VR task selection interface design is established to build the importance of the design features, and the correlation between design features is qualitatively analyzed, as shown in Figure 7. As can be seen from Table 11, in the design of the VR system information interface task scenario, we need to first consider the layout of the information interface. Once the task selection area and the data information reading area are rationally arranged, we must then consider the setting of the prompt tone, as the perceptual setting of bimodal information fusion under audio-visual consistency will reduce the cognitive load of users and improve the correct rate of user operation. Then there is the design of the visual browsing sequence, where the focus is on the frequency of text and graphics. Next, setting the tactile vibration of the handle gives the user behavior feedback, and the contrast between the task area and the overall tone will affect the correct rate of the user's reading of information and task selection. Then, the sensory experience of the interface color and the setting of the interface transparency will be considered. A VR interface with transparency will increase the spatial authenticity of the scene through which one passes, and the setting of background music will affect the user's pleasure. Therefore, designers can refer to the importance ranking provided in Table 12 for scheme design when building the system.
Forecast Model Input Set Data Collection
The input set data were obtained by deconstructing and re-analyzing the design element features of the virtual reality interface samples with the design element analysis method. Under the same interface size and font format/size, the design items, namely the layout of the operation area, the visual browsing sequence, color, transparency, prompt tone, handle vibration, etc., are shown in Table 13. On this basis, the design elements of the virtual reality interface were extracted and the distribution of each design category was determined according to its corresponding elements. Visual channel resource features were processed by artificial intelligence (AI), and auditory channel and tactile resource features were edited in UE4, as shown in Figure 8.
Project Characteristic Graph Interpretation
Interface Layout
When sorting out the input set, because the design feature module is an explicit knowledge feature and takes categorical values, the one-hot coding method is adopted to extend the values of the discrete features into Euclidean space, with the number 0 representing irrelevant options and the number 1 representing relevant options. Taking experimental sample 1 as an example, its design features are decomposed into the configuration 21112111 and the corresponding one-hot coding vectors (01000000, 10, 1000, 100, 01, 010, 10, 10, 10, 10). The 39 VR system information interface scene samples are processed into input set information according to this one-hot coding scheme.
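A minimal sketch of the one-hot preprocessing described above is shown below; the category sizes and the selected indices for sample 1 are hypothetical and only mirror the lengths of the coding vectors quoted in the text.

```python
import numpy as np

def one_hot(index, size):
    """Return a one-hot vector with 1 at `index` (0-based) and 0 elsewhere."""
    v = np.zeros(size, dtype=int)
    v[index] = 1
    return v

# Hypothetical category sizes for the design-feature items (layout, browsing
# order, color, transparency, prompt tone, vibration, ...); the real sizes
# follow the element table (Table 13).
category_sizes = [8, 2, 4, 3, 2, 3, 2, 2, 2, 2]

# Sample 1: the selected category index for each design item (hypothetical).
sample_1 = [1, 0, 0, 0, 1, 1, 0, 0, 0, 0]

encoded = np.concatenate([one_hot(i, s) for i, s in zip(sample_1, category_sizes)])
print(encoded)   # the per-item one-hot vectors, flattened into one input row
```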
Data Acquisition of Forecast Model Output Set
The output set data are the VR system task situational cognitive load value and the task response time. For the experimental preparation, 39 VR task information interface samples were selected and processed, imported into the UE4 system for task scenario construction, and the horizontal variable parameters were set. In the formal experiment, 16 people aged between 20 and 26 years, nine male and seven female, were recruited for the cognitive load test. All the subjects had normal or corrected vision, had no defects in visual, auditory, or tactile perception, and were right-handed. Ten subjects had experience in using a VR system and six had no previous experience with VR. There were two experimental tasks: one was to read the interface data information, and the other was to click the "enter the system" selection area. The experimental scene is shown in Figure 9. We recorded the time (in seconds) at which the user clicked the task button and measured the cognitive load value using the NASA-TLX (National Aeronautics and Space Administration Task Load Index) scale. The cognitive load and response time reported for each sample are the average cognitive load value and the average response time over the 16 participants, as shown in Table 14.
Construction of CNN Prediction Model
Based on the feature analysis of the input set and output set data, the CNN model structure oriented to the VR system information interface scenario mainly comprised the following levels with the following functions: the convolution layers performed feature scanning and extraction; the pooling layer performed feature filtering; the Flatten layer flattened the data and reduced its dimensionality. The input data of the neural network for predicting the cognitive load of VR system users had 28 rows and one column. Firstly, six convolution layers were constructed. The first layer took the form of a one-dimensional convolution with 2048 convolution kernels of size seven and an output dimension of (22, 2048). The input of the second convolution layer was the output of the first layer; it had 1024 kernels of size 5 and an output dimension of (18, 1024). The third layer had 512 kernels of size 5 and an output dimension of (14, 512), the fourth layer had 256 kernels of size 5 and an output dimension of (10, 256), the fifth layer had 128 kernels of size 3 and an output dimension of (8, 128), and the sixth layer had 64 kernels of size 3 and an output dimension of (6, 64). A pooling layer with a step size of two was then set up, which averaged the data read from 1×2 windows of each 1×n feature map; this reduced the number of parameters and improved learning efficiency, and its output dimension is (3, 64). It is followed by a 192-neuron flattening layer in the eighth layer, a 128-neuron fully connected layer in the ninth layer, a 20-neuron fully connected layer in the tenth layer, and a one-neuron output layer in the eleventh layer.
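A minimal Keras sketch of this architecture is given below; the layer counts, kernel sizes, pooling behavior, and output shapes follow the text, while the activation functions, loss, and optimizer are assumptions not stated in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of the described 11-level network; see the shape comments for the
# correspondence with the dimensions quoted in the text.
inputs = keras.Input(shape=(28, 1))                     # 28 rows, 1 column
x = layers.Conv1D(2048, 7, activation="relu")(inputs)   # -> (22, 2048)
x = layers.Conv1D(1024, 5, activation="relu")(x)        # -> (18, 1024)
x = layers.Conv1D(512, 5, activation="relu")(x)         # -> (14, 512)
x = layers.Conv1D(256, 5, activation="relu")(x)         # -> (10, 256)
x = layers.Conv1D(128, 3, activation="relu")(x)         # -> (8, 128)
x = layers.Conv1D(64, 3, activation="relu")(x)          # -> (6, 64)
x = layers.AveragePooling1D(pool_size=2, strides=2)(x)  # -> (3, 64)
x = layers.Flatten()(x)                                 # -> 192
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(20, activation="relu")(x)
outputs = layers.Dense(1)(x)                            # predicted cognitive load

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```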
Validation of Model Results
Samples were selected as test sets for performance testing, and the data in the output layer were normalized and then evaluated with the mean squared error (MSE), MSE = (1/n) Σ (y_i − ŷ_i)², where y_i and ŷ_i are the measured and predicted values. If the MSE value is less than 0.01, the CNN model of the VR task selection scenario can be considered reliable. The user cognitive load test data were compared with the output layer values of the established CNN model, and the measured MSE was 0.00424. Given that the MSE value was less than 0.01, the test performance of the CNN model was shown to be good. The fitting situation is shown in Figure 10, which shows that the output cognitive load values are basically consistent with the measured cognitive load data of the test; it can therefore be concluded that the established model can correctly map the design features to the user's cognitive load under multi-channel behavior analysis. When the input design feature resource code was 000100001001001001010101010, the predicted cognitive load was the smallest, 42.74226. The corresponding design features are mainly as follows: the task selection area in the interface layout is distributed mainly in the lower right part of the interface, which makes it convenient for users to select tasks; the shape chamfering in the interface is mainly round chamfering, which gives users a soft feeling; cold and light tones are adopted for the overall tone; and the task selection area has a lightness contrast with the overall tone, which makes it easier for the user to identify the target task. At the same time, the interface is set to be transparent when the system is built, so that the surrounding environment can be seen through the interface to increase the user's immersion, and interface graphics and characters should be properly matched to increase the visual expressiveness of the interface. In the multi-channel information setting, auditory and tactile channel information should be added alongside the visual channel, such as a prompt tone, background music, and handle vibration, so that users can interact naturally with the VR system. When the input code was 00000100100010101010110011001, the predicted cognitive load was the largest, 125.55457. The design feature comparison between these two types of data provides a reference for designers. Compared with BP prediction, as shown in Figure 11, the error between the predicted and actual values of the CNN prediction model in this paper is smaller, its accuracy is slightly higher than that of the BP neural network, and its comprehensive performance is better.
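For completeness, a small sketch of the MSE acceptance check with hypothetical normalized values:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between measured and predicted normalized outputs."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Hypothetical normalized test-set values and model outputs.
measured  = np.array([0.42, 0.55, 0.61, 0.48, 0.73])
predicted = np.array([0.40, 0.57, 0.66, 0.45, 0.70])

error = mse(measured, predicted)
print(error, error < 0.01)   # the model is accepted when MSE < 0.01
```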
Comparative Analysis of Design Scheme Results
When building VR system interface scenarios, it is very important to reduce the user's cognitive load, considering that VR users who face dense information and have little experience are prone to feeling overwhelmed when performing task operations in virtual reality scenarios. The traditional design process relies mainly on the designers' subjective experience for decisions and judgments. The design method proposed in this paper enables designers to refer to the priority information of design resources under multiple modalities to assist decision-making when building a system. Secondly, a completed design scheme can be input into the neural network for prediction before it is put into practical application, yielding the user's cognitive load value; this reduces the time required to obtain feedback on users' cognitive experience of the scheme, thereby improving design efficiency and reducing design cost.
The neural network predicts the situational cognitive load of VR system task selection and extracts a correspondingly better scheme. The design process is one of cyclical improvement, as the better schemes are further refined and improved. The results of the traditional design method and the optimization design method proposed in this paper are compared to verify the effectiveness of the proposed method (see Table 15 for the comparison). For the construction of virtual reality task scenarios with the same design requirements, the resource characteristics corresponding to the traditionally designed interface scenario system were 0000001010000110001010010101, the cognitive load predicted by the CNN was 85.637, and the task response time was 1.429 s. The traditional design scheme was then brought into the model of this paper for optimization and thereby improved: the design feature library of key factors was retrieved through the QFD importance ranking for redesign. The corresponding resource design feature of the improved interface scenario system was 000000101000011001001010101010, the predicted cognitive load was 62.06667, and the task response time was 1.12 s; both values are lower than those of the design scheme before improvement. Therefore, the effectiveness of the proposed method is verified.
Traditional scheme (Table 15): the interface layout is neat and balanced, with the operation area in the upper right corner of the interface; the graphic area chamfer is a ramp chamfer; the main color is a cold tone; the contrast between the task area and the overall tone is a lightness contrast; the visual browsing order is from left to right and from top to bottom, with balanced text and illustrations; there is no interface transparency. In the blueprint setting of the hearing and tactile elements, there is a warning tone, but no background music and no tactile vibration.
Improved scheme (Table 15): the interface layout is neat and balanced, with the operation area in the middle and lower part of the interface; the graphic area chamfer is a ramp chamfer; the main color is a cold tone; the contrast between the task area and the overall tone is a lightness contrast; the visual browsing order is from left to right and from top to bottom, with balanced text and illustrations; the interface is transparent. In the blueprint setting of the hearing and tactile elements, there are warning tones, background music, and tactile vibrations.
Conclusions
In view of the large delay and lag in users' cognitive load feedback in current virtual reality systems, and considering that the design and setting of questionnaires may be affected by designers' subjective bias, which can lead to a long VR system construction process, high cost, and low user satisfaction with the design scheme, this paper introduces the mapping of VR multi-perception channels to design resource features and establishes a user cognitive load assessment and prediction model based on QFD-CNN. This enables accurate modeling of user perception and timely feedback of the cognitive load characteristics of interface scenes; the approach is demonstrated on a smart city virtual reality system.
1. The application of cognitive psychology in VR systems is expanded. In this paper, visual, auditory, and tactile perceptual information is integrated into the task scenario research of the VR interface. Guided by cognitive psychology theory, the mapping relationship between the explicit coding of the visual representation of information and the implicit cognition of users during VR task-selection operations is analyzed, and a user cognitive behavior demand model of the virtual reality system is established.
2. The design cycle is shortened and the accuracy of the design scheme is increased. AHP-QFD is used to analyze the relative importance of the design resource elements in the VR space, and key influencing factors are retrieved to assist designers in system construction. According to the user's cognitive behavior stratification and the corresponding VR system resource characteristics, the cognitive load of users during VR system interface selection is learned through the nonlinear variable-relationship modeling of a neural network, which helps meet the demand for a low-cognitive-load user experience at the prediction and design stage, thereby reducing the time cost and increasing the accuracy of the designer's scheme.
In future research, the influence of design resource features on user goal finding and task learning in a VR task context will be discussed in depth. VR resource features can be dynamically optimized according to user feedback, and the optimal interval of resource feature information received from each perception channel under the condition of low cognitive load can be determined.
Table 1. Elements of user behavior feedback in cognitive channel.
Table 6. Hierarchical list of the user's cognition of low-load demand.
Table 9. Sequence list of cognitive behavioral needs priority.
Table 11. The overall design features priority sequence table.
Table 12. Priority ranking of technical features.
Table 13. Virtual reality (VR) information interface scenario design element table.
Table 14. Experimental data set.
Table 15. Comparison table of optimization schemes.
"Computer Science"
] |
Radiative metasurface for thermal camouflage, illusion and messaging
Thanks to conductive thermal metamaterials, novel functionalities such as thermal cloaking, camouflage, and illusion have been achieved, but conductive metamaterials can only control in-plane heat conduction. Radiative thermal metamaterials can control out-of-plane thermal emission; they are more promising and applicable but have not been studied as comprehensively as their conductive counterparts. In this paper, we theoretically investigate the surface emissivity of metal/insulator/metal (MIM, i.e., Au/Ge/Au here) microstructures with the rigorous coupled-wave algorithm and utilize the excitation of magnetic polaritons to realize thermal camouflage by designing the grating width distribution that minimizes the temperature standard deviation over the whole plate. Through this strategy, the hot spot in the original temperature field is removed and a uniform temperature field is observed by the infrared camera instead, demonstrating the thermal camouflage functionality. Furthermore, thermal illusion and thermal messaging functionalities are also demonstrated using such an emissivity-structured radiative metasurface. The present MIM-based radiative metasurface may open avenues for developing novel thermal functionalities via thermal metasurfaces and metamaterials. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
In the radiative scheme, when observed by an infrared (IR) camera, an upwardly projected cross section of a target can be detected based on its preset emissivity and real temperature. According to the working principle of an IR camera, the observed temperature is based on the radiation detected from the target surface, which consists of three parts, i.e., target radiation, reflected environment radiation, and air radiation. The detected temperature-dependent radiation luminance L_λ is [27] L_λ = ε_λ L_bλ(T_o) + ρ_λ L_bλ(T_a) = ε_λ L_bλ(T_o) + (1 − ε_λ) L_bλ(T_a), where ρ_λ = 1 − α_λ for an opaque surface and α_λ = ε_λ by Kirchhoff's law; T_o and T_a are the target temperature and air temperature, ε_λ, ρ_λ, and α_λ are the surface emissivity, reflectance, and absorptivity, and L_bλ is the radiation luminance of a blackbody. The first term on the right-hand side is the surface spectral radiation luminance, and the second term is the reflected environment spectral radiation luminance. The detected surface illuminance E_λ is given in [27] in terms of A_0, the minimum viewing area of the target, d, the distance between the IR camera and the target, and τ_aλ and ε_aλ, the air spectral transmittance and emissivity. The radiation power is P_λ = A_R E_λ and the corresponding signal voltage is V_s = A_R ∫_∆λ E_λ φ_λ dλ, where A_R is the lens area of the IR camera and φ_λ is the spectral response function (SRF). The observed temperature is then interpreted into a cloud or contour figure based on the signal voltage (integrated radiation power) in a certain working wavelength range, such as 2∼5 µm or 8∼13 µm. Outside these two IR windows, IR radiation is strongly attenuated due to absorption and scattering by CO2 and H2O vapor in the air [28].
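A minimal numerical sketch of this detection model is given below; it assumes an opaque, diffuse surface (so the reflectance equals 1 − ε_λ), uses the Planck constants quoted later in the paper, and neglects the air-path transmittance and emission terms. The function and variable names are illustrative only.

```python
import numpy as np

C1, C2 = 3.743e-16, 1.4387e-2        # Planck constants as quoted later in the paper

def blackbody(lam, T):
    """Planck blackbody spectral emissive power at wavelength lam (m), temperature T (K)."""
    return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

lam = np.linspace(8e-6, 13e-6, 500)  # 8-13 um atmospheric window

# Band-integrated blackbody signals for the hot spot, the background, and the ambient.
B_hot = np.trapz(blackbody(lam, 392.1), lam)
B_bg  = np.trapz(blackbody(lam, 381.7), lam)
B_amb = np.trapz(blackbody(lam, 293.15), lam)

# Detected signal of an eps = 0.9 background: emitted term + reflected ambient term.
target = 0.90 * B_bg + 0.10 * B_amb

# Emissivity that makes the hotter spot produce the same detected signal,
# i.e. blend into the background: eps*B_hot + (1 - eps)*B_amb = target.
eps_needed = (target - B_amb) / (B_hot - B_amb)
print(eps_needed)
```

This is the arithmetic behind emissivity-engineered camouflage: a hotter area with a suitably lower emissivity can return the same in-band signal as a cooler, more emissive background.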
Understanding the working principle of an IR camera, we find that, to generate an equivalent pseudo detected temperature for thermal camouflage, one way is to keep the surface temperature close to the ambient temperature, and the other is to change the surface emissivity. Since the surface temperature is not easy to maintain, the latter approach has attracted more attention recently, and several radiative camouflage devices have been achieved [29-31]. Most of them are realized by tuning the surface emissivity, absorptivity, and reflectivity based on specific phase-change materials, surface nanostructures, or external optical, mechanical, or electrical stimuli [32-36]. Actually, surface metal/insulator/metal (MIM) grating microstructures, including 1D/2D binary gratings and photonic crystals, have been comprehensively studied for tuning the surface emissivity by employing the mechanisms of surface plasmon/phonon polaritons (SPPs) or magnetic polaritons (MPs) in a growing range of applications such as thermophotovoltaic (TPV) systems, near-field microscopy and spectroscopy, and polarization manipulation [37-49]. Compared to the conductive counterparts, the dimensions of MIM structures are usually on the µm scale, so temperature modulation based on MIM can be more flexible and of higher resolution.
Fig. 1. Schematic of the radiative metasurface for thermal camouflage. A heat source on a plate is observed by an IR camera, which interprets the surface radiation energy as a temperature field. With MIM structure engineering, a uniform temperature field is observed and the real heat source is thermally camouflaged.
Is it possible to extend the radiative MIM structures to achieve the out-of-plane counterparts of the functionalities of conductive thermal metamaterials? If feasible, we can tune the surface radiation pixel by pixel or point by point. As shown in Fig. 1, when a heat source is located on a plate with a non-uniform temperature distribution, we may still observe a uniform temperature field from an IR camera as long as the plate surface is predesigned with proper MIM structures. Examining this feasibility is the exact motivation behind this study. Note that, because MPs are insensitive to direction by nature, in stark contrast to SPPs [50], MIM structures are more suitable for thermal camouflage, illusion and messaging. In this paper, we propose a general strategy to design radiative metasurfaces with different surface emissivities by controlling the MIM grating structures via tuning of MPs, so as to realize these functionalities. The rigorous coupled-wave algorithm (RCWA) method is used to calculate the surface emissivity for varying MIM grating structures. The LC circuit model is applied for further discussion of the related mechanisms.
Methodology
The schematic of the proposed 1D MIM microstructure in a unit cell is shown in Fig. 2(a). The grating ridge is made of gold (Au) and germanium (Ge) patches with thicknesses of d_1 = 0.05 µm and d_2 = 0.2 µm. The periodic arrays of patches are deposited on an opaque gold substrate with a thickness of d_3 = 0.5 µm, so that the transmittance can be neglected. The period Λ is fixed at 3.8 µm, and the grating width w is tuned to control the surface radiation for the desired emissivity. Such MIM microstructures can be fabricated with nanoimprint, interference lithography, and electron-beam lithography techniques [42]. The RCWA program is used to calculate the wavelength-dependent reflectance R_λ and transmittance T_λ, considering a total of 101 diffraction orders. According to Kirchhoff's law, the wavelength-dependent surface emissivity is obtained as ε_λ = 1 − R_λ − T_λ. For the 1D MIM structures in this work, all diffracted waves lie in the x-z plane, so that MPs can be excited only for transverse magnetic (TM) waves. Consequently, only the thermal emission for TM waves is considered here. Note that this strategy can be easily generalized to more universal applications by constructing a 2D metasurface which supports MPs for both TM and TE waves [51].
The emissivity spectra for varied grating widths are shown in Fig. 2(b). As the grating width increases from 0.9 µm to 1.4 µm, the emissivity peak red-shifts from ∼7 µm to ∼14.8 µm, while the peak emissivity increases from 0.78 to 0.98. Furthermore, we find that the lower and upper bounds for the grating widths are 0.7 and 1.65 µm, whose emissivity peaks lie below 8 µm or well above 13 µm, so that their infrared radiation power integrated over this range can be neglected. These peaks originate from MPs supported by the MIM structure, which refer to the strong coupling of the magnetic resonance in the MIM structure with the external EM fields [51]. The MP resonance wavelength can be predicted by the inductor-capacitor (LC) circuit theory, and Fig. 2(c) gives the equivalent LC circuit of the MIM structure, in which the arrows indicate the direction of the electric currents. In the LC circuit model, L_m = 0.5 µ_0 w d_2/l denotes the parallel-plate inductance separated by the intermediate Ge layer, where µ_0 is the permeability of vacuum and l is the patch length in the y direction. C_g = ε_0 d_1 l/(Λ − w) is the capacitance of the gap between neighboring grating ridges, where ε_0 is the permittivity of vacuum. C_m = c_1 ε_Ge ε_0 w l/d_2 is the parallel-plate capacitance between the two metal layers induced by the Ge layer, where c_1 = 0.19 is a numerical factor that accounts for the non-uniform charge distribution and ε_Ge = 16 is the dielectric function of Ge [52]. Because the drifting electrons also contribute moderately to the total inductance in this MIM structure, the kinetic inductance is expressed as L_e = −w/(ω² d_eff l ε_0 ε_Au), where ω is the angular frequency, ε_Au is the real part of the dielectric function of Au obtained from the Drude model with parameters taken from Ref. [53], and d_eff is the effective thickness for electric currents in the Au patch layer, with d_eff = δ for δ < d_2 and d_eff = d_2 otherwise, where the power penetration depth δ = λ/4πκ, with λ the incident wavelength and κ the extinction coefficient of Au. The impedance of a single MIM structure, Z_single, can then be expressed in terms of these elements [39,43] (Eq. (3)). By zeroing Z_single, resonance wavelengths of λ_R = 8.38, 10.25 and 13.0 µm are obtained for w = 0.9, 1.1 and 1.4 µm, respectively, which match well with the RCWA results. According to Lenz's law, the time-varying magnetic field in the y direction drives an oscillating current, which endows the MIM structure with diamagnetism by generating a reversed magnetic field. To elucidate the underlying mechanism, Fig. 2(d) presents the magnetic field distribution at the MP resonance wavelength of 8.39 µm. The white lines denote the profile of the MIM structure. The contour shows the magnetic field intensity in the y direction, i.e., |H_y|². It can be observed that the strong magnetic field is confined in the Ge layer, which demonstrates the excitation of MPs and corresponds to the emissivity peak in Fig. 2(b).
Results and discussion
Let us now focus on the integrated radiation power (calculated from the thermal emission for TM waves) in the working wavelength range 8∼13 µm, which is directly related to the observed temperature in the IR camera and differs greatly due to the frequency shift of the emissivity peak. Initially, a certain real temperature distribution is pre-generated, as shown in Figs. 3(a) and 4(a). The temperature curve in Fig. 3(a) is the center line of Fig. 4(a). The dimensions of the Si plate are 200 mm × 200 mm × 5 mm, and the size of the central 15 W heat source is 5 mm × 5 mm. All the surfaces are immersed in air with a natural convective coefficient of 2 W/(m²·K) at a room temperature of 20°C. As a result, the maximum temperature along the center line is 392.1 K and the boundary is at 381.7 K. To camouflage the heat source via emissivity engineering, we divide the surface into M × N unit cells (101 × 101 here), and we deposit different MIM structures on each unit cell. For an emitter with a finite length along the axis of the strips, where the Stefan-Boltzmann law does not apply [54], the absorption cross section, which is calculated by considering the total scattering fields, is required to characterize its emission property. These considerations have limited effects on the performance evaluation but vastly increase the difficulty of the predesign. In this work, the emitter has a period of 3.8 µm, which is far smaller (∼50 times) than the strip length of a unit cell. This makes it reasonable to regard the unit cell as a 1D structure. Because the environment, geometric configuration, and experimental setup are invariant (air temperature, spectral transmittance/absorptivity, viewing cross-sectional area, distance, solid angle, and lens area), we only consider the integrated surface radiation power and neglect the dependence on the SRF for simplification hereinafter, which is enough for the proof-of-concept demonstration. According to Planck's law, the integrated radiation power is calculated as [55] P = ∫_{8 µm}^{13 µm} ε_λ C_1/{λ^5 [exp(C_2/λT) − 1]} dλ (Eq. (4)), where C_1 = 3.743×10⁻¹⁶ W·m³ and C_2 = 1.4387×10⁻² m·K are the two Planck constants. Though the local temperature of each unit cell is different, we tune the spectral emissivity of each cell to bring their integrated radiation powers to the same or an approximate level. Therefore, the same observed temperature distribution can be detected in the IR camera. We screen the grating width of each unit cell to keep the standard deviation (STD) σ of all the integrated radiation powers as small as possible, with σ = {Σ_{k=1}^{M×N} (P_k − P̄)²/(M × N)}^{1/2}, where P̄ is the mean integrated radiation power. Figure 3(b) shows the integrated radiation power variation of five typical unit cells along the center line of the plate with different grating widths. It is seen that a higher temperature corresponds to a higher radiation power in general. To quantify one desired radiation power P_d that all the unit cells can achieve with a selected grating width (emissivity), we decrease P_d gradually and calculate the corresponding STD until the STD reaches its minimum. Using this method, P_d is quantified as P_d = 72.403 W/m³, and the widths of these unit cells are selected one by one, denoted by the vertical dashed lines in Fig. 3(b). The observed temperature curve is shown in Fig. 3(c) according to the Stefan-Boltzmann law, where the constant surface emissivity preset in the IR camera is the average value over all the unit cells. Compared with Fig. 3(a), it is seen that the observed temperature is much more uniform and the STD is only 0.144.
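The screening procedure can be sketched in a few lines of Python; the emissivity database below is randomly generated as a stand-in for the RCWA spectra, and the cell temperatures are a hypothetical 1D cut of the plate, so the numbers are only illustrative.

```python
import numpy as np

C1, C2 = 3.743e-16, 1.4387e-2            # Planck constants used in the paper
lam = np.linspace(8e-6, 13e-6, 400)      # working band of the IR camera

def integrated_power(eps_spectrum, T):
    """Eq. (4): band-integrated radiation power of a unit cell at temperature T."""
    planck = C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))
    return np.trapz(eps_spectrum * planck, lam)

# Hypothetical emissivity database: one spectrum per candidate grating width,
# e.g. precomputed by RCWA for w = 0.7 ... 1.65 um.
widths = np.linspace(0.7e-6, 1.65e-6, 20)
rng = np.random.default_rng(0)
eps_db = rng.uniform(0.2, 1.0, (widths.size, lam.size))

# Local temperatures of the unit cells (hypothetical 1D cut of the plate).
T_cells = np.linspace(381.7, 392.1, 101)

def best_widths(P_d):
    """For each cell, pick the width whose integrated power is closest to P_d."""
    idx = [np.argmin([abs(integrated_power(e, T) - P_d) for e in eps_db]) for T in T_cells]
    powers = np.array([integrated_power(eps_db[i], T) for i, T in zip(idx, T_cells)])
    return widths[idx], powers.std()

# Scan the desired power P_d over the achievable range and keep the minimum-STD choice.
P_lo = min(integrated_power(e, T_cells.min()) for e in eps_db)
P_hi = max(integrated_power(e, T_cells.max()) for e in eps_db)
candidates = [best_widths(P_d) for P_d in np.linspace(P_hi, P_lo, 30)]
w_opt, sigma = min(candidates, key=lambda c: c[1])
print(sigma)
```

With the real RCWA spectra in place of the random database, the same loop yields the width map of Fig. 3(d).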
The grating width distribution of all the unit cells is shown in Fig. 3(d); the width in the center is the smallest, and the width fluctuates from the center to the boundary. The distribution of the selected emissivity for all the unit cells and the observed temperature of the whole plate are shown in Figs. 4(b) and 4(e). The observed temperature is uniform and the real heat source is camouflaged in the background, demonstrating the thermal camouflage function of such emissivity engineering. Based on the observed uniform temperature, we can further engineer the local surface emissivity to realize thermal illusion and thermal messaging. We randomly select four rectangular subareas to possess a much larger local emissivity than the remaining surface. To this end, we take advantage of the LC circuit theory to inversely predesign the patch width of the MIM structure for a certain resonance wavelength, and combine the structures to compose a multi-band MIM structure (the perfect emitter hereinafter), as shown in Fig. 5(a). This perfect emitter is composed of multiple MIM gratings with 8 widths (w_1 = 0.9, w_2 = 0.95, w_3 = 1, w_4 = 1.05, w_5 = 1.1, w_6 = 1.2, w_7 = 1.3, and w_8 = 1.4 µm) of the 1D single MIM structure, and the layer thicknesses are kept the same as in the previous single MIM structure. The spectral emittance of the perfect emitter is given in Fig. 5(b). Clearly, the emittance spectrum shows a plateau of around 0.9 in the 8∼13 µm range even under different incident angles. Such angular insensitivity keeps the detected temperature similar when the IR camera is rotated with respect to the normal direction. To explain the broad-band high emittance, Figs. 5(c)-5(e) give the magnetic field distribution at the MP resonance wavelengths of 8.25, 10.12, and 12.73 µm, respectively. In Fig. 5(c), at 8.25 µm, the top-left MIM unit of the perfect emitter shows a strongly confined magnetic field in the Ge layer, indicating that the emittance peak at 8.25 µm is attributed to the MPs of the top-left MIM unit. Likewise, the emittance peak at 12.73 µm is attributed to the MPs of the bottom-right MIM unit, whose magnetic field distribution is shown in Fig. 5(e). In Fig. 5(d), strongly confined magnetic fields exist in two MIM units simultaneously, which means that the high emittance at 10.12 µm is due to the MPs of two MIM structures. In short, the high multi-band emittance of the perfect emitter is attributed to the hybridization of separated MPs in the multiple MIM units. We can also understand this with the equivalent LC circuit of the multiple MIM structure. For a multiple MIM structure consisting of several MIM subunits, the enhanced electromagnetic field in one subunit is strongly confined without coupling to the other subunits when MPs are excited, so that each MIM subunit behaves as an isolated unit [56]. Therefore, the total impedance of the multiple MIM structure can be expressed using the parallel LC model as 1/Z_multiple = Σ_{j=1}^{N} 1/Z_single,j, where Z_single,j (obtained from Z_single in Eq. (3)) is the impedance of the j-th subunit (j = 1, 2, . . . , N, with N the total number of subunits), to predict the multiple resonant peaks [57]. Any subunit having zero impedance, i.e., Z_single,j = 0, makes the total impedance of the multiple MIM structure zero, i.e., Z_multiple = 0. With the perfect emitter placed in the four separated rectangular subareas, four hot spots emerge in Fig. 4(f) from the uniform pseudo temperatures in Fig. 4(e), which implies that not only is the original heat source camouflaged, but four illusion heat sources are also generated to further confuse observers. This is the thermal illusion functionality. Further, if we arrange the distribution of the selected subareas, we can realize thermal messaging. For instance, as shown in Fig. 4(g), the heat signature of "HELLO" emerges by properly engineering the surface emissivity and local microstructure on the plate. More letters, graphs, and information can be realized through the same strategy.
The primary idea in this paper is to engineer the surface emissivity distribution to realize thermal camouflage, illusion and messaging functionalities. Although these functionalities have been realized in static conductive thermal metamaterials, they have not been achieved in dynamic radiative thermal metasurfaces based on MIM structures. The performance of the present functionalities can be further improved if a larger database of optional surface emissivities with varied MIM widths is constructed. The integrated power according to Eq. (4) can then be varied over a larger range, which makes it possible to camouflage a hot spot with a larger temperature difference with respect to the whole temperature field. One may ask how much difference thermal convection and radiation make to the original temperature field. Taking the center unit in Fig. 4(e) as an example, the heat fluxes transferred by thermal conduction, convection, and radiation are 0.44 W, 7.9 × 10⁻⁴ W, and 1.87 × 10⁻⁴ W, respectively. Because the temperature difference between the plate and the ambient air is not that large, the proposed thermal camouflage performance will be maintained even when thermal convection and radiation are taken into account. Moreover, the current study is demonstrated with a 1D MIM structure, and this strategy can be generalized to 2D MIM metasurfaces to achieve more universal functionalities. It is expected that the proposed 1D MIM structures and the validated strategy can provide hints and inspiration for further optimization and practical application. We can also extend the approach to MIM structures of other materials, say refractory materials like tungsten, for very high temperature applications [39].
Conclusions
In summary, we demonstrate the feasibility of radiative metasurfaces to realize thermal camouflage, illusion and messaging by structuring the surface emissivity through MIM microstructures. The surface emissivity of the Au/Ge/Au gratings is calculated by the RCWA algorithm with varied grating widths, which generates the MIM database for surface microstructure optimization. The proper grating width distribution is quantified by minimizing the standard deviation (STD) of the temperature over the whole plate. Using this strategy, the hot spot in the original temperature field is removed and the observed temperature field is much more uniform, which realizes a more satisfactory thermal camouflage effect compared with conductive thermal metamaterials. Further, a perfect emitter is designed by stacking multiple MIM structures, whose emissivity is much larger and broader in the 8∼13 µm range and insensitive to the incident angle. The underlying mechanism for the broad emission spectrum is ascribed to the excitation of multiple MPs, which can be demonstrated by the equivalent LC theory. By properly arranging the distribution of the perfect emitter, thermal illusion and thermal messaging functionalities are also demonstrated.
The present work may open avenues for developing novel thermal applications via thermal metasurfaces and metamaterials.
"Physics",
"Engineering"
] |
Application of the PM6 semi-empirical method to modeling proteins enhances docking accuracy of AutoDock
Background Molecular docking methods are commonly used for predicting binding modes and energies of ligands to proteins. For accurate complex geometry and binding energy estimation, an appropriate method for calculating partial charges is essential. The AutoDockTools software, the interface for preparing input files for AutoDock 4, one of the most widely used docking programs, utilizes the Gasteiger partial charge calculation method for both protein and ligand charge calculation. However, it has already been shown that more accurate partial charge calculation, and as a consequence more accurate docking, can be achieved by using quantum chemical methods. So far, quantum chemical partial charge calculation has been used routinely only for ligands in docking calculations. The newly developed Mozyme function of MOPAC2009 allows fast partial charge calculation of proteins by quantum mechanical semi-empirical methods. Thus, in the current study, the effect of semi-empirical quantum-mechanical partial charge calculation on docking accuracy could be investigated. Results The docking accuracy of AutoDock 4 using the original AutoDock scoring function was investigated on a set of 53 protein-ligand complexes using the Gasteiger and PM6 partial charge calculation methods. This enabled us to compare the effect of the partial charge calculation method on docking accuracy utilizing the AutoDock 4 software. Our results showed that the docking accuracy with regard to complex geometry (a docking result is defined as accurate when the RMSD of the first rank docking result is within 2 Å of the experimentally determined X-ray structure) significantly increased when the partial charges of the ligands and proteins were calculated with the semi-empirical PM6 method. Out of the 53 complexes analyzed in the course of our study, the geometry of 42 complexes was accurately calculated using PM6 partial charges, while the use of Gasteiger charges resulted in only 28 accurate geometries. The binding affinity estimation was not influenced by the partial charge calculation method; for more accurate binding affinity prediction, the development of a new scoring function for AutoDock is needed. Conclusion Our results demonstrate that the accuracy of determination of complex geometry using AutoDock 4 for docking calculations greatly increases with the use of quantum chemical partial charge calculation on both the ligands and the proteins.
Background
The role of in silico chemistry is growing in drug design and discovery. In an effort to find lead compounds at lower cost and greater speed, computational chemistry methods have focused on developing fast and highly efficient molecular docking methods for virtual screening [1,2]. In recent years, progress has been made in developing docking algorithms that predict ligand binding to proteins, and by now several docking programs are available, such as AutoDock [3,4], GOLD [5,6], Glide [7,8] and FlexX [9,10]. Among these, the AutoDock program was the most popular according to a recent study [11].
Molecular docking methods include the search for the energetically most favorable conformation of a protein-ligand complex and the scoring of the resulting geometries with respect to binding energy [1,12]. The production of the right docking pose and the scoring of the complex geometries are often treated as two separate problems. It should be noted, however, that many docking programs use the scoring function in the process of finding the complex with the lowest energy [13]; thus, scoring and geometry prediction should rather be treated as one problem, and it can be assumed that minimizing the RMSD between predicted and experimentally determined complex geometries would at the same time lead to more accurate prediction of binding free energies.
In the AutoDock 4 energy scoring function, the calculation of pair-wise atomic terms includes evaluations of the different secondary interactions: dispersion/repulsion, hydrogen bonding, electrostatics, and desolvation [14]. Thus, the calculation of accurate partial charges on the ligand and the protein is expected to have a profound effect on both the docking conformation and the energy score of the resulting complex, possibly leading to more accurate estimation of complex geometry and binding energy. There are several charge calculation methods, which lead to significant differences in the partial charges assigned to the different atoms [15]. The AutoDockTools program enables the user to apply empirical charge calculations, Gasteiger or Kollman united-atom charges. However, this charge calculation method has been shown to yield less accurate partial charges than semi-empirical methods [16]. Additionally, the Gasteiger charge calculation [17] does not handle metal ions, presenting a major flaw in docking calculations of metalloproteins.
Moreover, in a recent study analyzing the effect of various charge models on docking results, it was concluded that the quantum mechanical charge calculation method yielded significantly better docking results [15,18], both in terms of binding geometry and energy. It should be noted that in these studies only the ligand charges were calculated with the quantum mechanical method, while the protein charges were calculated with the Gasteiger-Hückel method. Still, semi-empirical charge calculation on the ligand was enough to yield more accurate docking results. Quantum mechanical polarization of the ligand has also been shown to greatly improve docking accuracy [19]. Illingworth and his colleagues [20] extended this method by calculating polarization not only on the ligands but also on the target macromolecules using Amber charges [21]. However, those implementations require knowledge of the structure of the complex and iteration of quantum mechanical calculations, and thus cannot be treated as a practical tool in docking [20]. Raha and Merz used a semiempirical QM-based scoring function for predicting the binding energy and binding mode of a diverse set of protein-ligand complexes [22]. The authors used a scoring function designed with semi-empirical QM Hamiltonians to discriminate between native and decoy poses generated by the program AutoDock 4. Recently, the newly developed semi-empirical PM6 method was introduced, which corrects major errors in AM1 and PM3 calculations and is useful for semi-empirical charge calculations of small ligands as well as proteins [23]. Besides that, all main group elements and transition metals are parameterized in PM6 in the MOPAC2009 software. Thus, using the PM6 method for assigning partial charges to both the ligand and the protein has two main advantages: docking of metalloproteins can be handled accurately, and semiempirical charge calculation is expected to yield more accurate docking results in general.
In the current study it was analyzed whether PM6 semiempirical charge calculation on both the ligands and their host proteins increases docking accuracy in terms of complex geometry and binding energy using the AutoDock 4 software. To the author's knowledge this is the first study where the MOPAC2009 software is used systematically for semi-empirical charge calculations on proteins when preparing input files for docking calculations. 53 protein-ligand complexes were analyzed for which both a crystallographic structure and binding data were available. The partial charges of the ligands and proteins were calculated using (1) the Gasteiger and (2) the PM6 charge calculation methods, and the ligands were docked back into their host proteins using the AutoDock 4 software. The resulting complex geometries were analyzed for their RMSD as compared to the available X-ray structure and for their binding energies as calculated by the AutoDock 4 scoring function (a docking result is defined as accurate when the RMSD of the first rank docking result is within 2 Å of the experimentally determined X-ray structure). Our results indicated that the use of the PM6 semi-empirical charge calculation method for assigning partial charges to both the protein and the ligand atoms greatly increases docking accuracy in terms of complex geometry as compared to the Gasteiger charge calculation method (available in AutoDockTools).
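As an illustration of the RMSD criterion used throughout the paper, a simplified sketch is shown below; it assumes the docked and crystallographic ligands share the same atom ordering and ignores symmetry-equivalent atom mappings, which a production workflow would need to handle.

```python
import numpy as np

def ligand_rmsd(docked_coords, xray_coords):
    """Heavy-atom RMSD (in Angstrom) between docked and crystallographic ligand poses.

    Both arrays have shape (N, 3) with identical atom ordering; symmetry-equivalent
    atom mappings are not considered in this simplified sketch.
    """
    diff = np.asarray(docked_coords, float) - np.asarray(xray_coords, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def is_accurate(docked_coords, xray_coords, cutoff=2.0):
    """A docking is counted as accurate when the pose is within the 2 A cutoff."""
    return ligand_rmsd(docked_coords, xray_coords) <= cutoff
```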
Results
In Table 1, structural and experimental data of the investigated protein-ligand complexes are summarized. The 53 complexes used in this study were all characterized by a resolution below 3.2 Å. The complexes were chosen partly from the AutoDock 3.0 calibration set [68], partly from a recently published paper examining different docking software [13], and partly from the core set of the PDBbind Database [69]. The chosen structures possess structurally diverse ligands in complex with a heterogeneous collection of proteins (see Table 1). It should be noted that for some structures with lower resolution (although chosen from the AutoDock 3.0 calibration set), an RMSD-based comparison of docked versus experimental structure might not always lead to a meaningful result, as partial occupancies might occur that are not reflected by a single ligand structure. Using this dataset, ligand and protein structures were set up using two different methods: (i) calculating Gasteiger charges on both the ligands and the proteins using AutoDockTools and (ii) calculating PM6 charges on both the ligands and the proteins using MOPAC2009 [70] on Docking Server (see the Methods section for details). Docking calculations were performed twice on the dataset (for both setup methods) and the results were then compared to the experimentally determined complex structures. Figure 1 shows the correlation between experimentally determined and predicted binding energies as calculated by AutoDock 4. The correlation between the predicted and observed binding energies is rather limited (the correlation coefficient is about 0.51). In most cases the binding energy is underestimated by the prediction. It should be noted that the correlation coefficient increases when only the hits where the first rank result yielded an RMSD within 2 Å of the X-ray structure are considered (the correlation coefficient is about 0.60) in the case of the Gasteiger partial charge calculation method. Thus, it can be concluded that good geometry prediction does contribute to accurate binding energy estimation. Compared to Gasteiger, PM6 had a somewhat lower regression constant (R = 0.41 for all cases, R = 0.46 with an RMSD below 2 Å) in the docking studies (Figure 1). Thus, the change in the method of partial charge calculation even decreased the accuracy of the predicted binding energy with the current AutoDock 4 scoring function. This result is not surprising considering that the AutoDock 4 scoring function was optimized using the Gasteiger charge calculation method.
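The correlation coefficients quoted above can be reproduced from paired energy lists with a short script such as the following; the numbers here are hypothetical and only illustrate the RMSD-filtered comparison.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical binding free energies (kcal/mol) and first-rank RMSDs (Angstrom).
exp_dG  = np.array([-8.2, -6.5, -9.1, -5.3, -7.7])
pred_dG = np.array([-6.9, -5.8, -7.5, -5.1, -6.2])
rmsd    = np.array([1.1,  0.8,  2.6,  1.5,  3.4])

r_all  = pearson_r(exp_dG, pred_dG)          # correlation over all complexes
good   = rmsd <= 2.0
r_good = pearson_r(exp_dG[good], pred_dG[good])  # accurate poses only
print(r_all, r_good)
```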
Geometry prediction in docking studies using Gasteiger charges on both ligand and protein atoms
The results of the docking calculations using Gasteiger charges on both the ligand and the protein are summarized in Table 2. Analyzing the first rank results, 28 of the 53 complexes resulted in an accurate docking result (lowest RMSD within 2 Å as compared to the X-ray structures). Considering the most populated cluster (and not the lowest energy) as the first rank result, 30 of the 53 dockings were able to successfully predict the experimentally observed binding mode. Among these successful predictions, 24 were the lowest-energy and the most populated result at the same time. Docking calculations using Gasteiger charges resulted in 14 "dominant" first rank results (more than 60% of the dockings in the same cluster) among the successful predictions. In 8 cases there was no accurate docking (RMSD below 2.0 Å) in any of the clusters among the docking runs. The average RMSD of the first rank results was 2.34 Å (1.83 Å without outliers).
Geometry prediction in docking studies using PM6 charges on both ligand and protein atoms
Docking calculations were performed for the same dataset using the PM6 method for both the protein and the ligand setup [23]. Comparing partial charges on a selected atom, the PM6 method gives a higher absolute value for the partial charge than the Gasteiger calculation. In the case of proteins, the absolute values of the partial charges were on average about 1.6 times greater than in the case of the Gasteiger charge calculation; e.g., the sum of the absolute values of the partial charges was 628.8 with Gasteiger charge calculation and 1014.2 with PM6 for the protein with PDB entry 4HMG, and it was 354.1 with Gasteiger charge calculation and 570.8 with PM6 for the protein with PDB code 1HVR. The ratio of the absolute partial charge values obtained with the Gasteiger and PM6 calculation methods was found to be constant among the investigated proteins, Gasteiger/PM6 = 0.619 ± 0.020 (data not shown). In the AutoDock 4 scoring function, the sum of the absolute partial charges affects the solvation parameter, which is calculated using the absolute value of the partial charge of a given atom as S_i = ASP_i + QASP |q_i| (Scheme 1). In the above equation (Scheme 1), ASP and QASP are the atomic solvation parameters. The ASP was calibrated using six atom types, while a single QASP is calibrated over the set of charges of all atom types. Since the partial charges calculated with the Gasteiger method are 0.619 times those calculated with PM6, the QASP parameter in the AutoDock 4 and Autogrid source code was scaled by 0.619. Thus, the final QASP parameter used in our AutoDock 4 calculations was 0.00679 (instead of 0.01097) when the PM6 method was used for partial charge calculation.
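The rescaling of the QASP parameter can be illustrated with the short sketch below, which uses the charge sums reported above for 4HMG and the default AutoDock 4 value of 0.01097; the variable names are illustrative only.

```python
# Sums of absolute partial charges reported in the text for PDB entry 4HMG.
sum_gasteiger = 628.8
sum_pm6 = 1014.2
ratio = sum_gasteiger / sum_pm6           # ~0.62, close to the reported average of 0.619

# Rescale the QASP atomic solvation parameter so that the desolvation term,
# which depends on QASP * |q_i|, is left unchanged when PM6 charges are used.
QASP_DEFAULT = 0.01097                    # calibrated AutoDock 4 value
qasp_pm6 = QASP_DEFAULT * 0.619           # average Gasteiger/PM6 ratio over the dataset
print(round(ratio, 3), round(qasp_pm6, 5))  # -> 0.62 0.00679
```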
In Table 3 the results of the docking calculations with the new QASP parameter and using PM6 charges on both the ligand and the protein can be seen. 38 out of 53 docking calculations resulted in the best-energy pose also having the lowest RMSD as compared to the X-ray structure. Considering the most populated clusters (and not the lowest energy), 42 first rank and 9 second rank results were observed. In 36 of the 42 successful dockings, the first rank results were "dominant" (at least 60% of the runs resulted in the same cluster) in the docking calculations. In all cases where the most populated cluster's frequency was above 50 out of 100 runs, the result was accurate (with an RMSD below 2.0 Å as compared to the X-ray structure). It should be emphasized that in all cases where the PM6 charge calculation was used, an accurate docking could be achieved (in one of the clusters there was a result with an RMSD below 2 Å as compared to the X-ray structure), in contrast to the Gasteiger partial charge calculation dockings, where in eight cases no successful docking was found (Table 2). The average RMSD of the first rank results is 1.71 Å (1.61 Å without outliers).
Discussion
The development of docking software that is able to accurately predict binding geometry and binding energy represents a great challenge in computational chemistry [68,71-73]. In the current study it was explored whether the calculation of electrostatic potentials of both the ligands and the proteins using the semi-empirical PM6 method increases docking accuracy. With the recent implementation of Mozyme, a PM6 semi-empirical method, in the MOPAC2009 software [70], semi-empirical calculation of partial charges on ligands as well as on larger molecules such as proteins has become possible; therefore, in our study semi-empirical charges were also computed on the protein atoms as well as on the ligand atoms. In the course of the study, 53 experimentally determined protein-ligand complexes were chosen; the ligands were docked back into their host proteins with the AutoDock 4 program using (1) the empirical Gasteiger method and (2) the semi-empirical PM6 method for calculating the electrostatic potential. The results were then analyzed and compared to the experimentally determined crystal structures in order to evaluate docking accuracy.
Figure 1. Correlation between experimental and predicted binding energies using Gasteiger and PM6 charge calculations. The average value of the RMSD between the lowest energy result and the experimental structure and the total number of successful first rank predictions (based on lowest energy and highest cluster population, respectively) are indicated at the bottom of the corresponding tables.
It is important to note that the AutoDock 4 scoring function, which is used both during and at the end of the dockings, thus influencing both the geometry and the binding energy estimation, was optimized using the Gasteiger charge calculation method. Since the semi-empirical PM6 calculation method gives a higher absolute value for the partial charge on a selected atom than the empirical Gasteiger method (Figure 2), each term in the AutoDock equation was carefully considered for expected changes as a result of the semi-empirical method used for partial charge calculation before carrying out the docking calculations. Besides the electrostatic term, which is naturally expected to change, the absolute values of the partial charges are included in the solvation term (Scheme 1) [14].
In order to achieve accurate intermolecular energy calculation, the solvation term should not change as a result of using a different method for partial charge calculation. Therefore, the extent of the change in the absolute values of the partial charges between the Gasteiger and PM6 methods was analyzed. Our results showed that the partial charges calculated with the Gasteiger method were on average 0.619 times those calculated with Mozyme; thus, the QASP parameter was scaled by 0.619. In this way, the solvation term in the energy calculation of AutoDock 4 does not change as a result of the semi-empirical partial charge calculation.
Our docking results showed that, as a consequence of the partial charge calculation with the PM6 method, a dramatic increase was observed in (1) the number of accurate dockings (i.e. the ligand's RMSD is within 2 Å of the actual X-ray structure) and (2) the population of the clusters containing the accurate docking result. However, the accuracy of the binding energy prediction has not increased.
The latter finding is not surprising for several reasons. Although the electrostatic term for a given atom pair is greatly increased when the semi-empirical PM6 method is used for charge calculation compared to the Gasteiger charge calculation, this increase is equally present for both negatively and positively charged atoms. Thus, as the contributions of positively and negatively charged atoms partly cancel, the final electrostatic term summed over all ligand atoms, and hence the energy related to the electrostatic term, will not change to a great extent, causing only a minor change in the binding energy estimation. Additionally, the weighting constants of these terms in AutoDock 4 were optimized using the Gasteiger charge calculation method; therefore, the final energy calculation is not expected to give a more accurate result with the current weighting constants in the scoring function. Moreover, the method used for partial charge calculation does not influence the torsional entropy term of the equation, which is one of the limiting factors in the accuracy of binding energy estimation in the AutoDock 4 program [74]. In addition, considering the high number of false positives in docking calculations, it is reasonable to assume that, besides the stabilizing contributions, there are destabilizing contributions to ΔG which most scoring functions do not take into account [75].
Figure 2. Best cluster rank docking results of redocking of the PDB entry 2FDP. Protein surfaces are colored by partial charges (a, PM6 charges, RMSD from coordinates in PDB: 0.75; b, Gasteiger charges, RMSD from coordinates in PDB: 2.35). The darker color of the protein surface colored by PM6 partial charges, as compared to the colors of Figure 2b, reflects the higher calculated absolute values of the semi-empirical partial charges. This "sharper" surface constrains the possible binding geometry of the ligand more than in the case of Gasteiger charges.
The dramatic change in the number of accurate docking poses is a significant achievement. As discussed above, the electrostatic term for a given atom pair is greatly increased when the semi-empirical PM6 method is used for charge calculation compared to the Gasteiger charge calculation, and this change is present to a different extent for each atom type. Thus, a "sharper" electrostatic potential is obtained using the PM6 method for calculating partial charges (Figure 2) as compared to the Gasteiger charge calculation method. This results in a more pronounced significance of the electrostatic interactions between each atomic pair, leading to a more defined protein-ligand complex geometry as compared to the original method where the Gasteiger calculation is used.
The high population of the clusters with the correct geometry obtained with the PM6 charge calculation method is also of great significance (Figure 3). Namely, a high cluster population hints at the density of states for a given complex conformation. If the energy of that state does not substantially differ from the lowest binding energy (2.5 kcal/mol is within the standard deviation of the AutoDock 4 force field), then the higher cluster population is indicative of a more probable conformation. This is important information when no experimental data exist as to where the ligand is bound. In conclusion, the use of the PM6 method for calculating partial charges has resulted in a significantly better prediction of the docking geometry and in a higher cluster population of the right docking pose.
Besides accurate binding energy estimation, a measure of docking accuracy is a good geometry, i.e. a low RMSD value of the docked ligand as compared to the crystal structure. A recent study [13] reports that developing a scoring function that predicts binding energy with good accuracy is not necessarily achieved by optimizing binding geometry. However, in our study, the correlation coefficient between the experimentally determined and calculated binding energies increased when only the hits where the first rank result yielded an RMSD within 2 Å of the actual X-ray structure were considered. Thus, our results indicated that good geometry prediction is indeed a prerequisite for accurate binding energy estimation. Although the optimization of the AutoDock 4 scoring function was outside the scope of the current study, the fact that significantly more accurate protein-ligand complex geometry prediction is achieved using the PM6 method hints at the possibility of more accurate binding energy estimation using PM6 charge calculation as well.
AutoDockTools, the software used for setting up ligands and proteins for AutoDock 4, one of the most popular docking programs on the market [11], uses the empirical Gasteiger method for calculating partial charges during protein and ligand setup. The Gasteiger charge calculation method is based on the partial equalization of orbital electronegativity [17,76]. Only the topology of the molecule is considered, as only the connectivity of the atoms is included in the calculation. The calculation of the electrostatic potential with empirical methods has the advantage of being fast; however, such methods possess some drawbacks as well: the Gasteiger charge calculation method, as opposed to the semi-empirical PM6 method, does not handle inorganic species such as metal ions, which are frequently present in functional proteins. In a study using semi-empirical electronic wave functions like AM1 and PM3 to map experimental dipole moments for a large number of small molecules, the dipole moments were reproduced with a root mean square deviation of 0.3 D [77]. It has been shown on a large validation set that semi-empirical methods are highly accurate in partial charge calculations and are able to reproduce experimental homo- and hetero-dimer hydrogen-bond energies [16]. Moreover, a number of papers have reported increased docking accuracy using a semi-empirical method for partial charge calculation of the ligand atoms [15,22,76]. Indeed, in a recent study comparing several partial charge calculation methods, semi-empirical charge calculation was shown to increase docking accuracy compared to the use of empirical methods [15]. In that study, partial charge calculation of the ligand alone with the semi-empirical method was sufficient to slightly increase docking accuracy (the protein partial charges were still computed with the Gasteiger method). Quantum mechanical charge calculations on proteins are not very common because of the highly time-consuming calculation. One possibility is to consider the effect of the protein binding site on ligand polarization using quantum chemical methods [19,20]. However, this approach still limits the quantum mechanical charge calculation to the ligand, and it is difficult to apply when the complex structure is not known.
Calculation of quantum mechanical charges using the recent linear scaling Mozyme functionality of MOPAC2009 [23] allows us to calculate quantum chemical charges on protein atoms, as well. Our results suggest that calculation of PM6 charges on protein atoms has an even more profound effect on the docking accuracy.
Conclusion
In summary, our study explored the effect of the partial charge calculation method on docking accuracy calculated using the AutoDock 4 software. To the author's knowledge, this is the first systematic docking study where the semi-empirical quantum chemical PM6 method is used for partial charge calculation on the protein as well as on the ligand. Partial charge calculation with the PM6 method has been shown to greatly increase docking accuracy and the cluster population of the most accurate docking pose; however, no increase in the accuracy of binding energy estimation was observed. As a good pose of the ligand seems to be a prerequisite for accurate intermolecular energy prediction, the use of the PM6 method presents a great improvement in the accuracy of docking calculations carried out using AutoDock 4. If the PM6 semi-empirical method is used for partial charge calculation, reoptimization of the weighting constants of the AutoDock 4 scoring function is needed in order to increase the accuracy of binding energy estimation as well.
Methods
Crystal structures of the protein-ligand complexes used in this study were obtained from the Brookhaven Protein DataBank http://www.rcsb.org/pdb. When the asymmetric unit was found to differ from the biological unit, the ligand binding site was carefully checked. When the ligand was found to interact with more than one asymmetric unit, the biological unit was used in the study (proteins with PDB codes 1OLU and 2XIS). Experimental binding affinities for the protein-ligand complexes were taken from the PDBBind Database [69]. The proteins and ligands used in this study were all formerly used as a test set in recently published papers [13,68] or were taken from the PDB core set [69]. The structures were chosen to meet the following criteria: structurally diverse ligands in complex with a heterogeneous collection of proteins; non-covalent binding between protein and ligand; and crystallographic resolution lower than 3.2 Å.

Figure 3. Performance of the PM6 charge calculation in docking experiments, compared to the Gasteiger method. The graph shows the number of complexes within a given RMSD of the crystallographic structure. In each case, the conformation with the most favorable estimated energy is used as the predicted conformation.
All docking studies described here involved flexible docking of the ligand to the rigid receptor, both of which were derived from the complex crystal structure. Input structures were prepared using two different methods: (i) using Gasteiger charges for both the ligands and the proteins [17]; (ii) using PM6 charges calculated by MOPAC2009 [70] for both the ligands and the proteins. Briefly, the input structures with Gasteiger charges were prepared as follows. The ligand atom types and bond types were assigned and hydrogens were added using AutoDockTools. Empirical charges were calculated with the method of Gasteiger [17]. For proteins, co-factors such as HEME and metal ions were kept, and their atom types and bond types were assigned manually. Sulfates, halogens and water molecules were removed. Hydrogens and Gasteiger partial charges were then added to the protein residues using AutoDockTools. Non-polar hydrogens were merged and their charges were added to the heavy atoms.
No additional optimization of the protein structures was carried out.
Semi-empirical charge assignments were performed using the PM6 method with the Mozyme function of the MOPAC2009 program [70] integrated in Docking Server http://www.dockingserver.com. Ligand structures with semi-empirical charges were set up as described above, except that in the last step PM6 charges were calculated using the MOPAC2009 software. Protein structures were set up as follows. First, water molecules, sulfates, and halogens were removed. Hydrogen atoms were added to the PDB structures using AutoDockTools. The total charge of the protein and the partial charges of the atoms were calculated by the Mozyme function of the MOPAC2009 software. The calculated partial charges were applied in all further calculations.
Docking studies were subsequently performed using Docking Server http://www.dockingserver.com. Docking Server integrates Marvin http://www.chemaxon.com and MOPAC2009 during ligand set up in order to calculate partial charges at a given protonation state and for semiempirical geometry optimization; and AutoDock 4 is integrated [14] for docking calculation. In cases where protein and ligand partial charges were calculated with the PM6 method, the QASP parameter was modified (QASP = 0.00679) and used in Autogrid 4 and AutoDock 4 during docking calculations (see Results section for detailed explanation).
Briefly, the following parameters were set in Docking Server: Grid parameter files were built and atom-specific affinity maps were constructed using Autogrid 4 [14]. These map files were generated using 60 × 60 × 60 grid points and 0.375 Å spacing, with the maps centered on the experimentally determined center of the bound ligand. Docking simulations for the study were carried out using the Lamarckian Genetic Algorithm. The initial position, orientation, and torsions of the ligand molecules were set randomly, and all rotatable torsions were released during docking. Each docking experiment was derived from 100 different runs that were set to terminate after a maximum of 2,500,000 energy evaluations and had a population size of 250. After each docking calculation, the RMSD between the lowest energy docked ligand pose and the complex crystal structure ligand pose was evaluated.
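For convenience, the settings quoted in this paragraph can be gathered in one place. The sketch below is only a summary record of the values given in the text, using AutoGrid 4/AutoDock 4 parameter names where we are reasonably sure of them (npts, spacing, ga_run, ga_num_evals, ga_pop_size); it is not an actual grid or docking parameter file.

```python
# Summary of the docking protocol described above, as a plain Python record.
docking_settings = {
    "npts": (60, 60, 60),        # grid points per dimension
    "spacing": 0.375,            # grid spacing in Å
    "grid_center": "ligand",     # maps centered on the bound-ligand position
    "search": "Lamarckian GA",
    "ga_run": 100,               # independent docking runs per experiment
    "ga_num_evals": 2_500_000,   # maximum energy evaluations per run
    "ga_pop_size": 250,          # GA population size
    # QASP value used only when PM6 charges are applied to protein and ligand
    "qasp_pm6": 0.00679,
}

print(docking_settings)
```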
"Computer Science",
"Chemistry",
"Biology"
] |
Measurement of the $K_S \to \pi e \nu$ branching fraction with the KLOE experiment
The branching fraction for the decay $K_S \to \pi e \nu$ has been measured with a sample of 300 million $K_S$ mesons produced in $\phi \to K_L K_S$ decays recorded by the KLOE experiment at the DA$\Phi$NE $e^+e^-$ collider. Signal decays are selected by a boosted decision tree built with kinematic variables and time-of-flight measurements. Data control samples of $K_L \to \pi e \nu$ decays are used to evaluate signal selection efficiencies. A fit to the reconstructed electron mass distribution finds 49647$\pm$316 signal events. Normalising to the $K_S \to \pi^+\pi^-$ decay events, the result for the branching fraction is $\mathcal{B}(K_S \to \pi e \nu) = (7.211 \pm 0.046_{\rm stat} \pm 0.052_{\rm syst}) \times10^{-4}$. The combination with our previous measurement gives $\mathcal{B}(K_S \to \pi e \nu) = (7.153 \pm 0.037_{\rm stat} \pm 0.043_{\rm syst}) \times10^{-4}$. From this value we derive $f_+(0)|V_{us}| = 0.2170 \pm 0.0009$.
The beam energy, the energy spread, the φ transverse momentum and the position of the interaction point are measured with high accuracy using Bhabha scattering events [12].
The K S (K L ) mesons are identified (tagged ) with high efficiency and purity by the observation of a K L (K S ) in the opposite hemisphere. This tagging procedure allows the selection efficiency for K S → πeν to be evaluated with good accuracy using a sample of the abundant decay K L → πeν tagged by the detection of K S → π + π − decays. The branching fraction B(K S → πeν) is obtained from the ratio R and the value of B(K S → π + π − ) measured by KLOE [13].
The KLOE detector
The detector consists of a large-volume cylindrical drift chamber, surrounded by a finely segmented lead-scintillating-fibre calorimeter. A superconducting coil around the calorimeter provides a 0.52 T axial magnetic field. The beam pipe at the interaction region is spherical in shape with a 10 cm radius, made of a 0.5 mm thick beryllium-aluminium alloy. Low-beta quadrupoles are located at ±50 cm from the interaction region. Two small lead-scintillating-tile calorimeters [14] are wrapped around the quadrupoles.
The drift chamber (DC) [15], 4 m in diameter and 3.3 m long, has 12582 drift cells arranged in 58 concentric rings with alternating stereo angles and is filled with a low-density gas mixture of 90% helium and 10% isobutane. The chamber shell is made of a carbon-fibre epoxy composite with an internal wall of 1.1 mm thickness at 25 cm radius. The spatial resolution is σ_xy = 0.15 mm and σ_z = 2 mm in the transverse and longitudinal projections, respectively. The momentum resolution for tracks with polar angle 45° < θ < 135° is σ_pT/pT = 0.4%. Vertices formed by two tracks are reconstructed with a spatial resolution of about 3 mm.
The calorimeter (EMC) [16] is divided into a barrel and two endcaps and covers 98% of the solid angle. The readout granularity is 4.4 × 4.4 cm², for a total of 2440 cells arranged in five layers. Each cell is read out at both ends by photomultipliers. The energy deposits are obtained from the signal amplitudes; the arrival times of particles and their positions along the fibres are determined from the signals at the two ends. Cells close in space and time are grouped into energy clusters. The cluster energy E is the sum of the cell energies; the cluster time and position are energy-weighted averages. Energy and time resolutions are σ_E/E = 0.057/√E(GeV) and σ_t = 54 ps/√E(GeV) ⊕ 100 ps, respectively. The cluster spatial resolution is σ_∥ = 1.4 cm/√E(GeV) along the fibres and σ_⊥ = 1.3 cm in the orthogonal direction.
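To make the quadrature sum (⊕) in the time resolution explicit, a short numerical sketch evaluating both resolution formulas at a few cluster energies:

```python
import numpy as np

def emc_energy_resolution(E_GeV):
    """Fractional energy resolution sigma_E/E = 5.7% / sqrt(E[GeV])."""
    return 0.057 / np.sqrt(E_GeV)

def emc_time_resolution(E_GeV):
    """Time resolution 54 ps/sqrt(E[GeV]) added in quadrature with 100 ps."""
    return np.hypot(54.0 / np.sqrt(E_GeV), 100.0)  # result in ps

for E in (0.1, 0.5, 1.0):  # cluster energies in GeV
    print(f"E = {E:4.1f} GeV: sigma_E/E = {emc_energy_resolution(E)*100:4.1f}%, "
          f"sigma_t = {emc_time_resolution(E):5.1f} ps")
```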
The level-1 trigger [17] uses both the calorimeter and the drift chamber information; the calorimeter trigger requires two energy deposits with E > 50 MeV in the barrel and E > 150 MeV in the endcaps; the drift chamber trigger is based on the number and topology of hit drift cells. A higher-level cosmic-ray veto rejects events with at least two energy deposits above 30 MeV in the outermost calorimeter layer. The trigger time is determined by the first particle reaching the calorimeter and is synchronised with the DAΦNE r.f. signal. The time interval between bunch crossings is smaller than the time spread of the signals produced by the particles, thus the event T 0 related to the bunch crossing originating the event is determined after event reconstruction and all the times related to that event are shifted accordingly. Data for reconstruction are selected by an online filter [18] to reject beam backgrounds. The filter also streams the events into different output files for analysis according to their properties and topology. A fraction of 5% of the events are recorded without applying the filter to control inefficiencies in the event streaming.
The KLOE Monte Carlo (MC) simulation package, GEANFI [18], has been used to produce an event sample equivalent to the data. Energy deposits in EMC and DC hits from beam background events triggered at random are overlaid onto the simulated events which are then processed with the same reconstruction algorithms as the data.
3 The measurement of Γ(K_S → πeν)/Γ(K_S → π⁺π⁻)

The ratio of the decay widths is obtained as

R = Γ(K_S → πeν)/Γ(K_S → π⁺π⁻) = (N_πeν/N_ππ) · (ε_ππ/ε_πeν) · R_ε ,    (3.1)

where N_πeν and N_ππ are the numbers of selected K_S → πeν and K_S → π⁺π⁻ events, ε_πeν and ε_ππ are the respective selection efficiencies, and R_ε = (ε_ππ/ε_πeν)_com is the ratio of common efficiencies for the trigger, on-line filter, event classification and preselection that can be different for the two decays. The number of signal events, N_πeν in Eq. (3.1), is the sum of the two charge-conjugated decays to π⁻e⁺ν and π⁺e⁻ν̄. These are separated in a parallel analysis of the same dataset, based on the same selection criteria presented in this section, optimised for measuring the charge asymmetry [Γ(π⁻e⁺ν) − Γ(π⁺e⁻ν̄)] / [Γ(π⁻e⁺ν) + Γ(π⁺e⁻ν̄)] [19].
Data sample and event preselection
Neutral kaons from φ-meson decays are emitted in two opposite hemispheres with mean decay paths λ_S = 5.9 mm and λ_L = 3.4 m for K_S and K_L respectively. About 50% of K_L mesons reach the calorimeter before decaying, and the K_L velocity in the φ-meson reference system is β* = 0.22. K_S mesons are tagged by K_L interactions in the calorimeter, K_L-crash in the following, with a clear signature of a delayed cluster not associated to tracks. To select K_L-crash and then tag K_S mesons, the requirements are:
• one cluster not associated to tracks (neutral cluster) and with energy E_clu > 100 MeV, the centroid of the neutral cluster defining the K_L direction with an angular resolution of ∼1°;
• 15° < θ_clu < 165° for the polar angle of the neutral cluster, to suppress small-angle beam backgrounds;
• 0.17 < β* < 0.28 for the velocity in the φ reference system of the K_L candidate; β* is obtained from the velocity in the laboratory system, β = r_clu/(c t_clu), with t_clu being the cluster time and r_clu the distance from the nominal interaction point, the φ-meson momentum, and the angle between the φ-meson momentum and the K_L-crash direction (a sketch of this transformation is given below).
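A minimal sketch of how β* can be obtained from the measured cluster position and time is given below; the function name and argument layout are ours (the KLOE reconstruction code is not reproduced here), and the transformation to the φ rest frame uses standard relativistic velocity addition.

```python
import numpy as np

def beta_star(r_clu_cm, t_clu_ns, n_klcrash, p_phi_MeV, m_phi_MeV=1019.461):
    """Velocity of the K_L candidate in the phi rest frame (units of c).

    r_clu_cm  : distance of the cluster from the nominal interaction point (cm)
    t_clu_ns  : cluster time (ns)
    n_klcrash : unit vector along the K_L-crash direction (lab frame)
    p_phi_MeV : phi-meson momentum vector (MeV/c, lab frame, assumed non-zero)
    """
    c_cm_per_ns = 29.9792458
    beta_lab = (r_clu_cm / (c_cm_per_ns * t_clu_ns)) * np.asarray(n_klcrash)

    p_phi = np.asarray(p_phi_MeV, dtype=float)
    e_phi = np.hypot(np.linalg.norm(p_phi), m_phi_MeV)
    v = p_phi / e_phi                      # boost velocity of the phi frame
    v2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - v2)

    # Relativistic velocity addition: split beta_lab into components
    # parallel and perpendicular to the boost direction.
    b_par = (beta_lab @ v) / v2 * v
    b_perp = beta_lab - b_par
    denom = 1.0 - beta_lab @ v
    b_star = (b_par - v) / denom + b_perp / (gamma * denom)
    return np.linalg.norm(b_star)
```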
The K_S momentum p_KS = p_φ − p_KL is determined with an accuracy of 2 MeV, assigning the neutral kaon mass. K_S → πeν and K_S → π⁺π⁻ candidates are preselected requiring two tracks of opposite curvature forming a vertex inside the cylinder around the interaction point defined by Eq. (3.2), with transverse radius ρ_vtx < 5 cm. After preselection, the data sample contains about 300 million events and its composition evaluated by simulation is shown in Table 1. The large majority of events are K_S → π⁺π⁻ decays, together with a large contribution from φ → K⁺K⁻ events where one kaon produces a track and the kaon itself or its decay products generate a fake K_L-crash while the other kaon decays early into π±π⁰.
Table 1. Composition of the data sample after preselection, as evaluated by simulation.
The β* distribution is shown in Figure 1, for data and simulated events. Two peaks are visible, the first associated with events triggered by photons or electrons, and the second with events triggered by charged pions. The trigger is synchronised with the bunch crossing, and the time difference between an electron (or photon) and a pion (or muon) arriving at the calorimeter corresponds to about one bunch-crossing shift.
Signal selection and normalisation sample
Signal selection is performed in two steps based on uncorrelated information: 1) the event kinematics using only DC tracking variables, and 2) the time-of-flight measured with the EMC.
Time assignment to tracks requires track-to-cluster association (TCA): for each track connected to the vertex, a cluster with E_clu > 20 MeV and 15° < θ_clu < 165° is required whose centroid is within 30 cm of the track extrapolation inside the calorimeter. Track-to-cluster association is required for both tracks in the event.
A multivariate analysis is performed with a boosted decision tree (BDT) classifier built with the following variables, which have good discriminating power against background:
• p_1, p_2: the track momenta;
• α_12: the angle at the vertex between the two momenta in the K_S reference system;
• α_LS: the angle between the momentum sum, p_sum = p_1 + p_2, and the K_L-crash direction;
• Δp: the difference between |p_sum| and the absolute value |p_KS| of the K_S momentum;
• m_ππ: the invariant mass reconstructed from p_1 and p_2, in the hypothesis of charged-pion mass.
Figure 2 shows the distributions of the variables for data and simulated signal and background events. Two selection cuts are applied to avoid regions far away from the signal where MC does not reproduce the data well: p < 320 MeV for both tracks and Δp < 190 MeV.
Training of the BDT classifier is done with MC samples: 5,000 K_S → πeν events and 50,000 background events. Samples of the same size are used for the test. After training and testing, the classification is run on both MC and data samples. Figure 3 shows the BDT classifier output for data and simulated signal and background events. To suppress the large background contribution from K_S → π⁺π⁻ and φ → K⁺K⁻ events, a cut is applied on the classifier output: BDT > 0.15.
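The selection can be illustrated with a generic gradient-boosted-tree classifier; the snippet below is a stand-in written with scikit-learn (the actual analysis framework is not specified here), using random numbers in place of the simulated signal and background samples and mirroring the working point quoted above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# X_sig, X_bkg: arrays of shape (n_events, n_vars) holding the discriminating
# variables listed above, taken from simulation; random placeholders here.
rng = np.random.default_rng(0)
X_sig = rng.normal(0.5, 1.0, size=(5_000, 5))    # stand-in for MC signal
X_bkg = rng.normal(-0.5, 1.0, size=(50_000, 5))  # stand-in for MC background

X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)

# Classifier output in [0, 1]; events above the working point are kept,
# mirroring the BDT > 0.15 requirement quoted in the text.
score = bdt.predict_proba(X)[:, 1]
print(f"selected fraction: {(score > 0.15).mean():.3f}")
```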
Figure 2. Distributions of the variables used in the multivariate analysis for data and simulated events after preselection. From top left: track momenta (p_1, p_2), angle between the two tracks in the K_S reference system (α_12), angle between K_L and K_S directions (α_SL), two-track invariant mass in the hypothesis of charged pions (m_ππ), Δp = |p_sum| − |p_KS|.
Track pairs in the selected events are eπ for the signal and Kπ, ππ, µπ for the main backgrounds. A selection based on time-of-flight measurements is performed to identify eπ pairs. For each track associated to a cluster, the difference δt_i = t_clu,i − L_i/(c β_i) between the time of flight measured by the calorimeter and the flight time expected along the particle trajectory is computed, where t_clu,i is the time associated to track i, L_i is the length of the track, and the velocity β_i = p_i/√(p_i² + m_i²) is a function of the mass hypothesis for the particle with track i. The times t_clu,i are referred to the trigger and the same T_0 value is assigned to both clusters. To reduce the uncertainty from the determination of T_0, the difference between the two tracks is used to determine the mass assignment. The ππ hypothesis is tested first. Figure 4 shows the δt_ππ = δt_1,π − δt_2,π distribution. A fair agreement is observed between data and simulation, with K_S → πeν and K_S → πµν distributions well separated and a large part of the K⁺K⁻ background isolated in the tails of the distribution. However, the signal is hidden under a large K_S → π⁺π⁻ background, therefore the cut

2.5 ns < |δt_ππ| < 10 ns    (3.6)

is applied. Then, the πe hypothesis is tested by assigning the pion and electron masses to the two tracks, computing δt_πe = δt_1,π − δt_2,e and δt_eπ = δt_1,e − δt_2,π, where the labelling as track-1 and track-2 is chosen at random. Figure 5 shows the two-dimensional (δt_πe, δt_eπ) distribution for data and MC, where signal events populate either band around δt = 0. The mass assignment is based on the comparison of the two hypotheses: if |δt_1,π − δt_2,e| < |δt_1,e − δt_2,π| track-1 is assigned to the pion and track-2 to the electron, otherwise the other solution is taken; the corresponding time difference, δt_e, is the value defined by min[|δt_πe|, |δt_eπ|]. A cut is applied on this variable:

|δt_e| < 1 ns.    (3.7)

The number of events selected by the time-of-flight requirements is 57577 and the composition as predicted by simulation is listed in Table 2. The background comprises K_S → π⁺π⁻, φ → K⁺K⁻ and K_S → πµν, the other contributions being small.
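The time-of-flight logic above reduces to a few lines of code. The sketch below implements δt for a given mass hypothesis and the π/e assignment rule; the numerical inputs are illustrative only.

```python
import numpy as np

C = 29.9792458              # speed of light in cm/ns
M_PI, M_E = 139.57, 0.511   # charged-pion and electron masses (MeV)

def delta_t(t_clu, L, p, m):
    """delta_t = t_clu - L/(c*beta), with beta = p / sqrt(p^2 + m^2)."""
    beta = p / np.hypot(p, m)
    return t_clu - L / (C * beta)

def assign_pi_e(t1, L1, p1, t2, L2, p2):
    """Choose the (pi, e) mass assignment of the two tracks, as in the text."""
    dt_pe = delta_t(t1, L1, p1, M_PI) - delta_t(t2, L2, p2, M_E)   # track-1 = pi
    dt_ep = delta_t(t1, L1, p1, M_E) - delta_t(t2, L2, p2, M_PI)   # track-1 = e
    if abs(dt_pe) < abs(dt_ep):
        return ("pi", "e"), dt_pe
    return ("e", "pi"), dt_ep

# Illustrative track times (ns), lengths (cm) and momenta (MeV/c);
# an event is retained if |delta_t_e| < 1 ns, as in Eq. (3.7).
assignment, dt_e = assign_pi_e(t1=9.5, L1=230.0, p1=190.0,
                               t2=7.1, L2=210.0, p2=110.0)
print(assignment, f"|dt_e| = {abs(dt_e):.2f} ns",
      "kept" if abs(dt_e) < 1.0 else "rejected")
```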
The mass of the charged secondary identified as the electron is evaluated, assuming the undetected particle is a massless neutrino, as m²_e = (E_KS − E_π − |p_miss|)² − |p_e|², with p²_miss = (p_KS − p_π − p_e)², E_KS and p_KS being the energy and momentum reconstructed using the tagging K_L, and p_π, p_e the momenta of the pion and electron tracks, respectively.
A fit to the m²_e distribution with the MC shapes of three components, K_S → πeν, K_S → π⁺π⁻ and the sum of all other backgrounds, allows the number of signal events to be determined. Figure 6 shows the m²_e distribution for data and simulated events before the fit, and the comparison of the fit output with the data. The fit result is reported in Table 3. The number of signal events is N_πeν = 49647 ± 316 with χ²/ndf = 76/96. The K_S → π⁺π⁻ normalisation sample is selected requiring K_L-crash, two opposite-curvature tracks, the vertex as in Eq. (3.2) and 140 < p < 280 MeV for both tracks (Figure 2(a)). A total of N_ππ = (282.314 ± 0.017) × 10⁶ events are selected with an efficiency of 97.4% and a purity of 99.9% as determined by simulation.
Determination of efficiencies
The signal efficiency for a given selection is determined with a K_L → πeν control sample (CS) and evaluated as

ε_πeν = ε_CS × (ε^MC_πeν / ε^MC_CS) ,    (3.8)

where ε_CS is the efficiency measured with the control sample, and ε^MC_πeν, ε^MC_CS are the efficiencies obtained from simulation for the signal and the control sample, respectively. Extensively studied with the KLOE detector [20], K_L → πeν decays are kinematically identical to the signal, the only difference being the much longer decay path. Tagging is done with K_S → π⁺π⁻ decays selected requiring two opposite-curvature tracks and the vertex defined in Eq. (3.2), with the additional requirement |m_ππ − m_K0| < 15 MeV to increase the purity, ensuring the angular and momentum resolutions are similar to the K_L-crash tagging for the signal. The radial distance of the K_L vertex is required to be smaller than 5 cm, to match the signal selection, but greater than 1 cm to minimise the ambiguity in identifying K_L and K_S vertices. Weighting the K_L vertex position to emulate the K_S vertex position has negligible effect on the result.
The control sample composition is K_L → πeν (B = 0.405), K_L → πµν (B = 0.270) and K_L → π⁺π⁻π⁰ (B = 0.125) decays, while most K_L → π⁰π⁰π⁰ decays are rejected by requiring two tracks and the vertex. The distribution of the missing mass squared m²_miss, computed with respect to the two tracks connected to the K_L vertex in the charged-pion mass hypothesis, shows a narrow isolated peak at the π⁰ mass. K_L → π⁺π⁻π⁰ decays are efficiently rejected with the cut m²_miss < 15000 MeV². Two control samples are selected, based on the two-step analysis strategy using largely uncorrelated variables presented in Section 3.2: the first, CS_kinBDT, applying a cut on the TOF variables to evaluate the efficiency of the selection based on the kinematic variables and the BDT classifier; the second, CS_TCATOF, applying a cut on kinematic variables to evaluate the TCA and TOF selection efficiencies.
The CS_kinBDT control sample is selected applying a cut on the two-dimensional (δt_πe, δt_eπ) distribution, rejecting most of the K_L → πµν events. The sample contains 0.44 × 10⁶ events with a 97% purity as determined from simulation. The Monte Carlo BDT distributions for the signal and control sample are compared in Figure 7. The CS_TCATOF control sample is selected applying a cut on the (m_ππ, m²_miss) distribution. The sample contains 1.3 × 10⁶ events with a 95% purity as determined from simulation. In the K_S → πeν analysis, the T_0 is determined by the first cluster in time, associated with one of the tracks of the K_S decay. Then, for the control sample the first cluster in time is required to be associated with the K_L decay, in order not to bias the TOF variables (Figure 7). For the K_S → π⁺π⁻ normalisation sample, the efficiency of the momentum selection 140 < p < 280 MeV is determined using preselected data. The cut on the vertex transverse position in Eq. (3.2) is varied in 1 cm steps from ρ^max_vtx = 1 cm to ρ^max_vtx = 4 cm, based on the observation that ρ_vtx and the track momenta are the least correlated variables, the correlation coefficient being 13%. Using Eq. (3.8) and extrapolating to ρ^max_vtx = 5 cm, the efficiency is ε^data_ππ = (96.569 ± 0.004)%. Alternatively, the efficiency is evaluated using the K_S → π⁺π⁻ data sample (with ρ^max_vtx = 5 cm and ε^MC_ππ = ε^MC_pres), giving ε^data_ππ = (96.657 ± 0.002)%. The second value, free from the bias of variable correlations, is used for the efficiency and the difference between the two values is taken as systematic uncertainty. The number of K_S → π⁺π⁻ events corrected for the efficiency is N_ππ/ε_ππ = (292.08 ± 0.27) × 10⁶. The ratio R_ε in Eq. (3.1) includes several effects depending on the global properties of the event: trigger, on-line filter, event classification, T_0 determination, K_L-crash and K_S identification. In Table 5 the various contributions to R_ε evaluated with simulation are listed with statistical uncertainties only; the resulting value is R_ε = 1.1882 ± 0.0017. Systematic uncertainties are detailed in Section 4.
Table 5. Ratios of MC efficiencies common to the K_S → πeν and K_S → π⁺π⁻ selections with statistical uncertainties. The error on R_ε is calculated as the quadratic sum of the errors of the single ratios.
Systematic uncertainties
The signal count is affected by three main systematic uncertainties: BDT selection, TOF selection, and the m 2 e fit. The distributions of the BDT classifier output for the data and simulated signal and control sample events are shown in Figures 3 and 7. The resolution of the BDT variable predicted by simulation comparing the reconstructed events with those at generation level is σ BDT = 0.005. The analysis is repeated varying the BDT cut in the range 0.135-0.17. The ratio of the number of signal events determined with the m 2 e fit and the efficiency evaluated with Eq. (3.8) is found to be stable and the half-width of the band defined by the maximum and minimum values, ±0.27%, is taken as relative systematic uncertainty.
The number of reconstructed clusters can be different for the signal (K L -crash,πeν) and control sample (ππ,πeν), thus the TCA efficiency calculation is repeated by weighting the events of the control sample by the number of track-associated clusters. The difference, less than 0.1%, is taken as relative systematic uncertainty for the TCA efficiency.
The main source of uncertainty in the TOF selection is the lower cut on |δt ππ | in Eq. (3.6) because the signal and background distributions in Figure 4 are steep and with opposite slopes. The resolution is the combination of the time resolution of the calorimeter, the tracking resolution of the drift chamber and the track-to-cluster association and is determined by the width of the δt e distribution.
The comparison of the δt_e distributions for the signal and the K_L → πeν control sample is shown in Figure 8; they are fitted with a Gaussian plus a 2nd-degree polynomial, obtaining σ = 0.44 ± 0.02 ns in both cases. The analysis is repeated varying the |δt_ππ| lower cut in the range 2.0-3.0 ns; the half-width of the band gives a relative systematic uncertainty of ±0.28%. With the same procedure the cut on |δt_e| in Eq. (3.7) is varied in the range 0.8-1.2 ns and the half-width of the band, ±0.12%, is taken as relative systematic uncertainty.
Possible effects in the evaluation of the TCA and TOF efficiencies due to a detector response different for the π⁺e⁻ν̄ and π⁻e⁺ν final states are negligible. The fit to the m²_e distribution in Figure 6 is repeated varying the range and the bin size. The fit is also done using two separate components for K_S → πµν and φ → K⁺K⁻; the χ² is good but the statistical error is slightly increased. Half of the difference between the maximum and minimum results of the different fits, ±0.15%, is taken as relative systematic uncertainty. The systematic uncertainties are listed in Table 6. The dependence of R_ε on systematic effects has been studied in previous analyses for different K_S decays selected with the K_L-crash tagging method: K_S → π⁺π⁻ and K_S → π⁰π⁰ [13], and K_S → πeν [19]. The systematic uncertainties are evaluated by a comparison of data with simulation; the deviation from one of the data/MC ratio is taken as systematic uncertainty.
Trigger -Two triggers are used for recording the events, the calorimeter trigger and the drift chamber trigger. The validation of the MC relative efficiency is derived from the comparison of the single-trigger and coincidence rates with the data. The data over MC ratio is 0.999 with negligible error.
On-line filter - The on-line filter rejects events triggered by beam background, detector noise, and events surviving the cosmic-ray veto. A fraction of non-filtered events, prescaled by a factor of 20, allows the MC efficiency of the filter to be validated. The data over MC ratio does not deviate from one by more than 0.1%.
Event classification -The event classification produces different streams for the analyses. The K L K S stream used in this analysis selects events based on the properties of K S and K L decays. In more than 99% of the cases the events are selected based on the K S decay topology and the K L -crash signature and differences between MC and data are accounted for in the systematic uncertainties described below for the K L -crash and K S vertex reconstruction.
T 0 -The trigger time is synchronised with the r.f. signal and the event T 0 is redefined after event reconstruction. The systematic uncertainty is evaluated analysing the data and MC distributions of T 0 for the decays with the most different timing properties: K S → π + π − and K S → π 0 π 0 [13]. The data over MC ratio does not deviate from one by more than 0.1%.
K L -crash and β * selection. -The systematic uncertainty is evaluated comparing data and simulated events tagged by K S → π + π − and K S → π 0 π 0 decays which have different timing and topology characteristics. The data over MC ratio is 1.001 with negligible error.
K_S vertex reconstruction - The systematic uncertainty of the requirement of two tracks forming a vertex in the cylinder defined by Eq. (3.2) is evaluated for signal and normalisation using a control sample of φ → π⁺π⁰π⁻ events, selected requiring one track with its point of closest approach to the beam line inside the cylinder and a well-reconstructed π⁰. Energy-momentum conservation determines the momentum of the second track. The momentum distribution of tracks in the control sample covers a range wider than both the signal and normalisation samples. The efficiency for reconstructing the second track and the vertex is computed for data and simulation, and the ratio r(p_L, p_T) = Data/MC is parameterised as a function of the longitudinal and transverse momenta p_L and p_T. The ratios relative to the signal and normalisation events, r_πeν and r_π+π−, are obtained as the convolution of r(p_L, p_T) with the respective momentum distributions after preselection. The ratio r_π+π−/r_πeν deviates from one by 0.45% with an uncertainty of 0.2% due to the knowledge of the parameters of the r(p_L, p_T) function.
The total systematic uncertainty on R_ε is estimated by combining the deviations from one of the data over MC ratios and amounts to 0.48%. Including the systematic uncertainties, the factors in Eq. (3.1) are: ε_π+π− = (96.657 ± 0.088)%, ε_πeν = (19.38 ± 0.10)%, and R_ε = 1.1882 ± 0.0059. The previous KLOE result, based on an independent data sample corresponding to an integrated luminosity of 0.41 fb⁻¹, is R = (1.019 ± 0.011_stat ± 0.007_syst) × 10⁻³ [10]. Correlations exist between the two measurements in the determination of efficiencies for the event preselection and time-of-flight analysis, the correlations in the determination of R_ε and in the fit being negligible. The correlation coefficient is 12%. The combination of the two measurements gives

R = Γ(K_S → πeν)/Γ(K_S → π⁺π⁻) = (1.0338 ± 0.0054_stat ± 0.0064_syst) × 10⁻³.
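As a numerical cross-check of the factors quoted above, the sketch below reproduces the central value of Eq. (3.1) and converts the combined ratio into a branching fraction; the value B(K_S → π⁺π⁻) ≈ 0.692 is the KLOE measurement of Ref. [13], quoted here from memory and only for illustration.

```python
from math import sqrt

# Factors entering Eq. (3.1), as quoted in the text (this measurement).
N_pienu, N_pipi = 49_647, 282.314e6
eff_pienu, eff_pipi = 0.1938, 0.96657
R_eps = 1.1882

ratio = (N_pienu / N_pipi) * (eff_pipi / eff_pienu) * R_eps
print(f"Gamma(pi e nu)/Gamma(pi+ pi-) = {ratio:.4e}")   # ~1.042e-3

# Branching fraction from the combined ratio, normalising to
# B(KS -> pi+ pi-) ~ 0.6920 (approximate KLOE value, Ref. [13]).
ratio_comb = 1.0338e-3
err_comb = sqrt(0.0054**2 + 0.0064**2) * 1e-3            # stat (+) syst
B_pipi, err_B_pipi = 0.6920, 0.0005
B_pienu = ratio_comb * B_pipi
err_B = B_pienu * sqrt((err_comb / ratio_comb)**2 + (err_B_pipi / B_pipi)**2)
print(f"B(KS -> pi e nu) = {B_pienu:.3e} +- {err_B:.1e}")  # ~7.15e-4
```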
The value of |V_us| is related to the K_S semileptonic branching fraction by the standard Kℓ3 decay-rate formula, where I_K is the phase-space integral, which depends on the measured semileptonic form factors, S_EW is the short-distance electroweak correction, δ^Ke_EM is the mode-dependent long-distance radiative correction, and f_+(0) is the form factor at zero momentum transfer for the K → π transition. Using the values S_EW = 1.0232 ± 0.0003 [21], I^e_K = 0.15470 ± 0.00015 and δ^Ke_EM = (1.16 ± 0.03) × 10⁻² from Ref. [5], and the world-average values for the K_S mass and lifetime [22], we derive f_+(0)|V_us| = 0.2170 ± 0.0009.
Conclusion
A measurement of the ratio R = Γ(K_S → πeν)/Γ(K_S → π⁺π⁻) is presented, based on data collected with the KLOE experiment at the DAΦNE φ-factory corresponding to an integrated luminosity of 1.63 fb⁻¹. The φ → K_L K_S decays are exploited to select samples of pure and quasi-monochromatic K_S mesons and data control samples of K_L → πeν decays. The K_S decays are tagged by the detection of a K_L interaction in the detector. The K_S → πeν events are selected by a boosted decision tree built with kinematic variables and by measurements of time-of-flight. The efficiencies for detecting the K_S → πeν decays are derived from K_L → πeν data control samples. A fit to the m²_e distribution of the identified electron track finds 49647 ± 316 signal events. Normalising to K_S → π⁺π⁻ decay events recorded in the same dataset, the result is R = (1.0421 ± 0.0066_stat ± 0.0075_syst) × 10⁻³. The combination with our previous measurement gives R = (1.0338 ± 0.0054_stat ± 0.0064_syst) × 10⁻³. From this value we derive the branching fraction B(K_S → πeν) = (7.153 ± 0.037_stat ± 0.044_syst) × 10⁻⁴ and the value of |V_us| times the form factor at zero momentum transfer, f_+(0)|V_us| = 0.2170 ± 0.0009.
"Physics"
] |
Quantitation of cancer treatment response by 2-[18F]FDG PET/CT: multi-center assessment of measurement variability using AUTO-PERCIST™
The aim of this study was to assess the reader variability in quantitatively assessing pre- and post-treatment 2-deoxy-2-[18F]fluoro-d-glucose positron emission tomography/computed tomography ([18F]FDG PET/CT) scans in a defined set of images of cancer patients using the same semi-automated analytical software (Auto-PERCIST™), which identifies tumor peak standard uptake value corrected for lean body mass (SULpeak) to determine [18F]FDG PET quantitative parameters. Paired pre- and post-treatment [18F]FDG PET/CT images from 30 oncologic patients and Auto-PERCIST™ semi-automated software were distributed to 13 readers across US and international sites. One reader was aware of the relevant medical history of the patients (readreference), whereas the 12 other readers were blinded to history but had access to the correlative images. Auto-PERCIST™ was set up to first automatically identify the liver and compute the threshold for tumor measurability (1.5 × liver mean) + (2 × liver standard deviation [SD]) and then detect all sites with SULpeak greater than the threshold. Next, the readers selected sites they believed to represent tumor lesions. The main performance metric assessed was the percent change in the SULpeak (%ΔSULpeak) of the hottest tumor identified on the baseline and follow-up images. The intra-class correlation coefficient (ICC) for the %ΔSULpeak of the hottest tumor was 0.87 (95%CI: [0.78, 0.92]) when all reads were included (n = 297). Including only the measurements that selected the same target tumor as the readreference (n = 224), the ICC for %ΔSULpeak was 1.00 (95%CI: [1.00, 1.00]). The Krippendorff alpha coefficient for response (complete or partial metabolic response, versus stable or progressive metabolic disease on PET Response Criteria in Solid Tumors 1.0) was 0.91 for all reads (n = 380) and 1.00 including for reads with the same target tumor selection (n = 270). Quantitative tumor [18F]FDG SULpeak changes measured across multiple global sites and readers utilizing Auto-PERCIST™ show very high correlation. Harmonization of methods to single software, Auto-PERCIST™, resulted in virtually identical extraction of quantitative tumor response data from [18F]FDG PET images when the readers select the same target tumor.
Introduction
Although [18F]FDG PET is intrinsically a quantitative imaging technique, many PET assessments of cancer response are qualitative, as, for example, in lymphoma, where quantitative PET data are converted into a five-point qualitative scale which is practical and highly useful [1,2]. Quantitative PET assessments of response have been deployed in many research imaging studies, especially in examining early treatment response-related changes in metabolism, including in breast cancer, where these changes can predict much later pathological outcomes [3,4]. The PET Response Criteria in Solid Tumors 1.0 (PERCIST 1.0) were proposed in 2009 as a method to standardize the assessment of tumor response on [18F]FDG PET and emphasized use of the peak standard uptake value corrected for lean body mass (SUL_peak) in contrast to the maximum standardized uptake value (SUV_max) [5,6]. The SUV_max is reasonably easy to determine with many forms of software, while the SUL_peak is more challenging to measure [7]. Thus, despite its attractiveness, quantitative PET utilizing PERCIST is not routinely performed for assessing response to therapy in patients with cancer in the clinic or many clinical trial settings, contrary to the routinely utilized Response Evaluation Criteria in Solid Tumors for assessment of anatomical imaging. One way to expand the use of quantitative [18F]FDG PET/CT in clinical trials and clinical practice is to reduce reader variability of SUV measurements and make the measurements rapid and automated. In a previous multi-center, multi-reader study we conducted, multiple sites assessed the same paired pre- and post-treatment [18F]FDG PET/CT images in cancer patients. The intra-class correlation coefficient (ICC) of percent change in SUV_max was 0.89 (95% confidence interval (CI): [0.81, 0.94]) across multiple performance sites using a variety of analytical software tools. The ICC for the SUL_peak was lower at 0.70 (95% CI: [0.54, 0.80]). SUL_peak is, in principle, the more statistically sound of the PET parameters, and it is the suggested metric in PERCIST [7]. However, if there is considerable variability among sites in how SUL_peak is generated and measured, then the PERCIST metric potentially may introduce variability into assessments of treatment response, as opposed to reducing variability [8].
The aim of the present study was to determine whether the utilization of Auto-PERCIST™, a semi-automated software system for the quantitative assessment of [18F]FDG PET images, could lower the reader variability in quantitatively assessing pre- and post-treatment [18F]FDG PET/CT studies for response in a multi-center, multi-reader, multi-national study assessing identical images.
Materials and methods
Pre- and post-treatment [18F]FDG PET/CT images of 30 oncologic patients, selected from a group of tumor types having representative patterns of FDG avidity, contained a mix of single and multiple tumors on the pre-treatment scan (1 tumor, n = 6; > 1 but < 10 tumors, n = 19; ≥ 10 tumors, n = 5), and a mix of the four major response categories using PERCIST (complete metabolic response, n = 6; partial metabolic response, n = 11; stable metabolic disease, n = 4; and progressive metabolic disease, n = 9).
Sites with and without National Cancer Institute Quantitative Imaging Network affiliation that did not participate in the previous study with the same data set were recruited by e-mail and conference calls. The dataset was based on a previous study of reader variability [9].
Thirty anonymized cases of pre-and post-treatment [ 18 F]FDG PET/CT studies (total 60 studies) were distributed along with directions for installing and utilizing the Auto-PERCIST ™ software. Approval from the Johns Hopkins Institutional Review Board was obtained, and the need for patient informed consent was waived for this study of anonymized image data.
Measurement
Individual measurements from coupled pre- and post-treatment [18F]FDG PET/CT images from one patient were counted as a read. The coupled pre- and post-treatment measurements for all 30 cases from a single reader were counted as a set of reads. One reader from the central site (reader 1) had full knowledge of the primary tumors, treatment histories and subsequent follow-up results, but all other readers had no knowledge of the patients' medical histories, as the reader is often intentionally blinded in the setting of multi-center trials. For statistical purposes, the measurements by reader 1 were considered as the reference standard for comparison (read_reference).
Each reader determined which tumor to measure. The Auto-PERCIST ™ loads the PET images and automatically obtains liver measurements from a 3-cm-diameter sphere in the right side of the liver to compute the threshold for lesion detection. The default setting is 1.5 × liver mean + 2 standard deviations (SD) at baseline to ensure the decline in [ 18 F]FDG uptake is less likely due to chance and to minimize overestimation of response or progression. For follow-up images, the default setting is lower at 1.0 × liver mean + 2SD, to allow detection of lesions with lower SULpeak. If a lesion was perceptible visually but not detected using the default threshold settings, the reader had the choice to manually lower the threshold for detection. The Auto-PERCIST ™ would detect all sites with SUL peak higher than the threshold (Fig. 1). It was up to the readers to determine whether the detected sites were true tumor lesions or not. The reader could also separate a detected focus of [ 18 F]FDG uptake into separate smaller lesions when needed-to exclude adjacent physiologic [ 18 F]FDG uptake or break down a large conglomeration of tumors into smaller separate lesions. The reader could also add smaller [ 18 F]FDG uptake lesions to make them a single lesion if the reader decided the separate [ 18 F]FDG uptakes were parts of a larger single lesion. The readers were instructed to select up to 5 of the hottest tumors for cases with multiple lesions. The readers could view the PET/CT images on any reading software they preferred, but the measurements came only from the Auto-PERCIST ™ . The measurements from Auto-PERCIST ™ included SUL peak , maximum and mean SUL, number of counts, geometric mean, exposure, kurtosis, skewness and metabolic volume. After the readers selected and quantified the lesions, the measurements were saved as text files and sent for central compilation and analysis to the Image Response Assessment Core at Johns Hopkins University.
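The detection logic described above can be summarised in a few lines; the sketch below is ours (function and variable names are illustrative, not the Auto-PERCIST™ implementation) and uses placeholder numbers.

```python
import numpy as np

def detection_threshold(liver_sul, baseline=True):
    """Measurability threshold: 1.5*mean + 2*SD of the liver SUL at baseline,
    1.0*mean + 2*SD on follow-up (the default settings described above)."""
    factor = 1.5 if baseline else 1.0
    return factor * np.mean(liver_sul) + 2.0 * np.std(liver_sul)

# liver_sul: SUL values sampled from the 3-cm liver sphere (illustrative).
liver_sul = np.random.default_rng(1).normal(1.6, 0.2, size=500)

# candidate_lesions: SULpeak of every detected focus in the image (illustrative).
candidate_lesions = np.array([7.8, 3.1, 2.4, 1.9])

thr = detection_threshold(liver_sul, baseline=True)
detected = candidate_lesions[candidate_lesions > thr]
print(f"threshold = {thr:.2f}, detected SULpeak values: {detected}")
```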
Statistical analysis
The primary study metric was the percentage change in SUL peak (%ΔSUL peak ) from baseline to follow-up. Percentage change was defined as [(follow-up measurement − baseline measurement)/(baseline measurement)] × 100. For assessment of up to five lesions, the percentage change was computed from the sum of the lesions. Treating both case and site as random effects, a linear random-effects model was fit via the restricted maximum likelihood estimation method, which estimated variance components of the random effects in the model. As a measure of inter-rater agreement, the intraclass correlation coefficient (ICC) was computed using the variance components of the random effects. The ICC was computed as [inter-subject variance/(inter-subject variance + intra-subject variance + residual variance)]. The bias-corrected and accelerated bootstrap method was implemented with 1,000 bootstrap replicates to construct the 95% confidence interval of the computed ICC. The sampling unit was a read.
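The two central quantities of the analysis are compact enough to state in code. In the sketch below the variance components are assumed to come from the REML fit of the random-effects model described above (e.g. from a mixed-effects package), which is not reproduced here.

```python
def percent_change(baseline, followup):
    """%ΔSULpeak = (follow-up - baseline) / baseline * 100."""
    return (followup - baseline) / baseline * 100.0

def icc_from_variance_components(var_case, var_site, var_residual):
    """ICC = inter-subject variance / total variance, as defined above;
    the case variance is the inter-subject component, while the site and
    residual variances play the role of intra-subject variability."""
    return var_case / (var_case + var_site + var_residual)

print(percent_change(6.0, 3.0))                       # -50.0
print(icc_from_variance_components(4.0, 0.3, 0.2))    # ~0.89
```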
To assess agreement between the reference reader (read reference ) and another reader, the ICC was computed for each pair of the reference reader and 12 other readers. The mean of these ICCs and its range (minimum, maximum) were reported.
Krippendorff alpha reliability coefficient was computed as a measure of agreement between multiple readers for response outcome, which was classified into four ordered major response categories using PERCIST 1.0 as: complete metabolic response (CMR), partial metabolic response (PMR), stable metabolic disease (SMD) and progressive metabolic disease (PMD). The measurements were classified: PMD for SUL peak increase ≥ 30% (and 0.8 units) or new lesions; SMD for SUL peak increase or decrease < 30% (or 0.8 units); PMR for SUL peak decrease ≥ 30% (and 0.8 units); and CMR for no perceptible tumor lesion. Additionally, Krippendorff coefficient was computed with the response categories being dichotomized into two levels: clinical benefit (CMR/PMR/ SMD) and no benefit (PMD) or response (CMR/PMR) and no response (SMD/PMD). Krippendorff suggests 0.8 as a threshold for satisfactory reliability, but if tentative conclusions are acceptable, 0.667 is the lowest conceivable threshold [10].
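A hedged sketch of the four-level classification described above is given below; note that, as discussed later, CMR is ultimately a reader judgement, so the zero-uptake shortcut used here is a simplification.

```python
def percist_category(sul_base, sul_follow, new_lesions=False):
    """Four-level PERCIST 1.0 response as described above.

    CMR handling is simplified: here it is triggered by sul_follow == 0
    (no perceptible lesion), whereas in practice CMR is a reader judgement.
    """
    if new_lesions:
        return "PMD"
    if sul_follow == 0:
        return "CMR"
    delta = sul_follow - sul_base
    pct = delta / sul_base * 100.0
    if pct >= 30.0 and delta >= 0.8:
        return "PMD"
    if pct <= -30.0 and -delta >= 0.8:
        return "PMR"
    return "SMD"

assert percist_category(6.0, 3.0) == "PMR"     # -50% and a 3.0-unit drop
assert percist_category(4.0, 5.6) == "PMD"     # +40% and a 1.6-unit rise
assert percist_category(4.0, 4.5) == "SMD"     # +12.5%
assert percist_category(4.0, 3.9, new_lesions=True) == "PMD"
```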
All reads
Reads were received from 13 different sites from January to September of 2018. A single reader (nuclear medicine physician/radiologist/radiological scientist) at each site measured all 30 cases. Measurements were treated as missing when a reader did not submit data. Among a total of 390 possible reads by 13 readers, 347 baseline reads and 329 follow-up reads were reported, of which 297 reads were complete baseline and follow-up pairs. Such reads were used to compute the ICC with all readers and agreement with read_reference for the baseline, follow-up and percentage change in SUL_peak, respectively. The ICC for %ΔSUL_peak was 0.87 (95% CI: [0.78, 0.92]), and agreement with read_reference was 0.88 (range: [0.61, 1.00]). The ICC and agreement with read_reference of other metrics are given in Table 1. The overall within-subject coefficient of variation (COV) was also computed, and the Bland-Altman plot for all reads is shown in Fig. 2.
Reads with same target tumor
Among the 360 possible reads from the 12 readers, the readers selected a different lesion compared to the read reference in 46 reads at baseline, 43 reads at followup and 29 reads at both baseline and follow-up. The 241 reads agreeing on target selection with the read reference were used to compute the ICCs with all readers and agreement with read reference . The ICC for %ΔSUL peak among all readers was 1.00 (95% CI: [1.00, 1.00]), and agreement with read reference was 1.00 (range: [1.00, 1.00]). The ICC and agreement with read reference of other metrics are given in Table 1. The overall within-subject COV for %ΔSULpeak change was computed as 0.007. The Bland-Altman plot is shown in Fig. 3.
Sum of up to 5 SUL peak
In addition to the SUL peak measurement of a single lesion, the sum of SUL peak measurements of up to 5 of the selected lesions was used to compute the ICC and agreement with read reference for all reads and reads with the same target lesion (Table 1). Even when the same lesions were selected, the ICCs and agreement with read reference were not a perfect 1.00 due to (a) differences in the manual thresholds used for lesion detection and (b) utilization of the 'erosion option' for breaking up [ 18 F]FDG uptake volumes by the individual readers.
Inter-rater reliability of readers on responses
Among the 390 reads for all reads, 380 reads reported response categories. Among the 271 reads agreeing on target selection with the read reference , 270 reported response categories. The Krippendorff alpha coefficient of 13 readers for binary response measure (response (CMR/ PMR) versus no-response (SMD/PMD)) was 0.91 for all reads and 1.00 for only the reads with the same target lesion selection. When assessing clinical benefit (SMD/ PMR/CMR representing clinical benefit versus PMD representing no benefit), the Krippendorff alpha coefficient was 0.81 for all reads and 1.00 for only the reads with the same target selection. With the four response categories treated in an ordinal scale, the Krippendorff alpha coefficient was 0.86 for all reads and 1.00 for only the reads with the same target selection ( Table 2).
Discussion
Variability in measurements across readers and sites is an often cited hurdle to broader utilization of quantitative [ 18 F]FDG PET/CT for response assessment of cancer treatment [11]. Test-retest studies have demonstrated high repeatability of [ 18 F]FDG and other radiopharmaceutical PET parameters [12][13][14][15]. The variance of SUVs could be greater in clinical practice compared to ideal study setting [16]. In the clinical setting, measurement of SUV max was demonstrated to have high agreement in our previous paper, while the statistically more robust SUL peak showed suboptimal agreement [9]. We wanted to know whether using uniform software could eliminate the variability associated with the computation differences for SUL peak across multiple vendors and software. The localization of the liver, SUL measurements from the liver, computation of a threshold for lesion detection and identification of candidate lesions were all performed automatically on Auto-PERCIST ™ . Following detection of all sites with SUL peak higher than the set threshold, various [ 18 F]FDG uptake intensity or pattern measurements and textural features for each of the detected sites were also performed automatically. When the readers chose the same single target tumor, the measurements were identical, as could be expected. For up to five hottest lesions measurements, the agreement was near perfect. However, agreement was not a perfect 1.00 even when the readers chose the same tumors because the readers had the option to break down a single volume of [ 18 F]FDG uptake to separate parts, or add up two or more [ 18 F]FDG uptake sites to a single volume as they determined appropriate. Some readers chose to break down a lesion detected on Auto-PERCIST ™ to avoid including physiologic [ 18 F]FDG uptake, or to separate a conglomeration of multiple tumors lesions. And some readers intentionally chose a detection threshold lower than the default software setting to include lesions with relatively low [ 18 F]FDG uptake for assessment on the follow-up PET images. The agreement was lower for follow-up images for the all-reads assessment. The readers disagreed more often on what was tumor and what was physiologic or inflammatory response on the follow-up images.
A previous paper that showed excellent correlation between two different vendor software tools for SUL_peak had the tumor sites predefined by the readers to exclude interpretive error [13]. Determining which [18F]FDG uptake site is true tumor remained a challenge even for experienced readers. In the outlier case in Fig. 2 showing an average difference greater than 100%, some readers considered an intense [18F]FDG uptake in the colon on the follow-up image to be a new tumor lesion, while the read_reference considered it physiologic in nature. Of the 360 non-reference baseline reads (including missing measurements) in this study, only 241 reads (67%) chose the same lesion and went on to make the same measurements as the read_reference at both baseline and follow-up. Among the 30 cases, the target lesion (hottest tumor) on the post-therapy scan was different from the target lesion noted on the pre-therapy scan in 11 cases. For example, in one case, the target lesion was in a mediastinal node on the pre-therapy scan, and then a lung lesion became the hottest tumor on the post-therapy scan [17,18]. Rather than relying solely on the reading experience of the local site, discussions, consensus meetings and better training methods are necessary to implement [18F]FDG PET/CT to its full potential. It almost certainly is the case that the availability of more relevant patient history would result in better accuracy and consistency in tumor detection. While PERCIST 1.0 is quantitative, the category of CMR is dependent on the reader's judgment, and software quantification alone could not determine the response to be CMR. There were six cases considered to have reached CMR by the read_reference. The 12 other readers categorized the case correctly as CMR in 44 reads out of 72 (12 readers × 6 cases); PMR was designated in 21 reads, SMD in 5 reads and PMD in 1 read, with 1 missing read. Thus, in addition to selection of different target tumors from the read_reference, the reader's decision between CMR and PMR leaves room for variability in response categorization, even if quantitation produces identical results. Detailed definition of, or consensus on, findings compatible with the CMR category, or addition of a quantitative threshold to clarify the CMR category, is necessary for use in trials and in the clinical setting. A lesion could be considered present, and thus not CMR, even with very low SUL_peak, for example in the lungs, or a lesion could be considered resolved, and thus CMR, even with relatively high SUL_peak, for example in the tonsils.

Fig. 3. Bland-Altman plot of the percentage change of tumor [18F]FDG uptake from baseline to follow-up. The plot is for the percentage changes of SUL_peak (%ΔSUL_peak) for only the reads with the same lesion selected as the read_reference. Each dot represents a case (30 cases in total). The x-axis represents the average mean %ΔSUL_peak measurement by all readers. The y-axis represents the average difference between the 12 readers and the reference reader (read_reference); the y-axis unit is one-tenth of one percent. The solid line represents the average bias, and the dashed lines represent the corresponding bias ± 2 standard deviations (SD).

Table 2. Inter-rater reliability of readers on response assessment. Reads with missing response were excluded (10 for all reads and 1 for reads with the same target).
The threshold computed from liver measurements (liver SUL mean + 2SD) was viewed by the readers as too high a cutoff for CMR in this study as could be inferred by how the readers manually lowered the threshold on the follow-up images.
Revealing a potential limitation of the software, and of the PERCIST criteria, there was a small tumor with clearly perceptible [18F]FDG uptake visually which was not detectable by Auto-PERCIST™ because its volume was below the 1 cubic centimeter SUL_peak sphere defined by PERCIST (Fig. 4). A more mundane limitation of applying PERCIST is the need to measure the patient's height. That many referring physicians and radiologists are not familiar with the SUL_peak parameter is another limitation to overcome. When there are multiple lesions showing intense [18F]FDG uptake, the lesion with the worst response may not be the target lesion, and PERCIST needs to specify how to address such poorly behaving lesions when categorizing the overall response.
Auto-PERCIST™ has the ability to automatically detect potentially new lesions for co-registered studies based on the location of the classified lesions. Auto-PERCIST™ also computed additional PET parameters representing tumor features, such as metabolic tumor volume, geometric mean, exposure, kurtosis and skewness, which have been reported as prognostic markers and diagnostic tools [19][20][21][22]. Discordance among readers was minimal for the additional PET parameters, and any variance arose when the reader manually changed the tumor boundary. Even with the addition of several PET parameters, the measurement took only seconds and, at the longest, a few minutes for cases with many lesions. In addition to reducing variability in measurement, the software reduced the measurement time radically. Auto-PERCIST™ may become adjunct reading software in the way that myocardial perfusion and metabolism studies utilize cardiac image analysis software. Auto-PERCIST™ is available to academic researchers who register their interest with the Johns Hopkins Technology Transfer office.
Conclusion
Harmonization of methods to a single software package, Auto-PERCIST™, resulted in virtually identical extraction of quantitative data, including the SUL_peak, when the readers selected the same target tumor, and should promote greater use of [18F]FDG PET/CT for response assessment in cancer treatment. Nonetheless, the findings show that caution remains in order, as lesion selection still depends on qualitative assessments of whether a lesion is tumor or physiological uptake.

Fig. 4. (a) PET maximum intensity projection (MIP) image of a patient with a right axillary node metastasis at baseline with SUL_peak of 1.62 and a tumor volume of less than 1.00 cc. Though visually perceptible, Auto-PERCIST™ failed to detect the lesion due to its small size. (b) On the follow-up MIP image, the number of metastatic nodes and the [18F]FDG uptake intensity are increased, with SUL_peak of 2.84, allowing detection by Auto-PERCIST™.
"Medicine",
"Engineering"
] |
First principles study of M2InC (M = Zr, Hf and Ta) MAX phases: The effect of M atomic species
We have studied the physical properties of the M2InC (M = Zr, Hf and Ta) MAX-phase ternary carbides using density functional theory (DFT). The structural, elastic and electronic properties are revisited and found to be in good agreement with recently reported results. The charge density distribution, Fermi surface features, Vickers hardness, dynamical stability, thermodynamic and optical properties have been investigated for the first time. The calculated single-crystal elastic constants and phonon dispersion curves endorse the mechanical and dynamical stability of all the compounds under study. The single-crystal elastic constants Cij and the polycrystalline elastic moduli are found to increase with increasing atomic number of the M species (M = Zr, Hf and Ta). The values of the Pugh ratio and Poisson ratio reveal the brittleness of the compounds under study, associated with strong directional covalent bonding mixed with an ionic contribution. Overlap of the conduction and valence bands at the Fermi level indicates the metallic nature of the M2InC (M = Zr, Hf and Ta) MAX phases. Low values of the Vickers hardness indicate the softness and easy machinability of the materials. The thermodynamic properties, such as the free energy, enthalpy, entropy, specific heat capacity and Debye temperature, are evaluated using the phonon dispersion curves, and a good correspondence with the M atomic species is found. Technologically important optical properties, e.g., the dielectric functions, refractive index, photoconductivity, absorption coefficient, loss function and reflectivity, are calculated and discussed in detail in this study.
The ternary carbide Zr2InC (ZIC) has been synthesized, and its oxidation behavior and structural parameters have been reported by Gupta et al. [24]. Manoun et al. [25] have also synthesized ZIC and measured its compression behavior under pressure. It is reported that ZIC oxidizes readily to form In2O3 and a transition metal oxide, and that it cannot be used for extended periods in air.
Electrical and thermal properties, such as the heat capacities, thermal expansion coefficients, and thermal and electrical conductivities, of the ternary compound Hf2InC (HIC) have also been studied [26]. Medkour et al. [27] have investigated the structural and electronic properties of ZIC and HIC. The structural and elastic properties of ZIC and HIC have been studied by He et al. [28].
The elastic properties of ZIC, HIC and Ta2InC (TIC) have also been investigated by Bouhemadou [29].
In order to recommend a compound for technological applications, detailed theoretical investigations of the physical properties of the material are necessary. The study of the dynamical stability of materials is important for practical applications under extreme pressure and temperature conditions. Moreover, the thermodynamic properties provide important supplementary information regarding the behavior of materials under high pressures and temperatures, which is the basis of many industrial applications [30]. The optical properties are directly related to the electronic properties of materials and describe the electronic response of the materials to radiation. To select a material for optoelectronic devices, information about the absorption coefficient and refractive index of the material is necessary [31]. Furthermore, the reflectivity of MAX phases can be used to assess the suitability of materials as coatings to reduce solar heating [32]. Therefore, a study of these physical attributes of the M2InC (M = Zr, Hf and Ta) MAX phases is desirable from both research and application points of view, and we are motivated to study the dynamical stability, thermodynamic and optical properties of M2InC (M = Zr, Hf and Ta) in detail for the first time. Moreover, the structural, elastic and electronic properties are revisited with some new additional information on charge density mapping, Fermi surface topology, Mulliken analysis, and Vickers hardness. The results are discussed on the basis of the electronic configuration of the M species (M = Zr, Hf and Ta).
Computational methodology
The Cambridge Serial Total Energy Package (CASTEP) code [33] is used for the first-principles quantum mechanical calculations, wherein the pseudopotential plane-wave (PP-PW) approach based on density functional theory (DFT) [34] is employed. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) formalism is adopted for the exchange-correlation term [35]. The electron-ion interactions are represented by Vanderbilt-type ultrasoft pseudopotentials [36]. The following parameters are used for the calculations: a plane-wave cut-off energy of 500 eV for all calculations to ensure convergence; a 9×9×2 Monkhorst-Pack [37] k-point mesh (ultrafine quality) for Brillouin-zone integration during crystal structure optimization; a self-consistent-field tolerance of 5.0×10⁻⁷ eV/atom; an energy tolerance of 5.0×10⁻⁶ eV/atom; a maximum force of 0.01 eV/Å; a maximum displacement of 5.0×10⁻⁴ Å; and a maximum stress of 0.02 GPa. The electronic wave functions and the resulting charge density, as well as the structural parameters of hexagonal Zr2InC, Hf2InC and Ta2InC, are calculated following the Broyden-Fletcher-Goldfarb-Shanno (BFGS) [38] minimization technique. The total energy of each cell is calculated under periodic boundary conditions. Elastic constants are calculated by the "stress-strain" method built into the CASTEP program. The calculated elastic constant tensors Cij are used to evaluate the bulk modulus B and shear modulus G.
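The "stress-strain" route to Cij mentioned above can be illustrated with a toy model: a small strain is applied one Voigt component at a time and each constant is recovered from the slope of stress versus strain. In the sketch below, `toy_stress` merely stands in for the DFT stress evaluation that CASTEP performs, and the stiffness values are placeholders rather than results from this work; only the fitting logic is being illustrated.

```python
import numpy as np

# Illustrative "true" stiffness matrix (Voigt notation, GPa), standing in for the DFT result.
C_TRUE = np.array([
    [290.,  80.,  90.,   0.,   0.,   0.],
    [ 80., 290.,  90.,   0.,   0.,   0.],
    [ 90.,  90., 270.,   0.,   0.,   0.],
    [  0.,   0.,   0., 110.,   0.,   0.],
    [  0.,   0.,   0.,   0., 110.,   0.],
    [  0.,   0.,   0.,   0.,   0., 105.],   # C66 = (C11 - C12)/2 for hexagonal symmetry
])

def toy_stress(strain_voigt):
    """Placeholder for a DFT stress calculation: sigma = C . epsilon (linear elasticity)."""
    return C_TRUE @ strain_voigt

def elastic_constants(stress_fn, amplitudes=(-0.003, -0.0015, 0.0015, 0.003)):
    """Recover C_ij by straining one Voigt component at a time and fitting d(sigma)/d(eps)."""
    C = np.zeros((6, 6))
    for j in range(6):
        stresses = []
        for a in amplitudes:
            eps = np.zeros(6)
            eps[j] = a
            stresses.append(stress_fn(eps))
        stresses = np.array(stresses)                     # shape: (n_amplitudes, 6)
        for i in range(6):
            C[i, j] = np.polyfit(amplitudes, stresses[:, i], 1)[0]
    return C

print(np.round(elastic_constants(toy_stress), 1))   # reproduces C_TRUE for this linear toy model
```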
Structural properties
As mentioned earlier, the MAX phases M2InC (M = Zr, Hf and Ta) crystallize in the hexagonal structure (space group P6₃/mmc). The optimized lattice constants are compared with the available experimental and theoretical values in Table 1, which testifies to the reliability and accuracy of our calculations. The lattice constants a and c of Zr2InC are 1.34% and 0.3% larger than the experimental values [25], and those of Hf2InC are 1.54% and 1.06% larger than the experimental values [26], but both are in very good agreement with the earlier theoretical values [27][28][29].
Experimental results for Ta2InC are not available; the calculated lattice constants a and c are 2.92% and 3.18% larger than the previously reported theoretical values [29].
Mechanical properties
The mechanical stability, stiffness, brittleness, ductility, and elastic anisotropy of a material can be obtained from the elastic constants, which are therefore important when selecting a material for engineering applications. The five independent single-crystal elastic constants Cij (with C66 = (C11 − C12)/2) and the polycrystalline elastic moduli are given in Table 2. The compounds are mechanically stable, since the single-crystal elastic constants completely satisfy the Born criteria [40] for a hexagonal system: C11 > 0, C11 − C12 > 0, C44 > 0, and (C11 + C12)C33 − 2C13² > 0. For all compounds C11 > C33 (Table 2); hence, the atomic bonding is stronger along the a-axis than along the c-axis. Since C11 and C33 are larger than C44, linear compression along the crystallographic a- and c-axes is more difficult than shear deformation. The calculated values are consistent with those reported in [29]. The elastic constants Cij are found to increase with increasing atomic number of M (M = Zr, Hf and Ta). The polycrystalline elastic moduli (B, G, Y, and ν) have been calculated from the single-crystal elastic constants through the Voigt-Reuss-Hill approximation [41][42], as shown in Table 2.
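As a compact illustration of the Born criteria and the Voigt-Reuss-Hill averaging referred to here, the following Python sketch applies the standard hexagonal-crystal expressions to a placeholder set of elastic constants; the input numbers are illustrative only and are not the values of Table 2.

```python
import numpy as np

def born_stable_hex(C11, C12, C13, C33, C44):
    """Born mechanical-stability criteria for a hexagonal crystal."""
    return (C11 > 0 and C44 > 0 and C11 - C12 > 0
            and (C11 + C12) * C33 - 2 * C13**2 > 0)

def vrh_hex(C11, C12, C13, C33, C44):
    """Voigt-Reuss-Hill polycrystalline moduli (GPa) for a hexagonal crystal."""
    C66 = (C11 - C12) / 2
    M   = C11 + C12 + 2 * C33 - 4 * C13
    Csq = (C11 + C12) * C33 - 2 * C13**2
    B_V = (2 * (C11 + C12) + C33 + 4 * C13) / 9
    G_V = (M + 12 * C44 + 12 * C66) / 30
    B_R = Csq / M
    G_R = 2.5 * Csq * C44 * C66 / (3 * B_V * C44 * C66 + Csq * (C44 + C66))
    B, G = (B_V + B_R) / 2, (G_V + G_R) / 2
    Y  = 9 * B * G / (3 * B + G)                 # Young's modulus
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))     # Poisson's ratio
    return B, G, Y, nu

# Placeholder single-crystal constants (GPa), for illustration only.
Cij = dict(C11=290.0, C12=80.0, C13=90.0, C33=270.0, C44=110.0)
print("Born stable:", born_stable_hex(**Cij))
B, G, Y, nu = vrh_hex(**Cij)
print(f"B = {B:.1f} GPa, G = {G:.1f} GPa, Y = {Y:.1f} GPa, nu = {nu:.3f}, G/B = {G/B:.2f}")
```

For this placeholder input the sketch gives G/B of roughly 0.68; the usual Pugh rule of thumb classifies G/B above about 0.57 (equivalently B/G below about 1.75) as brittle, which is the type of criterion invoked for these phases.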
In addition, the Young's modulus (Y) and Poisson's ratio (ν) can also be obtained using well-known relationships [43,44]. The average bond strength of the constituent atoms of a given compound can be understood from the bulk modulus [45].

The main covalent bonding arises from hybridization between the Zr/Hf/Ta-d and C-p (pd) states, since they are mixed together strongly [48]. Another covalent bond is formed at higher energies between the Zr/Hf/Ta-d and In-p (pd) states, but it is comparatively weaker than the previous one. It is noted that the hybridization between the Zr/Hf/Ta and C (pd) states occurs at lower energy for Ta2InC and at higher energy for Zr2InC, which may result in the larger elastic moduli and higher Vickers hardness of Ta2InC compared with the other two phases. This can also be observed from the charge density distribution in the following section.
(Fig. 2(d-f): total and partial densities of states, including the In-5s, In-5p, Ta-6s and Ta-5d contributions, with the Fermi level E_F marked.)

The charge density distributions in Fig. 3(a-c) show charge accumulation between the C and Zr/Hf/Ta atoms; consequently, robust covalent bonding is formed between the C-Zr, C-Hf and C-Ta atoms (Fig. 3(a)). Another covalent bond between the In and Zr/Hf/Ta atoms is also observed, but it is comparatively weaker than the previous one (Fig. 3(a-c)). Furthermore, there are signs of charge balance between Zr/Hf/Ta and the carbon atoms, exhibiting a small degree of ionic bonding. Therefore, the bonding in M2InC (M = Zr, Hf and Ta) is expected to be a mixture of covalent and ionic character. It is also noted that the charge accumulation at the Zr/Hf/Ta positions follows the sequence Zr2InC < Hf2InC < Ta2InC, resulting in the strongest covalent bonding between the Ta and C (pd) states, in good agreement with the analysis in the previous section. The Fermi surface topology of the MAX phases M2InC (M = Zr, Hf and Ta) has been explored for the equilibrium structure at zero pressure, as shown in Fig. 3 (right). The calculated Fermi surfaces show both electron- and hole-like sheets. The shapes of the Fermi surfaces of the ZIC and HIC phases are quite similar, whereas that of TIC is somewhat different. The central sheet (inner blue) is cylindrical with a hexagonal cross section and is centered along the Γ-A direction of the Brillouin zone for all three phases (Fig. 3 (right)). The second sheet of the ZIC and HIC phases (red) is also cylindrical, lying close to and surrounding the first sheet, whereas for TIC it shows six tuning-fork-like features in the six corners, far from the first sheet. The third sheet is a hexagonal cylinder with six slightly curved planes along the M-L directions for the ZIC and HIC phases. Six upward-curved ribbon-like (darker red) tubes occupy each corner of the fourth sheet for the ZIC and HIC phases, as shown in Fig. 3 (right). The Fermi surfaces of the ZIC and HIC phases arise from the low-dispersive Zr-4d/Hf-5d and In-5p states, whereas for TIC they arise from the more dispersive Ta-5d and In-5p orbitals, as is evident from the DOS in Fig. 2(d-f).
Mulliken atomic and bond overlap populations
The calculated Mulliken atomic populations of M2InC (M = Zr, Hf and Ta) are listed in Table 3.
The bond overlap populations (BOP) of the M2InC (M = Zr, Hf and Ta) MAX phases are presented in Table 4. A high BOP value indicates a strong covalent bond, while a low value indicates a more ionic character of the chemical bonding. For instance, the C-Zr, C-Hf and C-Ta bonds possess stronger covalent character than the In-Zr, In-Hf and In-Ta bonds in M2InC (M = Zr, Hf and Ta). In the case of Ta2InC, the C-Ta bond is comparatively stronger than the In-Zr, In-Hf and In-Ta bonds in the M2InC compounds. The overall bond strength is greater in Hf2InC and Ta2InC; therefore, higher hardness values are expected for them.
Vickers Hardness
We have already discussed the bulk and shear moduli of M2InC (M = Zr, Hf and Ta), but the intrinsic hardness differs from the bulk or shear modulus [52]. The hardness of a material plays an important role in its applications. From this point of view, we have also calculated the Vickers hardness using an established formalism [53,54]; hardness is a fundamental criterion from an engineering point of view, since the resistance to wear by either friction or erosion increases with hardness [15]. The calculated Vickers hardness values are found to be 1.05, 3.45 and 4.12 GPa (Table 5) for the phases ZIC, HIC and TIC, respectively. The Vickers hardness is again largest for TIC, in good agreement with the elastic moduli, partial density of states and charge density mapping. The values are typical of MAX phase compounds and indicate that the phases are soft and easily machinable.
Phonon dispersion curve
The dynamical stability of a material and the vibrational contributions to various thermodynamic properties, such as the thermal expansion, Helmholtz free energy and heat capacity, can be understood from the phonon dispersion curves (PDC) together with the phonon density of states (PHDOS) [55,56]. The PDC and PHDOS of the phases ZIC, HIC and TIC along the high-symmetry directions of the crystal Brillouin zone (BZ) have been calculated using the density functional perturbation theory (DFPT) linear-response method [57] and are shown in Fig. 4.
The optical behavior depends strongly on the optical phonon branches, which are situated at the top of the phonon dispersion curves shown in Fig. 4. The phonon densities of states (PHDOS) of the ZIC, HIC and TIC phases have been calculated and are illustrated in Figs. 4(b, d, f), respectively, along with the PDC curves, so that the bands can be associated with the corresponding peaks. It is observed that the flatness of the TO branches produces prominent peaks for the phases, as shown in Fig. 4(b, d, f). The separation between the top of the LO modes and the bottom of the TO modes is found to be 1.3, 1.6 and 1.5 THz for the phases ZIC, HIC and TIC, respectively. The PDC curves of the MAX phases under consideration show only positive phonon frequencies, as depicted in Fig. 4(b, d, f); no negative frequencies exist, indicating that the phases are dynamically stable. Moreover, the elastic constants of the phases also support the condition for mechanical stability (Table 2).

The thermodynamic potential functions have been calculated from the phonon calculations [58,59], as shown in Fig. 5. The potential functions are calculated in the temperature range 0-1000 K, where no phase transitions are expected. It can be seen from Fig. 5(a) that, for all phases, the values of E, F and TS are almost zero below 100 K. A nonlinear decrease of F is observed above 100 K, which is the usual behavior, since the free energy becomes more negative during the course of any natural process. Thermal disturbance enhances the disorder of the system with increasing temperature, resulting in an increase of the entropy, as shown in Fig. 5(a). As expected for solids, an increasing trend of the enthalpy (E) with increasing temperature is observed. Fig. 5(b) illustrates the constant-volume specific heat Cv of the phases as a function of temperature. The curves for ZIC and HIC are almost identical, whereas that of the TIC phase is different. The specific heat follows the common trend: at very low temperatures, Cv depends strongly on the temperature and obeys the Debye model, being proportional to T³ [60], reflecting the phonon contribution of the crystal lattice vibrations.
Nevertheless, at higher temperatures Cv no longer depends strongly on the temperature, and the classical Dulong-Petit limit is recovered [61].
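For reference, the two limits invoked here take their textbook forms (with n the number of atoms per cell and k_B the Boltzmann constant); these are standard results rather than expressions derived in this work:

$$C_v \approx \frac{12\pi^4}{5}\, n k_B \left(\frac{T}{\Theta_D}\right)^{3} \quad (T \ll \Theta_D), \qquad C_v \rightarrow 3 n k_B \quad (T \gg \Theta_D).$$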
The temperature dependence of Θ_D for the phases has been calculated using the PHDOS. The imaginary part of the dielectric function is calculated from the momentum matrix elements between the occupied and unoccupied electronic states as [62]

$$\varepsilon_2(\omega) = \frac{2 e^{2} \pi}{\Omega \varepsilon_0} \sum_{k,v,c} \left| \langle \psi_k^{c} | \hat{\mathbf{u}} \cdot \mathbf{r} | \psi_k^{v} \rangle \right|^{2} \delta\!\left(E_k^{c} - E_k^{v} - \hbar\omega\right),$$

where ω is the light frequency, û is the vector defining the polarization of the incident electric field, e is the electronic charge, Ω is the unit-cell volume, and ψ_k^c and ψ_k^v are the conduction- and valence-band wave functions at wave vector k, respectively. The real part ε1(ω) and imaginary part ε2(ω) of the dielectric function of the phases are depicted in Figs. 6(a, b). The peaks in ε2(ω) are associated with electron excitations. The spectra in the low-energy (infrared) region arise from intra-band transitions of electrons. Peaks are observed at ~0.9, 1.9 and 2.35 eV in ε1(ω) for the ZIC, HIC and TIC phases, respectively. For the TIC phase, ε1(ω) becomes zero at around 10 eV, which corresponds to the energy at which the absorption coefficient vanishes (Fig. 6e), the reflectivity exhibits a sharp drop (Fig. 6g) and the energy loss function (Fig. 6h) shows its first peak; similar behavior is not observed for the HIC and ZIC phases. The values of ε1(ω) for the ZIC, HIC and TIC phases go through zero from below at 2.5, 3.2 and 10 eV (Fig. 6a), while the values of ε2(ω) (Fig. 6b) approach zero from above at 7.5, 20 and 22 eV, respectively, which confirms the metallic nature of the studied phases.
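The real part ε1(ω) discussed above follows from ε2(ω) through the Kramers-Kronig transformation; the relation below is the standard one (P denotes the Cauchy principal value) and is included for completeness rather than reproduced from the original text:

$$\varepsilon_1(\omega) = 1 + \frac{2}{\pi}\, P\!\int_{0}^{\infty} \frac{\omega' \varepsilon_2(\omega')}{\omega'^{2} - \omega^{2}}\, d\omega'.$$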
The refractive index n of a crystal is a vital parameter in the design of optoelectronic devices. Its frequency-dependent behavior is shown in Fig. 6c. The absorption loss is described by the extinction coefficient k; the calculated values of k for the phases are shown in Fig. 6d. The sharp peaks at 0-2 eV for the phases are caused by intra-band transitions of electrons (Fig. 6d).
The absorption coefficients of the phases are shown in Fig. 6e. The photoconductivity spectra, shown in Fig. 6f, start from zero photon energy, as expected for metals. The TIC compound shows the highest photoconductivity at around 5 eV photon energy, whereas ZIC and HIC exhibit almost the same photoconductive behavior in the near-infrared, visible and near-UV regions.
The reflectivity starts at a value of 88% for the phases and rises to maximum values of 93% at 9.26 eV, 92% at 10.4 eV and 95% at 11.5 eV for the compounds ZIC, HIC and TIC, respectively (Fig. 6(g)). The reflectivity spectra of all phases are almost constant in the visible-light region and remain above 45%, which is suitable for reducing solar heating in the visible region [63]. Thus, the compounds are also promising candidates for practical use as coating materials to avoid solar heating. The reflectivity spectra of all phases approach zero in the incident photon energy range 19-22 eV.
The bulk plasma frequency ω_p can be obtained from the energy loss spectrum shown in Fig. 6(h). The energy loss function measures the energy loss of an electron as it passes through a material. No significant loss peaks are found for any of the compounds at energies up to 10 eV. The effective plasma frequencies ω_p of ZIC, HIC and TIC are found to be 12.6, 13.5 and 16.3 eV, respectively. A material becomes transparent when the frequency of the incident light is higher than the plasma frequency.
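For completeness, the loss function used to read off ω_p is related to the dielectric function by the standard definition (again a textbook relation, not an equation quoted from the original text):

$$L(\omega) = \mathrm{Im}\!\left(\frac{-1}{\varepsilon(\omega)}\right) = \frac{\varepsilon_2(\omega)}{\varepsilon_1^{2}(\omega) + \varepsilon_2^{2}(\omega)},$$

so the main peak of L(ω) appears where ε1 crosses zero while ε2 is small, which is also where the reflectivity drops sharply.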
Conclusions
We have presented a comparative study of the synthesized MAX phase ternary carbides Zr 2 InC and Hf 2 InC and predicted Ta 2 InC by employing the first-principles DFT calculations.
Thermodynamic potentials, optical properties, electronic charge density distribution, Fermi surface topology, Mulliken bond overlap population and Vickers hardness have been investigated for the first time. The calculated elastic constants conform to mechanical stability conditions. The compounds are brittle in nature as expected for other MAX phases.
The compounds are metallic in nature, and the Zr-4d, Hf-5d and Ta-5d electrons contribute mainly to the TDOS at E_F. Strong covalent bonds exist in the compounds, and the degree of covalency of the chemical bonds follows C-Zr > In-Zr, C-Hf > In-Hf and C-Ta > In-Ta. The calculated Vickers hardness is found to be 1.05, 3.45 and 4.12 GPa for the phases ZIC, HIC and TIC, respectively, indicating the soft nature of the studied phases. The dynamical stability of the compounds M2InC (M = Zr, Hf and Ta) has been confirmed using the phonon dispersion curves. The optical properties reveal several interesting features of the phases. The reflectivity curves remain above 45% and rise to maximum values of 93% at 9.26 eV, 92% at 10.4 eV and 95% at 11.5 eV for the compounds ZIC, HIC and TIC, respectively. Therefore, the compounds are promising candidates for optoelectronic device applications in the visible and ultraviolet energy regions, and they can also be used as coating materials to avoid solar heating. | 4,592 | 2018-05-20T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Ultrafast Process Characterization of Laser-Induced Damage in Fused Silica Using Pump-Probe Shadow Imaging Techniques
This study delves into the intricate dynamics of laser-induced damage in fused silica using a time-resolved pump-probe (TRPP) shadowgraph. Three typical ultra-fast processes, laser-induced plasma evolution, shockwave propagation and material fracture splashing, were quantitatively investigated. The results indicate that the diameter of plasma is proportional to the pulse laser energy and increases linearly during the pulse laser duration with an expansion rate of approximately 6 km/s. The maximum shockwave velocity on the air side is 9 km/s, occurring at the end of the pulse duration, and then rapidly decreases due to air resistance, reaching approximately 1 km/s around a 300 ns delay. After hundreds of nanoseconds, there is a distinct particle splashing phenomenon, with the splashing particle speed distribution ranging from 0.15 km/s to 2.0 km/s. The particle sizes of the splashing particles range from 4 μm to 15 μm. Additionally, the smaller the delay, the faster the speed of the splashing particles. Overall, TRPP technology provides crucial insights into the temporal evolution of laser-induced damage in fused silica, contributing to a comprehensive understanding essential for optimizing the performance and safety of laser systems.
Introduction
Fused silica components play a crucial role in high-power laser systems.With the continuous increase in laser power, these components are susceptible to damage under intense laser irradiation, significantly affecting the laser system's lifespan and safe operation [1].Currently, damage to fused silica often occurs on the component's surface, and it is widely acknowledged in the scientific community that this is closely associated with surface defects in fused silica [2][3][4].These defects comprise two main categories: mechanical defects introduced during the fabrication process [5][6][7] and contaminative defects [8][9][10].For a long time, researchers believed that there is a close connection between mechanical defects (e.g., cracks) and laser-induced damage [5].It may affect damage in three ways [11,12].First, cracks can cause discontinuities at the interface, leading to enhanced local electromagnetic fields [13,14].Second, from a microscopic perspective, the fracture of Si-O bonds at crack sites leads to point defect enrichment, resulting in localized absorption enhancement [15][16][17].Finally, the stress concentration at the crack tip reduces the material's mechanical strength, promoting damage initiation and propagation [18].Contaminative defects introduced during the polishing process are another major cause of laser-induced damage.Usually, contamination was believed to be polishing particles (e.g., CeO 2 ).Impurity particles absorb laser energy through linear thermal absorption, and damage occurs when the temperature reaches the melting point.This explains the induced damage from impurity nano-particle energy absorption [19,20].Some other researchers believe contaminants possibly originate from ion crystal particle (e.g., NaCl) precipitation during the cleaning process.This type of grain structure may contain defects, leading to intense ultraviolet absorption.Fluorescence characterization and high-damage threshold test results support this viewpoint [8].Different types of defects exhibit distinct dynamic behaviors during the damage process, ultimately manifesting in variations in the macroscopic damage morphology and damage thresholds [21].
Optical material laser-induced damage mainly involves various laser damage mechanisms such as photoionization, avalanche ionization and impurity defect nonlinear absorption, and it can be generally categorized into intrinsic damage and extrinsic damage.Intrinsic damage mechanisms primarily address the essence of laser-induced damage without considering the thermal effects of external absorptive impurities.The damage is mainly achieved through the ionization ablation caused by the intense laser field, without involving the energy transfer process.In contrast, extrinsic damage assumes the presence of absorptive precursors, such as mechanical defects and contaminative defects.Typically, under the action of nanosecond lasers, optical component damage caused by defects falls into the category of extrinsic damage.This involves considering complex physical processes such as post-damage thermal melting, crack propagation and shock wave generation.
Therefore, investigating the dynamics of laser-induced damage serves as the foundation for understanding and, in turn, predicting damage behavior [22,23].Laser pulses interact with materials over extremely fast time intervals, typically on the order of nanoseconds [24][25][26][27].Conventional methods, such as high-speed cameras, face considerable challenges in capturing the evolution of defects into damage.A Time-Resolved Pump and Probe (TRPP) shadowgraph is an ultrafast imaging technique with sub-nanosecond temporal precision [28,29], well suited for probing ultrafast physical phenomena over a broad time range.This makes it particularly applicable for investigating the nanosecond laser-induced damage processes in fused silica.The technique of a TRPP shadowgraph was first applied to the study of shock waves and melting processes in silicon materials under laser irradiation by Russo R.E.et al. [30].Around 2006, the S.G.Demos team further improved the technique and applied it to observe ultrafast physical phenomena in optical materials interacting with lasers [22,31,32].Recently, TRPP technology achieved successful application in the characterization of transient processes in laser-induced damage, using this technique to observe important physical process such as the absorption area expansion [25], crack propagation speed [33], shock wave velocity and particle ejection [23,34,35].It provides direct experimental evidence for the plasma explosion model [2] and absorption front theory [36].
In this study, we employed TRPP to characterize the pulse laser-induced damage process in fused silica, elucidating the evolution of the early plasma, shockwave propagation, and material fracture and splashing. The physical images and general patterns obtained through TRPP contribute foundational insights into the study of laser-induced damage in fused silica.
Experimental Principles and Setup
The TRPP technique is based on the pump-probe principle, and its schematic diagram is illustrated in Figure 1.
The TRPP system comprises a pump laser, a probe laser, a digital delay unit, an optical delay line, and a shadow imaging system.This study focuses on the UV laser damage of fused silica components; we chose a 355 nm wavelength nanosecond laser as the pump light source for probing the ultrafast damage process.The pump laser (wavelength of 355 nm, pulse width of 4 ns, repetition rate of 10 Hz) delivers nanosecond pulses, which are focused onto the test specimen through an optical system.By adjusting the energy of the pump laser pulses, they surpass a certain fluence threshold, inducing volumetric damage within the test specimen and triggering ultrafast physical phenomena such as plasma excitation and explosive shockwaves.Simultaneously, the probe laser (wavelength of 532 nm, pulse width of 50 ps, repetition rate of 10 Hz) emits picosecond pulses passing through the damaged region, interacting with the material in that region.Finally, the picosecond laser pulses carrying ultrafast physical information are recorded as images by the shadow imaging system.By varying the time delay between the pump and probe pulses, different time slices of the entire damage event can be captured.
To quantitatively measure transient parameters, two time slices for a single damage event need to be obtained. For this purpose, a polarization beam splitter (PBS-1 in Figure 1) is employed to split the probe laser into two beams with orthogonal polarization directions, namely, S light and P light. The relative delay between S light and P light can be adjusted by the optical delay line, as indicated by "Optical Delay" in Figure 1. PBS-2 coaxially combines S light and P light, and they pass through the damaged region together. After passing through the damaged region, the beams are separated into S light and P light by PBS-3 once again. The shadow images carrying ultrafast information are collected by dual imaging detectors. Utilizing image processing algorithms for information extraction from the shadow images, transient physical quantities such as the shockwave velocity, particle ejection speed and ejection angle can be accurately calculated by precisely comparing the differences between the two images. The details of the setup can be found in reference [37].
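As a point of orientation (not a figure quoted from the paper), the S-P separation introduced by the optical delay line maps onto a path-length difference through Δt = ΔL/c. Assuming a single-pass delay arm, the 11 ns S-P delay reported later for Figure 2a corresponds to

$$\Delta L = c\,\Delta t \approx (3\times10^{8}\ \mathrm{m/s})(11\times10^{-9}\ \mathrm{s}) \approx 3.3\ \mathrm{m}$$

of extra optical path (half that per pass if the delay line is double-passed).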
In terms of temporal sequencing, a digital delay trigger unit (shown in Figure 1) is utilized to generate multiple channel trigger signals, T0, T1, T2 and T3, corresponding to the shutter, microscopy imaging system, pump laser and probe laser. These signals are employed for external triggering control. When the T0 trigger reaches the shutter, the shutter opens. After a delay of D01, the T1 trigger reaches the microscopic CCD, entering the exposure standby state. Following a delay of D12, the T2 trigger reaches the nanosecond laser, producing the pump irradiation pulse. After a delay of D23, the T3 trigger reaches the picosecond laser, producing the detection irradiation pulse. The delays D01, D12 and D23 are configured through control software. The triggering and capture of CCD microscopic imaging are also conducted through control software.
Figure 2a illustrates the pulse timing results with a pump-probe delay of 13.5 ns and an S-P light delay of 11 ns. It can be observed that, over nine repeated measurements, the system's temporal jitter is approximately 1 ns, fully meeting the requirements for capturing ultrafast processes in nanosecond damage. The pump laser, focused by a lens system, forms a Gaussian spot with a diameter of approximately 30 µm on the sample surface, as shown in Figure 2b. By adjusting the position of the sample displacement stage, it is easy to focus the pump laser on either the front or the rear surface of the sample.
In the schematic of the damage process, the blue substrate represents the fused silica material. When the surface micro-defects are subjected to intense laser irradiation, the defects are prone to causing optical field modulation [22,23] and a decrease in the material's mechanical properties [38]. Additionally, considering the possible presence of thermally absorptive impurities in the defects, a large number of free electrons will first be generated locally. The generation of free electrons rapidly increases the material's deposition of laser energy. As the local temperature rises and the electron density increases, the material surrounding the defect undergoes significant modification, enhancing the absorption of laser energy [24,39]. The deposition of a large amount of energy can lead to local melting or even vaporization of the material in a short period, resulting in the formation of a damage pit. The vaporized material, accompanied by liquid and solid substances, is ejected from the material surface into the air, creating discontinuities in parameters such as temperature, pressure and density. Consequently, distinct shockwave phenomena can be observed in the air domain [33][34][35].
Plasma Formation
Plasma formation is the earliest phenomenon observed during the evolution from defect to damage. Under the influence of the laser field, electrons in the material are ionized, generating free electrons. The ionized atoms undergo collisional ionization, and the initial free electrons drive avalanche ionization, further increasing the laser absorption through inverse bremsstrahlung [40,41]. The plasma cluster exhibits intense absorption, resembling metallic behavior, when interacting with the probe laser. This "opaque" characteristic is precisely captured by TRPP shadow imaging. The varying shades in the images indirectly reflect the strength of probe-light absorption by the plasma and are correlated with the plasma density.
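A useful reference point for this "opaque" appearance, standard in plasma optics but not stated in the original text, is the critical electron density for the 532 nm probe,

$$n_c = \frac{\varepsilon_0 m_e \omega^{2}}{e^{2}} \approx 3.9\times10^{21}\ \mathrm{cm^{-3}} \quad \text{for } \lambda = 532\ \mathrm{nm};$$

shadow contrast deepens as the local electron density approaches this order of magnitude.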
Figure 3b-f illustrate the ultra-early stages of plasma evolution during laser irradiation, where the yellow line segment represents the interface between the fused silica surface and the air side. Conclusions drawn from the experimental observations include:
1. The plasma region forms in the early stage of the rising pump laser pulse, with a plasma cluster size of approximately 20 µm, initiating at around −2.5 ns. The initial electron density in the plasma is not high, and its absorption of the probe light is not significant.
2. With the injection of the laser pulse, the plasma volume rapidly expands within several nanoseconds, reaching its maximum at the pulse peak (in this article, we define the peak position of the pump laser pulse as the moment of t = 0 delay).
3. The shockwave appears near the air side of the plasma region at approximately −1.0 ns. The diffusion velocity of the shockwave is noticeably greater than the plasma expansion velocity. Around 0.3 ns later, the shockwave separates from the plasma interface and rapidly propagates towards the air side.
From the analysis above, it is evident that plasma formation precedes shockwave emission and is closely related to the duration of the pump laser pulse. Moreover, the shockwave is generated early in the laser interaction process, implying that the explosion occurs in the extremely early stage of damage.
Utilizing image measurements, it becomes possible to precisely ascertain the expansion radius of the plasma cluster over time, thereby facilitating a comprehensive understanding of its dynamic behavior. These data serve as a crucial foundation for the subsequent determination of the expansion rate of the plasma cluster. To systematically investigate the impact of varying incident laser energy, Figures 4 and 5 depict the expansion patterns of plasma clusters subjected to different laser excitation energies. From Figure 5, it can be observed that the plasma diameter under different laser energies varies linearly with time, with the slope representing the plasma expansion velocity, about 6 km/s. The incident laser energies are deliberately varied, arranged in ascending order as 0.5 mJ, 0.8 mJ and 1.0 mJ. The temporal evolution of the plasma cluster expansion is captured on the horizontal axis, with the zero moment aligned with the peak position of the pump laser pulse waveform. This temporal alignment ensures a precise correlation between the observed phenomena and the temporal dynamics initiated by the laser excitation. By manipulating the energy of the incident laser, a comprehensive dataset is generated, allowing for a nuanced analysis of the expansion patterns at different excitation intensities.
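As a quick consistency check derived only from the numbers quoted above (not an independently reported value), an expansion rate of 6 km/s is 6 µm/ns, so over the roughly 4 ns pump pulse the cluster diameter grows by about

$$6\ \mathrm{\mu m/ns} \times 4\ \mathrm{ns} \approx 24\ \mathrm{\mu m}$$

on top of its initial size of roughly 20 µm.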
The initiation of the plasma cluster becomes evident at approximately −3.0 ns, marking the inception of its formation. An intriguing observation emerges: the effective diameter of the plasma cluster, quantified by equating the integrated shaded area to a circle of equal area, demonstrates a direct proportionality to the energy of the incident laser. This implies that the magnitude of the incident laser energy directly influences the initial diameter of the plasma cluster, with higher energy inputs resulting in larger initial diameters.
As the temporal evolution unfolds, the plasma cluster embarks on a phase of rapid expansion. Remarkably, during the duration of the laser pulse, the expansion velocity of the plasma cluster remains constant. However, an interesting departure from the initial proportional relationship between size and incident energy becomes apparent. Unlike the initial stage, wherein larger incident energies lead to larger plasma cluster diameters, the subsequent expansion phase appears to decouple from the influence of the incident energy. In other words, as time progresses within the pulse duration, the size of the plasma cluster ceases to exhibit an overtly proportional relationship with the incident energy.
Shockwave Propagation
The shockwave forms during the middle stage of the plasma evolution, initiating a few nanoseconds after the start of the laser interaction and propagating simultaneously toward the optical material side and the air side. The decay behavior of the shockwave velocity directly reflects the explosive internal energy released during the material micro-explosion and is therefore highly relevant for assessing the severity of the damage caused [38]. The shockwave encounters numerous neutral particles at the interface with the air, resulting in a higher refractive index ahead of the shockwave than in the background air. As light passes through the shockwave it is refracted inward, and this refractive effect manifests as an externally dark and internally bright shockwave structure on the imaging plane, with the degree of shadowing proportional to the second derivative of the refractive index [42]. Therefore, TRPP shadow imaging can accurately capture the shockwave front position, as illustrated in Figure 6.
By setting an appropriate pump laser pulse energy to induce damage and generate the explosive shockwave, the shadow imaging system captures images of the shockwave under P-polarized and S-polarized light, as shown in Figure 6a,b. The propagation distance difference ΔR = R2 − R1 is calculated between the shockwave peaks in the two images, as depicted in Figure 6c. The instantaneous velocity of the shockwave at this delay is then determined as v = ΔR/τ, where τ is the time interval between the P- and S-polarized probe pulses.
In the experiment, we measured the variation of the shockwave velocity on the air side under two different incident laser energies (1.0 mJ and 0.8 mJ), as shown in Figure 7. It is evident that the shockwave velocity on the air side exhibits a clear decay over time. For example, at a pump energy of 1.0 mJ, the maximum shockwave velocity of 7.1 km/s occurs at 6.4 ns, followed by a rapid decay influenced by air resistance, reaching about 1 km/s near 300 ns. The decay of the shockwave velocity can be fitted using the Sedov formula [43]. Additionally, it is observed that the shockwave velocity is influenced by the incident laser energy, with higher laser energy resulting in faster shockwave velocities at the same delay time.
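As an illustration of how the front positions extracted from the S/P image pairs can be turned into a velocity-decay curve and compared with a Sedov-type scaling, the sketch below fits R(t) = A t^m to hypothetical front radii and differentiates the fit to obtain v(t). The radii in the arrays are placeholders, not the measured data; for reference, the Sedov-Taylor point-blast solution for a spherical blast in a uniform gas gives R ∝ t^(2/5), i.e. m ≈ 0.4 and v ∝ t^(-3/5).

```python
import numpy as np

# Placeholder shock-front radii versus pump-probe delay (NOT the measured data).
t = np.array([10e-9, 30e-9, 100e-9, 300e-9])   # delay, s
R = np.array([40e-6, 85e-6, 140e-6, 230e-6])   # front radius, m

# Instantaneous velocity between consecutive exposures, analogous to v = (R2 - R1) / tau.
v_pair = np.diff(R) / np.diff(t)               # m/s for each consecutive pair of frames

# Sedov-type power-law fit R(t) = A * t**m, performed in log-log space.
m, logA = np.polyfit(np.log(t), np.log(R), 1)
A = np.exp(logA)
v_fit = A * m * t**(m - 1)                     # dR/dt along the fitted curve

print(f"fitted exponent m = {m:.2f} (Sedov-Taylor point blast: m = 0.4)")
print("front velocity along the fit, km/s:", np.round(v_fit / 1e3, 2))
```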
The shockwave propagates concurrently in both directions, into the surrounding air and across the material interface. Significantly, within the material the shockwave exhibits a markedly higher velocity than in the adjacent air. This discrepancy leads to a rapid growth of the spatial separation between the two shockwaves, as illustrated in Figure 8. It is noteworthy that, in contrast to the swift velocity decay observed on the air side, the propagation speed of the mechanical wave penetrating the material substrate maintains a consistent value of approximately 5.2 km/s. Notably, this velocity surpasses that on the air side, contributing to the distinctive dynamic behavior exhibited within the material. The asymmetry in velocity profiles between the air and material sides reflects the interplay of physical forces and material properties, providing valuable insight into the dynamics governing the shockwave-material interaction.
Material Fracture and Splashing
In the last stage of damage, under the action of the explosive shockwave, the optical material begins to exhibit fracture and splashing phenomena. The velocity, angle, and intensity of splashing are directly related to the final formation of the damage. TRPP allows for the quantitative acquisition of the relevant transient physical quantities.
Damage can occur on the front and rear surfaces of fused silica components, and there are certain differences in the particle splashing phenomena at the different surface positions. Figure 9 illustrates the particle splashing process on the front and rear surfaces within the 40 ns to 760 ns delay range. Focusing the pump laser on the front surface of the sample induces front surface damage, as shown in Figure 9a. Similarly, focusing the pump laser on the back surface of the sample induces back surface damage, as shown in Figure 9b. In addition, the experimental parameters in Figure 9a are the same as those in Figure 8.
A comparison of the images reveals several distinctions. First, the ejection angles are different, with the rear surface ejection angle often greater than that of the front surface.
The rear surface ejection angle ranges from 30° to 60°, while the front surface ejection angle is smaller (around 10°), and the ejection path is generally straight. Second, the intensity of particle splashing varies. Splashing on the front surface of fused silica begins around several tens of nanoseconds (~50 ns), with early splashing being primarily high-speed gas or nano-sized particles, making imaging unclear. In contrast, rear surface splashing is more intense, with dense and clear splashed particles.
Further observations reveal the presence of a "cloud-like" high-speed moving substance, presumably gas-phase, within the shockwave packet. This gaseous substance rapidly expands with the shockwave, gradually exceeding the shockwave front velocity. Around 200 ns to 400 ns, it overlaps with the shockwave front and subsequently breaks through it, creating a visible gap, as illustrated in Figure 10.
There are two possible explanations for this process. The first potential explanation is that during the explosion, two shockwaves are generated. The first and second shockwaves occur sequentially, but the velocity of the second shockwave is faster than that of the first, leading to their superposition at a certain moment and resulting in features resembling a shockwave front. Another possible explanation is that the shockwave propagates forward, followed by the explosion point generating high-speed ejected gas. The initial speed of the shockwave is relatively fast but quickly attenuates in the air. When the speed of the ejected gas exceeds the shockwave velocity, the two overlap, and the ejected gas breaks through the shockwave front, resulting in the phenomenon shown in Figure 10.
As the temporal scale extends into the microsecond range, a discernible increase in the size of the splashed particles becomes evident. This increase in particle dimensions not only enhances the quality of imaging but also facilitates a precise estimation of the particle size, as elucidated in Figure 11.
During this temporal regime, particles ejected from both the front and rear surfaces manifest as irregular fragments resulting from material fracture. The disparity between the two lies in the observation that, in comparison to the front surface, the rear surface exhibits a more extensive lateral splashing range. Researchers have noted that these irregular fragments, when collected and subjected to scanning electron microscopy (SEM) analysis, predominantly comprise material fracture fragments, with a discernibly smaller proportion displaying a melted structure [13]. This empirical observation underscores that laser-induced damage is a consequence of the synergistic interplay between high-temperature melting and mechanical fracture of the material. The analysis of the ejected particles at the microsecond temporal scale provides valuable insights into the dynamic processes governing laser-material interactions, shedding light on the complex mechanisms involved in the generation of irregular fragments and melted structures during laser-induced damage events.
As the temporal scale extends into the microsecond range, a discernible augmentation in the particle size of splashing becomes evident.This increase in particle dimensions not only enhances the quality of imaging but also facilitates the precise estimation of the particle size, elucidated in Figure 11.
During this temporal regime, particles ejected from both the front and rear surfaces manifest as irregular fragments resulting from material fracture. Notably, the disparity between the two lies in the observation that, in comparison with the front surface, the rear surface exhibits a more extensive lateral splashing range. Researchers have noted that these irregular fragments, when collected and subjected to scanning electron microscopy (SEM) analysis, predominantly comprise material fracture fragments, with a discernibly smaller proportion displaying a melted structure [13]. This empirical observation underscores that laser-induced damage is a consequence of the synergistic interplay between high-temperature melting and mechanical fracture of the material. The analysis of the ejected particles at the microsecond temporal scale provides valuable insights into the dynamic processes governing laser-material interactions, shedding light on the complex mechanisms involved in the generation of irregular fragments and melted structures during laser-induced damage events.
The method for measuring the movement speed of splashing particles is essentially the same as the method for measuring the shockwave velocity, both being based on the calculation of differences between shadow images under S and P probe light. Figure 12 presents the distribution of splashed-particle speeds in the delay range of 150 ns to 300 ns, with Figure 12a corresponding to an incident laser energy of 0.8 mJ and Figure 12b to an incident laser energy of 1.0 mJ.
The results show that: (1) the speed distribution range of splashed particles is wide, from a minimum of 150 m/s to a maximum of 2000 m/s; (2) the smaller the delay, the faster the speed of splashed particles; and (3) the incident laser energy also has a certain influence on the speed of splashed particles, with higher incident laser energy resulting in faster ejection speeds. However, there is no apparent correlation between the splashed particle size and speed.
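To make the velocity extraction concrete, the following minimal sketch (an illustration only, not the authors' actual processing code) estimates one particle's speed from its centroid positions in the S- and P-light shadow images. The pixel calibration and the S-P probe separation used below are hypothetical values.

```python
import numpy as np

def particle_speed(pos_s_px, pos_p_px, pixel_scale_um=1.0, dt_sp_ns=10.0):
    """Estimate a splash-particle speed from one S/P shadow-image pair.

    pos_s_px, pos_p_px : (x, y) centroids of the same particle in the
        S- and P-light images, in pixels.
    pixel_scale_um : microns per pixel (hypothetical calibration).
    dt_sp_ns : delay between the S and P probe pulses in ns (hypothetical).
    Returns the speed in m/s.
    """
    dx = pos_p_px[0] - pos_s_px[0]
    dy = pos_p_px[1] - pos_s_px[1]
    displacement_um = pixel_scale_um * np.hypot(dx, dy)
    return (displacement_um * 1e-6) / (dt_sp_ns * 1e-9)

# A 15-pixel displacement with the assumed calibration gives 1500 m/s,
# within the 150-2000 m/s range reported above.
print(particle_speed((120, 88), (132, 79)))
```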
Conclusions
In this work, we introduced the application of Time-Resolved Pump-Probe (TRPP) technology in characterizing the ultrafast processes of laser-induced damage in fused silica. We obtained some novel quantitative results that were not present in previous studies. First, we observed early plasma plumes (with delays less than 10 ns) using shadow imaging and quantitatively obtained their generation time, their expansion rate, and the influence of laser energy on their evolution. The diameter of the plasma is proportional to the pulse laser energy and increases linearly during the pulse duration with an expansion rate of approximately 6 km/s. Second, we obtained the propagation law of the shockwaves. After tens of nanoseconds, shockwaves begin to propagate towards the air side and into the fused silica material. The initial velocity of the shockwave is proportional to the incident laser energy. The maximum shockwave velocity on the air side is 9 km/s, occurring at the end of the pulse duration and then decreasing exponentially due to air resistance, reaching approximately 1 km/s around 300 ns of delay. Third, by comparing the differences in P/S polarization images, we have accurately obtained the velocity distribution of particle ejections in the later stage of damage. After hundreds of nanoseconds, material fracture produces irregular fragments that are ejected at speeds ranging from roughly 150 m/s to 2000 m/s.
Figure 1. Schematic diagram of the TRPP system.
Figure 2. The temporal jitter error of the pump-probe pulses (a) and the spatial distribution of the pump laser spot at a sample's surface under about 141 J/cm² (laser energy 1.0 mJ) (b).
Figure 3. Formation and evolution process of an early plasma cluster. (a) Schematic diagram of the interaction between laser and material, and the TRPP images under delays of −2.5 ns (b), −1.0 ns (c), −0.7 ns (d), 0 ns (e), and 2.0 ns (f).
Figure 3a depicts a schematic diagram of the evolution of optical material defects towards damage under the influence of nanosecond laser irradiation.
Figure 4. The evolution images of the plasma under different pump laser energies.
Figure 5. The expansion of the plasma cluster under different incident laser energies.
Figure 6. Measuring shockwave velocity using the TRPP setup. (a) TRPP image of S light; (b) TRPP image of P light; (c) overlap of the S-light image with the P-light image.
Figure 7. Comparison of shockwave velocities on the air side under two incident laser energies.
Figure 8. The propagation positions of the shockwaves inside the fused silica and on the air side.
Figure 9. Comparison of particle splashing phenomena on the front surface (a) and rear surface (b) at different delays in the nanosecond range.
Figure 10. Process of the high-speed ejection of gaseous material overlapping with the shockwave front.
Figure 11. Comparison of particle splashing phenomena on the front surface (a) and rear surface (b) at different delays in the microsecond range.
Figure 12. Velocity distribution of splashing particles at different laser energies: 0.8 mJ for (a) and 1.0 mJ for (b).
"Materials Science",
"Physics",
"Engineering"
] |
Nonlinear gap modes and compactons in a lattice model for spin-orbit coupled exciton-polaritons in zigzag chains
We consider a system of generalized coupled Discrete Nonlinear Schr\"{o}dinger (DNLS) equations, derived as a tight-binding model from the Gross-Pitaevskii-type equations describing a zigzag chain of weakly coupled condensates of exciton-polaritons with spin-orbit (TE-TM) coupling. We focus on the simplest case when the angles for the links in the zigzag chain are $\pm \pi/4$ with respect to the chain axis, and the basis (Wannier) functions are cylindrically symmetric (zero orbital angular momenta). We analyze the properties of the fundamental nonlinear localized solutions, with particular interest in the discrete gap solitons appearing due to the simultaneous presence of spin-orbit coupling and zigzag geometry, opening a gap in the linear dispersion relation. In particular, their linear stability is analyzed. We also find that the linear dispersion relation becomes exactly flat at particular parameter values, and obtain corresponding compact solutions localized on two neighboring sites, with spin-up and spin-down parts $\pi/2$ out of phase at each site. The continuation of these compact modes into exponentially decaying gap modes for generic parameter values is studied numerically, and regions of stability are found to exist in the lower or upper half of the gap, depending on the type of gap modes.
I. INTRODUCTION
Planar semiconductor microcavities operating in the exciton-polariton regime have become a paradigm model for experimental and theoretical studies of nonlinear and quantum properties of light-matter interaction [1]. A major advantage of these systems is that they are solid-state devices that operate over a wide range of temperatures, from a few kelvin up to room conditions. Interaction between the polaritons is much stronger than for pure photons, which lowers the power requirements for creating conditions where the polariton dynamics can be effectively controlled with external light sources [2]. Microcavities can also be readily structured to create a variety of potential energy landscapes reproducing lattice structures known from studies of electrons in condensed matter, on more practical scales of tens of microns. Thus polaritons can be controlled using band-gap and zone engineering [3]. Through their peculiar spin properties and sensitivity to an applied magnetic field, polaritons in structured microcavities have been shown to have a number of topological properties [4]. Thus polariton-based devices have a competitive edge over their photon-only counterparts through their relatively low nonlinear thresholds and the possibility to create micron-scale topological devices. A combination of these two aspects has recently been used to demonstrate a variety of nonlinear topological effects in polariton systems, see, e.g., [5] and references therein.
As a specific example, a polariton BEC in a zigzag chain of polariton micropillars with photonic spin-orbit coupling, originating in the splitting of optical cavity modes with TE and TM polarization, was proposed in Ref. [6]. The simultaneous presence of zigzag geometry and polarization dependent tunneling was shown to yield topologically protected edge states, and in the presence of homogeneous pumping and nonlinear interactions the creation of polarization domain walls through the Kibble-Zurek mechanism, analogous to the Su-Schrieffer-Heeger solitons in polymers, was numerically observed [6]. Of crucial importance is the spin-orbit induced opening of a central gap in the linear dispersion relation. As we will show in this work, the existence of a gap, together with the option of tuning the linear dispersion towards flatness at specific parameter values, also leads to nonlinear strongly localized modes in the bulk (intrinsically localized modes) with properties depending crucially on the relative strength of interaction between polaritons of opposite and equal spin.
The starting point is the following set of two coupled continuous Gross-Pitaevskii equations [7]: These equations describe exciton-polaritons with circularly polarized light components, where $\Psi_+$ corresponds to left (positive spin) and $\Psi_-$ to right (negative spin) polarization. Polaritons interact mainly through their excitonic part, and interactions between polaritons with identical polarization are generally repulsive (here normalized to +1), while interactions between those of opposite spins are often weaker and attractive. A typical value is $a \approx -0.05$ [8], but it may range roughly over $-1 \lesssim a \lesssim 0$, and may possibly be also repulsive, or attractive with a magnitude stronger than the self-interaction [9]. Since the exciton components of the polariton wave functions are typically localized within small spatial regions, the interactions are assumed to be local (point interactions) in this mean-field description. $\Omega$ describes the Zeeman splitting between spin-up and spin-down polaritons in the presence of an external magnetic field; in this work we put $\Omega = 0$.
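The display form of Eqs. (1) did not survive extraction. A plausible reconstruction, assuming the standard form of TE-TM (spin-orbit) coupled Gross-Pitaevskii equations built from the quantities defined in the surrounding text (the normalization, the signs, and the precise form of the $\beta$ term are assumptions), would read
\[
 i\,\frac{\partial \Psi_\pm}{\partial t} = \left[-\tfrac{1}{2}\nabla^2 + V(x,y)\right]\Psi_\pm
 + \left(|\Psi_\pm|^2 + a\,|\Psi_\mp|^2\right)\Psi_\pm
 \pm \Omega\,\Psi_\pm
 + \beta\left(\partial_x \mp i\,\partial_y\right)^2 \Psi_\mp .
\]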
Of main interest here is the term proportional to $\beta$: it arises due to different properties associated with polaritons whose photonic components, as expressed in a suitable basis of linear polarization, have TE and TM polarizations, respectively (or, alternatively, longitudinal/transversal with respect to the propagation direction, i.e., the k-vector). It is commonly described in terms of different effective masses of the lower polariton branches for the TE and TM components, $\beta \propto m_{TE}^{-1} - m_{TM}^{-1}$, whose ratio typically may be of the order $m_{TE}/m_{TM} \approx 0.85-0.95$ (see, e.g., the supplemental material of [10]), although in principle $\beta$ could have arbitrary sign. Expressed in a basis of circular polarization (spinor basis) as in (1) ($\Psi_\pm = \Psi_x \mp i\Psi_y$), this TE/TM energy splitting can be interpreted as a spin-orbit splitting, since the dynamics of the two spin (polarization) components couple in a different way to the orbital part of the other component (via derivatives in $x$ and $y$ of the mean-field wave function in (1)).
In this work, we choose the potential $V(x, y)$ as a zigzag potential along the x-direction, considering this geometry as the simplest generalization of a straight 1D chain which yields non-trivial geometrical effects of the spin-orbit coupling between polaritons localized at neighboring potential minima. As an example potential, we may choose e.g.: as illustrated in Fig. 1. Here $d$ is the distance between potential minima, and $2N$ is the total number of potential wells in the chain. The geometry is essentially the same as for the coupled micropillars in [6], with all angles for the links between neighboring minima being ±45° with respect to the x-axis. Evidently one may easily generalize to arbitrary angles, or more complicated expressions for zigzag potentials which may be realized in various experimental settings, e.g., with optical lattices [11]. In order to motivate a tight-binding approximation, we assume $V_0 \gg 1$. In order to understand the most important effects of the spin-orbit coupling in (1) in a tight-binding framework, we here consider situations where the effects of spin-orbit splitting inside each potential well can be neglected, and are only relevant in the regions of wavefunction overlap between neighboring wells. For the experimental set-up of [10], this should be a good approximation if the spatial modes inside the wells may be approximated with Laguerre-Gauss modes with zero orbital angular momentum (LG$^{\pm}_{00}$ in the notation of [10], where the two subscripts stand for the radial and orbital quantum numbers of the 2D harmonic-oscillator wave function, and the superscript indicates polarization as in (1)). At least for a single cavity of non-interacting polaritons, these modes should be good approximations to the ground state, so let us assume that interactions (nonlinearity) and spin-orbit couplings are sufficiently weak to be treated perturbatively, along with the inter-well overlaps. The approach may be extended to consider also lattices of spin vortices (excited modes) built up from modes with nonzero OAM (e.g., LG$^{\pm}_{0,\pm 1}$ as considered in [10]); however, this will introduce some additional complications and will be left for future work.
Moreover, if $V_0 \gg 1$ we may also neglect the effect of next-nearest-neighbor interactions (the distance between two wells in the horizontal x-direction is $\sqrt{2}$ times larger than between nearest neighbors). It may then be a good approximation to use, as the basis set for the tight-binding approximation, the Wannier functions for a full 2D square lattice (these issues are discussed and numerically checked for some realization of a zigzag optical lattice in a recent Master thesis [12]). These may resemble, but certainly differ from [12], the individual LG modes (e.g., Wannier functions typically have radially oscillatory tails, decaying exponentially rather than as Gaussians). In any case, we will assume that the basis functions $w(x, y)$ (expressed in Cartesian coordinates) are qualitatively close to the LG$_{00}$ modes. In particular, they will be assumed to be close to cylindrically symmetric ($w(x, y) \sim e^{-\omega(x^2 + y^2)}$ in the harmonic approximation). (Note that this assumption would not be valid for spin vortices arising from LG modes with nonzero OAM.) The outline of this paper is as follows. In Sec. II we derive the tight-binding model, discuss its general properties, and illustrate the linear dispersion relation for the case with ±45° angles, which will be the system studied for the rest of this paper. In Sec. II E we also identify a limit where the linear dispersion relation becomes exactly flat, and identify the corresponding fundamental compact solutions. In Sec. III we construct the fundamental nonlinear localized modes in the semi-infinite gaps above or below the linear spectrum, as well as in the mini-gap between the linear dispersion branches, opened up due to the simultaneous presence of spin-orbit coupling and nontrivial geometry. Analytical calculations using perturbation theory from the weak-coupling and flat-band limits for the semi-infinite and mini-gap, respectively, are compared with numerical calculations using a standard Newton scheme. In Sec. IV the linear stability of the different families of nonlinear localized modes is investigated, and some instability scenarios are illustrated with direct dynamical simulations. Finally, some concluding remarks are given in Sec. V.
A. Derivation of the tight-binding model
Under the above assumptions, we may expand: where, relative to the coordinate system of (2), $d' = d/\sqrt{2}$, $x' = x - d'/2$, $y' = y - d'$. Note that the (Wannier) basis functions are the same for both components, since we have assumed no spin-orbit splitting inside the wells, $\Omega = 0$, and $w$ are basis functions of the linear problem. Note also that an analogous approach was used in [13] to derive lattice equations for the simpler problem of a pure 1D lattice with a standard spin-orbit coupling term ($-i\partial_x$, linear in the spatial derivative) for atomic BECs in optical lattices; similar models were also studied in [14][15][16]. For simplicity we will assume below that $w(x, y)$ can be chosen real (which is typically the case in the absence of OAM; the generalization to modes with nonzero OAM requires complex $w(x, y)$ and will be treated in a separate work).
Inserting the expansion (3) into (1), we obtain for the first component: $+\,V(x, y) \sum_n u_n\, w(x - nd', y - (-1)^n d'/2)$. (4) Here, in writing the nonlinear term as a simple sum and not a triple sum, we have neglected overlap between basis functions on different sites in terms cubic in $w$ (assuming strong localization of $w$).
Multiplying with $w^{(n)} \equiv w(x - nd', y - (-1)^n d'/2)$, integrating over $x$ and $y$, using the orthogonality of Wannier functions and neglecting all overlaps beyond nearest neighbors, we obtain from (4) a 1D lattice equation of the following form for the site amplitudes of the spin-up component: Here the coefficients are: the on-site energy, the linear coupling coefficients, where the second equality is obviously true if $w^{(n)}$ is cylindrically symmetric; the nonlinearity coefficient, the on-site spin-orbit interaction, which is identically zero if $w^{(n)}$ is cylindrically symmetric (easiest seen in polar coordinates, with $w = w(r)$ only, $\omega = \beta \int_0^{2\pi} d\phi\, e^{-2i\phi} \int r\, dr\, (w_{rr} - w_r/r)\, w = 0$) (but generally nonzero if the Wannier modes would have OAM); and the nearest-neighbor spin-orbit interactions (the relevant 'new' terms here). Since the tails of $w$ are exponentially small, we may assume all integrals taken over the infinite plane. Explicitly, with a change of origin we may write, e.g., the first term in the integral in (6) as $\int w_{xx}(x \mp d', y - (-1)^n d')\, w(x, y)\, dx\, dy$, etc. But for the case with $w$ cylindrically symmetric, we may more easily evaluate the integral (6) in polar coordinates, centered at site $n \pm 1$. After some elementary trigonometry we then obtain: Letting $\phi' = \pi/4 \pm (-1)^n \phi$, this can be expressed as where $\alpha_n \equiv (-1)^n \pi/4$ are the angles for the links in the zigzag chain with respect to the x-axis, and the integral defining $\sigma$ is independent of all signs since $\cos\phi$ is even. Explicitly, for the $\pi/4$ zigzag chain we get Proceeding analogously with the second component, we obtain the corresponding lattice equation for the site amplitudes of the spin-down component: Here, $\epsilon$, $\Gamma$, $\gamma$ are identical to those of the first component (i.e., we may put $\epsilon = 0$ by redefining the zero of energy, and $\gamma = 1$ (or alternatively $\Gamma = 1$) by redefining the energy scale). For the on-site spin-orbit interaction, (note the opposite sign of the third term compared to $\omega$), which is again zero if $w$ is cylindrically symmetric. And finally, for the nearest-neighbor spin-orbit couplings, (again note the sign of the third term compared to (6)). As before, restricting to cylindrically symmetric $w$ yields where the last equality holds since the integral is equivalent to that of (8). Explicitly, for the $\pi/4$ zigzag chain, i.e., with opposite signs compared to (9). Note that, under the above assumptions ($w$ real and cylindrically symmetric), the integral defining $\sigma$ is always real. We note that the resulting lattice equations (5), (10), with spin-orbit coefficients given by (9), (13), are not equivalent to the equations studied in [13][14][15][16]. In particular, we comment on the relation between the present model and that of Ref. [16], who considered a diamond chain with angles $\pi/4$ and a Rashba-type spin-orbit coupling. The zigzag chain could be considered as, e.g., the upper part of the diamond chain, if all amplitudes of the lower strand would vanish. However, because the spin-orbit coupling used in [16] is linear in the spatial derivatives while in this work it is quadratic, the spin-orbit coupling coefficients in [16] have a phase shift of $\pi/2$ between diagonal and antidiagonal links, compared to $\pi$ in (9), (13).
B. General properties of the TB-equations
Let us put $\epsilon = 0$ and $\gamma = 1$. As above, assuming cylindrically symmetric basis functions, we have $\omega = \omega' = 0$. With no external magnetic field, $\Omega = 0$. Equations (5) and (10), with spin-orbit coefficients given by (9) and (13), respectively, then become: One may easily show the existence of the "standard" two conserved quantities for DNLS-type models; the norm (power): and the Hamiltonian: Here, $\{u_n, v_n\}$ and $\{iu_n^*, iv_n^*\}$ play the role of conjugated coordinates and momenta, respectively (i.e., $\dot{u}_n = \partial H/\partial(iu_n^*)$, $\dot{v}_n = \partial H/\partial(iv_n^*)$, etc.). We may note that the Hamiltonian is similar to the Hamiltonian for the "inter-SOC" chain of Beličev et al. (Eq. (11) in [15]), but differs by the "zigzag" spin-orbit factor $(-1)^n$ in the last term. Note that this factor can be removed by performing a "staggering transformation" on the site amplitudes of the spin-down component, $v_n' = (-1)^n v_n$, transforming the equations of motion (14) into: Thus, this transformation effectively changes the sign of the linear coupling of the spin-down component, $\Gamma \to \Gamma' = -\Gamma$ (which may be interpreted as a reversal of the "effective mass" of the spin-down polariton in this tight-binding approximation), while the nonlinear and spin-orbit terms for both components become equivalent. Eqs. (17) differ from the equations derived in [13] for the straight chain with standard spin-orbit coupling only through this sign change of $\Gamma$ for the spin-down component. Note also that Eqs. (17) are invariant under a transformation $v_n \to -v_n$, $n + 1 \to n - 1$, i.e., an overall change of the relative sign of the spin-up and spin-down components is equivalent to a spatial inversion.
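The displayed expressions for the conserved quantities (15) and (16) did not survive extraction. For the norm, the standard DNLS-type form implied by the text is presumably
\[
 P = \sum_n \left( |u_n|^2 + |v_n|^2 \right),
\]
while the Hamiltonian (16) is not reconstructed here, since the text only specifies its qualitative relation to Eq. (11) of [15].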
C. Generalization to arbitrary angles
As mentioned, it is straightforward to generalize the derivation of the tight-binding equations to arbitrary bonding angles $\alpha \neq \pi/4$ in the zigzag chain. We just outline the main steps: In (3) and the following, we replace $y - (-1)^n d'/2$ with $y - (-1)^n \tan(\alpha)\, d'/2$ (having redefined $d' = d \cos\alpha$). In (7), (12), $\pi/4$ then gets replaced by $\alpha$, as already indicated. In (9) we get $e^{-i2\alpha}\sigma$ for diagonal links and $e^{i2\alpha}\sigma$ for antidiagonal links, and in (13) we get $e^{i2\alpha}\sigma$ for diagonal links and $e^{-i2\alpha}\sigma$ for antidiagonal links. Then in the tight-binding equations of motion (14), the last term for the spin-up component gets replaced by $+e^{(-1)^n i2\alpha} v_{n+1} - e^{-(-1)^n i2\alpha} v_{n-1}$, and for the spin-down component by $+e^{-(-1)^n i2\alpha} u_{n+1} - e^{(-1)^n i2\alpha} u_{n-1}$. For the rest of this paper we will assume $\alpha = \pi/4$ and leave the study of the effects of varying the bonding angle to future work.
D. Linear dispersion relation
Let $u_n = u\, e^{i(kn - \mu t)}$, $v_n = v\, e^{i[(k+\pi)n - \mu t]}$ (i.e., $v_n' = v\, e^{i(kn - \mu t)}$ after removing the factors $(-1)^n$), with $|u|, |v| \ll 1$. Inserting this into (14) (or (17)) and neglecting the nonlinear terms then yields: Thus, as illustrated in Fig. 2 (assuming $\sigma < \Gamma$), the spin-orbit coupling opens up gaps in the linear dispersion relation at $k = \pm\pi/2$, of width $4\sigma$. Note that, in contrast to the models for straight chains studied in [13,15], no external magnetic field is needed to open the gap for the zigzag chain. The gap opening is a consequence of the simultaneous presence of spin-orbit coupling and nontrivial geometry, which was also noted for the more complicated diamond chain in [16].
The amplitude ratios between the components may be obtained as $v/u = \left(-\Gamma\cos k \mp \sqrt{\Gamma^2\cos^2 k + \sigma^2\sin^2 k}\right)/(\sigma\sin k)$. For weak spin-orbit coupling ($\sigma \ll \Gamma$), the polariton is mainly spin-up ($|u| \gg |v|$) on the lower dispersion branch and spin-down ($|v| \gg |u|$) on the upper branch when $\Gamma\cos k > 0$, and the opposite when $\Gamma\cos k < 0$.
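As a quick numerical illustration of the gap opening and of the flat-band limit discussed below, the following sketch evaluates a dispersion of the assumed closed form $\mu_\pm(k) = \pm 2\sqrt{\Gamma^2\cos^2 k + \sigma^2\sin^2 k}$. This closed form is an assumption (the displayed Eq. (18) did not survive extraction), chosen to be consistent with the band edges $\pm 2\Gamma$, the mini-gap of width $4\sigma$ at $k = \pm\pi/2$, and the exact flatness at $|\Gamma| = |\sigma|$ stated in the text.

```python
import numpy as np

def dispersion(k, Gamma, sigma):
    """Assumed two-branch dispersion mu_+-(k) for the pi/4 zigzag chain."""
    root = np.sqrt((Gamma * np.cos(k)) ** 2 + (sigma * np.sin(k)) ** 2)
    return 2.0 * root, -2.0 * root

k = np.linspace(-np.pi, np.pi, 2001)
for Gamma, sigma in [(1.0, 0.3), (1.0, 1.0)]:
    upper, lower = dispersion(k, Gamma, sigma)
    gap = upper.min() - lower.max()   # mini-gap width, expected 4*sigma
    width = np.ptp(upper)             # bandwidth of the upper branch
    print(f"Gamma={Gamma}, sigma={sigma}: mini-gap={gap:.2f}, bandwidth={width:.2f}")
# Expected output: a mini-gap of 1.20 with finite bandwidth for sigma=0.3, and a
# gap of 4.00 with zero bandwidth (flat bands) when |Gamma| equals |sigma|.
```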
E. Flat band and compact modes
We may also note that in the particular case of $|\Gamma| = |\sigma|$, the dispersion relation becomes exactly flat. In this case, there are eigenmodes completely localized on either the upper or the lower part of the chain, with alternating $v_n = \pm i u_n$ on this part (i.e., $v_n \equiv u_n \equiv 0$ either for odd or even $n$). These modes persist also in the presence of nonlinearity (interactions). With the flat band, it is also possible to construct exact compact solutions localized on two neighboring sites. Explicitly, we get for $\sigma = +\Gamma$: and for $\sigma = -\Gamma$: In both cases, the nonlinear dispersion relation for these compactons yields $\mu = (1 + a)|A|^2 \mp 2\Gamma$. We will discuss further properties of these nonlinear compactons (e.g., stability) below.
III. NONLINEAR LOCALIZED MODES
A. Single-site modes above the spectrum in the weak-coupling limit
For the case of no spin-orbit coupling ($\sigma = 0$ and small $\Gamma \ll 1$), an analysis of the fundamental nonlinear localized solutions (including their linear stability) of (14) was done in [17]. It would be straightforward to redo a similar extensive analysis including also a small $\sigma$, but it is not the main aim of this work. We focus here first on discussing the effect of small coupling on polaritons with main localization on a single site $n_0$.
In the limit of $\Gamma = \sigma = 0$ ("anticontinuous limit"), stationary solutions of (17) are well known. There are two spin-polarized solutions (spin-up and spin-down), and one spin-mixed solution with an arbitrary relative phase $\theta$ (a factor $e^{i\theta}$) between the spin components. Comparing the Hamiltonian (16) for these solutions at given norm $P$, we have $H = P^2/2$ for the spin-polarized modes and $H = (1 + a)P^2/4$ for the mixed mode, so the mixed mode has the lowest energy as long as $a < 1$.
When $\mu > 0$ does not belong to the linear spectrum (18), we search for continuations of these modes for small but nonzero $\Gamma$, $\sigma$ into nonlinear localized modes with exponentially decaying tails and frequency above the spectrum. (We here assume $a > -1$; if $a < -1$ the localized modes arising from the spin-mixed solution will have $\mu < 0$ and thus lie below the spectrum.) We may calculate them explicitly perturbatively to arbitrary order in the two small parameters $\Gamma$, $\sigma$; here we give only the first- and second-order corrections to the five central sites (amplitudes of other sites will be of higher order): It can be seen from such expressions (extending to higher orders) that the amplitudes do decay exponentially above the spectrum, $\mu > 2\Gamma$. However, for spin-mixed modes with $|u_n| = |v_n|$, it is important to remark that even though the second-order corrections in (23) can be obtained for arbitrary relative phases $\theta$, the fourth-order correction to site $n_0$ can be made consistent with the condition $|u_n| = |v_n|$ only if $\Gamma^2 \sigma^2 \sin(2\theta) = 0$. Thus, since a solution with $\theta = \pi$ is equivalent to $\theta = 0$ through spatial reflection in the central site, the only non-equivalent single-site-centered spin-mixed modes existing for nonzero $\Gamma$ and $\sigma$ have $\theta = 0, \pi/2$. We also remark that, for a stationary and localized solution, current conservation imposes the general condition: Numerically calculated examples for the spin-up and spin-mixed modes are illustrated in Fig. 3. Note from (23) that, for the spin-mixed mode with $\theta = \pi/2$, $|u_{n_0+1}|^2 + |v_{n_0+1}|^2 \neq |u_{n_0-1}|^2 + |v_{n_0-1}|^2$, i.e., the reflection symmetry around the central site gets broken on the opposite sublattice (upper or lower part of the chain) if there is a nontrivial phase shift between the spin-up and spin-down components at the central site. As $|\Gamma\sigma|/\mu^2$ increases, the spatial asymmetry increases (Fig. 3(d)), until the solution typically bifurcates with an inter-site-centered (two-site) mode with equal amplitudes at sites $n_0$ and $n_0 + 1$ before reaching the upper band edge at $\mu = 2\Gamma$.
B. Fundamental gap modes from the flat-band limit
Since the gap in the linear spectrum opened by the spin-orbit coupling at $k = \pm\pi/2$ appears only when $\Gamma$ and $\sigma$ are both nonzero, the standard anticontinuous limit $\Gamma = \sigma = 0$ is not suitable for constructing nonlinear localized modes with frequency inside this gap ("discrete gap solitons"). Instead, we may use the flat-band limit $|\Gamma| = |\sigma| \neq 0$, where the exact nonlinear compacton modes (19)-(20) can be used as "building blocks" for the continuation procedure. Analogously to the above, we may then calculate gap solitons perturbatively in the small parameter $|\Gamma| - |\sigma|$. To be specific, we assume $a > -1$, $\Gamma \geq \sigma > 0$, and consider the continuation of a single two-site compacton from the lower flat band $\mu = -2\Gamma$ into the gap. From the limiting solution (19) with the upper sign, we then obtain the lowest-order corrections to the six central sites (amplitudes at other sites are of higher order) as: This family of fundamental gap modes (called type I gap modes) can be continued throughout the gap, with a numerical example illustrated in Fig. 4(a). Profiles of another two types of gap modes, numerically found to exist as nonlinear continuations of fundamental compactons, are depicted in Fig. 4(b,c). The family of type II gap modes (Fig. 4(b)) originates from a compact solution which is a superposition of two neighboring overlapping in-phase compactons.
On the other hand, type III gap modes evolve in the presence of nonlinearity from a superposition of two neighboring overlapping compactons with a $\pi/2$ phase difference (Fig. 4(c)).
IV. LINEAR STABILITY OF NONLINEAR LOCALIZED MODES
Linear stability of the above modes can be checked from the standard eigenvalue problem. If we denote the amplitudes of the exact stationary modes of (17) as $\{u_n^{(0)}, v_n^{(0)}\}$, we may express the perturbed modes as $u_n = u_n^{(0)} + \delta u_n$, $v_n = v_n^{(0)} + \delta v_n$ with small perturbations. Inserting into (17) and linearizing, we obtain the following linear system of equations for the perturbation amplitudes $\{c_n, d_n, f_n, g_n\}$: Linear stability is then equivalent to (26) having no complex eigenvalues. We may easily solve it for the uncoupled modes. Due to the overall gauge invariance of (17) ($u_n \to e^{i\phi} u_n$, $v_n \to e^{i\phi} v_n$), there are always two eigenvalues at $\lambda = 0$. For the spin-polarized modes, the remaining two eigenvalues are at $\lambda = \pm(1 - a)\mu$, while for the spin-mixed mode there is a fourfold degeneracy at $\lambda = 0$. The latter is explained by the arbitrary phase difference $\theta$ between the $u$ and $v$ components for this mode. To see whether the linear stability of the fundamental modes survives switching on the couplings $\Gamma$, $\sigma$, we first note that the linear spectrum of (26) corresponding to sites with $u_n^{(0)} \equiv v_n^{(0)} \equiv 0$ has four branches, at $\lambda \in \pm[\mu - 2\Gamma, \mu - 2\sigma]$ and $\lambda \in \pm[\mu + 2\sigma, \mu + 2\Gamma]$. Thus, unless $a = 0$, 1, or 2, we see immediately that the fundamental spin-polarized modes must remain linearly stable at least for small couplings. The general stability properties for larger $\Gamma$ and/or $\sigma$ will be discussed below for the different fundamental modes separately.
A. Spin-polarized modes above the spectrum
Typical results from numerical diagonalization of (26) for the family of fundamental spin-polarized modes above the spectrum are shown in Fig. 5. As is seen, these modes are linearly stable in their full regime of existence when $a < 1$. The magnitude of the frequency of the internal eigenmode arising from local oscillations at the central site lies above the linear spectrum when $a < 0$ (Fig. 5(a)) and below the linear spectrum when $0 < a < 1$ (Fig. 5(b)). In both cases, it smoothly joins the band edge as $\mu \to 2\Gamma$ (linear limit), without causing any resonances. On the other hand, for $a > 1$, the Krein signature of this eigenmode will change, as a consequence of the spin-polarized mode now having a lower energy than a spin-mixed mode, and thus it is no longer an energy maximizer for the system. This results in small regimes of weak oscillatory instabilities when the internal mode collides with the linear spectrum for frequencies close to the band edge, as shown in Fig. 5(c). Here, purely imaginary eigenvalues are represented by green (light gray) triangles, and complex eigenvalues are represented by blue (dark gray) squares and red (middle gray) circles for their real and imaginary parts, respectively.
B. Spin-mixed modes above or below the spectrum
For the fundamental spin-mixed modes continued from (23), the four-fold degeneracy of zero eigenvalues resulting from the relative phase θ is generally broken for non-zero coupling as only modes with integer 2θ/π can be continued, and moreover the structures of modes with θ = 0 and θ = π/2 become non-equivalent. We discuss here first the case θ = 0, and show in Fig. 6 typical results from numerical diagonalization for different values of a. First, for a < −1, as remarked above the spin-mixed modes lie below the linear spectrum (µ < −2Γ), and the pair of eigenvalues originating from λ = 0 in the anticontinuous limit (µ → −∞) generally goes out along the imaginary axis ( Fig. 6 (b)), where it remains. Thus, spin-mixed modes with θ = 0 and a < −1 are generically unstable. On the other hand, when a > −1 the spin-mixed modes lie above the linear spectrum (µ > 2Γ), and for −1 < a < 1 this eigenvalue pair goes out along the real axis ( Fig. 6 (a)). Thus, these modes remain linearly stable for sufficiently large µ (or, equivalently, weak coupling), but become unstable through oscillatory instabilities (complex eigenvalues, see Figs. 6 (c,d)) as they approach the linear band edge with widening tails, causing resonances between the local oscillation mode at the central site and modes arising from oscillations at small-amplitude sites.
An example of the dynamics that may result from the oscillatory instabilities of the spin-mixed modes in this regime is shown in Fig. 7. Note that, after the initial oscillatory dynamics, the solution settles down at the stable fundamental spin-up mode (in this particular case the mode center is also shifted one site to the right).
As illustrated in Figs. 6 (c,d), the stability regime increases for a increasing towards 1, and exactly at a = 1 the spin-mixed states are always stable. However, for a > 1 the eigenvalue pair originating from zero again goes out along the imaginary axis (not shown in Fig. 6) as for a < −1, and thus spin-mixed modes with θ = 0 are generally unstable also for a > 1. In fact, this latter instability can be considered as a stability exchange with the θ = π/2 spin-mixed mode, which, as illustrated in Fig. 8, is generally unstable with purely imaginary eigenvalues for a < 1 (Figs. 8(a,b)) but stable for a > 1 (Fig. 8(c)).
C. Compact modes in the flat-band limit
In the flat-band limit, we may obtain exact analytical expressions for the stability eigenvalues of the single two-site compacton modes. We focus, as above, on the specific case with $\Gamma = \sigma > 0$ and $a > -1$, when the nonlinear compacton originating from $\mu = -2\Gamma$ (Eq. (19) with upper sign) enters the mini-gap for increasing $\mu$. For all zero-amplitude sites, the eigenvalues are just those corresponding to the flat-band linear spectrum, $\lambda = \pm\mu \pm 2\Gamma$. For the compacton sites, four eigenvalues correspond to local oscillations obtained by eliminating the surrounding lattice: $\lambda = 0$ (doubly degenerate as always) and $\lambda = \pm 2\sqrt{2\Gamma(\mu + 4\Gamma)}$. Since the eigenvalues of these internal modes are always real for $\Gamma > 0$ and they do not couple to the rest of the lattice, they do not generate any instability. The remaining eigenvalues describe the modes coupling the perturbed compacton to the surrounding lattice, and are obtained from the corresponding subspace. The rather cumbersome result can be expressed as: Oscillatory instabilities are generated if the expression inside the square root in (27) becomes negative. Since obtaining explicit general expressions for the instability intervals in $\mu$ would require solving a nontrivial fourth-order equation, we show in Fig. 9 numerical results for the specific parameter values $a = \pm 0.5$ and $1.5$. As can be seen, the compacton remains stable throughout the mini-gap as long as $a \leq 1$ but develops an interval of oscillatory instability in the semi-infinite gap above the spectrum. The instability interval vanishes exactly at $a = 1$, but then moves into the upper part of the mini-gap for $a > 1$. Purely imaginary eigenvalues, resulting from the terms outside the square root in (27) becoming negative, also appear in the semi-infinite gap for $a > 1$.
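The internal-mode eigenvalues quoted above are simple enough to scan across the mini-gap numerically; the short sketch below does only that (the full coupled-subspace eigenvalues of Eq. (27) are not reproduced, since that expression did not survive extraction), confirming that $\lambda = \pm 2\sqrt{2\Gamma(\mu + 4\Gamma)}$ stays real throughout the mini-gap for $\Gamma > 0$.

```python
import numpy as np

Gamma = 0.01                                 # flat-band limit sigma = Gamma
mu = np.linspace(-2 * Gamma, 2 * Gamma, 9)   # frequencies spanning the mini-gap

# Internal-mode eigenvalue pair of the two-site compacton:
# lambda = +-2*sqrt(2*Gamma*(mu + 4*Gamma)); here mu + 4*Gamma >= 2*Gamma > 0,
# so the argument of the square root is positive and lambda is real.
lam = 2.0 * np.sqrt(2.0 * Gamma * (mu + 4.0 * Gamma))
for m, l in zip(mu, lam):
    print(f"mu = {m:+.4f}:  lambda = +-{l:.4f}  (real, no instability from this pair)")
```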
D. Gap modes in the mini-gap
For the fundamental (type I) gap mode continued from the single two-site compacton (25) (assuming again $a > -1$ and $\Gamma > \sigma > 0$), we illustrate in Fig. 10(a) typical results of the numerical stability analysis. As can be seen, as $\sigma$ decreases from the compacton limit $\sigma = \Gamma$, weak instabilities start to develop mainly close to the two gap edges. A further decrease in $\sigma$ yields instabilities in most of the upper half of the gap, while the mode remains stable in large parts of the lower half. Comparison with the stability eigenvalues for the exact compacton (Fig. 9(b)) shows that the instabilities in the upper part of the gap result from resonances between modes corresponding to compacton internal modes (27) and the continuous linear spectrum modes, which get coupled as the tail of the solution gets more extended. (These are seen in Fig. 9(b) as eigenvalue collisions at $\mu \approx 0.005$ and $\mu \approx 0.015$, but do not generate any instability in this figure since the corresponding eigenmodes are uncoupled in the exact compacton limit. However, they generate oscillatory instabilities when the exact compacton condition is not fulfilled, as seen in Fig. 10(a).) On the other hand, the instabilities appearing close to the lower gap edge, where the shape of the gap mode is far from compacton-like and closer to a continuum gap soliton (see Fig. 11(a)), arise from purely imaginary eigenvalues. Direct numerical simulations of the dynamics in this regime (Fig. 11(b)) show that the main outcome of these instabilities is a spatial separation of the spin-up and spin-down components.
As for the type II gap modes that arise in the mini-gap from the superposition of two in-phase neighboring single compactons in the presence of nonlinearity, we obtained purely imaginary eigenvalues in the whole mini-gap, even when the value of $\sigma$ differs only slightly from $\Gamma$ (see Fig. 10(b)). Here, with a further decrease of $\sigma$, eigenvalues related to oscillatory instabilities start to occur, but only in the upper half of the mini-gap.
On the other hand, the instability eigenvalue spectra for type III gap solutions contain only imaginary parts of complex eigenvalues (see Fig. 10(c)). These instabilities are always present in the lower half of the mini-gap and expand to the upper part as we move further from the compacton limit.
V. CONCLUSIONS
We derived the relevant tight-binding model for a zigzag-shaped chain of spin-orbit coupled exciton-polariton condensates, focusing on the case with basis functions of zero angular momentum and chain angles $\pm\pi/4$. The simultaneous presence of spin-orbit coupling and nontrivial geometry opens up a gap in the linear dispersion relation, even in the absence of external magnetic fields. At particular parameter values, where the strengths of the dispersive and spin-orbit nearest-neighbor couplings are equal, the linear dispersion vanishes, leading to two flat bands with associated compact modes localized at two neighboring sites.
We analyzed, numerically and analytically, the existence and stability properties of nonlinear localized modes, both in the semi-infinite gaps and in the mini-gap of the linear spectrum. The stability of fundamental single-peaked modes in the semi-infinite gaps was found to depend critically on the parameter $a$ describing the relative strength of the nonlinear interaction between polaritons of opposite and identical spin (the latter assumed to be always repulsive). Generally, a spin-mixed mode with phase difference $\pi/2$ between spin-up and spin-down components is favoured when $a > 1$ (cross-interactions repulsive and stronger than self-interactions), while a spin-polarized mode is favoured for $a < 1$, which is the typical case in most physical setups. However, significant regimes of linear stability were found also for spin-mixed modes with zero phase difference between components when $|a| < 1$, and for spin-polarized modes when $a > 1$.
For parameter values yielding a flat linear band, nonlinear compactons appear in continuation of the linear compact modes, in the mini-gap as well as in the semi-infinite gaps. The linear stability eigenvalues for a single two-site compacton were obtained analytically, and shown to result in purely stable compactons inside the mini-gap when $a < 1$, while regimes of instability were identified in the semi-infinite gaps, and, when $a > 1$, also inside the mini-gap. Continuing compact two-site modes away from the exact flat-band limit yields the exponentially localized fundamental nonlinear gap modes inside the mini-gap. Several new regimes of instability develop, but the fundamental gap modes typically remain stable in large parts of the lower half of the gap when $a < 1$. We also found numerically nonlinear continuations of superpositions of two overlapping neighboring compactons (i.e., localized on three sites) with phase difference zero or $\pi/2$, where the latter also were found to exhibit significant regimes of linear stability in the mini-gap.
FIG. 10: Imaginary parts of stability eigenvalues for the continuation of the fundamental (type I) (a), type II (b), and type III (c) gap modes inside the mini-gap, when $\Gamma = 0.01$ and $a = 0.5$. Purely imaginary eigenvalues are depicted with green (light gray) triangles, while red (dark gray) circles correspond to imaginary parts of complex eigenvalues. When $\sigma = 0.01$, the eigenvalues for the fundamental gap mode are those of the compacton illustrated in Fig. 9(b). From bottom to top, $\sigma$ is decreased to 0.007. Blue vertical dotted lines represent the locations of the lower and upper gap edges. Profiles of the corresponding solutions at $\sigma = 0.007$ in the mid-gap ($\mu = 0$) are depicted in Fig. 4.
The model studied here may have an experimental implementation with exciton-polaritons in microcavities. Recently, microcavities have been actively investigated as quantum simulators of condensed matter systems. Polaritons have been proposed to simulate the XY Hamiltonian [18], topological insulators [19], and various types of lattices [20][21][22], among other interesting proposals [23], many of which were realized experimentally. In fact, the quasi-one-dimensional zigzag chain considered here may be a more practical system to study the effects of interactions in the presence of spin-orbit coupling as compared to the full two-dimensional systems mentioned above. A possible realization of the studied system could use microcavity pillars or tunable open-access microcavities [24]. In the latter, large values of TE-TM splitting can be achieved, exceeding that of monolithic cavities by a factor of three [10]. Apart from directly controlling the strength of the TE-TM splitting by changing parameters of the experimental system, such as the offset of the frequency from the center of the stop band of the distributed Bragg reflector [25], one more possibility to control the parameters of the system is provided by using the excited states of the zigzag nodes, such as spin vortices, which were shown to influence the sign of the coupling strength between the sites in a polaritonic lattice [26].
Note the tendency for the spin-up and spin-down components to localize mainly on odd and even sites, respectively, after $t \sim 10^4$.
Finally, we note also the recent realizations of zigzag chains with large tunability for atomic Bose-Einstein condensates [27], opening up the possibility for studying related phenomena involving spin-orbit coupling in a different context.
"Physics"
] |
Experimental and numerical study of the symbolic dynamics of a modulated external-cavity semiconductor laser
We study the symbolic dynamics of a stochastic excitable optical system with periodic forcing. Specifically, we consider a directly modulated semiconductor laser with optical feedback in the low frequency fluctuations (LFF) regime. We use a method of symbolic time-series analysis that allows us to uncover serial correlations in the sequence of intensity dropouts. By transforming the sequence of inter-dropout intervals into a sequence of symbolic patterns and analyzing the statistics of the patterns, we unveil correlations among several consecutive dropouts and we identify clear changes in the dynamics as the modulation amplitude increases. To confirm the robustness of the observations, the experiments were performed using two lasers under different feedback conditions. Simulations of the Lang-Kobayashi (LK) model, including spontaneous emission noise, are found to be in good agreement with the observations, providing an interpretation of the correlations present in the dropout sequence as due to the interplay of the underlying attractor topology, the external forcing, and the noise that sustains the dropout events. © 2014 Optical Society of America
OCIS codes: (140.2020) Diode lasers; (140.5960) Semiconductor lasers; (190.3100) Instabilities and chaos; (140.1540) Chaos.
References and links
1. C. Bandt and B. Pompe, “Permutation entropy: a natural complexity measure for time series,” Phys. Rev. Lett. 88, 174102 (2002).
2. O. A. Rosso, H. A. Larrondo, M. T. Martin, A. Plastino, and M. A. Fuentes, “Distinguishing noise from chaos,” Phys. Rev. Lett. 99, 154102 (2007).
3. N. Rubido, J. Tiana-Alsina, M. C. Torrent, J. Garcia-Ojalvo, and C. Masoller, “Language organization and temporal correlations in the spiking activity of an excitable laser: experiments and model comparison,” Phys. Rev. E 84, 026202 (2011).
4. L. Zunino, M. C. Soriano, and O. A. Rosso, “Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach,” Phys. Rev. E 86, 046210 (2012).
5. M. C. Soriano, L. Zunino, L. Larger, I. Fischer, and C. R. Mirasso, “Distinguishing fingerprints of hyperchaotic and stochastic dynamics in optical chaos from a delayed opto-electronic oscillator,” Opt. Lett. 36, 2212 (2011).
6. A. Aragoneses, N. Rubido, J. Tiana-Alsina, M. C. Torrent, and C. Masoller, “Distinguishing signatures of determinism and stochasticity in spiking complex systems,” Sci. Rep. 3, 1778 (2013).
7. D. Lenstra, B. H. Verbeek, and A. J. Den Boef, “Coherence collapse in single-mode semiconductor lasers due to optical feedback,” IEEE J. Quantum Electron. 21, 674–679 (1985).
8. K. Lüdge, ed., Nonlinear Laser Dynamics: From Quantum Dots to Cryptography (Wiley-VCH, 2011).
9. D. M. Kane and K. A. Shore, eds., Unlocking Dynamical Diversity (John Wiley & Sons, 2005).
10. S. Donati and R.-H. Horng, “The diagram of feedback regimes revisited,” IEEE J. Sel. Top. Quantum Electron. 19, 1500309 (2013).
11. M. Giudici, C. Green, G. Giacomelli, U. Nespolo, and J. R. Tredicce, “Andronov bifurcation and excitability in semiconductor lasers with optical feedback,” Phys. Rev. E 55, 6414 (1997).
12. A. M. Yacomotti, M. C. Eguia, J. Aliaga, O. E. Martinez, and G. B. Mindlin, “Interspike time distribution in noise driven excitable systems,” Phys. Rev. Lett. 83, 292 (1999).
13. T. Heil, I. Fischer, W. Elsäßer, and A. Gavrielides, “Dynamics of semiconductor lasers subject to delayed optical feedback: the short cavity regime,” Phys. Rev. Lett. 87, 243901 (2001).
14. A. Tabaka, K. Panajotov, I. Veretennicoff, and M. Sciamanna, “Bifurcation study of regular pulse packages in laser diodes subject to optical feedback,” Phys. Rev. E 70, 036211 (2004).
15. J. A. Reinoso, J. Zamora-Munt, and C. Masoller, “Extreme intensity pulses in a semiconductor laser with a short external cavity,” Phys. Rev. E 87, 062913 (2013).
16. S. D. Cohen, A. Aragoneses, D. Rontani, M. C. Torrent, C. Masoller, and D. J. Gauthier, “Multidimensional subwavelength position sensing using a semiconductor laser with optical feedback,” Opt. Lett. 38, 4331 (2013).
17. L. Junges, T. Pöschel, and J. A. C. Gallas, “Characterization of the stability of semiconductor lasers with delayed feedback according to the Lang–Kobayashi model,” Eur. Phys. J. D 67, 149 (2013).
18. D. W. Sukow, J. R. Gardner, and D. J. Gauthier, “Statistics of power-dropout events in semiconductor lasers with time-delayed optical feedback,” Phys. Rev. A 56, R3370 (1997).
19. J. Mulet and C. R. Mirasso, “Numerical statistics of power dropouts based on the Lang–Kobayashi model,” Phys. Rev. E 59, 5400 (1999).
20. M. Sciamanna, C. Masoller, N. B. Abraham, F. Rogister, P. Mégret, and M. Blondel, “Different regimes of low-frequency fluctuations in vertical-cavity surface-emitting lasers,” J. Opt. Soc. Am. B 20, 37 (2003).
21. J. F. Martínez Avila, H. L. D. de S. Cavalcante, and J. R. Rios Leite, “Experimental deterministic coherence resonance,” Phys. Rev. Lett. 93, 144101 (2004).
22. Y. Hong and K. A. Shore, “Statistical measures of the power dropout ratio in semiconductor lasers subject to optical feedback,” Opt. Lett. 30, 3332 (2005).
23. A. Torcini, S. Barland, G. Giacomelli, and F. Marin, “Low-frequency fluctuations in vertical cavity lasers: experiments versus Lang–Kobayashi dynamics,” Phys. Rev. A 74, 063801 (2006).
24. J. Zamora-Munt, C. Masoller, and J. Garcia-Ojalvo, “Transient low-frequency fluctuations in semiconductor lasers with optical feedback,” Phys. Rev. A 81, 033820 (2010).
25. K. Hicke, X. Porte, and I. Fischer, “Characterizing the deterministic nature of individual power dropouts in semiconductor lasers subject to delayed feedback,” Phys. Rev. E 88, 052904 (2013).
26. D. Baums, W. Elsässer, and E. O. Göbel, “Farey tree and Devil’s staircase of a modulated external-cavity semiconductor laser,” Phys. Rev. Lett. 63, 155 (1989).
27. J. P. Toomey, D. M. Kane, M. W. Lee, and K. A. Shore, “Nonlinear dynamics of semiconductor lasers with feedback and modulation,” Opt. Express 18, 16955–16972 (2010).
28. Y. Liu, N. Kikuchi, and J. Ohtsubo, “Controlling dynamical behavior of a semiconductor laser with external optical feedback,” Phys. Rev. E 51, R2697–R2700 (1995).
29. D. W. Sukow and D. J. Gauthier, “Entraining power-dropout events in an external-cavity semiconductor laser using weak modulation of the injection current,” IEEE J. Quantum Electron. 36, 175 (2000).
30. W.-S. Lam, N. Parvez, and R. Roy, “Effect of spontaneous emission noise and modulation on semiconductor lasers near threshold with optical feedback,” Int. J. Mod. Phys. B 17, 4123–4138 (2003).
31. J. M. Mendez, R. Laje, M. Giudici, J. Aliaga, and G. B. Mindlin, “Dynamics of periodically forced semiconductor laser with optical feedback,” Phys. Rev. E 63, 066218 (2001).
32. F. Marino, M. Giudici, S. Barland, and S. Balle, “Experimental evidence of stochastic resonance in an excitable optical system,” Phys. Rev. Lett. 88, 040601 (2002).
33. J. M. Buldú, J. Garcia-Ojalvo, C. R. Mirasso, and M. C. Torrent, “Stochastic entrainment of optical power dropouts,” Phys. Rev. E 66, 021106 (2002).
34. J. M. Buldú, D. R. Chialvo, C. R. Mirasso, M. C. Torrent, and J. Garcia-Ojalvo, “Ghost resonance in a semiconductor laser with optical feedback,” Europhys. Lett. 64, 178 (2003).
35. T. Schwalger, J. Tiana-Alsina, M. C. Torrent, J. Garcia-Ojalvo, and B. Lindner, “Interspike-interval correlations induced by two-state switching in an excitable system,” Europhys. Lett. 99, 10004 (2012).
36. R. Lang and K. Kobayashi, “External optical feedback effects on semiconductor injection laser properties,” IEEE J. Quantum Electron. 16, 347 (1980).
Introduction
Inferring signatures of determinism in stochastic high-dimensional complex systems is a challenging task, and much effort is focused on developing efficient and computationally fast methods of time-series analysis that are useful even in the presence of high levels of noise [1][2][3][4][5][6]. In optics, a long-standing discussion about the roles of stochastic and deterministic nonlinear processes comes from the dynamics of semiconductor lasers with optical feedback. Their dynamical behavior has been studied for decades and is still the object of intense research, allowing for the observation of a great variety of phenomena [7][8][9][10], including excitability [11,12], regular pulses [13,14], extreme pulses and intermittency [15], quasiperiodicity [16] and chaos [17].
A particular dynamical behavior occurs for moderate feedback near the solitary laser threshold, and is referred to as low-frequency fluctuations (LFFs) [18][19][20][21][22][23][24][25]. In the LFF regime, the laser output intensity displays irregular, apparently random and sudden dropouts. In particular, the LFF dynamics has been studied in detail when the laser current is periodically modulated [26,27], not only because the LFFs can be controlled via current modulation [28], but also, from a complex systems perspective, because the interplay of nonlinearity, noise, periodic forcing and delayed feedback leads to entrainment and synchronization [29,30], providing a controllable experimental setup for studying these phenomena. In addition, because the LFF dynamics is excitable, the influence of external forcing has also attracted attention from the point of view of improving our understanding of how excitable systems respond to external signals to encode information [31][32][33][34][35].
In this paper, we use a symbolic method of time-series analysis, referred to as ordinal analysis [1], to study the transition from the LFF dynamics of the unmodulated laser, in which the dropouts are highly stochastic and reveal only weak signatures of an underlying deterministic attractor [6], to the modulated LFF dynamics, which consists of more regular dropouts, with a periodicity that is related to the external forcing period [29]. For increasing modulation amplitude there is a gradual transition from mainly stochastic to mainly deterministic behavior, and our goal is to identify in this transition characteristic features which are fingerprints of the underlying topology of the phase space of the system.
By using ordinal analysis applied to experimentally recorded sequences of inter-dropout intervals (IDIs), we identify clear changes in the symbolic dynamics as the modulation amplitude increases. Specifically, our analysis uncovers the presence of serial correlations in the sequence of dropouts, and reveals how they are modified by the amplitude of the external forcing. To demonstrate the robustness and the generality of the observations, the experiments are performed with two lasers under different feedback conditions.
We also show that simulations of the Lang-Kobayashi (LK) model [36] are in good qualitative agreement with the experimental observations. While the LK model has been shown to adequately reproduce the main statistical features of the LFF dynamics, such as the IDI distribution with and without modulation [19,29,30], it has also been shown that the LFFs are noise sustained in the LK model [23,24] (i.e., the LFFs are a transient dynamics that dies out when the trajectory finds a stable cavity mode, in deterministic simulations of the LK model). Therefore, it is remarkable that, in spite of the fact that the inclusion of noise is required for simulating sustained LFFs, the model adequately reproduces the symbolic dynamics, and in particular, the correlations present in the sequence of dropouts, and how they vary with the modulation amplitude.
Experimental setup and LFF dynamics with current modulation
The experimental setup is shown in Fig. 1: we perform the experiments with a laser emitting at 650 nm with free-space feedback provided by a mirror, and with a laser emitting at 1550 nm, with feedback provided by an optical fiber.
For the 650 nm laser, the external cavity is 70 cm (giving a feedback time delay of 4.7 ns) and the feedback threshold reduction is 8%. A 50/50 beam-splitter sends light to a photo-detector (Thorlabs DET210) connected with a 1 GHz oscilloscope (Agilent DSO 6104A). The solitary threshold is 38 mA and the current and temperature (17 °C) are stabilized with an accuracy of 0.01 mA and 0.01 °C, respectively, using a controller (Thorlabs ITC501). Through a bias-tee in the laser head, a sinusoidal RF component from a leveled waveform generator (HP Agilent 3325A) is combined with a constant dc current of 39 mA. The modulation frequency is f_mod = 17 MHz and the modulation amplitude varies from 0 mV to 78 mV in steps of 7.8 mV (from 0% to 4% of the dc current in steps of 0.4%). For each modulation amplitude, five measurements of 3.2 ms were recorded. The time series contain between 74,000 and 207,000 dropouts, at low and high modulation amplitude, respectively.
For the 1550 nm laser, the time delay is 25 ns and the feedback threshold reduction is 10.7%. The solitary threshold is 11.20 mA, the dc value of the pump current is 12.50 mA, the modulation frequency is f_mod = 2 MHz and the modulation amplitude varies from 0 mV to 150 mV in steps of 10 mV (from 0% to 24% of the dc current in steps of 1.6%). The time series contain between 8,000 and 19,000 dropouts, at low and high modulation amplitude, respectively. While, for the 1550 nm laser, the modulation frequency is about one order of magnitude smaller than for the 650 nm laser, the relation with the characteristic time-scale of the LFF dynamics, given by the average inter-dropout interval ΔT, is about the same: for the 650 nm laser, ΔT = 365 ns and thus ΔT × f_mod = 6.2. For the 1550 nm laser, ΔT = 2.55 μs and ΔT × f_mod = 5.1.
Figure 2 displays the intensity time series, the probability distribution functions (PDFs) of inter-dropout intervals ΔT_i (IDIs), and the return maps, ΔT_i vs ΔT_{i+1}, for four modulation amplitudes for the 650 nm laser. As has been reported in the literature, with current modulation the dropouts tend to occur at the same phase in the drive cycle, and the IDIs are multiples of the modulation period [11,29,30]. For increasing modulation amplitude, the IDIs become progressively smaller multiples of the modulation period and, for high enough modulation amplitude, the power dropouts occur every modulation cycle [29]. Here, for the highest modulation amplitude, the PDF presents a strong peak at two times the modulation period [see Fig. 2(k)]. The return maps (third column of Fig. 2) display a clustered structure, with "islands" that correspond to the well-defined peaks observed in the PDFs, also in good agreement with previous reports [11,29]. A similar behavior is observed with the 1550 nm laser. The plots of ΔT_{i+1} vs ΔT_i are almost symmetric, suggesting that ΔT_{i+1} < ΔT_i and ΔT_{i+1} > ΔT_i are equally probable; however, in Sec. 4 we will demonstrate that the modulation induces correlations in the ΔT_i sequence which cannot be inferred from these plots.
Lang and Kobayashi model and method of symbolic time-series analysis
The Lang and Kobayashi (LK) rate equations for the slowly varying complex electric field amplitude E and the carrier density N are given in [36], where τ_p and τ_N are the photon and carrier lifetimes, respectively, α is the line-width enhancement factor, G is the optical gain, G = N/(1 + ε|E|²) (with ε being a saturation coefficient), μ is the pump current parameter, η is the feedback strength, τ is the feedback delay time, ω_0 τ is the feedback phase, and β_sp is the noise strength, representing spontaneous emission.
For simulating the dynamics with current modulation, the pump current parameter is μ = μ_0 + a sin(2π f_mod t), where a is the modulation amplitude, f_mod is the modulation frequency, and μ_0 is the dc current. Simulations of 2 ms were performed. The intensity time-series were averaged over a moving window of 1 ns to simulate the bandwidth of the experimental detection system. The averaged time series contained between 12,000 and 30,000 dropouts for low and high modulation amplitude, respectively. The best agreement with the dynamics found in the experimental data was for μ_0 = 1.01, f_mod = 21 MHz, ε = 0.01, k = 300 ns⁻¹, τ = 5 ns, γ = 1 ns⁻¹, β_sp = 10⁻⁴ ns⁻¹, η = 10 ns⁻¹, and α = 4. For these parameters ΔT = 127 ns and ΔT × f_mod = 2.7.
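For readers who want to experiment with this regime, the sketch below integrates one commonly used dimensionless form of the LK equations with current modulation (Euler-Maruyama scheme, delay handled with a circular buffer). The normalization, the noise term and the default parameter values are our assumptions for illustration; they mimic, but are not taken verbatim from, the parameter set quoted above.

```python
import numpy as np

def simulate_lk(t_end=2000.0, dt=1e-3, k=300.0, gamma=1.0, alpha=4.0,
                eps=0.01, eta=10.0, tau=5.0, phi0=0.0, beta_sp=1e-4,
                mu0=1.01, a=0.04, f_mod=0.021, seed=0):
    """Integrate a Lang-Kobayashi-type model (times in ns, rates in 1/ns):
        dE/dt = k*(1+i*alpha)*(G-1)*E + eta*E(t-tau)*exp(-i*phi0) + noise
        dN/dt = gamma*(mu(t) - N - G*|E|^2),   G = N/(1+eps*|E|^2)
    with mu(t) = mu0 + a*sin(2*pi*f_mod*t). Returns the intensity |E|^2.
    Reduce dt if the integration looks inaccurate."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    n_delay = max(int(tau / dt), 1)
    buf = np.full(n_delay, 1e-3 + 0j)       # circular buffer storing E(t - tau)
    E, N = 1e-3 + 0j, mu0
    intensity = np.empty(n_steps)
    for i in range(n_steps):
        mu = mu0 + a * np.sin(2 * np.pi * f_mod * i * dt)
        E_del = buf[i % n_delay]             # this slot was filled n_delay steps ago
        G = N / (1.0 + eps * abs(E) ** 2)
        noise = np.sqrt(beta_sp * max(N, 0.0) * dt) * (
            rng.standard_normal() + 1j * rng.standard_normal())
        E_new = E + dt * (k * (1 + 1j * alpha) * (G - 1.0) * E
                          + eta * E_del * np.exp(-1j * phi0)) + noise
        N += dt * gamma * (mu - N - G * abs(E) ** 2)
        buf[i % n_delay] = E                 # overwrite with the newest sample
        E = E_new
        intensity[i] = abs(E) ** 2
    return intensity
```

Dropouts can then be detected from the window-averaged intensity (e.g. by threshold crossings) and the resulting IDI sequence fed to the ordinal analysis described below.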
The experimental and numerical sequences of IDIs are analyzed by means of ordinal analysis [1], in which the IDI sequence is transformed into a sequence of ordinal patterns (OPs), also referred to as words. Words of length D are defined by considering the relative length of D consecutive IDIs [see Fig. 2(a)]. For D = 2 there are two OPs: ΔT_i < ΔT_{i+1} gives word '01' and ΔT_i > ΔT_{i+1} gives word '10'; for D = 3 there are six OPs: ΔT_i < ΔT_{i+1} < ΔT_{i+2} gives '012', ΔT_{i+2} < ΔT_{i+1} < ΔT_i gives '210', etc. This symbolic transformation keeps the information about correlations present in the dropout sequence, but neglects the information contained in the duration of the IDIs. The words are formed by consecutive non-superposing IDIs (i.e., for D = 2, ΔT_i, ΔT_{i+1} define one word and ΔT_{i+2}, ΔT_{i+3} define the next one). Then, the probabilities of the different words are computed in each time series.
In order to select the optimal length of the words for the analysis, we need to consider the length of the correlations present in the time-series: if D is much longer than the correlation length, most words will appear in the sequence with similar probabilities. In addition, we need to consider the length of the time-series, because the number of possible words increases with D as D!, and for large D values, long time series will be needed for computing the word probabilities with robust statistics. Here, we recorded long time series of dropouts and the main limitation for the value of D comes from the large level of stochasticity of the LFF dynamics, which results in correlations among only a few consecutive dropouts. Thus, we limit the ordinal analysis to D = 2 and D = 3 words. We will show that the LFF symbolic dynamics is such that the analysis with words of D = 3 allows us to uncover correlations which are not seen with D = 2 words.
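As an illustration of the symbolic transformation just described, a minimal sketch (the function names are ours, not the authors') that maps an IDI sequence to non-overlapping ordinal words of length D and estimates their probabilities could look as follows:

```python
import numpy as np
from itertools import permutations

def ordinal_words(idis, D=3):
    """Non-overlapping ordinal patterns: each group of D consecutive IDIs is
    replaced by the word giving the rank of every interval in the group
    (e.g. for D = 2, '01' if dT_i < dT_{i+1} and '10' otherwise)."""
    idis = np.asarray(idis, dtype=float)
    words = []
    for k in range(len(idis) // D):
        chunk = idis[k * D:(k + 1) * D]
        ranks = np.argsort(np.argsort(chunk))   # rank of each interval in the chunk
        words.append(''.join(str(r) for r in ranks))
    return words

def word_probabilities(words, D=3):
    """Relative frequency of each of the D! possible words."""
    labels = [''.join(map(str, p)) for p in permutations(range(D))]
    total = max(len(words), 1)
    return {w: words.count(w) / total for w in labels}
```

The estimated probabilities can then be compared with the uniform value 1/D! expected when the dropouts carry no serial correlations.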
From the sequence of words, additional information can be extracted by computing the transition probabilities (TPs) [3] from one word to the next. In Fig. 2(d), the transition 10 → 01 is depicted as an example. With D = 2 words, the TP analysis can uncover correlations among five consecutive dropouts, and thus allows us to extract information about the memory of the system on a longer time scale.
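A correspondingly simple estimate of the word-to-word transition probabilities (again only a sketch, with our own naming) is:

```python
from collections import Counter, defaultdict

def transition_probabilities(words):
    """P(next word | current word), estimated from consecutive pairs in the
    sequence of non-overlapping words (e.g. the transition '10' -> '01')."""
    counts = defaultdict(Counter)
    for w_now, w_next in zip(words[:-1], words[1:]):
        counts[w_now][w_next] += 1
    return {w_now: {w_next: c / sum(counter.values())
                    for w_next, c in counter.items()}
            for w_now, counter in counts.items()}
```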
Results
Figure 3 shows the probabilities of words of D = 2 (a, b, c) and D = 3 (d, e, f), vs. the modulation amplitude, for the 650 nm laser (a, d), for the 1550 nm laser (b, e), and for the simulated time series (c, f). The gray region indicates probability values consistent with the null hypothesis (NH) that the words are equally probable, and thus, that there are no correlations among the dropouts. In other words, probability values outside the gray regions are not consistent with a uniform distribution of word probabilities and reveal serial correlations in the IDI sequence. It can be noticed that the gray region is narrower in (a, d) than in (b, e) and (c, f). This is due to the fact that the number of dropouts recorded for the 650 nm laser is much larger than for the 1550 nm laser (the corresponding delay times being 4.7 ns and 25 ns respectively), and is also larger than the number of dropouts in the simulated data.
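A short sketch of how such a significance band can be computed, following the p ± 3σ prescription quoted in the caption of Fig. 3 (the helper name is ours):

```python
import math

def null_hypothesis_band(D, n_words, n_sigma=3):
    """Band of word probabilities consistent with all D! words being equally
    likely: p +/- n_sigma*sigma with p = 1/D! and sigma = sqrt(p*(1-p)/N)."""
    p = 1.0 / math.factorial(D)
    sigma = math.sqrt(p * (1.0 - p) / n_words)
    return p - n_sigma * sigma, p + n_sigma * sigma

# Example: D = 3 with ~24,600 non-overlapping words (74,000 IDIs / 3)
# gives roughly (0.160, 0.174) around p = 1/6.
```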
It is observed that the dynamics is consistent with the NH, in the case of D = 2, for small and for high modulation amplitude. However, the analysis with D = 3 reveals that, for high modulation, the probabilities are outside the gray region, revealing correlations among four consecutive IDIs. We note that there are two groups of words, one less probable ('012', '210') and one more probable ('021', '102', '120', '201'), resulting, for D = 2, in the same probabilities for '01' as for '10'. With D = 3, the less probable words are those which imply three consecutively increasing or decreasing IDIs and this can be understood in the following terms: strong enough modulation forces a rhythm in the LFF dynamics, and three consecutively increasing or decreasing intervals imply a loss of synchrony with the external rhythm, and thus, are less likely to occur. By computing the four transition probabilities of D = 2 words, depicted in Fig. 4, we obtain information about correlations among five consecutive dropouts. This analysis is statistically more robust than computing the probabilities of the 24 words of length D = 4.
The results in Fig. 4 confirm that, at this time scale, the dynamics is still consistent with the NH for low modulation amplitudes but, as the modulation increases, a transition takes place and the TPs display a deterministic-like behavior. This transition occurs at the same values as in Fig. 3 (at about 1.8% modulation amplitude for the 650 nm case, 16% for the 1550 nm case, and 6% for the simulated data). Figures 4(e) and (f) show that, for high modulation amplitude, the most probable transitions are those which go from one word to the same word ('01 → 01' and '10 → 10'), because the external forcing imposes a periodicity in the LFF dynamics.
In Figs. 3 and 4, there is a good qualitative agreement between experiments and simulations. As discussed in the introduction, within the framework of the LK model, the LFF dynamics is sustained by spontaneous emission noise, and thus, one could expect weak correlations in the sequence of dropouts. While this is indeed the case for no modulation or very weak modulation amplitude, larger modulation induces precise correlations, which are adequately reproduced by the LK model. For strong modulation the reason why some words and transitions are more probable than the others is well understood (as due to the external rhythm imposed by the modulation), but for moderate modulation amplitude, further investigations are needed in order to understand the symbolic behavior.
Conclusions
We have studied experimentally and numerically the symbolic dynamics of a semiconductor laser with optical feedback and current modulation in the LFF regime. We have analyzed time series of inter-dropout intervals employing a symbolic transformation that allows us to identify clear changes in the dynamics induced by the modulation. For weak modulation the sequence of dropouts is found to be mainly stochastic, while for increasing modulation it becomes more deterministic, with correlations among several consecutive dropouts. We have identified clear changes in the probabilities of the symbolic words and transitions with increasing modulation amplitude. The LK model has also been tested and we have found a good qualitative agreement with the experimental observations. We speculate that the symbolic behavior uncovered here is a fingerprint of the underlying topology of the phase space, and is due to the interplay of noise-induced escapes from a stable external cavity mode, and the dynamics in the coexisting attractor. It would be interesting for a future study to analyze the influence of varying the modulation frequency and the noise strength. It would also be interesting to use an analytic effective potential [18], to further understand the mechanisms underlying the symbolic dynamics of the modulated LFFs.
The methodology proposed here can be a useful tool for identifying signatures of determinism in high-dimensional and stochastic complex systems. It provides a computationally efficient way of unveiling structures and transitions hidden in the time series. As the laser in the LFF regime is an excitable system, our results could be relevant for understanding serial correlations in the spike sequences of other forced excitable systems. Also as a future study, it would be interesting to consider simple models of excitable systems, to investigate the universality of the correlations induced by periodic forcing.
Fig. 3. Probabilities of the words of D = 2 (a, b, c) and D = 3 (d, e, f) versus the modulation amplitude for the 650 nm laser (a, d), the 1550 nm laser (b, e), and the numerical simulations (c, f). The gray region (p ± 3σ, where p = 1/D!, σ = √(p(1 − p)/N), and N is the number of words in the symbolic sequence) indicates probability values consistent, with 95% confidence level, with the null hypothesis that all the words are equally probable (i.e., that there are no correlations present in the sequence of dropouts).
Bridging the Gap in Personalized Oncology using Omics Data and Epidemiology
Personalized medicine is the concept of tailoring a personalized treatment for each individual or group of individuals based primarily on their genomic composition as well as environmental and demographic factors [1]. While most researchers and scientists use the terms personalized medicine and precision medicine interchangeably, some argue that they are not the same. “Personalized” is the older term and “precision” is the newer and more accurate term to describe the concept according to the National Research Council. Personalized healthcare may appear to be a relatively new and revolutionary concept. However, personalization has been applied and acknowledged in the field of medicine long before the success of the Human Genome Project [2]. Personalized and/or precision medicine has gained widespread attention and awareness from the public soon after the completion of the Human Genome Project in 2003.
Introduction
Personalized medicine is the concept of tailoring a personalized treatment for each individual or group of individuals based primarily on their genomic composition as well as environmental and demographic factors [1]. While most researchers and scientists use the terms personalized medicine and precision medicine interchangeably, some argue that they are not the same. "Personalized" is the older term and "precision" is the newer and more accurate term to describe the concept according to the National Research Council. Personalized healthcare may appear to be a relatively new and revolutionary concept. However, personalization has been applied and acknowledged in the field of medicine long before the success of the Human Genome Project [2]. Personalized and/or precision medicine has gained widespread attention and awareness from the public soon after the completion of the Human Genome Project in 2003.
Since initial sequencing and human genome analysis was accomplished, a huge effort has been put into medical research focused on associating genomic variations with individual phenotypes [3]. There have been numerous breakthroughs in the field of precision oncology over the past few years, for the most part due to genetic biomarkers, which are certain genes that may affect a person's response to therapy and their prognosis [4]. A true application is that women are now screened for single nucleotide polymorphisms (SNPs) in the BRCA1 and/or BRCA2 genes, which, if found, significantly increase the risk of breast cancer and ovarian cancer. The early detection of such biomarkers allows women to receive preventive chemotherapy or consider a prophylactic operation [5].
Genetic polymorphisms and mutations in drug metabolizing enzymes, transporters, receptors, and other drug targets are linked to inter-individual differences in the efficacy and toxicity of medications as well as genetic disease risk factors. The highest impact on personalized medicine is often seen for drugs with a narrow therapeutic index, with important examples emerging from treatment with antidepressants, oral anticoagulants, and chemotherapeutics, which are metabolized by CYP2D6/CYP2C9, VKORC1, and TPMT, respectively. To apply the increasing amounts of pharmacogenomics knowledge to clinical practice, specific dosage recommendations based on genotypes will have to be developed to guide the clinician. Increases in efficacy and safety by the individualization of medical treatment may have benefits in financial terms, if information is presented to show that personalized medicine will be cost-effective in healthcare systems [6]. Modern laboratory technologies make it possible to detect genetic polymorphisms and/or mutations rapidly and conveniently. New technologies are evolving to transform diagnostic tests and translational research into routine practice. Laboratory methods such as chip microarrays, NGS, and organ-on-chip analysis have been used as powerful tools in research across collaborative fields of science and omics analytics, which will be covered extensively in this review.
Leukemia
Remarkable improvements in survival rates have made childhood acute lymphoblastic leukemia (ALL) a success story within pediatric oncology. Despite the progress that has been observed, relapses occur unpredictably, and treatment can be associated with acute and long-term toxicity. This has prompted efforts to better tailor ALL therapy in individual patients. Adaptation of personalized or precision medicine in childhood leukemia takes into consideration not only somatically acquired characteristics of tumor cells, but also inherent patient characteristics. Genome-wide association studies have identified inherited genetic polymorphisms that predispose to the development of leukemia and influence treatment response and outcome [8][9][10][11].
The identification of germline polymorphisms in thiopurine methyltransferase (TPMT), a gene encoding an enzyme responsible for the metabolism of thiopurines and impacting tolerance of 6-mercaptopurine (6-MP), is an example of an observation that has had important implications for optimizing chemotherapy dosing in individual patients. Those harbouring mutant TPMT alleles have reduced TPMT function and accumulate excessive active thiopurine metabolites. While these patients have been reported to have more favorable outcomes, they are at higher risk for developing myelosuppression and secondary malignancies and may be unable to tolerate full doses of 6-MP [12,13]. Some treatment groups have now recommended routine screening for TPMT activity, in order to tailor dosing of thiopurines accordingly [14].
Another aspect of personalizing therapy for children with ALL focuses on measures to prevent acute and long-term treatment-related toxicities. Hypersensitivity reactions are one of the most common side effects of asparaginase, which is a mainstay of ALL therapy. These reactions frequently lead to the production of neutralizing antibodies and have been associated with inferior outcomes [15]. Fernandez and colleagues have recently identified a risk for asparaginase hypersensitivity reactions and antiasparaginase antibodies in individuals with HLA-DRB1*07:01 alleles. This study demonstrates a strategy for a priori identification of patients predisposed to developing an allergic reaction to this important drug [16].
In addition, glucocorticoid-induced osteonecrosis (ON) occurs in up to 20% of adolescent patients receiving ALL therapy and leads to significant morbidity [17]. While several well-defined clinical risk factors exist, such as age and corticosteroid exposure, recent efforts have also focused on identifying genomic predictors.
Chang et al. [18] recently reported alterations in genes in the glutamate receptor pathway as predictors of glucocorticoid-associated ON in a large genome-wide association study that included 2285 children with ALL. Nevertheless, individualized glucocorticoid treatment is still under extensive investigation.
Meanwhile, adult ALL has been under investigation for personalized therapy. More sophisticated diagnostic procedures, including immunophenotyping, cytogenetics, molecular genetics, and new genomics, have allowed the definition of new ALL sub-entities which, in some cases, has translated into specific therapies. A great achievement is the possibility of evaluating minimal residual disease (MRD), which can now be done in about 95% of ALL patients. MRD is the most important prognostic factor and thus a major component of a personalized treatment algorithm [19,20]. Targeted therapy in Philadelphia chromosome-positive ALL (Ph+ALL) with tyrosine kinase inhibitors (TKI) and immunotherapy with monoclonal antibodies targeting surface antigens expressed on leukemic blast cells have extended the armamentarium [21]. Not only ALL but also acute myeloid leukemia (AML) has seen applications of personalized medicine. Currently, treatment decisions for the majority of patients with AML depend on age and cytogenetic characteristics, and only occasionally on the use of a limited number of molecular alterations at diagnosis (mutations in FLT3, NPM1, and CEBPA, and possibly IDH1/2 mutations) [22].
Melanoma
Advances in melanoma molecular pathogenesis have opened new insights into the management of advanced melanoma using personalized medicine. The development of novel therapies that target causative genetic events and improve disease-free survival and overall survival was the key application [23]. The selective BRAF kinase inhibitors (vemurafenib and dabrafenib) are effective in BRAF-mutant melanoma; MEK inhibitors (trametinib and cobimetinib) show efficacy against both BRAF- and KRAS/NRAS-driven tumors; KIT inhibitors (imatinib, dasatinib, sunitinib and nilotinib) have demonstrated clinical responses in melanoma arising from acral, mucosal, and chronic sun-damaged cutaneous sites; and additionally, there are novel therapeutic monoclonal antibodies targeted against immunosuppressive molecules such as CTLA4, PD-1 and PD-L1 [24]. Therefore, molecular diagnostics are increasingly performed routinely in the diagnosis and management of patients with melanoma.
Breast cancer
Breast cancer is the most prevalent cancer in women. It is estimated that more than 500,000 annual deaths occur worldwide due to breast cancer [25]. The molecular subtyping of breast cancer is now possible due to the advancement in genomic technologies. The main molecular subtypes of breast cancer are: estrogen receptor positive (ER+), which is further divided into the types Luminal A and Luminal B; triple-negative, which is human epidermal growth factor receptor 2 negative (HER2−), ER− and progesterone receptor negative (PgR−); and HER2+. The triple-negative and HER2+ subtypes are associated with bad prognoses while ER+ subtypes generally have good prognosis, with Luminal A being the more favorable subtype [26]. HER2+ has seen the highest success with targeted therapy in the form of monoclonal antibodies (trastuzumab, pertuzumab + trastuzumab), which are used as adjuvants in addition to chemotherapy [27].
Circulating tumor cells (CTCs) are a valuable biomarker in breast cancer patients. The monitoring and evaluation of the genomic changes in CTCs through whole genome sequencing (WGS) is helpful in the determination of the appropriate therapeutic intervention [28]. Another genomic biomarker of importance is the microRNA: a non-coding RNA that regulates gene expression. It was revealed that microRNA can trigger the release of chemotherapeutic agents from nanoparticles and into the cells [29]. Controlling the release time of the chemotherapeutic may reduce chemotherapy-induced side effects.
Lung cancer
Lung cancer is the leading cause of cancer-related death. It is estimated that lung cancer kills more than 1 million people every year. There are different subtypes of lung cancer. The first is Non-Small Cell Lung Carcinoma (NSCLC) which is further divided into the two subtypes Adenocarcinoma (ADC) and Squamous Cell Carcinoma (SCC). The second subtype is Small Cell Lung Carcinoma (SCLC). NSCLC cases constitute an estimate of 85% of lung cancer cases globally [30]. Thanks to DNA whole exome sequencing (WES), a full catalogue of somatically acquired mutations in lung cancer is available. However, targeted therapy still remains a challenge since lung tumors exhibit intra-tumor heterogeneity [31].
The majority of lung cancer cases, mainly in ADC, have EGFR mutations and the EML4-ALK fusion gene. There is a large population of lung cancer patients of the NSCLC subtype that have the ALK fusion gene. It has been a target for ALK inhibitors such as crizotinib, which has proven successful especially in ADC cases, where it has a response rate of 57%-74% and causes fewer adverse effects in comparison with traditional chemotherapy [32]. Crizotinib has had massive success in NSCLC patients with ROS1 rearrangements as well, with a response rate close to 80% [33].
Prostate cancer
Prostate cancer (CaP), indolent or aggressive, is the most common malignancy in men worldwide. Indolent cases, which are localized and slow-progressing, are more common than aggressive ones that tend to be resistant to radiotherapy (RT) and prone to metastasis [34]. One of the earliest diagnostic biomarkers is Prostate Specific Antigen (PSA), which is present in higher amounts than usual in cases of malignancy. Mutated Androgen Receptor (AR) has a pronounced link with prostate cancer incidence. Traditional therapies include surgical or pharmacological androgen deprivation therapy (ADT). Moreover, inhibition of AR signalling, for example by targeting the CYP17 enzyme (abiraterone) or the receptor itself (enzalutamide, apalutamide), has been effective [35]. AR mutations and gene amplification are also common in castration resistant prostate cancer (CRPC): an aggressive form in which cells develop ADT resistance by hyper-activating androgen signalling [36].
Colorectal cancer
Advances in the treatment of colorectal cancer have led to an improvement in survival from 12 months with fluorouracil monotherapy to approximately 2 years. However, there are significant molecular differences between tumors which can affect both prognosis and response to treatment. Personalized medicine aims to tailor treatment according to the characteristics of the individual patient, specifically for early metastatic prognosis and for the precise application of biomarkers to personalized treatments. The first true use of personalized medicine in metastatic CRC (mCRC) was the clinical testing of KRAS mutations (which occur in approximately 45-50% of patients with CRC) [37]. Subsequently, anti-EGFR treatment is given only to patients who are KRAS wild-type. However, not all patients who are KRAS wild-type respond to anti-EGFR therapy and therefore there has been substantial research into other potential predictive biomarkers for future precision application [38].
After KRAS mutations, BRAF V600E mutations currently have the strongest evidence to support their use as a predictive biomarker for EGFR-targeted mAb activity [39]. Most but not all of the available evidence links BRAF V600E mutations with resistance to EGFR-targeted mAb therapy [40], and therefore it is still under extensive research.
Challenges in Personalized Medicine
Genomics-based personalized medicine has gained public recognition for its multiple achievements during a short time period. However, many obstacles hinder its practical clinical implementation. Since the realization of the HGP, the cost of genomic analysis has significantly decreased as has the time required to sequence an entire genome. This development augmented the amount of available genomic, transcriptomic, proteomic and metabolomic information into what is known as "big data": a term applied to datasets whose size or type is beyond the ability of traditional relational databases to capture, manage, and process the data. The rapid emergence of big data in the field poses challenges for bioinformaticians. Furthermore, the use of medical big data requires the integration of various data classes, such as electronic health records (EHR), epidemiological studies, clinical trials, clinical registries, biobank data, multi-omics [41]. One merged, comprehensive, international network would bridge the gap between bioinformatics and clinical practice, facilitating the successful application of personalized healthcare in which effective communication between clinicians and bioinformaticians is crucial [42].
The article "Seven Questions for Personalized Medicine" discussed how PM is striving to fulfill "unrealistic" expectations set by the public. The authors argue that PM is not inclusive, as the cost of targeted therapy is afforded only by the rich in developed countries, given that most current personalized therapies are cost-ineffective. Another argument states that the semi-complete dependence on genomics for the development of targeted therapies will not produce reliable predictions because the genome of an individual does not necessarily play a more important role in one's health than one's socioeconomic background and environment. Access to large omics (genomics, transcriptomics, proteomics, epigenomic, metagenomics, metabolomics, nutriomics, etc.) data has revolutionized biology and has led to the emergence of systems biology for a better understanding of biological mechanisms. Traditional observational epidemiology or biology alone are not sufficient to fully elucidate multifaceted heterogeneous disorders, and this directly limits all prevention and treatment pursuits for such diseases [43]. It is widely recognized that multiple dimensions must be considered simultaneously to gain understanding of biological systems [44]. Therefore, there is an urgent need to bridge the gap between advances in high-throughput technologies and the ability to manage, integrate, analyze, and interpret omics data to progress in personalized medicine [45].
As medicine begins to embrace genomic tools that enable more precise prediction and treatment of disease, which include "whole genome" interrogation of sequence variation, transcription, proteins, and metabolites, the fundamentals of genomic and personalized medicine will require the development, standardization, and integration of several important tools into health systems and clinical workflows. These tools include health risk assessment, family health history, and clinical decision support for complex risk and predictive information. Together with genomic information, these tools will enable a paradigm shift to a comprehensive approach that will identify individual risks and guide clinical management and decision making, all of which form the basis for a more informed and effective approach to patient care. The ultimate challenge is the clinical application for improved patient care. In fact, many physicians are unprepared to incorporate personal genetic testing into their practice and it is unclear how to best apply research results to improve patient care [46]. Thus, the area where bioinformatics can have the greatest clinical impact is pharmacogenomics.
Bioinformatics serves as a bridge to implement parts of the clinical application of personalized treatment. Bioinformatics also translates discoveries to the clinic by disseminating them through curated, searchable databases like PharmGKB, dbGaP, PacDB and FDA AERS [47]. Finally, there are challenges and opportunities for bioinformatics to integrate with the electronic medical record (EMR). For example, the biobank system at Vanderbilt links patient DNA with deidentified EMRs to provide a rich research database for additional translational research in disease-gene and drug-gene associations [48]. Ultimately, bioinformatics needs to develop methods that integrate the genome in the clinic and allow physicians to use personalized medicine in their daily practice.
Clinical Integration in Personalized Medicine
The advance of precision/personalized medicine relies heavily on the ability to study biological phenomena at omics levels, although the practice of precision/personalized medicine does not use only omics data and knowledge. This is because molecular characteristics obtained from omics data can classify diseases and identify subpopulations of patients suited to a certain common treatment more precisely. Following this trend, many of the emerging fields of large-scale data-rich biology are designated by adding the suffix '-omics' onto previously used terms. Specifically, pharmacogenomics, metabolomics, proteomics and others contribute to the growing era of personalized treatment.
Pharmacogenomics
Pharmacogenomics is the study of how a person's response to drugs is affected by his/her genetic makeup. It combines pharmacology (the science of drugs) and genomics (the study of genes and their functions) to develop effective, safe medications and doses that will be tailored to a person's genetic makeup. The rapid accumulation of knowledge on genome-disease and genome-drug interactions has also impelled the transformation of pharmacogenetics into a new entity of human genetics, namely, pharmacogenomics. Enabled by high-throughput technologies in DNA analysis, genomics introduces a further dimension to individualized predictive medicine. Determining an individual's unique genetic profile in respect to disease risk and drug response will have a profound impact on understanding the pathogenesis of disease, and it may enable truly personalized therapy. This concept of therapy with the right drug at the right dose in the right patient has emerged as an urgent requirement among several studies on adverse drug effects in hospitalized patients [49,50].
Technological progress in analyzing millions of genes and intergenic variants in the form of single nucleotide polymorphisms (SNP) and copy number variants per individual has accelerated our comprehension of individual differences in genetic makeup. Genome-wide association studies (GWAS) have successfully identified common genetic variations associated with numerous complex diseases.
Pharmacometabolomics
Metabolomics is defined as "the systematic identification and quantification of the small molecule metabolic products (the metabolome) of a biological system (cell, tissue, organ, biological fluid, or organism) at a specific point in time." The discovery and application of NMR spectroscopy in the 1940s revolutionized metabolomics, making the detection of metabolites in biological fluids a reality [43]. In 2003 the first metabolic web-based database, METLIN, was created by the Scripps Research Institute in the USA. It now contains millions of small molecules including exogenous drugs and metabolites. In 2007 the Human Metabolome Database (HMDB) was launched. It is an online and freely available comprehensive database of metabolites found in the human body.
Pharmacometabolomics studies the relationship between pharmaceutical agents and the metabolome [44]. Pharmacometabolomics aims to create a metabolic fingerprint for each patient to determine therapeutic outcomes in terms of efficacy, adverse drug reactions and dosing. Metabolic profiles do not only differ according to a person's genome but also according to their environment, population group, lifestyle and microbiome [45]. Metabolomic studies are necessary for the correct implementation of personalized oncology. It was found that cancer cells have different metabolic requirements than normal cells [46]. Tumors alter their metabolism as a defense mechanism. This was shown in cancer settings including glycolysis [47], one carbon metabolism [48], and a multimetabolome project in Breast cancer therapy [49].
Pharmacoproteomics
Pharmacoproteomics, essentially a sub-discipline of functional pharmacogenomics, is the study of how the protein content of a cell or tissue changes qualitatively and quantitatively in response to treatment or disease, what the protein and protein-ligand interactions are in relation to drug response, and how a person's protein variants, in quality and quantity, affect a person's response to a drug. Protein roles are diverse, and mass spectrometry-based proteomics has established sophisticated tools and instruments that can identify proteins and measure the changes in protein levels, posttranslational modifications (e.g., kinase signaling), localization, and protein-protein or drug-protein interactions.
A leading study in the pharmacoproteomic field, described by Ong et al. [50], utilized quantitative proteomic analysis of SILAC-labeled cell lysates to identify specific protein interactions and targets of small molecules, including kinase inhibitors and immunophilin binders. Furthermore, identification of multiple protein targets may lead to novel combinatorial therapies, particularly in cancer; similar studies are emerging in the field [51], such as in activated B-cell-like diffuse large B-cell lymphoma (ABC DLBCL), and more are yet to come.
Pharmacomicrobiomics
Microbiomics is the study of the human microbiome, the community of microbes that colonize the body. The human microbiome is influenced by a multitude of factors such as: age, gender, diet, socio-economic status, environment, health status, genotype and drug intake. Pharmacomicrobiomics is the interdisciplinary field that merges pharmacology, microbiology, and genomics. It studies the effect a host's microbiome has on xenobiotics' metabolism. In 2008 the Human Microbiome Project was launched by the National Institute of Health (NIH) in the USA. The project involved a comprehensive analysis of the human microbiome and how it affects one's health and susceptibility to diseases. Analysis of the microbiome is usually performed using next generation sequencing (NGS) techniques such as: shotgun metagenomic sequencing (SMS), 16S RNA sequencing, microbial whole genome sequencing (MWGS), and microbial metatranscriptomics.
A recent study showed the role of the microbiome in the response of metastatic melanoma patients to immunotherapy targeting programmed cell death protein 1 (PD-1)/programmed cell death 1 ligand 1 (PD-L1). The results showed that responders had higher levels of the bacterial genus Faecalibacterium, while patients who did not respond well to treatment had bacteria of the order Bacteroidales in their fecal samples [51]. These findings can be utilized as a basis for patient stratification when it comes to candidacy for immunotherapy [52]. Another study showed that cancer cells seemed to exhibit high resistance to the chemotherapeutic agent gemcitabine when elevated levels of Escherichia coli were detected; tumor volume increased and overall survival (OS) decreased [53].
Pharmacometrics
Pharmacometrics is the application of mathematical and statistical methods to pharmacotherapy with the aim of describing or predicting concentrations, associated physiologic effects and clinical impacts. Pharmacometrics has progressively become a key science in the drug development process through the development of pharmacokinetic (PK) or PK/PD or PK/PD/PG models (PD, pharmacodynamics; PG, pharmacogenomics) that provide knowledge about the behavior of a drug and how it can be optimally used. Additionally, when applied pragmatically, pharmacometrics can also be an efficient, powerful and informative science in clinical settings, when adverse events or hazardous efficacy render individual dose adjustments necessary. In [54], the reader can find practical information about different approaches (population pharmacokinetics, Bayesian data analysis, etc.) usually used in pharmacometrics, and examples of effective applications in routine clinical activity. Integration of the modelling basis of pharmacometrics, along with big data, could drive a major shift in precision medicine toward direct application.
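As a purely illustrative example of the kind of model pharmacometrics builds on, the sketch below implements the simplest one-compartment model with first-order oral absorption; the function name, parameter names and the closing comment are hypothetical illustrations, not taken from the cited literature.

```python
import numpy as np

def one_compartment_oral(dose_mg, ka, ke, vd_l, t_h):
    """Bateman equation for a one-compartment model with first-order
    absorption (rate ka, 1/h), first-order elimination (rate ke, 1/h) and
    volume of distribution vd_l (L), assuming complete bioavailability."""
    t = np.asarray(t_h, dtype=float)
    return dose_mg * ka / (vd_l * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# A population-PK / Bayesian workflow would let ka, ke and vd_l vary between
# patients (e.g. log-normally) and update them from each patient's sparse
# concentration measurements in order to individualize the dose.
```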
Epidemiology in Personalized Medicine
Epidemiology is "the study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the control of health problems" according to the Centre for Disease Control. Nowadays, medicine is moving towards an evidence-based approach made possible by epidemiological studies which facilitate the identification of trends, patterns and emerging public health issues, and help us evaluate the outcome of different interventions [55].
One of the main focuses of epidemiology is how one's socioeconomic status (SES) affects their health. It was found that high SES is linked to better health in both curable and incurable diseases. In multiple cancers, low SES is associated with late stage diagnosis and resultant poor prognosis. Another important epidemiological interest is race/ethnicity as a social determinant of health. For instance, studies reveal that minority racial groups in the USA are more likely to be diagnosed with late stage cancers and less likely to receive/continue treatment, which significantly decreases their OS. This suggests that the key to reducing health disparities is population-based interventions. These kinds of interventions may seem to oppose the idea of personalized healthcare. However, PM strives to stratify patients into populations based on their environment, SES, lifestyle, and phenotype. Epidemiological studies are a fundamental basis for population-based interventions since they supply the data needed for patient classification into subpopulations. While genomics has revolutionized PM, it alone cannot be the answer. Full reliance on genomic studies arguably increases health disparities and perpetuates misconceptions about race [56].
Population based interventions are the answer to bridging the gap between the rich and poor and promoting health equity between races and ethnicities. They have also proven their success in disease prevention. And it could not have happened without the data obtained through extensive epidemiological studies. With that said, we come to the conclusion that epidemiology is essential for implementation of curative and preventive personalized medicine [42].
There are an increasing number of large cohort studies and cohort consortiums with long term follow-up. Many cohorts provide unique opportunities to address the effect of various demographic, lifestyle, genomic, molecular, clinical, as well as psychosocial factors on cancer outcomes. For example, the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial is a large population based randomized trial with extensive follow-up. By collecting biologic materials and risk factor information from trial participants before the diagnosis of disease, an ongoing PLCO component, the Etiology and Early Marker Studies (EEMS), has been added. Efforts can be undertaken to link the epidemiologic data with electronic medical and health records to further address patient outcomes.
Pharmaco-genetic-epidemiology studies can be nested in these life span cohorts. Recent advances in genomic research have demonstrated a substantial role for genomic factors in predicting response to cancer therapies. Translational cancer research is interdisciplinary and trans-disciplinary by nature. Numerous suggestions and recommendations have been made for multidisciplinary collaborations and partnerships to identify and fill the knowledge gaps. Much less attention has been paid to how to prepare scientists for trans-disciplinary research. In fact, multidisciplinary training is a prerequisite for the next generation of researchers who want to be fully capable of conducting translational cancer research. The next generation epidemiologists (NGEs) may have to obtain comprehensive knowledge of cancer epidemiology, molecular/genetic biology, statistics, and oncology or pathology [56]. Thus, NGE multi-disciplinary expertise is required. Nonetheless, knowledge integration is the key to clinical application of personalized medicine.
Conclusion
Precision medicine, characterized by genomic breakthroughs, has had a great impact on public health, particularly in the last decade. Even though genomics has been the cornerstone of traditional personalized medicine and has made diagnosis, targeted therapy and prognosis inarguably easier through genomic biomarkers, other equally important fields of scientific research, such as transcriptomics, proteomics, metabolomics, microbiomics, social and molecular epidemiology, pharmacometrics and bioinformatics, must be integrated into precision medicine research.
In order to facilitate the existence of an integrated PM network, an important gap must be addressed in clinical settings. Healthcare workers must be familiarized with the areas of bioinformatics and multi-omics. Another important gap that must be bridged is the disparity in availability of PM to high socio-economic groups relative to low socio-economic groups. Therefore, genomic tests must be made more accessible to be inclusive of people of all backgrounds. We also believe that investment in population-based interventions, such as preventive screening for stratified populations, is of utmost importance as we are moving towards a more prevention-focused era of personalized medicine.
Characteristic determinant and Manakov triple for the double elliptic integrable system
Using the intertwining matrix of the IRF-Vertex correspondence we propose a determinant representation for the generating function of the commuting Hamiltonians of the double elliptic integrable system. More precisely, it is a ratio of the normally ordered determinants, which turns into a single determinant in the classical case. With its help we reproduce the recently suggested expression for the eigenvalues of the Hamiltonians for the dual to elliptic Ruijsenaars model. Next, we study the classical counterpart of our construction, which gives expression for the spectral curve and the corresponding $L$-matrix. This matrix is obtained explicitly as a weighted average of the Ruijsenaars and/or Sklyanin type Lax matrices with the weights as in the theta function series definition. By construction the $L$-matrix satisfies the Manakov triple representation instead of the Lax equation. Finally, we discuss the factorized structure of the $L$-matrix.
List of main notations: q_j, j = 1, ..., N — positions of particles; q̄_j = q_j − q_0 — positions of particles in the center of mass frame, q_0 = (1/N) Σ_k q_k; q_{ij} = q_i − q_j; x_j = e^{q_j} or x_j = e^{2πı q_j} in the trigonometric or elliptic cases respectively; p_j, j = 1, ..., N — the classical momenta of particles; ω = e^{2πı τ̂} — the elliptic modular parameter controlling the ellipticity in momenta; p = e^{2πı τ} — the modular parameter controlling the ellipticity in coordinates; q = e^ℏ — exponent of the Planck constant; t = e^η — exponent of the coupling constant; λ — the spectral parameter (1.1) (sometimes also called u); z — the second spectral parameter (1.5); A_{x,p} — the space of operators generated by {x_1, ..., x_N, q^{x_1∂_1}, ..., q^{x_N∂_N}}; : : — normal ordering on A_{x,p}, moving all shift operators in each monomial to the right (2.24). All products of non-commuting operators should be understood as left-ordered products.
1 Introduction and summary
1.1 Brief review
The double elliptic (or Dell) model [8] is an integrable system with an elliptic dependence on both positions of particles and their momenta. It extends the widely known Calogero-Moser-Sutherland [9,19] and Ruijsenaars-Schneider [29] families of many-body integrable systems. Historically, the model was first derived as the elliptic self-dual system with respect to the Ruijsenaars (or equivalently, p-q or action-angle) duality interchanging positions of particles and action variables [28]. At the classical level the original group-theoretical Ruijsenaars construction was not applicable to the elliptic case. Instead, a geometrical approach was used based on the studies of spectral curves and Seiberg-Witten differentials [14]. In this way the Dell Hamiltonians were proposed in terms of higher genus theta-functions with a dynamical period matrix. For this reason a definition of the standard set of algebraic tools for integrable systems (including Lax pairs, R-matrix structures, exchange relations etc) appeared to be a complicated problem. The classical Poisson structures underlying the Dell model were studied in [7,2].
An alternative version of the Dell Hamiltonians was suggested recently in [18]. The authors exploited the explicit form of the 6d supersymmetric Yang-Mills partition functions with surface defects compactified on a torus, which are conjectured to serve as the wavefunctions for the corresponding Seiberg-Witten integrable systems [25,26,27,1]. The exact correspondence of their results with the previous studies is an interesting open problem, though the matching has already been verified in a few simple cases. In this paper we deal with the Koroteev-Shakirov version of the generating function for commuting Hamiltonians. Namely, for the N-body system one considers the operator (1.1), which defines the infinite set of (non-commuting) operators Ô_k. The positions of particles q_i enter through x_i = e^{q_i}; t = e^η is the exponent of the coupling constant η; q = e^ℏ is the exponent of the Planck constant ℏ; and ∂_i = ∂_{x_i}, so that ∂_{q_i} = x_i ∂_i. The constant ω is the second modular parameter (controlling the ellipticity in momenta) and λ is the (spectral) parameter of the generating function. The definition of the theta-function θ_p(x) with the constant modular parameter τ (p = e^{2πiτ}), controlling the ellipticity in coordinates, is given in (A.1). The commuting Hamiltonians of the Dell system were conjectured and argued to be of the form Ĥ_n = Ô_0^{-1} Ô_n, n = 1, ..., N (1.2). The solution to the eigenvalue problem for Ĥ_n was suggested in [3,4] by extending the Shiraishi functions [32], solutions to a non-stationary Macdonald-Ruijsenaars quantum problem.
Our study, on the contrary, does not appeal to the explicit form of the wavefunctions and is mostly focused on the generating function itself. It is based on the usage of the intertwining matrix Ξ(z) of the IRF-Vertex correspondence (see (9) for its explicit form) and Hasegawa's factorization formula [16,17] for the gl_N elliptic Ruijsenaars-Schneider Lax operator with spectral parameter z [29]. The matrix Ξ(z) = Ξ(z, x_1, ..., x_N | p) enters the normalized intertwining matrix g(z, τ); the remaining factor in g(z, τ) is a diagonal matrix used for convenient normalization only, see (6.10).
A key property of these matrices, which we will use, is that det Ξ is proportional to the Vandermonde determinant. These intertwining matrices are known from the IRF-Vertex correspondence at the quantum and classical levels [6,20,21,35]. The IRF-Vertex correspondence provides a relation between dynamical and non-dynamical quantum (or classical) R-matrices as a special twisted gauge transformation with the matrix g(z), thus relating the Lax operator (1.4) with the one of the Sklyanin type [33].
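For the reader's convenience we recall the standard Vandermonde determinant referred to here (a textbook identity, not specific to this paper):
\[
\det_{1\le i,j\le N}\Bigl(x_j^{\,i-1}\Bigr)\;=\;\prod_{1\le i<j\le N}\bigl(x_j-x_i\bigr).
\]
Proportionality of det Ξ to such a product over differences of coordinates is the property exploited in the determinant construction of Section 2.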
Outline of the paper and summary of results
In this paper, using the Hamiltonians (1.1), we construct a generalization of the Macdonald determinant operator for the Dell system and study its applications.
We use a slightly modified and extended version Ô′(z, λ) of the generating function (1.1); it depends on an additional spectral parameter z and generates an equivalent set of operators Ô′_k (1.5). The paper is organized as follows.
In Section 2 we derive the expression for the generalized Macdonald determinant (see (2.4) and (2.5)), in which q_0 is the center of mass coordinate. The determinant is well defined as the columns of the matrix commute. For the precise form of the matrix Ξ_{ij} = Ξ_i(q_j, z) see (9).
In Section 3 we express the generating function (1.5) in terms of the Lax matrix of the Ruijsenaars-Schneider model: and the normal ordering is defined in (2.24). The trigonometric and rational limits (for coordinate dependence) of (1.5)-(1.8) are described as well.
In Section 4 we study the eigenvalue problem for the operator Ô(u) (1.1) in the (coordinate) trigonometric limit p = 0, which corresponds to the dual to elliptic Ruijsenaars model, and compare our results to those known in the literature [18,3].
The main statement here is that the eigenvalues of Ô(u) in the limit p = 0 are labelled by Young diagrams λ = (λ_1, ..., λ_N), and have the form (1.9). In Section 5 we study the classical limit of the Dell system. Using the classical analogue L(z, λ) of (1.8) we show that the L-matrix satisfies the Manakov triple representation [23,12] instead of the Lax equation. The conservation laws are generated by the function det L(z, λ) only. It reduces to the expression for the spectral curve of the Ruijsenaars-Schneider model in the ω → 0 limit.
In Section 6 we describe the factorized structure of the L-matrix L(z, λ) (1.10). Up to an inessential modification it is presented in a form similar to the elliptic Kronecker function (A.12), thus generalizing the classical version of the factorization (1.3) to the double elliptic case. The elliptic modulus τ̂ appears as ω = e^{2πıτ̂}. It is responsible for the ellipticity in momenta, while τ controls the ellipticity in positions of particles.
We also describe the connection of the L-matrix with the Sklyanin Lax operators, and propose its quantization in terms of the elliptic quantum R-matrix in the fundamental representation of GL_N.
Possible applications of the obtained results and future plans are discussed at the end of the paper. The appendices contain the definitions and properties of the elliptic functions, a description of the intertwining matrices Ξ, computations of GL_2 examples, and relations between different forms of the generating functions.
Characteristic Macdonald determinant for the Dell system
In this Section we express the generating function Ô(λ) (1.1) as a determinant of an N × N matrix. The main idea is to introduce the bilinear map (2.6) from A_x × A_p to the space of difference operators, where A_x and A_p are the spaces of operators depending only on the coordinates and on the momenta respectively, and to express the generating function as an image of this map with the first argument being the Vandermonde function. One then uses the property of the IRF-Vertex correspondence intertwining matrices that their determinant is proportional to this function, and finally swaps the order of computing the determinant and applying the bilinear map, using the special properties of this map and the row linearity of the determinant.
The set of intertwining matrices which we are going to use in the different cases is given in Appendix B. The elliptic coordinate case cannot be treated without a spectral parameter, while this is possible in the rational and trigonometric cases. The result will be proven in full detail in the trigonometric case; the rational case can be treated completely analogously. The main statement of this Section is Theorem 2.1, which expresses Ô(λ) through the determinant formula (2.4)-(2.5).
Proof: First, notice that the above determinant is well defined since any two elements from different columns of the corresponding matrix commute. Consider the space of difference operators generated by {x_1, ..., x_N, q^{x_1 ∂_1}, ..., q^{x_N ∂_N}}. We refer to it as A_{x,p}. Similarly, denote the spaces generated by {x_1, ..., x_N} only and by {q^{x_1 ∂_1}, ..., q^{x_N ∂_N}} only as A_x and A_p respectively.
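For concreteness, the generators of A_{x,p} obey the standard q-shift relations (these are generic facts about the operators q^{x_i ∂_i}, independent of the paper's conventions):

$$q^{x_i\partial_i}f(x_1,\dots,x_N)=f(x_1,\dots,q\,x_i,\dots,x_N),\qquad q^{x_i\partial_i}\,x_j=q^{\delta_{ij}}\,x_j\,q^{x_i\partial_i}.$$

In particular, x_i and q^{x_j ∂_j} commute for i ≠ j, which is exactly the property used below to define the pairing and to make the determinants well defined.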
Introduce the bilinear pairing (2.7) by defining it on the basis elements. Due to x_i q^{x_j ∂_j} = q^{x_j ∂_j} x_i for i ≠ j, the pairing (2.7) satisfies an important property (2.8). Then the generating function (1.1) is represented in the corresponding paired form (2.9). Next, we use the determinant property (2.2) and the linearity property of (2.7). From (2.9) we conclude an intermediate determinant identity (this result was already known from [31]), and the property (2.8) provides (2.14). Finally, the expression under the determinant is easily calculated; this yields (2.4). Plugging the explicit expression for Ξ into the r.h.s. of (2.16) and summing over n, we get (2.5). Now let us write down the answer for the rational case. The generating function Ô(λ) (1.1) in the coordinate rational limit is represented by the analogous determinant formula. Proof: The proof is a word-by-word repetition of the trigonometric case.
These theorems generalize the generating functions for commuting Hamiltonians for the quantum Calogero-Ruijsenaars family [30,16].
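For comparison, the prototype of such a statement is the Macdonald (trigonometric Ruijsenaars) case, where the generating function of commuting Hamiltonians is the standard family of Macdonald difference operators; in one common normalization (possibly differing from that of [30,16]),

$$D(u;q,t)=\sum_{r=0}^{N}u^{r}D_{r},\qquad D_{r}=t^{\,r(r-1)/2}\sum_{\substack{I\subset\{1,\dots,N\}\\|I|=r}}\;\prod_{\substack{i\in I\\ j\notin I}}\frac{t\,x_{i}-x_{j}}{x_{i}-x_{j}}\;\prod_{i\in I}T_{q,x_{i}},$$

where T_{q,x_i} is the q-shift in x_i. The determinant formulas above reduce to (a version of) this family when the ellipticity parameters are switched off.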
The case with the spectral parameter
Let us proceed to the case with the spectral parameter. First of all we need to introduce convenient notation. In this Section, in place of the symbol θ_p(x), any of three theta-functions may be substituted. We will also use the odd theta-function ϑ(u). For the precise relation between them, see (A.4) and (11).
In this Section we will also need the normal ordering on A_{x,p}, which moves all the shift operators to the right of all coordinates. Namely, it is defined on monomials as in (2.24), where I is a multi-index.
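As a simple illustration of this normal ordering (writing p_i = q^{x_i ∂_i} for the shift operators; the ordering only rearranges factors and does not insert any extra powers of q):

$$:\!p_i\,x_i\!:\;=\;x_i\,p_i,\qquad :\!x_i\,p_j\!:\;=\;x_i\,p_j,\qquad :\!p_i^{\,a}\,x_i^{\,b}\,p_j^{\,c}\!:\;=\;x_i^{\,b}\,p_i^{\,a}\,p_j^{\,c}.$$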
Let us formulate the main statement. Define the new generating function Ô′(z, λ) by (2.25). Its relation to the previous one is also explained in (11).
Then the generating function Ô′(z, λ) (2.25) is represented by two equivalent determinant formulas, in which the following expansions of the functions Ξ_ij(q_j, z) are assumed, for some C-numbers α and σ_ij.
Proof: In order to use the trick from Theorem 2.1, let us define the shifted Ξ-matrix. Each of its matrix elements in the j-th column depends on the coordinate q_j only. Therefore, the corresponding determinants can be calculated as ordinary determinants since the elements from different columns commute. Applying the pairing trick to them as in the proof of Theorem 2.1, and substituting the explicit expression for the determinant of the shifted matrix, one obtains an expression which, after taking the pairing, gives the desired determinant. By the same argument as above, we can restore the normal ordering. Finally, by shifting the parameter z to z + N q_0, we obtain the desired identity.
Determinant representation in terms of the Ruijsenaars-Schneider L-matrix
In this paragraph we derive one more useful representation for the generating function. Consider (2.4): the matrix entering it is built from the (quantum) trigonometric Ruijsenaars-Schneider Lax matrix. In order to see the latter in its conventional form one should also perform a gauge transformation with a diagonal matrix; see details in [17,35]. Under the normal ordering the gauge-transformed Lax operator has the same determinant. Hence we arrive at the determinant representation (2.43)-(2.44), in which the relevant matrix is the averaged sum of the Ruijsenaars Lax matrices; the averaging is over Z with theta-function weights. The explicit form of L^RS(t, q) to be substituted into (2.44) is given in Section 3, expression (3.10), and its degeneration to the rational case is given by the expressions (3.13) and (3.14).
In the case with the spectral parameter the generating function depends on z. However, the arguments above can be repeated without any complications. Thus, its determinant representation is (2.45), where L^RS(z, q, t) is the elliptic Ruijsenaars-Schneider Lax matrix given by (3.2).
We present an alternative direct proof of the statements (2.43), (2.45), without usage of the intertwining matrix, in the next Section.
Double elliptic GL N model
The definition (1.1) can be alternatively written in terms of the standard odd Jacobi theta-function, see (11). Therefore, the Hamiltonians Ĥ_n = (Ô′_0)^{-1} Ô′_n also commute. The extension to the case with spectral parameter z is given in (2.25). We are now in a position to represent (2.25) in terms of the (quantum) elliptic Ruijsenaars-Schneider GL_N Lax matrix with spectral parameter [29,16].
Theorem 3.1. Let L^RS_ij(z, q, t) be the quantum Lax matrix for the elliptic Ruijsenaars-Schneider model (with z its spectral parameter). Then the generating function (2.25) for the Ô′_n operators acquires the form (3.3).
Proof: Take the Lax matrix (3.2) and substitute it into (3.3). Let us represent the result as a sum of determinants. For this purpose collect all the terms containing a given product of shift operators ∏_{i=1}^N q^{n_i x_i ∂_i}; the matrix L^RS_ij(z, q_i − q_j, n_j η, n_j) appearing here is constructed by combining rows from different terms of the sum (3.4). Using its explicit form (3.2), we rewrite it through the elliptic Cauchy matrix, see (3.9). Plugging the Cauchy determinant (A.11) into (3.9), we get (2.25).
The result of this Theorem is valid for the trigonometric and rational cases as well. The degenerations are obtained by substitutions ϑ(u) → sinh(u) → u.
Dual to the elliptic Ruijsenaars model
In the GL_N case the relations (2.43)-(2.44) hold true for the Ruijsenaars-Schneider Lax matrix. The intertwining matrix (B.9) provides the Lax matrix with spectral parameter (see details in [35]), given in (3.11). To get this Lax matrix one should substitute y_j = e^{−2q_j + 2q_0 + 2z/N} into (B.10). Then from (B.11) we obtain the generating function of the Hamiltonians related to the Lax matrix (3.11). Being substituted into the averaged matrix, it provides the corresponding rational analogue of (1.1). In the case with spectral parameter z we deal with the intertwining matrix (B.6), which leads to the corresponding Lax operator (see details in [35]). The limit z → ∞ of this answer yields the expression (3.15).
Eigenvalues for the dual to elliptic Ruijsenaars model
Let us now proceed to the first possible application of our result. We are going to derive the general formula for the eigenvalues of the dual to the elliptic Ruijsenaars model. It corresponds to the trigonometric limit p = 0 of the Dell system, so the generating function takes the form (4.1). By introducing the standard notations δ_j = N − j and ∆ for the Vandermonde determinant, we rewrite (4.1) accordingly. By the same arguments as in the ω = 0 case, we can see that the operator Ô(u) preserves the space Λ_N of symmetric functions of the variables x_1, ..., x_N. So we can consider the eigenvalue problem for the operator Ô(u), with the eigenfunction Ψ an element of Λ_N. The generating function of the eigenvalues takes the form (4.4).
Proof: Let m_λ be the monomial symmetric functions; they form a basis in the space of symmetric functions. Following Macdonald's book [22], let us calculate the image of m_λ under the action of the operator Ô(u). Next, make a change in the summation variables by introducing π with the property σ′ = σπ. Then, by rearranging the factors in the product, we obtain the result in terms of the Schur functions. Recall that the Schur functions have the following properties: 1) s_{π(λ)} is either zero or equal to s_µ for some µ ≤ λ; 2) their relation to the monomial symmetric functions is s_λ = Σ_{µ≤λ} u_{λµ} m_µ for some numbers u_{λµ}. Therefore, the operator Ô(u) is upper triangular in the basis {m_λ}, and its eigenvalues have the form (4.11). This finishes the proof.
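For comparison, the analogous and well-known ω = 0 statement is that the Macdonald operators act triangularly on the m_λ and diagonally on the Macdonald polynomials P_λ, with generating function of eigenvalues (standard result, see [22]; conventions may differ by overall factors):

$$D(u;q,t)\,P_{\lambda}=\prod_{i=1}^{N}\bigl(1+u\,q^{\lambda_{i}}\,t^{\,N-i}\bigr)P_{\lambda}.$$

One expects the eigenvalues (4.11) of Ô(u) to reduce to an expression of this product type when the ellipticity in momenta is switched off.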
Eigenvalues in the GL 2 case and comparison to the known answer
Let us write down the explicit expression for the eigenvalue E_{1,λ} of the first Hamiltonian (1.2), Ĥ_1 = Ô_0^{-1} Ô_1. By expanding the result for the eigenvalues of Ô(u) (4.11) in powers of u (and to first order in ω) we obtain the eigenvalue of Ĥ_1. Up to the factor t^{1/2} the results (4.11)-(4.13) coincide with those obtained in [18] and [3]. The factor t^{1/2} comes from a slightly different definition of the Hamiltonians.
Classical mechanics: Manakov representation
In this section we describe the classical limit of our construction and derive the Manakov L-A-B triple representation from it. The first step is to express the generating function of the Hamiltonians as a ratio of two determinants. In the classical limit these two determinants can then be combined into one, giving the expression for the classical spectral curve and the corresponding L-matrix.
One more generating function for the Dell Hamiltonians
To fulfil the first step described above, let us introduce an alternative version of the generating function of the commuting Dell Hamiltonians: the operator Ô(1)^{-1} Ô(λ) defined in (5.1). It is also a generating function of commuting Hamiltonians, and the commutativity of the new Hamiltonians follows from the commutativity of Ĥ_n.
Proof: First, notice that the operators Ĥ_{kn} = Ô_k^{-1} Ô_n = Ĥ_k^{-1} Ĥ_n also commute with each other due to the commutativity of the Ĥ_k. Therefore Ĥ_{mk} Ĥ_{nk} = Ĥ_{nk} Ĥ_{mk}, and acting on this equality by Ô_k^{-1} from the right yields the claim. On the one hand, Ô(1), compared to Ô_0, is hard to invert since its Taylor series expansion in ω does not start with 1. On the other hand, the advantage of Ô(1) is that it admits a determinant representation, while there is no natural way to find a determinant representation for Ô_0.
Spectral L-matrix
The operator Ô(1)^{-1} in (5.1) genuinely acts on Ô(λ) as a quantum operator, so we cannot unify the normal orderings in (5.6). At the same time, in the classical limit (5.6) reduces to a ratio of ordinary determinants, and the matrix L(z, λ) built from the weighted sum of the L^RS(z, q_n, t_n) in (5.9) arises, whose determinant H(z, λ) is the generating function of the classical Hamiltonians. They commute with respect to the canonical Poisson structure (5.10). The expression H(z, λ) can be considered as an analogue of the expression det(λ − l(z)) defining the spectral curve of an integrable system with Lax matrix l(z). This is easy to see in the limit ω = 0. Due to (A.5) we have

L(z, λ)|_{ω=0} = 1_N − λ L^RS(z, q, t),   (5.11)

where 1_N is the identity N × N matrix. Plugging (5.11) into (5.8), we see that the equation H(z, λ)|_{ω=0} = 0 is indeed the spectral curve of the elliptic Ruijsenaars-Schneider model (written in a somewhat complicated way). In the general case L(z, λ) is not a Lax matrix: its eigenvalues do not commute with respect to (5.10).
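The statement about the ω = 0 limit can be made explicit by the elementary expansion of the characteristic determinant, valid for any N × N matrix (so that H(z, λ)|_{ω=0} = 0 is the usual spectral-curve equation, just organized differently):

$$\det\bigl(1_{N}-\lambda\,L^{RS}(z)\bigr)=\sum_{k=0}^{N}(-\lambda)^{k}\,\operatorname{tr}\Lambda^{k}L^{RS}(z),$$

where tr Λ^k L^RS(z) is the sum of the k × k principal minors, i.e. the standard set of spectral invariants generated by the Lax matrix.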
Let us remark that the existence of Manakov's L-A-B triple does not contradict the possible simultaneous existence of some Lax pair. If we had a true Lax matrix for the Dell model, then det L(z, λ) should represent its spectral curve. So, if the Lax representation exists, we need to find a matrix of size M × M (as was mentioned in [8], it is natural to expect M = ∞) and a change of variables u = u(z, λ), ζ = ζ(z, λ) relating the two characteristic polynomials. Another comment concerns the geometrical meaning of L(z, λ) (5.9) and of the L-matrix (5.8). In the Krichever-Hitchin approach to integrable systems these matrices are sections of (Higgs) bundles over a base spectral curve with coordinate z. The classical analogue of the Ruijsenaars-Schneider Lax matrix (1.4) takes the standard form, where c is the "speed of light" constant of the classical Ruijsenaars model. Its quasi-periodic behaviour is as follows: L^RS(z + 1) = L^RS(z), while the shift z → z + τ produces the extra factors displayed in (5.15). The first factor in (5.15) means that all terms in the sum (5.9) have different quasi-periodic behaviour on the lattice of periods 1, τ. Therefore, L(z, λ) is not a section of a bundle over the elliptic curve. This can be easily corrected by the substitution (5.16). Then the first factor in (5.15) gets cancelled, and we come to a quasi-periodic matrix L(z, λ). The L-matrix (5.8) is quasi-periodic as well. The price for this change of variables (5.16) is as follows.
Initially the matrix L(z, λ) was not quasi-periodic, but had only a single simple pole at z = 0. After the change of variables we come to a quasi-periodic matrix function, but one having higher order poles: the terms with positive n in the sum (5.9) acquire a pole of order n at z = η, and the terms with negative n acquire a pole of order −n + 1 at z = 0. In what follows we do not use the substitution (5.16), keeping in mind that it can be made.
L-A-B triple
Consider the L-matrix (5.8). It is easy to see that this matrix identically satisfies the equations known as the Manakov representation [23], in which

tr B_k(z, λ) = 0,   (5.18)

and the "time" derivatives will be specified later. Indeed, the required relations follow by differentiating (5.8). The property (5.18) follows from (5.21): the l.h.s. of (5.21) equals zero since det L(z, λ) = H(z, λ) is conserved, while the r.h.s. equals the trace of B_k(z, λ) (5.20). Alternatively, one may introduce the matrices M_k(z, λ); then it follows from the conservation of det L(z, λ) that tr M_k(z, λ) = tr M_k(z, 1).
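The mechanism behind the conservation of det L(z, λ) is the standard computation via Jacobi's formula; with the convention that the Manakov equations read ∂_{t_k} L = [L, A_k] + B_k L (other orderings work identically), one has

$$\partial_{t_k}\log\det L=\operatorname{tr}\bigl(L^{-1}\partial_{t_k}L\bigr)=\operatorname{tr}\bigl(L^{-1}[L,A_k]\bigr)+\operatorname{tr}B_k=\operatorname{tr}\bigl(A_k-L^{-1}A_kL\bigr)+\operatorname{tr}B_k=\operatorname{tr}B_k ,$$

so tr B_k = 0 (5.18) is exactly the condition for det L(z, λ) to generate conserved quantities.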
Factorization of the L-matrices
In this section we show how our result naturally embeds the Dell model into the standard factorized Lax matrix approach, which we describe in the next paragraph.
Classification of the factorized L-matrices
In [8] integrable many-body systems of the Calogero-Ruijsenaars family were naturally classified by the types of dependence on the coordinates and/or momenta. Both types are enumerated by three possibilities: each can be either rational, trigonometric or elliptic. For example, the choice (rational coordinates, trigonometric momenta) corresponds to the rational Ruijsenaars-Schneider model, while the choice (rational momenta, trigonometric coordinates) implies the trigonometric Calogero-Sutherland system. In the coordinate part this classification follows from solutions of the underlying functional equations [9,29], namely the Fay identity (A.14) and its degenerations. By interchanging the types of coordinate and momentum dependence (as in the example pair above) one gets a pair of systems related by the Ruijsenaars duality transformation [28]. When both types coincide, the corresponding model is self-dual. These are the rational Calogero-Moser system, the trigonometric Ruijsenaars-Schneider model, and finally the double elliptic model, whose existence was predicted by these arguments.
Here we supply the above classification with precise substitutions corresponding to the factorized Lax (or Manakov) L-matrices. As was discussed in [35], the factorized Lax matrices (with and without spectral parameter) for the systems of Calogero-Ruijsenaars type can be specified by a choice of two ingredients: the function f and the intertwining matrix Ξ(z) entering (6.1), where the matrix G(z) is defined in terms of Ξ(z) by (6.2), with some diagonal matrix D (see (6.10) below) and c the light-speed parameter of the Ruijsenaars-Schneider model. The function f(w) is either 1) a linear function of w, or 2) the exponent: f(w) = e^w (6.4). The first choice of the function f, substituted into (6.1), provides the Lax matrices of the Calogero-Moser-Sutherland systems [9,19]. The second choice of f gives rise to the Lax matrices of the Ruijsenaars-Schneider models [29]. The choices of Ξ(z) in (6.1) are given by (B.12) in the elliptic case, by (B.9) in the trigonometric case, and by (B.6) in the rational case. In the trigonometric and rational cases one can also use the Vandermonde matrices (B.3) and (B.1), namely Ξ_ij(z) = (z − q_j)^{i−1} and Ξ_ij(z) = (e^z x_j)^{N−i} respectively. The spectral parameter then cancels out, and we get the Lax pairs of the Calogero-Ruijsenaars models without spectral parameter. See the review [35] for details.
Based on (1.1) and the Manakov L-matrix structure (5.8)-(5.9) we come to the elliptic version of the function f. The latter result is explained in detail in the next paragraph. A more universal classification picture arises if we slightly change the definition of f_λ(w), as in the transition from θ_p(e^w) to ϑ(w) (A.4), together with the additional normalization factor ϑ′(0)/ϑ(log(λ)). Then the function f_λ(w) turns into the Kronecker elliptic function [36] (6.6), depending on the modulus τ̄ (defined through ω = e^{2πıτ̄}), where u = log(λ). Its trigonometric and rational limits are obtained by the usual degenerations. This function was used by I. Krichever [19] to construct the Lax representation with spectral parameter for the elliptic Calogero-Moser system. It is widely used in elliptic integrable systems due to an addition theorem known also as the genus-one Fay identity (A.14). Considered as a functional equation, its solutions (including the degenerate versions) were extensively studied in [9].
In this way we come to the classification of the function f_u(w) (responsible for the type of momentum dependence), which is parallel to the well-known classification of the coordinate dependence without spectral parameter [9] and with spectral parameter [19].
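For orientation, the Kronecker elliptic function referred to here is commonly defined (conventions may differ from (A.12) and (6.6) by normalization) as

$$\phi(u,w)=\frac{\vartheta'(0)\,\vartheta(u+w)}{\vartheta(u)\,\vartheta(w)},$$

it satisfies the genus-one Fay identity

$$\phi(z_{1},u_{1})\,\phi(z_{2},u_{2})=\phi(z_{1}-z_{2},u_{1})\,\phi(z_{2},u_{1}+u_{2})+\phi(z_{2}-z_{1},u_{2})\,\phi(z_{1},u_{1}+u_{2}),$$

and its trigonometric and rational degenerations keep the same "factorized sum" structure:

$$\frac{\sin(u+w)}{\sin u\,\sin w}=\cot u+\cot w,\qquad\quad \frac{u+w}{u\,w}=\frac1u+\frac1w .$$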
Factorized structure of the Dell L-matrix
Recall that the Lax matrix of the Ruijsenaars-Schneider model (3.2) is factorized as in (1.3) and (6.8), where

g(z) = g(z, τ) = Ξ(z) D^{−1}   (6.9)

with the intertwining matrix Ξ_ij(z) (B.12) and the diagonal matrix D (6.10). A conjugation with the latter diagonal matrix D is performed in order to have a convenient form for L^RS(z). Consider the matrix G(z) (6.2). The Ruijsenaars-Schneider Lax matrix (6.8) then takes the corresponding form up to a gauge transformation with the diagonal matrix exp(z/(Ncη) P). Let us proceed to the double elliptic case. Plugging (6.8) into the matrix L(z, λ) (5.9), we get

e^{−z/(Ncη) P} L(z, λ) e^{z/(Ncη) P} = G^{−1}(z) θ_ω(λ Ad_{e^{−Nη∂_z}}) G(z).   (6.14)

By introducing also the theta-operator Θ(z, λ) (6.15), we come to the corresponding expression for the Manakov L-matrix (5.8); it is gauge equivalent to both Ľ(z, λ) and L(z, λ). In terms of the Kronecker function (6.6) we may write the Manakov L-matrix as Ľ(z, λ) in (6.17), where u = log(λ). From the point of view of the classification of the factorized L-matrices discussed above, the expressions (6.15) and (6.17) can be considered as matrix analogues of the theta-function and of the Kronecker elliptic function respectively. The Kronecker function in (6.17) is constructed by means of the theta-operator Θ(z, λ) understood in a "plethystic sense" (6.15).
Relation to Sklyanin Lax operators
Due to the IRF-Vertex relation one can equivalently use the Sklyanin-type Lax operators [33] instead of the Ruijsenaars-Schneider one in (1.8). Consider the gauge-transformed Ruijsenaars Lax matrix (6.8), written in the form (6.18). It is the classical analogue of the representation of the quantum Sklyanin Lax operator [17]. Consider first the classical case (6.18). Its generalization to the double elliptic model is performed similarly to the previous paragraph: one defines the corresponding double elliptic matrix, which can be expressed as a sum of Sklyanin-type Lax matrices with different coupling constants. For each of them we know from [21] that it can be alternatively represented in terms of the underlying elliptic R-matrix. Consider now the matrix operator :L^Skl(z, λ): (6.19). In the special case η = −ℏ/c the Sklyanin Lax operator is represented through the quantum Baxter-Belavin R-matrix in the fundamental representation [5,17]. The R-matrix coefficients are written explicitly using the definition (A.6) and the notation introduced in (6.28). In this way we come to the matrix representation for L^Dell(z, λ) (6.26):

L^Dell(z, λ) = R̃_12(z, λ) = R_12(z, 1)^{−1} R_12(z, λ) ∈ Mat(N, C).
Discussion
• The Manakov L-matrix (6.24) can be used as a building block for the (partial) monodromy matrix, as happens in integrable chains [12]. Namely, one constructs the monodromy matrix T(z, λ) (7.1) for a chain of length L out of the L^Skl (6.24), where the L-matrix at each site depends on its own set of canonical variables. Then det T(z, λ) provides a product of generating functions, one for each site. In [18] this was called the spin generalization of the double-elliptic model. Besides this construction one can also study the averaging of the spin Ruijsenaars-Schneider Lax operators.
• The quantization of (7.1) can be performed in the usual way with R(z, λ) defined in (6.29)-(6.31). Presumably, a properly defined quantum determinant of T(z, λ) could be a generating function of commuting operators. Notice that we did not prove the Yang-Baxter equation for R(z, λ) (most likely it is not fulfilled, since the traces of T(z, λ) do not commute), so that R(z, λ) is not an R-matrix. Finding the equation satisfied by R(z, λ) is another interesting problem.
• Orthogonality of the eigenvectors. To proceed further in our construction, in parallel with the treatment of the usual Macdonald symmetric functions, we need to know the analogue of the Macdonald measure with respect to which the operators Ĥ_n are self-conjugate. Probably, expressing the eigenvectors of Ĥ_n as vector-valued characters of some elliptic algebras might work, in analogy with the paper [11].
• The integrable many-body systems can also be described via commuting differential or difference KZ-type connections or Dunkl operators, using the Matsuo-Cherednik (or Heckman) projections respectively. The Ruijsenaars duality in many-body problems then turns into (or rather embeds into) the spectral duality interchanging canonical coordinates and momenta written in the separated variables of the corresponding Gaudin models and/or spin chains. Much progress has been achieved in studies of these relations, including their elliptic version [24]. An interesting problem is to find a Dunkl-Cherednik-like description for the double-elliptic models (and to define a double-elliptic version of the qKZ equations). We discuss these topics in our forthcoming paper [15].
• Let us mention a recent paper [34], where an elliptic integrable many-body system of Calogero type with the Manakov representation was obtained instead of the Lax representation. It was derived through a reduction from an integrable hierarchy. Presumably, it is a special limiting case, which can be deduced from the double-elliptic Manakov L-matrix.
• Further study of algebraic structures underlying the double-elliptic model is another set of important problems. This includes r-matrix structures at classical and quantum levels (RTT relations) and extensions of the Sklyanin quadratic algebras by means of R-matrix type operators (6.29)-(6.30).
• The determinant formula (1.7)-(1.8) can be naturally extended to many-body systems associated with the root systems of BC_N type by substituting the Lax operators of the Ruijsenaars-Schneider-van Diejen type. This can be performed using the results of [13]. We will discuss it in a future publication.
• Another open problem is to describe the classical L-matrix (5.8)-(5.9) in the group-theoretical (or Krichever-Hitchin) approach. As we mentioned around (5.16), one way is to consider matrix-valued functions with higher order poles at a pair of marked points. Another possibility is to consider the (block-matrix) direct sum ⊕_{k∈Z} L^RS(z, q_k, t_k) embedded into GL(∞). Each block is well defined, and the weighted sum of all blocks can be viewed as a matrix-valued character.
We will discuss these questions in future publications.
Appendix A: Elliptic functions
We use several different theta-functions. The first, θ_p(x), is the one used in [18]; it is defined in (A.1), where the modulus of the elliptic curve τ ∈ C, Im τ > 0, enters through p = e^{2πiτ}. Another theta-function is the standard odd Jacobi one, ϑ(u). They are easily related, see (A.4), and the trigonometric limit p → 0 is described in (A.5). The Riemann theta-functions with characteristics are defined in (A.6); a particular evaluation involves the Dedekind eta-function η_D(τ). The determinant of the elliptic Cauchy matrix is given in (A.11). We also define the elliptic Kronecker function (A.12) and the first Eisenstein function [36]; they satisfy the (genus one) Fay trisecant identity (A.14), its degeneration, and the further relation (A.16).

Appendix B: Intertwining matrices

Following [35] we consider the intertwining matrices in two cases: with a spectral parameter and without a spectral parameter. These matrices lead to the Ruijsenaars-Schneider Lax matrices via (1.3), with and without spectral parameter respectively.
The cases without spectral parameter. Here we deal with the Vandermonde matrices in the rational and trigonometric coordinates.
"Mathematics"
] |
Disruption of Mcl-1·Bim Complex in Granzyme B-mediated Mitochondrial Apoptosis*
Recently, we reported the identification of a novel mitochondrial apoptotic pathway for granzyme B (GrB) (Han, J., Goldstein, L. A., Gastman, B. R., Froelich, C. J., Yin, X. M., and Rabinowich, H. (2004) J. Biol. Chem. 279, 22020–22029). The newly identified GrB-mediated mitochondrial cascade was initiated by the cleavage and subsequent degradation of Mcl-1, resulting in the release of mitochondrial Bim from Mcl-1 sequestration. To investigate the biological significance of Mcl-1 cleavage by GrB, we mapped the major GrB cleavage sites and evaluated the apoptotic potential of the cleavage products. GrB cleaves Mcl-1 after aspartic acid residues 117, 127, and 157, generating C-terminal fragments that all contain BH-1, BH-2, BH-3, and transmembrane domains. These fragments accumulate at an early apoptotic phase but are eliminated by further degradation during the apoptotic process. The major Mcl-1 C-terminal fragment generated by GrB (residues 118–350) was unable to induce or enhance apoptosis when transfected into tumor cells. Instead, this Mcl-1 C-terminal fragment maintained a partial protective capability against GrB-mediated apoptosis via its lower affinity to Bim. In comparison with ectopically expressed full-length Mcl-1, the stably transfected C-terminal fragments of Mcl-1 were less efficiently localized to the mitochondria. Knockdown of Mcl-1, as achieved by transfection with Mcl-1-specific short interfering RNA, resulted in a significant level of apoptosis in the absence of external apoptotic stimulation and, in addition, enhanced the susceptibility of breast carcinoma cells to GrB cytotoxicity. The significance of Bim in this GrB apoptotic cascade was indicated by the marked protection against GrB-mediated apoptosis endowed on these cells through Bim knockdown. Our studies suggest that the disruption of the Mcl-1·Bim complex by GrB initiates a major Bim-mediated cellular cytotoxic mechanism that requires the elimination of Mcl-1 following its initial cleavage.
and in the survival and homeostasis of mature lymphoid cells (4,5). Mcl-1 is an anti-apoptotic Bcl-2 family member with three putative Bcl-2 homology domains (BH1-3) (1,4). Other pro-survival Bcl-2 family members, including Bcl-2, Bcl-XL, and Bcl-w, also posses a BH4 domain, which is absent in Mcl-1 and A1/Bfl-1 (6). Because the BH4 domain is required for molecular interactions with other proteins (7)(8)(9), its absence in Mcl-1 suggests that this Bcl-2 member interacts with a different set of proteins as compared with Bcl-2 and Bcl-XL. We and others (5,10) have recently identified a Mcl-1 high affinity binding capacity for Bim, whereas its affinity for Bid, Bad, Bax, and Bak was low. By contrast, Bcl-2 displayed comparable binding to Bim and Bad (5). These findings suggest that the anti-apoptotic activity of different Bcl-2 family members depends on their selective interactions with other proteins, including various pro-death Bcl-2 members. Mcl-1 has a C-terminal hydrophobic domain that mediates its localization to membranous organelles (11,12). Because both Mcl-1 and Bim are mainly localized at the mitochondrial membrane (13), sequestration of Bim by Mcl-1 is expected to block the Bimmediated mitochondrial apoptotic cascade (10). Mcl-1 has a fast turnover rate and the shortest half-life among anti-apoptotic Bcl-2 family members (4). This rapid turnover may serve the apoptotic process, particularly in light of the finding that early elimination of Mcl-1 is required for a UV-mediated mitochondrial cascade in HeLa cells (14). Mcl-1 expression is highly regulated at both transcriptional and post-transcriptional levels. Its expression is dependent on environmental survival stimuli, mediated by various growth factors, such as IL-2, 1 IL-3, IL-4, IL-7, IL-13, and granulocyte-macrophage colonystimulating factor (5,(15)(16)(17)(18). Several post-transcriptional mechanisms have been implicated in its down-regulation, including decreased protein synthesis and various proteolytic activities that are mediated by proteasomes or caspases (5,19,20).
We reported that Mcl-1 is a direct substrate for GrB (10). Based on this observation and the identification of the high affinity of Mcl-1 for Bim, we proposed that the GrB-mediated degradation of Mcl-1 may free sequestered Bim and therefore allows for the execution of a potent Bim-mediated mitochondrial apoptotic cascade. However, other Mcl-1-related pro-survival Bcl-2 members, Bcl-2 and Bcl-XL, are converted into pro-apoptotic effectors upon their cleavage by caspase activity (21,22). In the current study, we investigated the apoptotic nature of GrB-cleaved Mcl-1 products, as well as the significance of a complex between Bim and Mcl-1 or its GrB-generated C-terminal cleavage products for cellular survival.
Cell Lines, Cell Lysates, and Cell Extracts-Jurkat T leukemic cells were grown in RPMI 1640 medium containing 10% fetal calf serum, 20 mM HEPES, 2 mM L-glutamine, and 100 units/ml each of penicillin and streptomycin. HeLa, breast carcinoma CAMA-1, and colon cancer Hct116 cells were grown in Dulbecco's modified Eagle's medium containing 15% fetal calf serum, 20 mM L-glutamine, and 100 units/ml each of penicillin and streptomycin. The cell lysates were prepared with 1% Nonidet P-40, 20 mM Tris-HCl, pH 7.4, 137 mM NaCl, 10% glycerol, 1 mM phenylmethylsulfonyl fluoride, 10 g/ml leupeptin, and 10 g/ml aprotinin. To prepare cell extracts for GrB or caspase-3 reactions, cultured cells were washed twice with phosphate-buffered saline and then resuspended in ice-cold buffer (20 mM HEPES, pH 7.0, 10 mM KCl, 1.5 mM MgCl 2 , 1 mM sodium EDTA, 1 mM sodium EGTA, 1 mM dithiothreitol, 250 mM sucrose, and protease inhibitors). After incubation on ice for 20 min, the cells (2.5 ϫ 10 6 /0.5 ml) were disrupted by Dounce homogenization. The nuclei were removed by centrifugation at 650 ϫ g for 10 min at 4°C. Cellular extracts were obtained as the supernatants resulting from centrifugation at 14,000 ϫ g at 4°C for 30 min.
Cellular Fractionation and Mitochondria Purification-To obtain an enriched mitochondrial fraction, Jurkat or HeLa cells were suspended in mitochondrial buffer (MIB) composed of 0.3 M sucrose, 10 mM MOPS, 1 mM EDTA, and 4 mM KH 2 PO 4 , pH 7.4, and lysed by Dounce homogenization as described previously (10). Briefly, the nuclei and debris were removed by a 10-min centrifugation at 650 ϫ g, and a pellet containing mitochondria was obtained by two successive spins at 10,000 ϫ g for 12 min. To obtain the S-100 fraction, the postnuclear supernatant was further centrifuged at 100,000 ϫ g for 1 h at 4°C. To obtain the enriched mitochondrial fraction, the mitochondria containing pellet was resuspended in MIB and layered on a Percoll gradient consisting of four layers of 10, 18, 30, and 70% Percoll in MIB. After centrifugation for 30 min at 15,000 ϫ g, the mitochondrial fraction was collected at the 30/70 interface. Mitochondria were diluted in MIB containing 1 mg/ml bovine serum albumin (at least a 10-fold dilution required to remove Percoll). The mitochondrial pellet was obtained by a 40-min spin at 20,000 ϫ g and used immediately. Purity was assessed by electron microscopy and by enzyme marker analysis (23). For enzyme analysis, the following enzymes were assayed: aryl sulfatase (lysosomes/granules); N-acetyl--D-glucosaminidase, ␣-L-fucosidase, and -glucoronidase (lysosome); lactate dehydrogenase (cytosol); cytochrome oxidase or monoamine oxidase (mitochondria); thiamine pyrophosphatase (Golgi); NADH oxidase (endoplasmic reticulum); and dipeptidyl peptidase IV (plasma membrane). The purity was assessed at 95%, with ϳ5% or less contamination from the microsomal fraction.
Molecular Cloning of Human BimEL-Production of human BimEL cDNA clones was as described previously (10).
Transfection-Hct116 cells were washed in cold phosphate-buffered saline and resuspended in electroporation buffer (Amaxa) at a final concentration of 3 ϫ 10 7 cells/ml. Five g of linearized plasmid DNA were mixed with 0.1 ml of cell suspension, transferred to a 2.0-mm electroporation cuvette, and nucleofected with an Amaxa Nucleofector apparatus (Amaxa, www.amaxa.com) utilizing program T20, according to the manufacturer's directions. CAMA-1 cells were transfected by the GenePorter Transfection Reagent (Gene Therapy Systems Inc., San Diego, CA) according to the manufacturer's directions. Geneticin-resistant cell lines were grown in the presence of G418 (1500 g/ml). Geneticin-resistant clonal cell lines were generated by dakocytomation (1 cell/well) utilizing a MOFLO high speed cell sorter and Summit Software. Transient transfection of CAMA-1 cells was carried out with Mcl-1 recombinant plasmids (see above) and the EGFP control plasmid pEGFP-C2 (BD Biosciences) using GenePorter transfection reagent following the manufacturer's protocol. Gly 128 -Pro 133 ) and the same reverse primer as WT Mcl-1. PCR amplicons were gel-purified as described previously (10), digested with the restriction enzymes BamHI and SalI, and subsequently ligated into the BamHI and SalI sites of the vector pGEX-4T-1 (Amersham Biosciences). Sequence integrity of recombinant pGEX-4T-1 clones was determined as above.
Production of GST-Mcl-1 Fusion Proteins-E. coli BL21 cells (Novagen) were transformed with recombinant pGEX-4T-1 Mcl-1 plasmids and plated out on LB agar-ampicillin plates as directed by the manufacturer. Four colonies were randomly picked for each Mcl-1 transformant, and each colony was grown overnight in 3 ml of LB-ampicillin medium. Three ml of 2ϫYT-ampicillin medium (ϫ4) was inoculated with 45 l of overnight culture for each colony and cells were grown at 37°C to A 600 ϭ 0.6 -0.8 (ϳ2.5 h). Isopropyl -D-thiogalactoside was added to a final concentration of 1 mM, and the cultures were incubated for an additional 3-4 h at 37°C. Bacterial pellets (2200 ϫ g for 30 min at 4°C) were extracted, and GST-Mcl-1 fusion proteins were isolated following the manufacturer's protocol (Amersham Biosciences) for Mi-croSpin GST purification modules except that 16 units of Benzonase (Sigma) and 3 l of Protease Inhibitor Mixture Set III (Novagen) were added to the pellet extraction buffer. GST-Mcl-1 fusion proteins eluted from the microspin columns were concentrated and underwent buffer exchange using YM-10 Microcon centrifugal filter devices (Millipore).
RNAi Using Mcl-1, Bim, and Lamin siRNAs-Short interfering RNAs were obtained as duplexes in purified and desalted form (Option C) from Dharmacon. The three siRNAs had the following sense strand sequences: Mcl-1, 5Ј-GAAACGCGGUAAUCGGACUdTdT-3Ј; Bim, 5Ј-GACCGAGAAGGUAGACAAUUGdTdT-3Ј; and Lamin, 5Ј-CUGGACU-UCCAGAAGAACAdTdT-3Ј. CAMA-1 cells (2.5 ϫ 10 5 ) were plated in a 6-well plate and following 24 h (at ϳ30% confluency) were transfected with 200 nM siRNA in Opti-MEM medium (Invitrogen) without fetal calf serum using Oligofectamine reagent (Invitrogen) according to the manufacturer's transfection protocol. After 4 h, fetal calf serum was added to a final concentration of 10%. At 40 h, the medium over the cells was adjusted to 1 ml before the addition of an apoptotic agent.
Release of Mitochondrial Apoptogenic Proteins-Purified mitochondria (50 g of protein) were incubated with various doses of recombi- His-BimL was then added for a co-incubation at 37°C for 30 min. Mitochondria were pelleted by centrifugation at 10,000 ϫ g for 10 min. The resulting supernatants or mitochondria were mixed with SDS sample buffer, resolved by SDS-PAGE, and analyzed by immunoblotting for the presence of mitochondrial apoptogenic proteins, cytochrome c, and AIF.
In Vitro Transcription-Translation-Mcl-1, Mcl-1⌬N127, Mcl-1⌬N117, and BimEL cDNAs were expressed in the TNT T7 transcription-translation reticulocyte lysate system (Promega). Each coupled transcription-translation reaction contained 1 g of plasmid DNA in a final volume of 50 l in a methionine-free reticulocyte lysate reaction mixture supplemented with 35 S-labeled methionine according to the manufacturer's instructions. After incubation at 30°C for 90 min, the reaction products were immediately used or stored at Ϫ70°C.
In Vitro Cleavage Reaction with Caspase-3 or GrB-In vitro cleavage reactions were performed in total volume of 20 l. The reaction buffer consisted of 20 mM HEPES, pH 7.4, 10 mM KCl, 1.5 mM MgCl 2 , 1 mM EDTA, 1 mM EGTA, 20% glycerol, 1 mM phenylmethylsulfonyl fluoride, 10 g/ml leupeptin, and 10 g/ml aprotinin. Each reaction also contained 1 l of reticulocyte lysate containing 35 S-labeled Mcl-1, or BimEL, and also reticulocyte lysate minus plasmid in the presence or the absence of recombinant caspase-3 (5-100 nM) or GrB (33-330 nM) for 20 min at 37°C. The reactions were terminated by addition of SDS loading buffer and boiling for 5 min.
Immunoprecipitation-For Mcl-1 and Bim immunoprecipitation experiments, the cells (5-10 ϫ 10 6 ) and mitochondria (200 g of protein) were lysed in 1% CHAPS buffer (20 mM HEPES, 10 mM KCl, 1.5 mM MgCl 2 , 1 mM EDTA, 1 mM EGTA, 1 mM dithiothreitol, 0.1 mM phenylmethylsulfonyl fluoride, and 1% CHAPS). The lysates were precleared with protein A-or G-Sepharose beads, and incubated with anti-Mcl-1 or anti-Bim Abs at 4°C for 4 h. The immune complexes were then precipitated with protein A-or G-Sepharose beads at 4°C overnight. The pellets were washed four times with the appropriate lysis buffer and boiled for 5 min in SDS sample buffer.
Western Blot Analysis-Proteins in cell lysates, cell extracts, mitochondria, or S-100 were resolved by SDS-PAGE and transferred to polyvinylidene difluoride membranes, as described previously (25). Following probing with a specific primary Ab and horseradish peroxidaseconjugated secondary Ab, the protein bands were detected by enhanced chemiluminescence (Pierce).
Flow Cytometry-Cytofluorometric analyses of apoptosis were performed by co-staining with propidium iodide and fluorescein isothiocyanate-annexin V conjugates (Becton-Dickenson). The staining was performed according to the manufacturer's procedures, assessed by a Beckman Coulter Epics XL-MCL, and analyzed with the EXPO32 software.
Elimination of Mcl-1 during Caspase-3 and GrB-mediated
Apoptosis-We recently reported the identification of a novel mitochondrial cascade for GrB-mediated apoptosis that encompasses the release of Bim from sequestration by mitochondrial Mcl-1 (10). Our reported studies suggest that the disruption of the Mcl-1·Bim complex is mediated by cleavage of Mcl-1 by GrB or caspase-3. Examples of the observed down-regulation in Mcl-1 expression are depicted in Fig. 1. Thus, in breast carcinoma CAMA-1 cells that serve as targets for cytotoxicity mediated by treatment with purified GrB and Ad, expression of Mcl-1 is significantly reduced as early as 6 h following treatment (Fig. 1A). In this cytotoxicity system, Ad substitutes for perforin as an endosomolytic agent that, following the cellular internalization of GrB, facilitates its release into the cytoplasm (26,27). Reduction in the level of Mcl-1 expression has also been observed in VP-16-treated cells, where the activity of caspases is stimulated. However, in etoposide-treated cells, the down-regulation in Mcl-1 expression is not efficiently blocked by a potent caspase inhibitor (Fig. 1B), suggesting that other proteolytic activities may also be involved in the observed elimination of Mcl-1. To map the cleavage sites, mutant Mcl-1 cDNAs that encode single residue conversions (Asp → Ala) at one of the five candidate sites, as well as a mutant for both residues 127 and 157, were produced (24). All Mcl-1 cDNAs were ligated into the mammalian expression vector pCR3.1. The mutated Mcl-1 plasmids were transcribed and translated by an in vitro system, and the products were treated with recombinant caspase-3 or GrB. As determined by autoradiography (Fig. 2A, top panel) or by immunoblotting (bottom panel), Mcl-1 cleavage by caspase-3 was mapped to aspartic acid residues 127 and 157. Mcl-1 cleavage by caspase-3 was completely blocked in the Mcl-1 mutant for both the 127 and 157 aspartic acid residues. These cleavage sites for caspase-3 were recently reported by two independent groups (19,20). GrB cleavage sites were mapped to aspartic acid residues 117, 127, and 157 (Fig. 2B). As demonstrated by immunoblotting (Fig. 2B, bottom panel), the major cleavage site for GrB is aspartic acid 117, because the D117A mutation significantly blocked the processing of Mcl-1, whereas D127A, D157A, and the double mutant D127A/D157A were significantly less efficient in blocking the loss of full-length Mcl-1.
Kinetics of Elimination of Mcl-1 Cleavage Products-Cleavage of other anti-apoptotic Bcl-2 members, such as Bcl-2 and Bcl-XL, results in their conversion to pro-apoptotic proteins (21,22). Caspase cleavage of either Bcl-2 or Bcl-XL removes from each of these proteins an N-terminal BH4 domain that may be required for their anti-apoptotic activity. Mcl-1 does not possess a BH4 domain, and cleavage by GrB or caspase-3 removes a 117- (GrB only), 127-, or 157-residue N-terminal fragment, producing a C-terminal Mcl-1 fragment(s) that contains the transmembrane and BH1-3 domains (Fig. 2C). To gain a better insight into the apoptotic significance of the cleavage products, we determined their fate during the reaction of Mcl-1 with caspase-3 or GrB. The C-terminal cleavage products of Mcl-1 represented by residues 158–350 and 118–350, generated by caspase-3 and GrB respectively, appear to accumulate during the initial phase of the reaction but start to decline within 2 h of enzymatic exposure (Fig. 3A). To generate a reaction setting that mimics the intracellular conditions, the cleavage reactions were also carried out in the presence of cell extract (Fig. 3B). In addition to its effect on the degradation of full-length Mcl-1, GrB also accelerated the degradation of the Mcl-1 deletion mutants, because each contains at least one GrB cleavage site at residue Asp157. These observations also suggest that the major cleavage products of Mcl-1 generated by either caspase-3 or GrB are being further degraded by proteolytic activity present in the cell extract during an advanced apoptotic process.
The Apoptotic Response to GrB Is Significantly Attenuated by Stable Transfection of Full-length Mcl-1-To assess the significance of the expression of Mcl-1 and its cleavage products on survival, we transiently co-transfected the breast carcinoma CAMA-1 cell line with plasmids that express full-length Mcl-1, Mcl-1⌬N117 (GrB-cleaved C-terminal product), Mcl-1⌬N127 (caspase-3-cleaved C-terminal product), and a control plasmid encoding EGFP at a 10:1 ratio, respectively. As judged by EGFP/annexin V flow cytometry, we did not observe induction of apoptotic death in successfully transfected cells (EGFP-positive) (results not shown). To analyze the significance of the presence of Mcl-1 and its cleaved forms for survival under better defined conditions, we stably transfected colon carcinoma Hct116 and breast carcinoma CAMA-1 cell lines with the plasmids encoding these different Mcl-1 forms. We obtained a comparable number of geneticin-resistant clones from each of the cell lines (ϳ40 clones for each plasmid) and assessed the levels of Mcl-1 expression by Western blotting (Fig. 4A). The majority of the transfected clones demonstrated equal levels of expression of the transfected proteins. Geneticin-resistant Hct116 or CAMA-1 clonal cell lines that were confirmed by immunoblotting to overexpress the aforementioned Mcl-1-related proteins at equal levels were assessed for susceptibility to death mediated by GrB/Ad. A significant reduction in the apoptotic response to GrB/Ad as assessed by annexin V/propidium iodide was detected in each of the tumor cell lines transfected with full-length Mcl-1 (Fig. 4, B and C). However, overexpression of the Mcl-1 fragments that correspond to the caspase-3 and GrB C-terminal cleavage products provided a reduced protective effect from GrB-mediated cytotoxicity (Fig. 4, B and C). The two cells lines included in this analysis (CAMA-1 or Hct116) were sensitive to GrB-mediated apoptosis, but demonstrated different levels of phosphatidylserine externalization (Fig. 4, B and C). Differential detection of phosphatidylserine exposure has been reported to be cell type-specific and in various cells dependent on caspase activity or intracellular ATP levels (29 -31). Thus, in CAMA-1 cells, where only few cells were stained by annexin V, GrB may not directly activate intracellular molecules that are involved in phosphatidylserine externalization. Similar to the observations in transient transfected cells, overexpression of the C-terminal Mcl-1 fragments in either the Hct116 or CAMA-1 cell line was not associated with induction of apoptosis.
As we have previously reported, endogenous Mcl-1 localizes mainly to the outer mitochondrial membrane (10). Following subcellular fractionation into cytosolic S-100 and purified mitochondrial fractions, we assessed the localization of the stably transfected full-length and C-terminal fragments (Fig. 5, left top panel). In contrast to full-length Mcl-1, the ectopically expressed C-terminal fragments of Mcl-1 accumulated mainly in the S-100 cytosolic fraction (Fig. 5, right top panel). Of note, cell-equivalent loading for the various cell fractions (extract, S-100, and mitochondria) is confirmed by the expression levels of β-actin (cytosolic marker) and Cox IV (mitochondrial marker) as demonstrated by reprobing of the same membranes (Fig. 5, bottom panels). This inefficient mitochondrial subcellular localization of the Mcl-1 GrB cleavage product may contribute to its ineffective protection against GrB-mediated apoptosis.
Binding between Bim and Mcl-1 Cleavage Products-The high-affinity binding between Mcl-1 and Bim may be at the crux of its anti-apoptotic effect, which is most likely accomplished through the sequestration of this potent pro-apoptotic protein, Bim (5,10). We therefore assessed the ability of the Mcl-1 cleavage products to bind Bim. The ability of 35S-labeled in vitro translated Mcl-1, or of in vitro translated 35S-labeled recombinant Mcl-1 proteins corresponding to the caspase-3- or GrB-generated C-terminal cleavage products, to bind cold in vitro translated BimEL was assessed by co-immunoprecipitation utilizing a Bim-specific Ab (Fig. 6). Both Mcl-1 cleavage products, generated by either GrB or caspase-3, were co-immunoprecipitated with BimEL. Yet, as judged by the efficiency of the co-immunoprecipitation, the affinity of full-length Mcl-1 for Bim was higher than that of the cleavage products for Bim; full-length Mcl-1 was completely depleted from the supernatant by Bim immunoprecipitation, whereas only partial co-precipitation with BimEL was observed for the Mcl-1 cleavage products.
Mcl-1 Cleavage Products Maintain Partial Capability to Inhibit Bim from Mediating the Release of Mitochondrial
Apoptogenic Proteins-To investigate the ability of the Mcl-1 C-terminal cleavage products to inhibit the apoptotic function of Bim, we applied His-BimL protein to purified mitochondria in the absence or presence of GST-Mcl-1 (Fig. 7A), GST-Mcl-1ΔN127 (Fig. 7B), or GST-Mcl-1ΔN117 (Fig. 7C) and assessed the release of cytochrome c and AIF. Recombinant proteins corresponding to Mcl-1 C-terminal cleavage products generated by caspase-3 or GrB could prevent His-BimL from mediating the release of cytochrome c and AIF. A slightly reduced efficiency in the inhibitory effects of the fragments as compared with full-length Mcl-1 was observed in regard to the release of cytochrome c, but not AIF. The difference between full-length GST-Mcl-1 (Fig. 7A) and the GST-Mcl-1 fragments (Fig. 7, B and C) lies in the amount of protein required to achieve a similar effect. These findings suggest that the Mcl-1 C-terminal fragments generated by either GrB or caspase-3 can at least partially inhibit Bim-mediated cytochrome c release from purified mitochondria.
GrB Susceptibility of Breast Carcinoma Cells Is Enhanced by siRNA Silencing of Mcl-1 and Inhibited by Bim Silencing-Our results (Fig. 3) suggest that cleavage of Mcl-1 by either GrB or caspase activity leads to an eventual elimination of Mcl-1. To investigate the effects of Mcl-1 elimination on breast carcinoma cell viability and their susceptibility to GrB-mediated apoptosis, we subjected CAMA-1 cells to RNAi for Mcl-1, Bim, or Lamin A/C. Successful knockdown of these genes was con-firmed by immunoblotting at 40 -48 h post-transfection with the specific siRNA (Fig. 8A). At 40 h post-transfection, the cells were treated with GrB/Ad and assessed 6 h later for viability by flow cytometry of annexin V and propidium iodide staining (Fig. 8B). Silencing of Mcl-1 alone, but not of Bim or Lamin A/C, reduced the viability of CAMA-1 cells even in the absence of external apoptotic stimulation. This increase in background apoptosis may relate to the ability of Mcl-1 to sequester and/or neutralize Bim and probably other mediators of apoptosis. Furthermore, elimination of Mcl-1 enhanced the susceptibility of CAMA-1 cells to GrB/Ad-mediated apoptosis. Such an increased susceptibility in the absence of Mcl-1 has been reported for other apoptotic inducers, including UV and TRAIL, and therefore is not GrB-specific (14,32). However, apoptotic activity of GrB in CAMA-1 cells was predominantly mediated by the Mcl-1 high affinity binding partner, Bim, because it was significantly blocked in cells transfected with Bim-specific siRNA (Fig. 8B). Control silencing of Lamin A/C did not have any effect on the response of these cells to GrB/Ad. These results underscore the important contribution of the Mcl-1⅐Bim cascade in CAMA-1 cell susceptibility to GrB-mediated apoptosis and further emphasize the importance of Mcl-1 elimination in the execution of this apoptotic response. DISCUSSION Mitochondrial disruption has been established as a key event during GrB-mediated apoptosis, but the exact mechanism for GrB function has remained unclear (33,34). Recent studies have identified Bid as a direct substrate for GrB and therefore as a direct link to the mitochondrial apoptotic cascade mediated by Bax or Bak (35)(36)(37). Although in a cell-free system GrB-cleaved Bid is a potent inducer for the release of mitochondrial apoptotic proteins (38), a recent study questions whether a direct cleavage of Bid by GrB occurs under physiologic conditions (34,39). Furthermore, in response to GrB treatment, embryonic fibroblasts from Bid Ϫ/Ϫ mice display disrupted mitochondrial transmembrane potentials (40). These studies implied that cytosolic mediators other than Bid may act as a link between GrB and the mitochondria. Our recent studies (10) have identified Mcl-1 as a direct substrate for GrB and as a high affinity binding partner for Bim. We proposed that the Mcl-1⅐Bim cooperation may constitute an alternative mitochondrial apoptotic pathway that is activated directly by GrB, independent of Bid. In the current study, we investigated the functional mechanism and biological significance of this novel GrB-mediated mitochondrial apoptotic cascade.
Caspase cleavage of the pro-survival Bcl-2 members, Bcl-2 and Bcl-XL, converts them into death effector proteins that further amplify the apoptotic cascade (21,22). In contrast to Mcl-1, Bcl-2 and Bcl-XL are not susceptible to GrB activity. The current study has mapped the GrB cleavage sites of Mcl-1 to aspartic acid residues 117, 127, and 157 and confirmed the recently reported caspase-3 cleavage sites of Mcl-1 at aspartic acid residues 127 and 157 (19,20). Interestingly, cleavage of Mcl-1 by either caspase-3 or GrB activities resembles the caspase-3 cleavage of Bcl-2 and Bcl-XL in its removal of an N-terminal fragment while producing a C-terminal protein that contains the BH1-3 domains. Therefore, we investigated the apoptotic nature of the Mcl-1 C-terminal fragments generated by either caspase-3 or GrB activity. Whereas stable transfection of full-length Mcl-1 endowed significant protection from apoptosis on GrB-susceptible tumor cells, only mild (CAMA-1) to moderate (Hct116) protection was observed in these tumor cells when they were stably transfected with plasmids encoding the C-terminal protein fragments generated by GrB or caspase-3. Furthermore, ectopic expression of the cleavage products did not induce apoptosis in colon or breast carcinoma cell lines and did not enhance apoptosis mediated in these cells by cytotoxic drugs. 2 Thus, our findings suggest that the caspase-3 and GrB Mcl-1 C-terminal cleavage products are functionally different from the caspase-cleaved Bcl-2 or Bcl-XL fragments.
The protective function of Mcl-1 is mediated at the mitochondrial outer membrane, the subcellular localization site for both Mcl-1 and Bim (10,13). Thus, the inefficient protection by the Mcl-1 C-terminal cleavage products can be partly explained by their altered ability to target the mitochondria. Recent studies have suggested that Mcl-1 functions at an apical point in the mitochondrial apoptotic pathway, and its elimination is required for mitochondrial apoptotic events to take place (14). Thus, in UV-mediated apoptosis in HeLa cells, elimination of Mcl-1 occurs prior to Bax translocation, Bax and Bak oligomerization, and cytochrome c release (14). This cascade of events fits with our proposal that Mcl-1 has a role in the regulation of mitochondrial events by sequestering Bim, thereby preventing the activation of the aforementioned mitochondrial apoptotic events. Because cleavage of Mcl-1 by GrB does not completely abolish the sequestration of Bim, we reasoned that a proteolytic mechanism(s) other than via GrB and caspases may be involved in the further degradation of the Mcl-1 cleavage products. Indeed, we obtained evidence that the Mcl-1 cleavage products are further degraded by proteolytic activity present in the cell extract. Such activity may be mediated by the proteasome, because several studies have documented increased stability of Mcl-1 in the presence of proteasome inhibitors (14,41,42). To investigate the significance of Mcl-1 elimination, we used siRNA-mediated knockdown of Mcl-1. Because Mcl-1 acts at an apical point of mitochondrial apoptotic cascades, knockdown of Mcl-1 is very likely to increase the sensitivity of the cells to an array of apoptotic agents, including GrB. Indeed, it has recently been reported that Mcl-1 knockdown was associated with an increased sensitivity to UV or TRAIL (14,32). In the context of the current study, Mcl-1 knockdown may be viewed as a model for cells that undergo Mcl-1 elimination. Thus, in addition to increased susceptibility to apoptotic agents, knockdown of Mcl-1 may also free Bim and other apoptotic Bcl-2 proteins from being sequestered by Mcl-1. Such a scenario may explain the significant level of apoptosis (24%) detected following Mcl-1 siRNA transfection in the absence of an external apoptotic signal. Recently, an example of apoptosis induction by Bcl-2 knockdown was reported (43). Silencing of Bcl-2 induced p53-mediated apoptosis that occurred without stimulation by genotoxic drugs, and that was not seen in the controls or in isogenic clonal cells deficient in p53. The increased level of background apoptosis we observed in cells treated with Mcl-1 siRNA suggests a role for Mcl-1 in the regulation of cell survival under homeostatic conditions. The biological importance of the Mcl-1·Bim cascade in GrB function was further underscored by the significant block in the response of CAMA-1 cells to GrB/Ad in the absence of Bim. Because GrB has multiple potential entry points for initiation of a caspase-dependent apoptotic cascade (34,44), it was surprising that the absence of Bim arrested, at least within the time frame of the first 6 h of exposure to GrB/Ad, the apoptotic response seen in controls.
In summary, our studies suggest that the disruption of the Mcl-1·Bim complex is initiated by GrB cleavage of Mcl-1. The GrB-generated Mcl-1 C-terminal fragment(s) exerts a variable but partial protective effect against GrB-mediated apoptosis by maintaining limited ability to bind and sequester Bim. The inefficient protective effect of this GrB-cleaved Mcl-1 fragment is most likely due to the combination of its altered subcellular localization to the mitochondria and reduced binding to Bim. Based on these findings and the studies involving Mcl-1 or Bim knockdown, we propose that the disruption of the Mcl-1·Bim complex by GrB initiates a major Bim-mediated cellular cytotoxic mechanism that requires the elimination of Mcl-1 following its initial cleavage. | 6,745.2 | 2005-04-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
The logic of pseudo-uninorms and their residua
Our method of density elimination is generalized to the non-commutative substructural logic GpsUL*. Then the standard completeness of GpsUL* follows as a lemma by virtue of previous work by Metcalfe and Montagna. This result shows that GpsUL* is the logic of pseudo-uninorms and their residua and answers the question posed by Prof. Metcalfe, Olivetti, Gabbay and Tsinakis.
Introduction
In 2009, Prof. Metcalfe, Olivetti and Gabbay conjectured that the Hilbert system HpsUL is the logic of pseudo-uninorms and their residua [1]. Although HpsUL is the logic of bounded representable residuated lattices, it is not the logic of pseudo-uninorms and their residua, as shown by Prof. Wang and Zhao in [2]. In 2013, we constructed the system HpsUL* by adding the weak commutativity rule (WCM), ⊢ (A ↝ t) → (A → t), to HpsUL and conjectured that it is the logic of residuated pseudo-uninorms and their residua [3].
In this paper, we prove our conjecture by showing that the density elimination holds for the hypersequent system GpsUL * corresponding to HpsUL * . Then the standard completeness of HpsUL * follows as a lemma by virtue of previous work by Metcalfe and Montagna [4]. This shows that HpsUL * is an axiomatization for the variety of residuated lattices generated by all dense residuated chains. Thus we also answered the question posed by Prof. Metcalfe and Tsinakis in [5] in 2017.
In proving the density elimination for GpsUL*, we have to overcome several difficulties, as follows. Firstly, cut-elimination does not hold for GpsUL*. Note that (WCM) and the density rule (D) are formulated as follows. Here, the major problem is how to extend (D) such that it is applicable to G_2 | Γ_2, Π′_2, p, Π″_2, Σ_2 ⇒ p. By replacing p with t, we get G_2 | Γ_2, Π′_2, t, Π″_2, Σ_2 ⇒ t. But there exists no derivation of the latter that we can obtain by (WCM), and it seems that (WCM) cannot be strengthened further in order to resolve this difficulty. We overcome it by introducing a restricted subsystem GpsUL_Ω of GpsUL*. GpsUL_Ω is a generalization of GIUL_Ω, which we introduced in [6] in order to solve a long-standing open problem, namely the standard completeness of IUL. Two new manipulations, which we call the derivation-splitting operation and the derivation-splicing operation, are introduced in GpsUL_Ω.
The third difficulty we encounter is that the conditions for applying the restricted external contraction rule (EC_Ω) become more complex in GpsUL_Ω, because the new derivation-splitting operations make the conclusion of the generalized density rule a set of hypersequents rather than a single hypersequent. We continue to apply derivation-grafting operations in the separation algorithm for the multiple branches of GIUL_Ω in [6], but we have to introduce a new construction method for GpsUL_Ω by induction on the height of the complete set of maximal (pEC)-nodes rather than on the number of branches.
Hence G is not a tautology in HpsUL. Therefore it is not a theorem in HpsUL by Theorem 9.27 in [1].
G is provable in GpsUL*, as shown in Figure 1.
Figure 1 A proof τ of G
Suppose that G has a cut-free proof ρ. Then there exists no occurrence of t in ρ by its subformula property. Thus there exists no application of (WCM) in ρ. Hence G is a theorem of GpsUL, which contradicts Lemma 2.4.
Remark 2.6. Following the construction given in the proof of Theorem 53 in [4], (CUT) in Figure 1 is eliminated by the following derivation. However, the application of (WCM) in ρ is invalid, which illustrates why the cut-elimination theorem does not hold in GpsUL*.
We call it the weak cut rule and denote it by (WCT).
Proof. The proof proceeds by a procedure similar to that of Theorem 53 in [4] and is omitted.
GpsUL_Ω is a restricted subsystem of GpsUL* such that (i) p is designated as the unique eigenvariable, by which we mean that it is not used to build up any formula containing logical connectives and is used only as a sequent-formula.
(ii) Each occurrence of p in a hypersequent is assigned one unique identification number i in GpsUL_Ω and is written as p_i. The initial sequent p ⇒ p of GpsUL* has the corresponding indexed form, such that G′′ can be obtained by applying σ_l to the antecedents of sequents in G′ and σ_r to the succedents of sequents in G′.
(v) A hypersequent G | G_1 | G_2 can be contracted to G | G_1 in GpsUL_Ω under a certain condition given in Construction 3.15; we call this the constrained external contraction rule and denote it by (EC_Ω). (vi) (EW) is forbidden in GpsUL_Ω, and (EC) and (CUT) are replaced with (EC_Ω) and (WCT), respectively. (viii) G_1 | S_1 and G_2 | S_2 are closed and disjoint for each two-premise inference rule. Proof. Although (WCT) makes the t's in its premises disappear in its conclusion, it has no effect on the identification numbers of the eigenvariable p in a hypersequent, because t is a constant in GpsUL_Ω and is distinguished from propositional variables.
The generalized density rule (D) for GpsUL Ω
In this section, GL^cf_Ω denotes GpsUL_Ω without (EC_Ω). Generally, A, B, C, ⋯ denote formulas other than the eigenvariable p_i.
There are three cases to be considered.
The case in which all focus formula(s) of S′ are contained in Δ_k is dealt with by a procedure dual to the above and is omitted.
(I)) and, ⟨τ * ⟩ + j ⟨H k+1 ⟩ + j is constructed by combining ⟨τ * ⟩ + j ⟨H k ⟩ + j and Case 3 S ′ ∈ ⟨H k ⟩ + j . It is dealt with by a procedure dual to Case 2 and omitted. Definition 3.2. The manipulation described in Construction 3.1 is called the derivation-splitting operation when it is applied to a derivation and, the splitting operation when applied to a hypersequent.
Then there exist two hypersequents G 1 and G 2 such Ω is constructed by induction on n − m as follows.
• For the base step, let n − m = 0. Then Π′, Π″, Π‴ = Π and Γ″, Γ′ = Γ and Δ′, Δ″ = Δ, and the base step follows from Corollary 3.3.
• For the induction step, let n − m > 0. Then it is treated by applying the induction hypothesis to the premise, followed by an application of the relevant rule. For example, by the induction hypothesis we obtain a derivation of G | Γ, Π, Δ ⇒ A.
Definition 3.5. The manipulation described in Construction 3.4 is called the derivation-splicing operation when it is applied to a derivation, and the splicing operation when applied to a hypersequent.
, the procedure terminates and n := k; otherwise v_l(G_k) ∩ v_r(G_k) ≠ ∅, and i_{k+1} is defined to be an identification number. Proof. By the construction in the proof of Lemma 3.9, i_k ∈ v_l(G_{k−1}) for all 2 ⩽ k ⩽ n. This shows that D_K(G | G′) is constructed by repeatedly applying splicing operations.
Let G′ be a splitting unit of G | G′ of the form Γ_1 ⇒ p_1 | ⋯ | Γ_n ⇒ p_n. Then each node of Ω_{G′} has one and only one child node. Thus there exists a cycle in Ω_{G′}, since |V_{G′}| = n < ∞. Assume, without loss of generality, that (1, 2), (2, 3), ⋯, (i, 1) is the cycle of Ω_{G′}. Then p_1 ∈ Γ_2, and continuing in this way also shows that there exists only one cycle in Ω_{G′}. We then introduce the following definition.
Definition 3.13. (i) Γ j ⇒ p j is called a splitting sequent of G ′ and p j its corresponding splitting variable for all 1 ⩽ j ⩽ i.
(ii) Let K = {1, 2, ⋯, n} and D_1(G | Γ,
In this section, we adapt the separation algorithm of branches in [6] to GpsUL* and prove the following theorem. | 1,997.8 | 2009-08-01T00:00:00.000 | [
"Mathematics"
] |
Optimization of Alumina Ceramics Corrosion Resistance in Nitric Acid
The development of ceramic materials resistance in various aggressive media combined with required mechanical properties is of considerable importance for enabling the wider application of ceramics. The corrosion resistance of ceramic materials depends on their purity and microstructure, the kind of aggressive media used and the ambient temperature. Therefore, the corrosion resistance of alumina ceramics in aqueous HNO3 solutions of concentrations of 0.50 mol dm−3, 1.25 mol dm−3 and 2.00 mol dm−3 and different exposure times—up to 10 days—have been studied. The influence of temperature (25, 40 and 55 °C) was also monitored. The evaluation of Al2O3 ceramics corrosion resistance was based on the concentration measurements of eluted Al3+, Ca2+, Fe3+, Mg2+, Na+ and Si4+ ions obtained by inductively coupled plasma atomic emission spectrometry (ICP-AES), as well as density measurements of the investigated alumina ceramics. The response surface methodology (RSM) was used for the optimization of parameters within the experimental “sample-corrosive media” area. The exposure of alumina ceramics to aqueous HNO3 solutions was conducted according to the Box–Behnken design. After the regression functions were defined, conditions to achieve the maximum corrosion resistance of the sintered ceramics were determined by optimization within the experimental area.
Introduction
Alumina (Al2O3) is a ceramic material that possesses high values of hardness, strength and wear resistance, as well as chemical stability [1][2][3][4]. Therefore, it may be applied as an advanced material in electronics, metallurgy, catalysis, wear protection, refractories, as a composite, etc. [5][6][7]. Nevertheless, the issue of ceramic corrosion is considered and investigated in many fields, such as geochemical research, nuclear waste disposal, art history and archaeological research, as well as industrial applications [8,9]. Small amounts of impurities and additives have a considerable impact on the production and the final properties of alumina-based ceramics [10]. Grain boundaries of alumina ceramics are sensitive to chemical attack, which can consequently cause changes in the alumina corrosion resistance. Increased control over the grain boundary chemistry of polycrystalline alumina may lead to the production of polycrystalline alumina with a structure comparable to single-crystal sapphire, which is considered to be a highly corrosion-resistant material because of the absence of grain boundaries [10,11].
Ceramic corrosion is based on different corrosion mechanisms compared to metal corrosion; the corrosion of metals is mostly a consequence of electrochemical processes.
Alumina granules, produced by a spray drying process, were isostatically cold shaped into cylindrical (green) compacts at Applied Ceramics Inc., Sisak, Croatia. Each green compact was engraved with a number in order to follow the properties of each one during the experiment. Green compacts were sintered in a high-temperature furnace P310 (Nabertherm, Lilienthal, Germany) using the following regime: initial heating at a rate of 5 °C min−1 up to a temperature of 500 °C, holding at 500 °C for 30 min, further heating at a rate of 5 °C min−1 up to 1600 °C, holding at 1600 °C for 6 h and slow cooling in the furnace to room temperature.
Characterisation of Alumina Ceramics
The phase composition of the Al2O3 granules was determined by powder X-ray diffraction, PXRD (Shimadzu XRD6000, Shimadzu Corporation, Kyoto, Japan) with CuKα radiation. A step size of 0.02° over the 2θ range of 10–80° and a counting time of 0.6 s were used, under an accelerating voltage of 40 kV and a current of 30 mA.
The morphology of the prepared sintered samples was determined according to standard ceramographic technique [20] by means of scanning electron microscope (SEM) (Tescan Vega TS5136LS, Prague, Czech Republic).
The bulk density of the sintered alumina samples was determined by the Archimedes method (Mettler Toledo GmbH, Greifensee, Switzerland, density kit MS-DNY-43) according to ASTM C373-88.
The relative density of the sintered samples was calculated as the ratio of the measured bulk density to the theoretical density of alumina, expressed as a percentage, while the relative porosity is calculated as the difference between 100% and the relative density (%) [21]. The hardness of the sintered samples was measured by means of the hardness tester Wilson Wolpert Tukon 2100B (Instron, Grove City, PA, USA). Diagonals were measured with an Olympus BH optical microscope (Olympus Imaging Corp., Tokyo, Japan) immediately after unloading. Vickers hardness was measured 10 times per sample.
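As a small worked example of the density bookkeeping described above, the sketch below computes relative density and porosity from a measured bulk density; the theoretical density used for α-Al2O3 (≈3.99 g cm−3) is an assumed literature value, not a number taken from this paper.

```python
# Minimal sketch: relative density and porosity of a sintered sample.
# The theoretical density of alpha-Al2O3 (~3.987 g/cm^3) is an assumed
# literature value; replace it with the value used in the actual study.
THEORETICAL_DENSITY = 3.987  # g/cm^3, assumed for alpha-Al2O3

def relative_density(bulk_density_g_cm3, theoretical=THEORETICAL_DENSITY):
    """Relative density in percent: bulk density / theoretical density * 100."""
    return 100.0 * bulk_density_g_cm3 / theoretical

def relative_porosity(bulk_density_g_cm3, theoretical=THEORETICAL_DENSITY):
    """Relative porosity in percent: 100 % minus the relative density."""
    return 100.0 - relative_density(bulk_density_g_cm3, theoretical)

if __name__ == "__main__":
    bulk = 3.864  # g/cm^3, measured bulk density reported in the text
    print(f"relative density: {relative_density(bulk):.1f} %")
    print(f"relative porosity: {relative_porosity(bulk):.1f} %")
```

With the reported bulk density this gives roughly 96.9% relative density and 3.1% porosity, consistent with the values quoted later in the text.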
Fracture toughness was determined after Vickers's indentation. The ratio of the crack length and half of the indentation diagonal (c/a) indicates the crack type, which is used as an indirect indicator of the ceramic toughness [6,[22][23][24]. Care was taken to make indentations only on those areas that had no visible pores. Furthermore, indentation points were randomly chosen over the polished surfaces with a sufficient distance between indentation spots in order not to impact the crack growth of the ceramics during testing. The crack dimensions were not allowed to exceed one-tenth of the thickness of the samples [25].
Although the crack growth of the sintered alumina may be influenced by, e.g., temperature field and thermal stress [26], these effects will not be explored in this research. The research provides measurements that were obtained at ambient temperature.
Corrosion Monitoring of Alumina in Aqueous HNO 3 Solution
The sintered alumina samples were cleaned with alcohol and dried in a sterilizer at 150 ± 5 °C for 4 h. Polypropylene (PP) tubes were marked and filled with 10 cm³ of the adequate concentration of HNO3. The samples were then immersed into the acid solutions and the PP tubes were sealed. The concentrations of HNO3 used in this experiment were 0.50, 1.25 and 2.00 mol dm−3. A static corrosion test was carried out according to the Box-Behnken design at 25, 40 and 55 °C. The factors and design points are shown in Tables 2 and 3. Afterwards, the alumina samples were removed from the tubes, rinsed with distilled water and dried in an oven for 3 h at 150 °C. Finally, the bulk density of the alumina samples after the corrosion test was measured.
During the corrosion testing, the weight of the alumina samples remained unchanged (measured on an analytical balance with a precision of 10−5 g). The mechanisms responsible for the corrosion processes were observed by determining the concentration of ions (Al3+, Ca2+, Fe3+, Mg2+ and Na+ ions) eluted into the corrosive aqueous HNO3 solution. The concentration of eluted ions was determined by ICP-AES (Teledyne Leeman Labs, Hudson, NH, USA). The Si4+ cations were below the quantification limit (LOQ (Si4+) < 0.45 µg/g) [27].
Design of Experiments of Monitoring Alumina Corrosion Resistance
The Box-Behnken design was applied to avoid experiments performed under extreme conditions (vertices of the cube), where unsatisfactory results might occur [19]. The number of experiments also decreased compared to the other designs of RSM [28], which is beneficial in terms of time and other resource limitations (materials and equipment).
According to previous studies [5,7,16,29,30], three factors (input variables) that impact the chemical stability of ceramics in acidic solutions were selected: temperature, concentration and immersion time in HNO 3 . Each factor was varied at three levels with five replicates, which were conducted at the center point (Tables 2 and 3).
Design-Expert® software (version 13) by Stat-Ease Inc. (Minneapolis, MN, USA) was used to model and analyze the causal relationship between the input and output variables and to perform the diagnostic analysis as well. The calculated regression models quantified the effects of temperature, time and corrosive media concentration (aqueous HNO3 solutions) on the alumina density and the amounts of eluted ions. It must be noted that the reported models are applicable only in the range defined by the experimental area (Table 2). Subsequently, six response variables were measured: the density of the investigated alumina ceramics and the amounts of Al3+, Ca2+, Fe3+, Mg2+ and Na+ ions eluted from Al2O3.
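Outside Design-Expert, the same kind of design and model can be reproduced with general-purpose tools; the sketch below generates the coded three-factor Box-Behnken runs (12 edge points plus replicated centre points) and fits a full quadratic response-surface model by ordinary least squares. The placeholder response values and the use of numpy are assumptions made for illustration; they are not the study's data or software.

```python
# Sketch: three-factor Box-Behnken design (coded levels) and a quadratic
# response-surface fit by ordinary least squares. Illustrative only; the
# study itself used Design-Expert.
import itertools
import numpy as np

def box_behnken_3(center_replicates=5):
    """Coded design points for 3 factors: edge midpoints of the cube plus centre runs."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):   # pairs of factors at +/-1
        for a, b in itertools.product((-1, 1), repeat=2):
            point = [0, 0, 0]
            point[i], point[j] = a, b                   # third factor stays at 0
            runs.append(point)
    runs += [[0, 0, 0]] * center_replicates             # replicated centre points
    return np.array(runs, dtype=float)

def quadratic_design_matrix(X):
    """Columns: 1, x1, x2, x3, x1x2, x1x3, x2x3, x1^2, x2^2, x3^2."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = box_behnken_3()                                 # 12 edge points + 5 centre points
y = np.random.default_rng(0).normal(size=len(X))    # placeholder response, e.g. eluted Al3+
coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
fitted = quadratic_design_matrix(X) @ coef
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 of the quadratic model: {r2:.3f}")
```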
Properties of Alumina
The X-ray diffractogram (Figure 1) of the alumina granules showed the presence of the characteristic peaks of the only phase present, α-Al2O3.
During sintering, grains and grain boundaries were formed. The competition between coarsening and densification during the 6 h of sintering led to the formation of grains nonuniform in size (ca. 0.7–8 µm) and orientation. The average grain size was 7.6 µm (Figure 2), which was calculated through the line intercept method [31,32] and is in accordance with the literature data [7].
The measured bulk density was 3.864 ± 0.018 g cm−3, while the relative porosity was 3.1 ± 0.5%. The mechanical properties, such as hardness and fracture toughness, are given in Table 4. Vickers hardness was measured at a load of 9.807 N. Cracks obtained during the hardness measurement indicated the Palmqvist crack system, as the c/a ratio was less than 2.5 [33]. Fracture toughness (KIC, MPa m1/2) was determined according to Casellas [23,24,34,35], where F is the applied load (N), c is the half-length of the crack (m), E is the Young's modulus (GPa), and HV is the Vickers hardness.
Modeling of the Amount of Eluted Ions and Alumina Density
As described, the corrosion test was conducted for sintered alumina to determine their corrosion resistance to three concentrations of HNO 3 in a time frame of up to 10 days at different temperatures.
The results of the analysis of variance (ANOVA) of the obtained data regarding the amounts of eluted ions and Al 2 O 3 sample density showed the statistical significance of each factor. The ANOVA table for the amount of Al 3+ eluted ions is given in Table 5. The regression models explained more than 98% of the total variation of the amount of all eluted ions and more than 83% of the density variation (according to the determination coefficient, R 2 ). The normal probability plots for the eluted ions, as well as the one for alumina density, have a similar behavior compared to the normal plot of the amount of eluted Al 3+ ions, which is shown in Figure 3. The normal plot of the residuals shows that there is no significant and undesirable trend, which indicates a normal distribution of residuals [36,37].
High R2 values and a normal distribution of response residuals demonstrated the adequacy of the obtained models [38]. The experimental data used for the calculation of the regression equations are presented in Table 6. Response surface plots, as graphic representations of the regression models, are shown in Figure 4.
Optimization and Verification of Alumina Ceramics Corrosion Resistance in Nitric Acid
The optimum values of the selected independent variables were obtained using numerical optimization and by analyzing the response surface plots (graphical optimization). Models generated by RSM were verified by conducting experiments at the numerically obtained optimized parameters. Experimentally obtained results and results predicted by the model were compared to evaluate the accuracy and suitability of the model. The applicability of the regression models, listed in Table 7, were tested and confirmed by conducting five verification points. All of the results fall within 95% of the confidence interval of the mean, which also proves that it satisfies the 95% of prediction interval, leading to the conclusion that the models are applicable and useful for predicting the values of the responses. Figure 4A,B,E shows similar response surface plots of the regression models for the amount of eluted Al 3+ , Ca 2+ and Na + ions in HNO 3 during the experiment at a constant concentration of HNO 3 (1.25 mol dm −3 ). Contrary to that, the eluted Fe 3+ , Mg 2+ ions and alumina density ( Figure 4C,D,F) show more convex shaped response surface plots. With the increase of HNO 3 temperature and time, the increase of all eluted ions is evident. However, time is not shown as a statistically significant factor for the regression model, even though in practice, the impact of time is notable [39]. Maximum values of eluted Al 3+ , Ca 2+ and Na + ions in HNO 3 are reached at the highest temperature and longest immersion time in HNO 3 , while the maximum amount of eluted Fe 3+ and Mg 2+ ions, as well as alumina density peaks, are achieved earlier, at lower temperatures. Conclusively, the number of eluted ions from the alumina ceramics obtained from the corrosion experiments are in the following order: Fe 3+ < Mg 2+ < Na + < Al 3+ < Ca 2+ The corrosion resistance of alumina ceramics is influenced by the purity of the material due to the segregation of impurities to the grain boundaries during the sintering process. The presence of SiO 2 , at concentrations above cca 1000 ppm, is detrimental due to the formation of a silicate-rich glassy phase on the grain boundaries, which is easily attacked by mineral acids [2]. Alumina ceramics corrosion is in correlation with their microstructure and distribution of CaO, Fe 2 O 3 , MgO, Na 2 O and SiO 2 in it. The distribution and solubility of CaO, Fe 2 O 3 , MgO, Na 2 O and SiO 2 in alumina ceramics depends on the difference in the charge and ionic radius of Ca 2+ (100 pm), Fe 3+ (64.5 pm), Mg 2+ (72 pm), Na + (102 pm) and Si 4+ (40 pm) compared to the Al 3+ (53.5 pm) cation [40]. When cations are not soluble in the crystal lattice of alumina ceramics, they segregate to the grain boundaries [2,16]. Therefore, impurities in alumina ceramics, such as CaO, Fe 2 O 3 , Na 2 O and SiO 2 and sintering aid MgO, have a low solubility in Al 2 O 3 and move to the grain boundaries during the sintering process, where they segregate. Table 7. Regression equations with coded factors for the number of eluted ions and density of Al 2 O 3 ceramics.
Response Regression Equations
Optimum conditions, to achieve the least possible ion elution and highest alumina ceramics density, were found to be at the very beginning of the experiment (0.50 mol dm −3 HNO 3 , 25 • C, 24 h) with a desirability of 93% ( Figure 5). The value of the 93% desirability function means that 93% of the maximum response value is achieved considering the given constraints and criteria. A confirmatory experiment was conducted with the obtained parameters that are not covered by Box-Behnken design (vertex of the cube) [19]. The actual and predicted values were compared (Table 8) and a small deviation is present. However, the verification shown in Table 9 indicates that the obtained models may be considered adequate for the prediction of the alumina ceramics corrosion resistance optimum.
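The desirability figures quoted above come from combining the individual responses into one score; a minimal sketch of a Derringer-style calculation is shown below. The goals (minimise eluted ions, maximise density) follow the text, but the numeric limits and example values are illustrative assumptions.

```python
# Sketch of a Derringer-style desirability calculation: each response is
# rescaled to [0, 1] and the overall desirability is their geometric mean.
# Goals and limits below are illustrative, not the study's exact settings.
import numpy as np

def d_minimize(y, low, high):
    """Desirability 1 at/below `low`, 0 at/above `high` (e.g. eluted ions)."""
    return float(np.clip((high - y) / (high - low), 0.0, 1.0))

def d_maximize(y, low, high):
    """Desirability 0 at/below `low`, 1 at/above `high` (e.g. density)."""
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Hypothetical responses at one candidate setting (units as in the study).
ds = [
    d_minimize(0.8, low=0.1, high=5.0),    # eluted Al3+ (ug/g), lower is better
    d_minimize(1.2, low=0.1, high=6.0),    # eluted Ca2+ (ug/g), lower is better
    d_maximize(3.86, low=3.70, high=3.90)  # bulk density (g/cm^3), higher is better
]
print(f"overall desirability: {overall_desirability(ds):.2f}")
```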
Furthermore, a second optimum may be defined at 2.00 mol dm −3 HNO 3 , 40 • C, 24 h with a desirability of 87 %, according to the numerical and graphical optimization ( Figure 5). The results of the conducted confirmatory test in the second optimum are within 95 % of the confirmation interval ( Table 9).
The first optimum is to be expected to a certain extent, while the second optimum is probably a consequence of the lower impact of the higher HNO3 concentration at higher temperatures on the investigated alumina. The plateau visible in Figure 5C represents the experimental area that does not satisfy the desirability conditions.
Summary and Conclusions
In this study, the chemical stability of alumina was investigated at 25, 40 and 55 • C and HNO 3 concentrations of 0.50, 1.25 and 2.00 mol dm −3 in a time frame of up to 240 h. The experiment was conducted according to the Box-Behnken design in order to estimate the conditions at which maximum corrosion resistance was achieved.
Regression models showed a higher elution of ions from alumina ceramics at a lower concentration of HNO 3 and higher temperatures with time. Consequently, at the mentioned conditions, lower alumina ceramics density values were measured.
Within the experimental "sample-corrosive media" area, optimum conditions for reaching the highest corrosion resistance, i.e., the lowest number of eluted ions and the highest alumina ceramics density were achieved after the minimum exposure time (24 h) to 0.50 mol dm −3 HNO 3 at 25 • C. Furthermore, a second optimum was present at 2.00 mol dm −3 HNO 3 , 40 • C, 24 h, but with a lower desirability. Lower HNO 3 concentrations at higher temperatures were shown to be more influential on the dissolution of segregated impurities (CaO, Fe 2 O 3 , Na 2 O and SiO 2 ) and sintering aid (MgO) in the grain boundaries of the alumina ceramics than the higher HNO 3 concentrations. | 4,505.2 | 2022-03-31T00:00:00.000 | [
"Materials Science"
] |
Urban Traffic Noise Maps under 3D Complex Building Environments on a Supercomputer
The complexity of 3D buildings and road networks makes the simulation of urban noise both difficult and significant. To address this computational complexity, a systematic methodology for computing urban traffic noise maps under 3D complex building environments is presented on a supercomputer. A parallel algorithm focused on controlling the compute nodes of the supercomputer is designed. Moreover, a rendering method is provided to visualize the noise map. In addition, a strategy for obtaining a real-time dynamic noise map is elaborated. Two efficiency experiments are implemented. One experiment compares the expansibility of the parallel algorithm with various numbers of compute nodes and various computing scales: with an increase in the number of compute nodes, the computing time decreases linearly, and an increased computing scale leads to increased computing efficiency. The other experiment compares the computing speed of a supercomputer and a normal computer; a compute node of Tianhe-2 is found to be six times faster than a normal computer. Finally, the traffic noise suppression effect of buildings is analyzed. It is found that building groups have an obvious shielding effect on traffic noise.
Introduction
With rapid growth of population and the continued expansion of transportation systems, traffic noise pollution becomes quite a nuisance to urban residents.In Europe, the recent Environmental Noise Directive (END) revision has updated the current situation on the application of the END [1] and noise pollution continues to be a major health problem.The growing amount of noise pollution can lead to a number of serious diseases, including sleep disturbance [2,3], stroke [4], male infertility [5], and learning impairment [6].Moreover, long-term exposure to traffic noise can easily lead to high blood pressure [7] and affect the quality of life in the neighborhood [8].
Since END 's prescription for noise maps and action plans, there have been many efforts by the scientific community to conduct high-level study and propose new mitigation systems for the main sources of noise in urban areas: road traffic, railway traffic, airport, and industrial, ranging from regulations to operational approaches [9][10][11].A noise map is an important tool for observing and controlling noise pollution and the basis for epidemiology study relating annoyance to noise and noise exposure in urban areas.In urban planning, a noise map is used to analyze the sound qualities of the soundscapes in a specific urban area to generate recommendations for the urban design of the soundscapes [12].For example, Palma de Mallorca (Spain) conducted a study analyzing various noise mitigation measures, which consider not only the reduction of noise and the number of people who can benefit from these measures but also the net monetary benefits generated, by using a traffic noise map [13].Another study proposed a method for sorting road stretches by priority by using a noise map.The method is based on the so-called "road stretch priority index", in which the index involves a number of variables that are weighted according to their influence on the road traffic noise problem, and road stretches of different priorities thus require different plan actions [14].Similarly, Daniel Naish proposed a regional road traffic noise management strategy (RRTNMS); in this strategy, the road was ranked according to the predicted sound pressure level, and different control measures were implemented according to the results of the ranking [15].In both studies, noise maps play a key role in the prediction of traffic noise and the display of final result.Furthermore, noise impact indicators and the mean of the expected individual annoyance scores in the population are calculated on the basis of a noise map.Those indicators include the percentages of people being highly annoyed, annoyed, and slightly annoyed and are used to compare the effect of noise reduction [16].
The research on noise maps started with the publication of END; since then, many scholars have researched noise maps, covering the calculating model and the update method for a noise map [17][18][19][20][21][22][23].The calculation of a noise map is a complex process that involves a traffic noise prediction model and sound propagation attenuation.Nicolas Fortin et al. implemented a noise prediction method within the Orbis-GIS2 software [24]; the method can produce large noise maps on a personal computer in a few hours, but the noise map is two-dimensional rather than three-dimensional.A smallscale 3D noise map can be implemented well by combining the prediction model and measured data to obtain a highprecision noise map [25].However, since the complexity of the noise map calculation is the square of the computing scale which includes receiver point's density and map size, a largescale 3D noise map requires an excessive amount of time when using a normal computer with a CPU containing no more than 16 computing cores (the normal computer information used in the experiments in this article is described in Section 9).A study in 2009 considered computing a noise map on a supercomputer; in this effort, software called Noise Propagation Model is used to obtain the noise map in a supercomputer [26]; the parallel algorithm assigns tasks at one time rather than using dynamic assignment, and the noise map is two-dimensional.In addition, a traffic noise map was computed in the cloud computing environment [27]; however, in that study, the noise map produced was an existing service in the cloud computing system, and the computational framework was not explained.
In the paper, a method for computing a large-scale noise map in three dimensions on a supercomputer is presented to solve the problem of time-consuming calculation of dynamic three-dimensional noise maps.The supercomputer used in this study is Tianhe-2, and a computation strategy using Tianhe-2 is introduced and its performance is analyzed.The entire noise map calculation process includes a noise prediction model, a parallel calculation algorithm, and a visualization scheme of noise map.Then, combined with taxi trajectory data, a real-time noise map implementation method was proposed to dynamically update the noise map of the entire city.In addition, two experiments were carried out to analyze the efficiency of the parallel algorithm, one is about the expansibility analysis of the noise map parallel computing algorithm proposed in this paper, and the other is the analysis of the computing efficiency of Tianhe-2.Finally, we analyze the suppression effect on traffic noise of buildings from the changes in noise distribution on the ground and building surface caused by building groups.
Traffic Noise Prediction among Building Groups
A noise map is composed of noise receiver points, and it reflects the noise distribution in an urban area. Vehicles running on the road generate noise that propagates across buildings in a city before reaching the receiver points. Therefore, a model for predicting traffic noise has three steps: the first step is to predict the acoustical power of a road, the second step is to predict the sound attenuation caused by the building group in an urban area, and the last step is to combine all of the roads' acoustic power with the sound attenuation caused by the building group to calculate the noise value of the noise receiver point [28].
The factors affecting road noise emissions include traffic volume, road speed, and road surface materials. Among them, the road surface material is an inherent attribute of a road, and a good road surface material can effectively reduce road noise emissions [29,30]. This paper is based on China's road environment. China's roads are basically asphalt pavements, so the impact of the pavement material is a fixed term and is not reflected in the model parameters. As a result, the acoustical power of a road is calculated using (1) and (2), the combined level being L_eq = 10 lg(10^{0.1 L_{eq,1}} + 10^{0.1 L_{eq,2}} + 10^{0.1 L_{eq,3}}), where L_{eq,i} is the equivalent sound level of the i-th type of traffic flow at the receiver point without shelter, L_eq is the equivalent sound level of the road at the receiver point without shelter, L_{0,i} is the average sound level of the i-th type of traffic flow at the reference point, N_i is the traffic flow volume of the i-th type of traffic flow, v_i is the average speed of the i-th type of traffic flow, T is the prediction time, r_0 is the distance between the reference point and the lane (r_0 = 7.5 m), r is the distance between the receiver point and the lane, and θ is the field angle formed by the receiver point, the origin point of the road, and the end point of the road, with the vertex of the angle at the receiver point.
In the model, the sound attenuation caused by the building group has two parts. The first part, A_1, denotes the immediate shelter effect of the building group on the propagation path of the sound; its prediction equation depends on the density of the building group on the propagation path of the sound and on the length of the sound propagation path. The second part, A_2, is the shelter effect of sound diffusion caused by the building group; its prediction equation depends on the ratio of the projection length on the road of the buildings near the road to the road length. In total, the noise value of a receiver point combines the contributions of all roads and the attenuation along those roads, L_p = 10 lg Σ_i 10^{0.1(L_i − A_i)}, where L_p is the equivalent sound level of receiver point p, L_i is the equivalent sound level of road i at the receiver point without shelter, and A_i is the sound attenuation caused by the building group between road i and the receiver point.
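To make the combination step concrete, the sketch below energy-sums per-road contributions at one receiver point after subtracting the building-group attenuation, following the combination formula above. The input values and data layout are illustrative assumptions; in the full model the levels and attenuations come from the prediction equations.

```python
# Sketch: combine the unshielded road levels L_i (dB) with the building-group
# attenuations A_i (dB) at one receiver point by energy summation.
# Inputs are illustrative; in the real model L_i and A_i come from the
# road-power and attenuation equations.
import math

def receiver_level(road_levels_db, attenuations_db):
    """L_p = 10 * lg( sum_i 10^{0.1 * (L_i - A_i)} )."""
    energy = sum(10 ** (0.1 * (L - A))
                 for L, A in zip(road_levels_db, attenuations_db))
    return 10.0 * math.log10(energy)

# Three nearby roads seen from one receiver point (values are made up).
road_levels = [68.0, 62.5, 57.0]   # equivalent sound levels without shelter, dB
attenuations = [4.0, 12.0, 20.0]   # building-group attenuation per road, dB
print(f"receiver level: {receiver_level(road_levels, attenuations):.1f} dB")
```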
Parallel Algorithm in a Supercomputer
Urban areas contain a large number of buildings with complicated layouts and irregular outlines, which increases the computation complexity of a noise map, especially a threedimensional one.However, simplifying the prediction model would cause the noise map to have low prediction accuracy.In this condition, a supercomputer is applied to compute a largescale 3D noise map.In this study, Tianhe-2 is used to compute a noise map in three dimensions.Tianhe-2 was announced in June 2013, with Intel Xeon Phi accelerators contributing more than 85% of the 55pflops peak performance [31].The strong computing ability of Tianhe-2 supports the computation of a large-scale 3D noise map on the basis of its hardware; thus, it is critical to have an appropriate and efficient parallel algorithm.The parallel algorithm presented here can be divided into two sections: one is the map tiling, which is the structural basis of parallel computation, and the other section is the coordination of the compute nodes, which implements the computation of noise map.
Map Tiling.
When the distance between the sound source and the receiver point is sufficiently large, the impact of the sound source on the receiver point is so small that it can be ignored. Therefore, breaking up the noise map into a tiling map composed of many smaller rectangular blocks is a reasonable approach for calculating the noise.
The map is tiled into a set of rectangular blocks that are 200 meters in length and 200 meters in width. In each block, the distance between noise receiver points is always set to 4 m, so there are always 2500 receiver points in a block. While computing the noise of a block, the roads and buildings in a wider area are included so that each receiver point in the block receives all impacts of the roads near it. The range of this area is determined by the number of roads inside it. As shown in Figure 1, when computing block 0, the roads and buildings in the grey area are also brought into the computation. The initial size of the grey area is 600 m × 600 m, and it gradually expands until the number of roads inside it is larger than a threshold.
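A minimal sketch of this tiling step is given below: 200 m blocks with receiver points every 4 m, and a source region that grows from 600 m × 600 m until it contains enough roads. Representing roads by bounding boxes and the growth increment are assumptions made for illustration.

```python
# Sketch of the map-tiling step: 200 m blocks, 4 m receiver spacing, and a
# source region that grows from 600 m x 600 m until it holds enough roads.
# Roads are represented here only by their bounding boxes (an assumption).
BLOCK = 200.0            # block edge length, m
STEP = 4.0               # receiver-point spacing, m
INITIAL_MARGIN = 200.0   # 600 m region = block + 200 m margin on each side
GROW = 100.0             # growth of the region per iteration, m (assumed)

def receiver_points(x0, y0):
    """Regular 4 m grid of receiver points inside one 200 m block."""
    n = int(BLOCK / STEP)
    return [(x0 + i * STEP, y0 + j * STEP) for i in range(n) for j in range(n)]

def source_region(x0, y0, road_boxes, min_roads):
    """Expand the region around block (x0, y0) until it contains >= min_roads roads."""
    margin = INITIAL_MARGIN
    while True:
        xmin, ymin = x0 - margin, y0 - margin
        xmax, ymax = x0 + BLOCK + margin, y0 + BLOCK + margin
        inside = [b for b in road_boxes
                  if b[0] < xmax and b[2] > xmin and b[1] < ymax and b[3] > ymin]
        if len(inside) >= min_roads:
            return (xmin, ymin, xmax, ymax), inside
        margin += GROW

# Example: one block at the origin and a few road bounding boxes (made up).
roads = [(-150, 50, -100, 60), (300, -50, 900, -40), (50, 250, 60, 800)]
region, selected = source_region(0.0, 0.0, roads, min_roads=2)
print(len(receiver_points(0.0, 0.0)), "receiver points per block")
print("source region:", region, "roads used:", len(selected))
```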
Coordination of the Compute Nodes.
Because a supercomputer is composed of a large number of compute nodes and the map is tiled into a set of blocks, one compute node can compute one block at a time.As a result, N blocks can be computed simultaneously if N compute nodes are applied to compute the noise map.The critical point lies in coordinating those compute nodes.In the algorithm, one node is specifically used to control the computation of the other nodes.In the control node, a task list is running to assign tasks to the compute nodes.The control node sends the task number to the compute node, which represents the block index that the compute node is supposed to compute, and the compute node sends the end flag to the control node when the task is finished; the process is shown in Figure 2.
To maximize the computational efficiency and reduce the idle time of the compute nodes, the control logic of the control node should be dynamic: whenever a compute node completes its task, the control node assigns a new task to it. The control logic is shown in Figure 3. First, the control node assigns the first task to each compute node and then enters the loop state. In the loop state, the control node checks whether any compute node has completed its computation (signalled by the end flag from that compute node). Whenever the control node finds an idle compute node, it sends a new task to that node, until all tasks are completed.
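The control logic described above is a classic master-worker pattern. The sketch below expresses it with mpi4py, with rank 0 acting as the control node and handing out block indices on demand; it is an illustrative reconstruction, not the C++ program actually run on Tianhe-2.

```python
# Sketch of the dynamic master-worker scheduling described above (mpi4py).
# Rank 0 is the control node; every other rank computes one block at a time.
from mpi4py import MPI

STOP = -1  # sentinel task id meaning "no more work"

def compute_block(block_id):
    """Placeholder for the per-block noise computation."""
    return block_id

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_blocks = 63  # e.g. the 7 x 9 tiling used in the expansibility experiment

if rank == 0:
    next_task, active = 0, 0
    # Hand out the first task to every worker (or STOP if no work is left).
    for worker in range(1, size):
        if next_task < n_blocks:
            comm.send(next_task, dest=worker)
            next_task += 1
            active += 1
        else:
            comm.send(STOP, dest=worker)
    # Whenever a worker reports completion, send it a new task or the stop flag.
    while active > 0:
        status = MPI.Status()
        comm.recv(source=MPI.ANY_SOURCE, status=status)  # worker's end flag
        if next_task < n_blocks:
            comm.send(next_task, dest=status.Get_source())
            next_task += 1
        else:
            comm.send(STOP, dest=status.Get_source())
            active -= 1
else:
    while True:
        task = comm.recv(source=0)
        if task == STOP:
            break
        compute_block(task)
        comm.send(task, dest=0)  # report completion (the "end flag")
```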
Rendering a 3D Noise Map
After computing a noise map on the supercomputer, the noise data must be rendered to visualize the result.It is difficult to render a large-scale noise map without any simplification process.One of the most convenient and rapid methods of simplification is to partly render the noise map according to the current zoom level.The basis of rendering the noise map is the conversion of a noise value into a color value.
Render in Grading.
The zoom level determines the range of the noise map to be rendered; thus, the noise receiver points required to render the noise map can be selected at a certain distance, and not all of the receiver points must be used in the rendering if the zoom level is small and the rendering range is large. The distance depends on the current zoom level, where d is the distance between the noise receiver points selected to render the noise map, k is the proportionality coefficient, and z is the current zoom level.
In addition, the receiver points include points on building faces; thus, rendering of the noise distribution on buildings is required. The rendering of the noise distribution on buildings depends on the zoom level as well, and buildings are rendered when the zoom level is greater than a given threshold z_0.
Here, L denotes the noise value of a receiver point, which is converted into a colour as described in Section 4.2.
Display of the Rendering of Noise Map.
The traffic noise rendering result is displayed in Figure 4. Figure 4(a) is at a small zoom level that is not in excess of the threshold, so the building is not rendered.Figure 4(b) is at a zoom level that is in excess of the threshold, so the building is rendered.Figure 4(c) shows the noise distribution on the building face in detail.The green area indicates that the area is very quiet, the red area indicates that the level of noise pollution is serious, and the yellow area indicates that the area is at a normal noise level.
Strategy for Obtaining a Dynamic Noise Map
The dynamic noise map can reflect real-time noise pollution and is an extremely effective tool for noise monitoring and for implementing short-term noise prevention plans. The Dynamap project has put forward the aim of realizing a real-time noise map; it focuses on a technical solution able to ease and reduce the cost of noise mapping through an automatic monitoring system [32]. Another project, called NoiseTube, proposes measuring and mapping noise pollution through public mobile phones [33]. This section proposes a method for drawing dynamic noise maps based on taxi trajectory data and supercomputers to effectively solve the problems of acquiring real-time traffic data and of the excessive time consumption of computing noise maps.
Real-Time Traffic Data.
There are many taxis in every city, and their tracking information is owned by the taxi enterprises. This information is available in real time. In this study, the traffic data used to predict noise are calculated from the tracking information of almost 20000 taxis in Guangzhou. The instantaneous speed of each taxi is inferred from its real-time location information; then, the speeds of all taxis on the same road over one hour are averaged to obtain the hourly average speed of the road. Studies have shown that road traffic volume and average road speed have different degrees of influence on traffic noise, and there is a clear correlation between them [34]. To obtain the hourly traffic volume, large amounts of traffic monitoring video were used to determine the relationship between road speed and road traffic volume for different road types, and the hourly traffic volume is determined from the hourly average speed. Figure 5 shows the taxi tracking data in Guangzhou, as visualized by a GIS tool. Through this method, real-time traffic data do not need to be obtained through all-day traffic monitoring, and traffic data for the entire city can be obtained; however, this method is not precise enough to obtain traffic data with high accuracy.
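A minimal sketch of turning map-matched taxi records into hourly average road speeds, and then into an estimated traffic volume, is shown below. The record layout and the speed-to-volume mapping are assumptions for illustration; the paper derives the actual mapping from traffic monitoring video.

```python
# Sketch: aggregate map-matched taxi records into hourly average road speeds.
# Each record is (road_id, hour_of_day, instantaneous_speed_kmh); the record
# layout is an assumption, the paper does not specify its data format.
from collections import defaultdict

def hourly_average_speeds(records):
    """Return {(road_id, hour): mean speed in km/h}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for road_id, hour, speed in records:
        sums[(road_id, hour)] += speed
        counts[(road_id, hour)] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical volume model: hourly traffic volume inferred from average speed
# for a given road type (the real mapping was fitted from monitoring video).
def estimated_volume(avg_speed_kmh, road_type="urban_arterial"):
    base = {"urban_arterial": 1200, "local_street": 400}[road_type]
    return base * max(0.2, min(1.0, avg_speed_kmh / 60.0))

records = [(101, 8, 32.0), (101, 8, 28.5), (101, 9, 45.0), (202, 8, 18.0)]
speeds = hourly_average_speeds(records)
for (road, hour), v in sorted(speeds.items()):
    print(f"road {road}, hour {hour}: {v:.1f} km/h, volume ~ {estimated_volume(v):.0f} veh/h")
```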
Offline Computing.
Computing a traffic noise map for the main urban area of Guangzhou on Tianhe-2 while using 300 compute nodes still requires 22 hours, so an optimization measure is required to increase the computational speed and obtain a dynamic traffic noise map. It is unreasonable to use more compute nodes to realize the dynamic noise map because doing so would be too expensive. Fortunately, considering the computing process, the sound attenuation caused by the building group depends only on the positions of the roads and buildings, according to (4) and (5). Therefore, this attenuation can be computed offline [35].
In the first computation of the noise map, the attenuation is calculated using (4) and (5) and the result is saved in a database. In subsequent computations of the noise map, the attenuation is read from the database rather than recalculated using (4) and (5). The offline computation of the sound attenuation caused by the building group improves the computational efficiency vastly: only approximately 222 seconds are required to compute the traffic noise map of the main urban area of Guangzhou on Tianhe-2 when using 300 compute nodes; thus, there is enough surplus time to generate traffic data from the taxi data.
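The offline computation amounts to memoising the geometry-dependent attenuation terms so that later (e.g. hourly) noise-map updates only reuse them. The sketch below caches the values in SQLite; the table layout, key, and the dummy attenuation function are assumptions made for illustration.

```python
# Sketch of the offline computation: attenuation terms depend only on road and
# building geometry, so they are computed once and cached (here in SQLite).
import sqlite3

def expensive_attenuation(road_id, receiver_id):
    """Placeholder for the geometry-based attenuation of (4) and (5), in dB."""
    return (road_id * 31 + receiver_id) % 25  # dummy value for illustration

class AttenuationCache:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS atten"
                        " (road INTEGER, receiver INTEGER, value REAL,"
                        "  PRIMARY KEY (road, receiver))")

    def get(self, road_id, receiver_id):
        row = self.db.execute("SELECT value FROM atten WHERE road=? AND receiver=?",
                              (road_id, receiver_id)).fetchone()
        if row is not None:
            return row[0]                       # offline result reused
        value = expensive_attenuation(road_id, receiver_id)
        self.db.execute("INSERT INTO atten VALUES (?, ?, ?)",
                        (road_id, receiver_id, value))
        self.db.commit()
        return value

cache = AttenuationCache()
print(cache.get(12, 3401))  # first call: computed and stored
print(cache.get(12, 3401))  # later calls (e.g. each traffic update): read back
```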
Efficiency Analysis
The computation of a large-scale 3D traffic noise map is computationally intensive, and the computing time is determined by the computing efficiency. To evaluate whether the methodology in this paper is feasible, the efficiency is analyzed in this section. The efficiency analysis is divided into two parts: the algorithm and the hardware. The first part concerns the expansibility of the algorithm, and the second part concerns the computing efficiency of the supercomputer.
6.1. Expansibility of the Algorithm. The expansibility of the algorithm refers to the variation of the algorithm's efficiency with a change in the computing resources and the scale of computation. In the experiment, the efficiency of the algorithm is indicated by the computing time, the computing resource is indicated by the number of compute nodes, and the scale of computation is indicated by the step length of the receiver points (for the same map, the smaller the step length, the greater the number of receiver points).
The computing time of a noise map is composed of two parts: one is the time for initialization, which elapses at the start-up of each compute node, and the other is the time for computing the blocks. Because the parallel algorithm keeps every compute node running until all tasks are completed, the computing time of the noise map is close to the running time of each compute node. From the discussion above, after simplification, T = t0 + (M/N)·tb (11), where t0 is the time for initialization of each compute node, tb is the time for computing each block, N is the number of compute nodes, M is the number of blocks, and T is the actual computing time of the noise map. The noise map used to measure the expansibility of the algorithm contains 127 road segments and 102 buildings, and the area is 1332 m × 1785 m; thus, the noise map is tiled into 7 × 9 blocks. The number of compute nodes and the step lengths of the receiver points are varied to measure the expansibility of the algorithm, as indicated in Table 1. In addition, the threshold in map tiling is set as 9 × / .
t0 and tb in (11) are estimated from Table 1 using the least squares method; when different numbers of compute nodes are used, t0 and tb take different values. The fitting results using the least squares method are displayed in Figure 6, where x represents M/N.
In Figure 6, the abscissa is M/N and the ordinate is the computing time T; the step length of the receiver points varies from 10 m to 2 m. In the five regression formulas, the slope represents the time tb for computing each block, and the intercept represents the time t0 required for the initialization of each compute node.
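The slope and intercept estimation described above is an ordinary linear least-squares fit of the timing model T = t0 + (M/N)·tb. A small numpy sketch is shown below; the timing values are made up for illustration and are not the measurements in Table 1.

```python
# Sketch: estimate the per-block time t_b (slope) and initialization time t_0
# (intercept) from measured run times T against x = M/N, by least squares.
# The timing values below are made up for illustration.
import numpy as np

x = np.array([63/1, 63/2, 63/3, 63/5, 63/7])        # blocks per compute node, M/N
T = np.array([1290.0, 662.0, 447.0, 281.0, 208.0])  # measured computing times, s

A = np.column_stack([x, np.ones_like(x)])            # model T = t_b * x + t_0
(t_b, t_0), *_ = np.linalg.lstsq(A, T, rcond=None)
T_hat = A @ np.array([t_b, t_0])
r2 = 1 - np.sum((T - T_hat) ** 2) / np.sum((T - T.mean()) ** 2)

print(f"t_b (per block) = {t_b:.2f} s, t_0 (initialization) = {t_0:.2f} s, R^2 = {r2:.4f}")
```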
Equation (11) reveals that the computing time is composed of two parts: the initialization time t0 and the average computing time tb of each block. The ratio between these two times is correlated to the efficiency of the algorithm, as shown in Table 2 for a computation using 3 compute nodes.
The following three conclusions can be drawn from the experiment implemented above: (a) Equation (11) is fitted quite well using the least squares method, as shown in Figure 6; all coefficients of determination approach 1, indicating that the computing time is well described by (11).
Computing Efficiency of the Supercomputer.
Nearly 22 hours are required to calculate the noise map of the main city zone in Guangzhou on the supercomputer when using 300 compute nodes. Thus, an experiment to determine the computing efficiency of a supercomputer is implemented to illustrate the necessity of using a supercomputer to compute a large-scale noise map.
A normal computer with a quad-core CPU and 16 GB random-access memory is used for comparison with the supercomputer, and the map used in the experiment is the same map that was used in the expansibility experiment.The map is computed using one, two, and three compute nodes in the normal computer and in the supercomputer, and the result is shown in Table 3.
From Table 3, it is evident that the supercomputer is six times faster than a normal computer in terms of computation speed. As a result, it would take more than 132000 hours to calculate the same noise map of the main city zone in Guangzhou on a normal computer with a quad-core CPU, because one core is used to control the other cores. Thus, a supercomputer must be used to compute a large-scale noise map.
Inhibitory Effect of Building Groups on Traffic Noise
Building groups are the main barriers to traffic noise in urban areas; thus, a scene is chosen to reflect the variation of the sound field distribution caused by building groups. The scene is shown in Figure 7, with 1034 road segments and 406 buildings. In order to reveal the effect of the building groups, a noise map of the scene without buildings is computed; then, a difference map between the noise map with buildings and the noise map without buildings is rendered, as shown in Figure 8. Nine points are selected to accurately show the inhibitory effect of the building groups, as listed in Table 4. From Figure 8, it is obvious that building groups have a significant inhibitory effect on traffic noise; the depression of the noise pressure level ranges from 0 dB to 40 dB. In combination with Figure 7, it can be found that the depression of the noise pressure level is determined by the density of the building groups: the denser the buildings, the greater the noise attenuation value.
Building groups not only have an inhibitory effect on noise at ground level but also on noise at high buildings. Figure 9 shows the noise distribution on the building surface; the black lines are noise pressure level contours. According to the distance attenuation principle of sound, the noise level at the upper floors of a building should be less than the noise level at the lower floors; however, in Figure 9, the noise level at the upper floors is higher than that at the lower floors in many buildings. This is because the noise at the lower floors of a building is blocked by other buildings, whereas the noise at the upper floors is not blocked. The noise distribution on the surface of the buildings marked with asterisks is shown in Table 5. Each column in Table 5 represents one floor of the building. It can be seen from Table 5 that the noise value increases rather than decreases as the height increases.
Conclusion
A systematic methodology for computing a large-scale 3D traffic noise map on a supercomputer was proposed in the paper.The method covers the noise prediction model and the parallel computation algorithm used on a supercomputer.It offers a highly efficient and feasible approach for computing a large-scale 3D noise map.Then, to visualize the computing result, a rendering method is developed to display the rendering result of the traffic noise pollution in an urban city.Moreover, the strategy of obtaining a dynamic noise map was presented and can generate a real-time noise map in theory.
Two experiments were implemented to analyze the efficiency.The first one considered the expansibility of the parallel algorithm.When the computing scale is fixed, the computing time is positively correlated with the ratio between the number of computing blocks and the number of compute nodes: with an increase in the number of compute nodes, the computing time decreases linearly.When the computing node is fixed, with an increase in the computing scale, the ratio between the computing time and the time required for initialization remains about the same; i.e., the algorithm achieves high usage of the compute nodes.The second experiment aimed to measure the computational speed of a supercomputer.The result showed that supercomputer Tianhe-2 is six times faster than a normal computer in terms of computing speed.Therefore, it is necessary to use a supercomputer to compute a large-scale noise map.
In addition, a scenario was chosen to analyze the impact of building groups on traffic noise.The results showed that buildings have a significant inhibitory effect on traffic noise, and due to the sheltering effect of buildings, the noise pressure level at the lower floor is lower than the upper floor in some buildings.
Software Availability
All software used in the paper can be requested from the corresponding author by email and will be provided free of charge. The program for the parallel computation of the noise map is coded in C++ and runs on a supercomputer or a multicore computer, and the application for visualization of the computing results is coded in C#. The supercomputer used in the paper is Tianhe-2, and a normal computer with a quad-core CPU (Intel Core i5, 3.30 GHz) and 16 GB of RAM is used for comparison with the supercomputer in Section 6.2. The dataset for testing the parallel algorithm can be provided as an XML file, and the dataset for visualizing the noise map can be provided as text files. All files are available from the corresponding author as well.
Figure 1: Take eight neighborhoods of block 0 into the computation.
Figure 2: The process of parallel control on a supercomputer.
Figure 3: Control logic of the control node on a supercomputer.
4.2. Noise Conversion to Color. The noise map is composed of a variety of receiver points. Each receiver point has a noise value after computing, and visualization requires the conversion from noise to color. Color is expressed by an RGB vector: a noise value larger than 80 dB corresponds to red, whose RGB vector is (1, 0, 0); a noise value less than 40 dB corresponds to green, whose RGB vector is (0, 1, 0); and a noise value between 40 dB and 80 dB is mapped to an intermediate color by interpolating the R and G components.
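A minimal sketch of that conversion is given below. The end-point colours follow the rule just described (red at and above 80 dB, green at and below 40 dB); the linear interpolation used for intermediate values is an assumption, since the exact formula is not reproduced in the text.

```python
# Sketch: convert a receiver-point noise value (dB) into an RGB vector.
# >= 80 dB -> red (1, 0, 0); <= 40 dB -> green (0, 1, 0); in between the colour
# is linearly interpolated (the interpolation rule itself is an assumption).
def noise_to_rgb(level_db):
    if level_db >= 80.0:
        return (1.0, 0.0, 0.0)
    if level_db <= 40.0:
        return (0.0, 1.0, 0.0)
    t = (level_db - 40.0) / 40.0     # 0 at 40 dB, 1 at 80 dB
    return (t, 1.0 - t, 0.0)         # fades from green to red through yellow

for level in (35, 55, 65, 85):
    print(level, "dB ->", noise_to_rgb(level))
```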
Figure 4: 3D noise map calculated by a supercomputer.
Figure 5: Real-time taxi tracking data in Guangzhou.
Figure 8: Difference map between the noise maps with and without buildings.
Table 2: Ratio between the computing time and the initialization time.
Table 4: Building attenuation of traffic noise.
Table 5: Noise distribution on a building surface. | 6,538 | 2018-07-09T00:00:00.000 | [
"Computer Science"
] |
Development of a novel fibre optic beam profile and dose monitor for very high energy electron radiotherapy at ultrahigh dose rates
Objective. Very high energy electrons (VHEE) in the range of 50–250 MeV are of interest for treating deep-seated tumours with FLASH radiotherapy (RT). This approach offers favourable dose distributions and the ability to deliver ultra-high dose rates (UHDR) efficiently. To make VHEE-based FLASH treatment clinically viable, a novel beam monitoring technology is explored as an alternative to transmission ionisation monitor chambers, which have non-linear responses at UHDR. This study introduces the fibre optic flash monitor (FOFM), which consists of an array of silica optical fibre-based Cherenkov sensors with a photodetector for signal readout. Approach. Experiments were conducted at the CLEAR facility at CERN using 200 MeV and 160 MeV electrons to assess the FOFM’s response linearity to UHDR (characterised with radiochromic films) required for FLASH radiotherapy. Beam profile measurements made on the FOFM were compared to those using radiochromic film and scintillating yttrium aluminium garnet (YAG) screens. Main results. A range of photodetectors were evaluated, with a complementary-metal-oxide-semiconductor (CMOS) camera being the most suitable choice for this monitor. The FOFM demonstrated excellent response linearity from 0.9 Gy/pulse to 57.4 Gy/pulse (R 2 = 0.999). Furthermore, it did not exhibit any significant dependence on the energy between 160 MeV and 200 MeV nor the instantaneous dose rate. Gaussian fits applied to vertical beam profile measurements indicated that the FOFM could accurately provide pulse-by-pulse beam size measurements, agreeing within the error range of radiochromic film and YAG screen measurements, respectively. Significance. The FOFM proves to be a promising solution for real-time beam profile and dose monitoring for UHDR VHEE beams, with a linear response in the UHDR regime. Additionally it can perform pulse-by-pulse beam size measurements, a feature currently lacking in transmission ionisation monitor chambers, which may become crucial for implementing FLASH radiotherapy and its associated quality assurance requirements.
Introduction
Treatment of deep-seated tumours (>5 cm) with electrons is expected to become possible in the coming years with the use of very high energy electron (VHEE) beams. Electrons with energies in the range 50-250 MeV could be used as a new radiotherapy modality due to recent advances in accelerator technology, such as the high-gradient x-band radiofrequency (RF) electron acceleration cavities developed as part of the CLIC study at CERN (Zha and Grudiev 2017), as well as novel compact C-band RF systems (Faillace et al 2022). It is therefore possible to attain the higher energies required for electrons to reach deep-seated tumours with machines that could feasibly be located on a hospital campus. The main advantages of this modality are the increased penetration to depths >20 cm into the body and the sharper lateral penumbra in comparison to current clinical electron beams (DesRosiers et al 2000). In addition, the reduced scattering within the inhomogeneous tissues in the patient, due to the high relativistic inertia of the electrons at these energies (Lagzda et al 2020), allows the possibility of treating deep-seated tumours through the use of pencil beam scanning (PBS) (Bazalova-Carter et al 2015, Schuler et al 2017, Muscato et al 2023) and focussing (Kokurewicz et al 2019, Kokurewicz et al 2021, Whitmore et al 2021). Furthermore, these intensity-modulated treatments have the potential for greater precision and conformity than is possible with photons. The higher beam intensities achievable with the proposed technology for VHEE make it an attractive choice for the delivery of ultrahigh dose rate (UHDR) radiotherapy. In fact, it is technologically easier to generate high-intensity electron beams in comparison to both UHDR MV photons and UHDR protons utilising the Bragg peak. For photons the bremsstrahlung production rate for x-ray targets is a limiting factor (Montay-Gruel et al 2022), while for protons the degraders required to generate a spread-out Bragg peak reduce the beam intensity (Jolly et al 2020). The use of VHEE beams allows a higher dose to be delivered over larger areas of tumorous tissue and hence is a promising candidate for potentially being able to elicit the FLASH effect in deep-seated tumours (Böhlen et al 2021).
The FLASH effect is a fairly recently discovered phenomenon observed in numerous in vivo radiobiological and animal studies which have demonstrated that delivering the prescribed dose of radiation at UHDR causes less damage to healthy tissue than when it is delivered at conventional dose rates, while still maintaining the same tumour control efficacy (Favaudon et al 2014, Loo et al 2017, Montay-Gruel et al 2017, Montay-Gruel et al 2019, Vozenin et al 2019).Determining the dose delivery parameters at which the FLASH effect is observable is still a huge area of research.Current preclinical data seem to suggest that delivering doses in excess of 10 Gy within a total delivery time of <500 ms, at a mean dose rates of >40 Gy s −1 , are reasonable values to observe a significant FLASH effect (Wilson et al 2020, Montay-Gruel et al 2021, Rothwell et al 2021, Vozenin et al 2022).While the delivery of VHEE at UHDR to elicit the FLASH effect offers a promising new paradigm in radiotherapy, many challenges still need to be overcome to consider this modality feasible for translation at the clinical level.
Perhaps one of the most important technological challenges for the clinical translation of FLASH radiotherapy is related to the difficulty of real-time dosimetry and beam monitoring at UHDR.This has been extensively reported in the literature (Romano et al 2022), and is largely due to the fact that for UHDR beamsmore specifically ultrahigh dose-per-pulse beams-ionisation chambers exhibit large non-linearities in the form of recombination effects caused by the high charge density in each pulse of radiation (Petersson et al 2017).This effect is much more prominent for modalities where the delivery of the radiation is from a pulsed linear accelerator, for example VHEE, since the instantaneous dose rate within each pulse is extremely high to obtain the required mean dose rates.The presence of this effect has already been demonstrated with VHEE and characterised with two separate ionisation chambers, namely the PTW Advanced Markus Chamber (McManus et al 2020) and the PTW Roos Chamber (Poppinga et al 2020).
Extensive work has been done to calculate corrections for the non-linearities in the response of ionisation chambers at UHDR, however these correction factors introduce large uncertainties into the dose measurements in the UHDR regime.Furthermore, the research into correcting the response of ionisation chambers at UHDR has primarily been focussed on parallel plate ionisation chambers for secondary standard dosimetry, as opposed to large area transmission ionisation chambers, used for beam monitoring, for which only limited studies have been conducted (Konradsson et al 2020).
Therefore, alternative dosimetry technologies need to be investigated for such modalities. The ideal characteristics for online beam monitoring technologies to be used for UHDR RT are (Romano et al 2022):
• a large dynamic intensity range in order to deal with beam intensities from conventional RT dose rates up to the UHDR regime;
• a high temporal resolution in order to resolve pulses, to determine the instantaneous dose rate, and to provide a feedback signal for the accelerator's safety interlock;
• a high spatial resolution to provide beam position and profile measurements;
• a large sensitive area to be able to monitor the beam over the maximal extent of the clinical treatment fields, which can vary from less than one centimetre up to tens of centimetres;
• a high radiation tolerance, since these detectors are a permanent fixture in the machine;
• a high level of beam transparency in order to minimise perturbations to the beam characteristics.
In radiation therapy scintillation fibre dosimetry, Cherenkov radiation is often considered a source of contamination to the scintillation signal. However, direct detection of the Cherenkov signal using optical fibres could be particularly beneficial within UHDR dosimetry, since the increased beam intensity associated with this modality favours the detection of the optical photons produced by Cherenkov radiation (Ashraf et al 2020). Furthermore, Cherenkov light is produced instantaneously, on a timescale of 10⁻¹² s, following the interaction between the charged particle and the dielectric medium, hence making it an ideal method for radiation detection at the fast time scales required for UHDR RT. Since the electrons in VHEE beams are relativistic, the relative variation of the angle of emission of Cherenkov radiation is minimal within the VHEE energy range (0.12% difference between 50 and 250 MeV), and therefore a Cherenkov-based detector could be well suited to VHEE dosimetry. A novel detector consisting of an array of optical fibre-based Cherenkov sensors connected to a photodetector is proposed as an alternative technology for UHDR online monitoring of the beam profile and dose, particularly for use with VHEE beams. In this work the development of such a detector, called the fibre optic flash monitor (FOFM), is outlined along with the initial characterisation of the first array prototype with 160 MeV and 200 MeV electron pencil beams at the CLEAR facility.
CLEAR facility
The CERN Linear Electron Accelerator for Research (CLEAR) is a user facility at CERN that is independent from the main accelerator complex.It produces electron pulses with energies between 60 and 220 MeV and values of the charge-per-pulse ranging from 10 pC to a maximum achievable value of approximately 75 nC.The charge is measured using ICTs at different positions along the linear accelerator.The pulses (also called trains) are delivered at a repetition frequency (p.r.f.) that can be varied between 0.833 Hz (nominal) and 10 Hz and have a width with values in the range, approximately, from 0.1 ps (single bunch) to 150 ns (Sjobak et al 2019).Each train is sub-divided in shorter bunches, of order of picoseconds duration, spaced at a frequency of either 1.5 GHz or 3 GHz.The pulse structure of the CLEAR electron beam is shown in figure 1.The main focus of the CLEAR facility operation is on general accelerator R&D and component studies for existing and possible future accelerator applications.This includes studies of methods for high-gradient acceleration and prototyping and validation of accelerator components for beam diagnostics.The facility is also used to study radiation damage to electronics and for irradiations for medical applications.Indeed research in medical applications at CLEAR has gained significant interest in recent years thanks to the facility's unique capability to deliver electron beams whose parameters (energy and intensity) are in the range required to study the feasibility of VHEE and UHDR RT (Korysko et al 2023a).At present numerous studies have been conducted, and many are still ongoing, to research the possible mechanisms behind the FLASH effect; as well as to test the feasibility of novel technologies for the delivery, characterisation, dosimetry and monitoring of these unconventional beams.The results of some of the experiments on the latter are described in this work.
2.2. Silica optical fibre monitor
2.2.1. Silica optical fibres
The FOFM beam dose and profile monitor consists of two main components: the fibre optic radiation sensors (FORS), where the Cherenkov signal is produced upon the delivery of the radiation, and a photodetector to measure the optical Cherenkov photons transported within the silica optical fibre. The initial tests conducted as part of the development of the FOFM were carried out using only a single fibre, specifically a 200 μm core diameter, 0.5 numerical aperture, step-index multimode silica optical fibre (Thorlabs GmbH, Munich, Germany), 20 cm long, coupled to photodetectors either directly (figure 2(a)) or through a 1.2 m long transport fibre (figure 2(b)). Subsequent measurements involved an array of 28 fused silica optical fibres, 0.4 mm in diameter and 30 cm in length (Hilgenberg GmbH, Malsfeld, Germany); the layout of the setup is shown in figure 3.
Photodetectors
Various photodetectors were tested in different configurations.
Specifically, the sensitive region of the single silica optical fibre was coupled:
• directly to a SiPM (Hamamatsu Photonics, Shizuoka, Japan), in turn connected to a digitiser;
• directly to a fast photomultiplier tube (PMT) with a neutral density ND = 6.0 optical filter (1 × 10⁻⁴ % transmission) (Thorlabs GmbH, Munich, Germany);
• to a charge-coupled device (CCD) 6 MP camera (Retiga R6, Teledyne Photometrics, Thousand Oaks, US) housed within a black box, through a 1.2 m long transport fibre. In this case the sensitive region was positioned at 46° with respect to the direction of the electron beam. The CCD camera was operated at 14-bit pixel depth, which has a 7 frames s⁻¹ acquisition rate, with an exposure time of 1200 ms.
For the FOFM fused silica optical fibre array, a 2.3 MP Basler ace complementary metal-oxide-semiconductor (CMOS) camera (Basler AG, Ahrensburg, Germany) with a Fujinon H525HA-1B 1:1.4/25 mm lens (Fujifilm Corporation, Tokyo, Japan) was used as the photodetector. The CMOS camera was operated at 8-bit pixel depth, at an acquisition rate of 42 frames s⁻¹ and a 0.034 ms exposure time. Before any data analysis was performed, background subtraction and noise removal were applied to the raw recorded data.
Radiochromic film dosimetry
Gafchromic EBT3, EBT-XD, MD-V3 and HD-V2 films (Ashland Inc., Bridgewater, NJ, USA) were used for radiochromic film dosimetry since they are well suited for the UHDR regime (Jaccard et al 2017).Specifically, for the measurements involving the response of the single optical fibre, dose-to-water calibrations were carried out by irradiating Gafchromic EBT3 and MD-V3 films to the same beam conditions as the optical fibre.Direct doseto-water calibrations for the response of the FOFM optical fibre array were performed using EBT-XD films, while the FOFM beam profile measurement comparisons in air were conducted using HD-V2 and MD-V3 films.
Each of the radiochromic films was scanned using an Epson Perfection V800 Photo scanner (Epson, Long Beach, US) at least 24 h after irradiation.Dose-to-water calibrations for all of the film types were obtained with a 5.5 MeV electron beam at a dose rate of 0.05 Gy s −1 , at a depth of 10 mm in solid water phantom with an SSD of 100 cm on the Oriatron eRT6 at CHUV Hospital, Lausanne, Switzerland (Jaccard et al 2018).For each point on the calibration curve the dose was measured using a PTW Advanced Markus Ionisation Chamber (PTW, Freiburg, Germany).
A custom Python script was used for the calibration and analysis of radiochromic films, which followed the single channel procedure defined in Micke et al (2011).Calibrations were performed with low energy electron beams since no standardised procedure currently exists for VHEE beams.The reported agreement between radiochromic film measurements and Monte Carlo simulations made with 165 MeV electrons for radiochromic films calibrated to clinical 20 MeV electron beams (Subiel et al 2014), as well as for 156 MeV electron beams at CLEAR calibrated to clinical 15 MeV electron beams (Lagzda et al 2020, Böhlen et al 2021), demonstrates the feasibility of using radiochromic films calibrated using low energy electron beams until it is possible to carry out this procedure with VHEE beams.
It must be pointed out however that, in other modalities, energy dependence (notably for kVp x-rays) (Chan et al 2023, Guan et al 2023a) and LET dependence (seen in protons) (Guan et al 2023b) in radiochromic films have been observed.A dose rate dependency has also been reported when using UHDR protons (Villoing et al 2022).Studies are currently ongoing at the CLEAR facility to validate and confirm any dose rate and energy dependence, using various passive dosimeters such as alanine and TLDs.
To obtain the dose-to-water calibrations for the single fibre response, the radiochromic film was positioned at a depth of 150 mm in a water phantom using the CLEAR C-robot (Korysko et al 2023b).During the film analysis a 2D Gaussian fit was applied to the dose distribution on the film and the dose was obtained by averaging the values within a circle of radius 5 mm centred on the mean of the Gaussian fit for the green channel.For the FOFM optical fibre array profile comparison, the projection of the y-axis profile is plotted, and a 1D Gaussian fit is applied whereby the standard deviation of this Gaussian fit is obtained and compared to the standard deviation of the Gaussian fit applied to the FOFM beam profile measurements.
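The film analysis step just described can be sketched as follows. This is a minimal illustration rather than the authors' actual script: it assumes the scanned green channel has already been converted to a calibrated 2D dose map, and the function and parameter names (`film_dose`, `px_mm`) are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """2D Gaussian surface, flattened for curve_fit."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2) / (2 * sx ** 2)
                     - ((y - y0) ** 2) / (2 * sy ** 2)) + offset
    return g.ravel()

def film_dose(dose_map, px_mm, radius_mm=5.0):
    """Fit a 2D Gaussian to a calibrated dose map (green channel) and average
    the dose inside a circle of radius_mm centred on the fitted mean."""
    ny, nx = dose_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = (dose_map.max(), nx / 2, ny / 2, nx / 4, ny / 4, 0.0)
    popt, _ = curve_fit(gauss2d, (x, y), dose_map.ravel(), p0=p0)
    x0, y0 = popt[1], popt[2]
    mask = (x - x0) ** 2 + (y - y0) ** 2 <= (radius_mm / px_mm) ** 2
    return dose_map[mask].mean(), dose_map[mask].std()
```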
Experimental setup
The single-fibre and 28-fibres array sensors were installed on the in-air test stand at the CLEAR facility, according to the schematic layout shown in figure 4 (with the exception of the single fibre case and CCD camera as photodetector, where the sensitive region of the optical fibre was oriented at 46°and had a transport fibre leading to the CCD).The ICT is used to measure the charge of each electron pulse.The C-Robot (Korysko et al 2023b) is used to pick up both radiochromic films and the scintillating yttrium aluminium garnet (YAG) screen in order to position them at different longitudinal positions in the beam path, either in air or in a water phantom.The setup described was used both for dose-to-water calibrations for radiochromic films, and for beam profile measurement comparisons in air using both radiochromic films and the scintillating YAG screen.
Photodetector evaluation measurements
With the single optical fibre and photodetectors assembled as shown in figure 2, measurements were taken for multiple consecutive pulses, specifically 100 pulses when using the SiPM or PMT as photodetector and 20 pulses for the CCD camera. The mean amplitude of the signal, recorded on a digitiser, was computed, and the response of the optical fibre and photodetector at each pulse charge was obtained by integrating the mean signal in a time window of 400 ns. To check the linearity of the detector, the response was measured as a function of the charge per pulse. The increase of charge (and hence of the dose) per pulse was achieved by operating at the maximum stable bunch charge and then enlarging the pulse width by incrementing the number of bunches. The DPP values came from a corresponding charge-to-dose ratio obtained by irradiating radiochromic film with the same pulse charges to which the optical fibre was exposed. A linear regression fit is then performed to describe the behaviour of the response of the fibre as a function of DPP. The R² value, the reduced chi-squared χ²ν (also called the mean squared weighted deviation) and the p-value of the chi-squared, P(χ²), are computed and indicated in the relevant figure captions.
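A minimal sketch of this analysis chain is given below, assuming the digitiser traces are available as arrays sampled at a known interval; the window placement, function names and per-point uncertainty model are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from scipy import stats

def fibre_response(traces, dt_ns, window_ns=400.0):
    """Average the repeated digitiser traces and integrate over a fixed window."""
    mean_trace = np.mean(traces, axis=0)          # mean over consecutive pulses
    n = int(window_ns / dt_ns)
    return np.sum(mean_trace[:n]) * dt_ns         # integral over the 400 ns window

def linearity_fit(dpp, response, sigma):
    """Straight-line fit of response vs dose-per-pulse with the quoted statistics."""
    dpp, resp = np.asarray(dpp), np.asarray(response)
    slope, intercept, r, _, _ = stats.linregress(dpp, resp)
    model = slope * dpp + intercept
    chi2 = np.sum(((resp - model) / sigma) ** 2)
    ndof = len(dpp) - 2
    return {"R2": r ** 2,
            "chi2_red": chi2 / ndof,              # reduced chi-squared
            "p_chi2": stats.chi2.sf(chi2, ndof)}  # p-value of the chi-squared
```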
SiPM setup
The measurements using the SiPM as photodetector covered a range of pulse widths between 6.7 and 67 ns, corresponding to 10 bunches at 1.5 GHz and to 200 bunches at 3 GHz spacing, respectively. The mean signal amplitude is shown in figure 5(a) and the response as a function of DPP in figure 5(b). The response of the optical fibre connected to the SiPM appears to be linear up to an equivalent dose per pulse of 38 Gy/pulse (65 ns pulse width with 3 GHz bunch spacing in order to reach the upper DPP range), with an R² value of 0.994 for the fit. The response of the SiPM photodetector shows saturation above 50 bunches on the response traces shown in figure 5(a). Therefore the apparent linearity is likely to come from an increase in the area under the trace caused by the increase in pulse length, and hence signal duration, rather than from an increase in the signal amplitude. An error bar of 5% was determined for the EBT3 and MD-V3 DPP dose-to-water calibration, obtained from the intrinsic uncertainty of the films, along with the pulse-by-pulse charge fluctuations and the standard deviation upon calculating the mean dose around the maximum of the Gaussian fit. Uncertainties in the fibre output were calculated from the standard deviation of the repeated measurements for each DPP; however, these were negligible for each point for both the SiPM and PMT readout.
PMT setup
The results obtained using the PMT as photodetector are shown in figures 6(a) and (b).In this case a range of pulse widths between 0.7 and 67 ns was used.Like the SiPM case, the signal amplitude saturates when the number of bunches is greater than 30, as it can be seen in figure 6(a).The response shown in figure 6(b) exhibits only approximate linearity up to a maximum achievable dose per pulse of 34 Gy/pulse with the linear fit having an R 2 value of 0.984.Several factors might contribute to the observed deviations from linearity.Saturation of the PMT is the likely cause of the onset of this deviation at about 10 Gy (30 bunches) and above.Subsequently, at longer pulse widths, the relation between the DPP and pulse width changes due to beam losses.This then decreases the instantaneous dose rate within the pulse and it appears as a larger integrated signal in figure 6(b), since the dose is delivered over a longer duration.In this case as well, like with the SiPM, the apparent linearity of the response to increasing DPP's will be from a greater integrated signal from the widening of the pulse duration.
CCD camera photodetector setup
The setup of the experiment was modified slightly when using the optical fibre and a CCD camera as photodetector, as can be seen in figure 2(b). A longer optical fibre (20 cm sensitive region, 1.2 m transport region) was used, and the CCD camera was positioned in a light-tight 'black' box behind the beam dump. Since the photodetector was no longer in the beam, contrary to the SiPM and PMT cases, there was space to mount the single fibre on a motorised stage with vertical movement, and this made possible a measurement of the transverse profile of the beam. For each value of the beam charge (corresponding to a specific dose-per-pulse value) the optical fibre was moved vertically over the beam position at intervals of 1 mm between −5 and +5 mm with respect to the nominal centre (y = 0). For each pulse the mean value of the pixels over the area of the optical fibre was recorded. Pulse widths in the range of 0.7-57 ns were used. The response of the detector can be seen to be linear up to a dose per pulse of 39 Gy/pulse, with R² values of 0.985 and 0.998 for the summed response plot and the integral of the Gaussian plot, respectively. The response at DPP's of less than 5 Gy/pulse suffered from a low signal-to-noise ratio (SNR). A better linear behaviour seems to be observed when using the integral of the Gaussian fit to the vertical profile for each DPP as the detector response; however, this method has larger error bars for each point due to uncertainties on the Gaussian fit, therefore making it a less accurate method for determining the response of the fibre optic monitor. The differences between the two methods could perhaps be explained by the fact that the low intensity outer tails of the Gaussian beam are not efficiently detected due to the low SNR of the CCD camera. This under-response, however, is compensated for when using the integral of the Gaussian fit.
From the standard deviation of the Gaussian fit to the vertical projection of the profile of the beam measured with the silica fibre and CCD camera for a pulse charge of 10 nC, (shown in figure 7(a)) the beam size was estimated to be 1.73 ± 0.09 mm.This can be compared to and is consistent with the value (1.83 ± 0.06 mm) measured on a YAG screen just behind the fibre installation.
Optical fibre array measurements
To evaluate the performance of the FOFM optical fibre array for VHEE UHDR real-time beam monitoring using the setup shown in figure 3, a range of measurements were carried out with both 160 MeV and 200 MeV electrons.The investigation involved studies of the linearity of the response as function of the dose-per-pulse, using radiochromic films in water as reference as well as measurements of the beam profile, which was compared with that obtained using a scintillating YAG screen and radiochromic film.
Response linearity measurements
To evaluate the linearity of the response, at a given DPP, the sum of the pixel value from each individual fibre captured by the CMOS camera, such as that shown in figure 8(a), was measured.The results obtained with 200 MeV electrons and charge between 3.2 and 50.7 nC/pulse are shown in figure 8(b).Pulse widths in the range 9-105 ns were used, with a micropulse structure of 400 pC/bunch.For these measurements, the DPP varied from 4.3 to 39.0 Gy/pulse and the whole range was covered using the same CMOS camera settings, namely a normalised gain of 0.7.The data show that the FOFM exhibits a linear response over the entire range of DPP's, with the linear fit having an R 2 value of 0.996.Once more, an error bar of 5% was determined for the EBT-XD DPP dose-to-water measurements.There was a variation of about 1% between each shot for the same DPP, therefore error bars in the fibre output are not visible on the plots.
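The per-pulse response described above (summing the pixel values of each fibre in the background-subtracted CMOS frame) can be sketched as below; the rectangular region-of-interest representation and all names are assumptions for illustration.

```python
import numpy as np

def fofm_response(frame, background, fibre_rois):
    """Sum of background-subtracted pixel values in each fibre's region of
    interest; the total over all fibres is the array response for one pulse."""
    clean = np.clip(frame.astype(float) - background, 0.0, None)
    per_fibre = np.array([clean[r0:r1, c0:c1].sum()
                          for (r0, r1, c0, c1) in fibre_rois])
    return per_fibre, per_fibre.sum()
```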
In order to extend the range of DPP's to values well below 5 Gy/pulse, two separate CMOS data acquisition (DAQ) settings were used for the low intensity and high intensity measurements. The response measured with the nominal pulse structure (400 pC/bunch and pulse widths of 0.7-76 ns) is shown in figure 10(a). The response for 200 MeV and 160 MeV can be seen to be linear and of similar gradient, the values for which are displayed in table 1. The R² value is 0.997 for the 200 MeV measurement and 0.999 for the 160 MeV measurement. In figure 10(a) a constant offset between the 200 MeV and 160 MeV responses can be seen. This can be attributed to the difference in the percentage depth dose distribution for the two VHEE energies, and the constant position of the EBT-XD film, since the 160 MeV VHEE beam has an earlier fall-off; hence for the same pulse charge (and therefore the same fibre output), a smaller dose is deposited at the depth of the EBT-XD film (150 mm in water). The instantaneous dose rate, i.e. the dose rate within each pulse of radiation, could potentially be a relevant parameter to elicit the FLASH effect. Therefore, it is important to establish that variations of this parameter do not affect the response of the beam monitor for the same DPP. For this purpose, the DPP response linearity measurements at 200 MeV and 160 MeV were repeated with two different pulse structures. It should be observed, however, that even with the same pulse width and charge, the dose rate within the pulse can vary. This variation is due to beam loading effects, which are present for longer pulse widths and decrease the intensity of the later bunches in these longer trains. Since the charge per bunch is only measured for a single bunch per pulse, a range of instantaneous dose rates is given for each pulse structure. The first structure used was the same as for the results presented so far, i.e. the nominal and maximum charge per bunch of 400 pC, corresponding to an instantaneous dose rate range of 0.3-1 × 10⁹ Gy s⁻¹. The second bunch structure had a significantly lower charge per bunch, 75 pC, and hence a longer pulse width for the same DPP, corresponding to an instantaneous dose rate of 0.9-2.6 × 10⁸ Gy s⁻¹. The results obtained with this second bunch structure are shown in figure 10(b) for both 200 MeV and 160 MeV, with corresponding pulse widths of 5-130 ns. For comparison, the straight lines resulting from the fit to the linear dependence of the response, obtained from the data with the nominal charge per bunch shown in figure 10(a), are superimposed. The data displayed in figure 10(b) show that the linearity of the response with DPP is similar for both 200 MeV and 160 MeV energies, and for the 400 pC/bunch and 75 pC/bunch pulse structures. The values for the gradients and the standard error of the gradient for these measurements are shown in table 1, where all gradients are in agreement within their respective standard errors.
Beam profile measurements
In order to reconstruct the profile of the beam measured by the FOFM, the pixel value of each of the individual fibres recorded by the CMOS camera was plotted against their corresponding vertical position.For the purpose of comparison and validation, in-air profile measurements were also made using radiochromic film (MD-V3 and HD-V2) and on a scintillating YAG screen-both positioned at 300 mm behind the FOFM.A Gaussian fit was applied to the vertical projection of the profile from both the radiochromic film and the scintillating YAG screen, and compared to the profile reconstructed from the FOFM.Due to slight variations in the diameters and of the quality of the smoothness on the ends of the silica optical fibres, the relative response of each of the fibres varied slightly.To correct for this variation, the response of each of the individual optical fibres to the same beam intensity and position was recorded and then normalised to the mean response of all fibres in the FOFM.The nominal electron beam parameters at CLEAR were used for all of the beam profile measurements, namely 200 MeV energy and charge of 400 pC/bunch.Two different measurements were done on a single pulse, one at 1 nC/pulse, corresponding to 2.1 Gy/pulse, and another at 10 nC/pulse, corresponding to 17.0 Gy/pulse.
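A sketch of the profile reconstruction just described is given below, assuming each fibre's vertical position and its flat-field factor (its response to a common beam divided by the mean response over all fibres) are known; all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amp, mu, sigma, offset):
    return amp * np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) + offset

def beam_profile(per_fibre_signal, fibre_y_mm, flat_field):
    """Reconstruct the vertical beam profile from one FOFM frame and return
    the beam size (standard deviation of the Gaussian fit) and its error."""
    profile = per_fibre_signal / flat_field           # per-fibre normalisation
    p0 = (profile.max(), fibre_y_mm[np.argmax(profile)], 1.5, 0.0)
    popt, pcov = curve_fit(gaussian, fibre_y_mm, profile, p0=p0)
    sigma, sigma_err = popt[2], np.sqrt(pcov[2, 2])
    return abs(sigma), sigma_err
```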
The vertical beam profiles measured by the FOFM for a 1 nC pulse and a 10 nC pulse are shown in figures 11(a) and (b), respectively. The beam size, computed from the standard deviation of the Gaussian fit applied to the vertical profiles, was estimated to be 2.02 ± 0.15 mm for the 1 nC pulse and 1.71 ± 0.09 mm for the 10 nC pulse. The uncertainty is calculated from the error on the Gaussian fit. A comparison between these values and those obtained from the vertical profile projection from radiochromic films and a scintillating YAG screen is shown in table 2. Although with a larger uncertainty, the results are consistent with those obtained using the other techniques, with the exception of the YAG screen for the 10 nC pulse.
Discussion
The purpose of the research presented in this paper is to show the development of a novel technology, based on the detection of Cherenkov light, as an alternative to transmission ionisation chambers for beam monitoring at UHDR for FLASH radiotherapy with VHEE, and eventually other modalities. The results presented demonstrate its ability to meet the requirements stated in the Introduction. The purpose of the measurements with a single optical fibre was to determine which of the photodetectors used exhibited the most suitable characteristics. Even though the SiPM and PMT showed a relatively linear response with increasing DPP (R² = 0.994 and 0.984, respectively), the signal amplitude showed saturation and the apparent linearity could likely be attributed to the widening of the pulse width. However, this demonstrates that these photodetectors have the advantage of a high time resolution that could be able to resolve the pulse width, even when operated under saturation conditions. Therefore the information could be used to monitor the instantaneous dose rate of the beam, and a possible final prototype of the FOFM could incorporate a SiPM or a PMT attached to one of the fibres with the purpose of providing temporal information. This would be a configuration similar to the one described in the paper by García Díez et al (2023), where the principle has been demonstrated using a plastic scintillator coupled to a SiPM via an optical fibre for monitoring the pulse structure in high dose rate proton therapy. The CCD camera showed poor linearity (R² = 0.985) when summing the fibre response at each vertical position measurement, but conversely showed excellent linearity (R² = 0.998) when using the integral of the Gaussian fit applied to the profile for each DPP. This can likely be attributed to the Gaussian fit compensating for the under-response at intermediate DPP's due to the low SNR of the CCD camera, meaning the low intensity tails of the beam are not accurately measured. Furthermore, the relatively low SNR of the CCD camera meant that the optical fibre had to be positioned at an angle in order for a large enough Cherenkov signal to reach the CCD.
Table 2. Vertical beam size measurements made with the FOFM, radiochromic film (with the type of film stated in square brackets), and scintillating YAG screen, obtained from the standard deviation of the Gaussian fit applied to the vertical projection of the beam profile. The uncertainty of the beam size measurement is expressed as the error on the standard deviation of the 1D Gaussian fit. Columns: Pulse charge (nC) | FOFM beam size (mm) | Film beam size (mm) | YAG beam size (mm).
The fused silica optical fibres were chosen for the FOFM since the lack of coating and cladding (which had a similar refractive index to the silica core) permitted a larger transmission of the Cherenkov signal to the end of the optical fibres.The CMOS camera was chosen in this case instead of a CCD camera since it has all the advantageous properties of the latter with the addition of a larger dynamic range and a faster frame rate.Since the maximum p.r.f at CLEAR is 10 Hz, a CMOS camera with a frame rate of 42 frame s −1 was perfectly adequate for the measurements performed, a larger frame rate being not necessary at the CLEAR repetition frequency.However, proposed clinical VHEE linacs are likely to have a p.r.f of at least 100 Hz in order to deliver the prescribed dose over multiple pulses within FLASH timescales.There are commercially available cameras from the same manufacturer with the same properties and resolution but with a frame rate of 168 frame s −1 , or at frame rates >500 frame s −1 with a reduced resolution.A CMOS camera has the additional advantage over SiPM and PMT of being able to image multiple optical fibres simultaneously (in the case discussed in this paper the entire array of 28 fibres), compared to requiring one photodetector per optical fibre sensor.
Arguably the most important requirement for a UHDR beam monitor is the detector's capability to respond linearly to dose and dose rate. The response of the currently used transmission ionisation monitor chambers has been shown to deviate from linearity at dose rates above 0.1 Gy/pulse with 10 MeV electrons (Konradsson et al 2020). In this work it has been demonstrated that the FOFM response is linear with dose and dose rate between 4.3 Gy/pulse and 39.0 Gy/pulse when operated with the same DAQ settings (figure 8), and between 0.9 and 57.4 Gy/pulse (equivalent to a range of 9.0-574 Gy s⁻¹ mean dose rate at the maximum CLEAR p.r.f. of 10 Hz) when operated with two different DAQ settings for low DPP and high DPP (figures 9(a) and (b)).
Other novel UHDR beam monitors are under development. ICTs have been shown to have a linear response beyond 15 Gy/pulse with 6 MeV electrons and require two separate modes for CONV and UHDR dose rates (Oesterle et al 2021); linearity of the response was shown up to 20 Gy/pulse with 7 MeV electrons (Trigilio et al 2022) for the FLASHDC air fluorescence monitor, and up to 2 Gy/pulse (mean dose rate of 30 Gy s⁻¹) with 9 MeV electrons (Romano et al 2023) for SiC detectors. The FBSM reportedly showed dose rate linearity in its response up to 234 Gy s⁻¹ with 8 MeV electrons (Levin et al 2023). In the two FLASH radiotherapy patient trials to date, conducted using 5.6 MeV electrons (Bourhis et al 2019) and 200 MeV protons (Mascia et al 2023), radiation was delivered as a single fraction in both cases, with a total prescribed dose of 15 Gy delivered at 166.7 Gy s⁻¹ (1.5 Gy/pulse) for the electrons and 8 Gy delivered at a quasi-continuous dose rate of 60 Gy s⁻¹ for the protons. Furthermore, significant work has also been published on the treatment plans proposed for VHEE, suggesting minimum beam intensities of 9.375 × 10¹¹ electrons s⁻¹ for >90% of the PTV to receive a dose-averaged dose rate in excess of 40 Gy s⁻¹ (Zhang et al 2023). Assuming beam parameters and size similar to those at which the FOFM was tested at the CLEAR facility, the above intensity requirement translates into a minimum DPP of ∼23 Gy/pulse at 10 Hz p.r.f., or ∼2.3 Gy/pulse at 100 Hz p.r.f. Therefore, the measurements described in this paper demonstrate that the FOFM indeed exhibits the response linearity required for dose monitoring for VHEE into, and possibly beyond, the UHDR regime necessary for inducing the FLASH effect.
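The translation from the quoted intensity requirement to a dose-per-pulse value can be illustrated with the rough arithmetic below. The charge-to-dose factor is only an assumption taken from the 10 nC ≈ 17 Gy/pulse figure quoted earlier for the CLEAR beam, so the result differs slightly from the ∼23 Gy/pulse quoted above, which used the paper's own conversion.

```python
E_CHARGE = 1.602e-19            # coulombs per electron
GY_PER_NC = 17.0 / 10.0         # ~1.7 Gy per nC (assumed, from 10 nC ~ 17 Gy/pulse)

def required_dpp(electrons_per_s, prf_hz):
    """Dose-per-pulse implied by an intensity requirement at a given p.r.f."""
    charge_nc = electrons_per_s / prf_hz * E_CHARGE * 1e9
    return charge_nc * GY_PER_NC

print(required_dpp(9.375e11, 10))    # ~25.5 Gy/pulse at 10 Hz
print(required_dpp(9.375e11, 100))   # ~2.6 Gy/pulse at 100 Hz
```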
It is likely that the clinical translation of VHEE radiotherapy will make use of electron beams of different energies, to provide conformal dose distributions to tumours of different depths.It is therefore important that any adopted beam monitoring technology does not exhibit an energy dependence.The validity of this premise was tested as part of the experiments for validating the FOFM with 200 MeV and 160 MeV electron energies.The results, presented in figure 10 and table 1, show that the gradients of the slope of the straight line fits describing the DPP response at the two different energies are within close agreement, demonstrating therefore a consistent response within this energy range.
Furthermore, differing instantaneous dose rates will likely be used both in pre-clinical and clinical experiments, so it is therefore necessary that the response of the beam monitor is independent of this.The results of this comparison for the FOFM can also be seen in figure 10 and table 1, whereby the gradients of the two separate instantaneous dose rates for 200 MeV and 160 MeV are also all within close agreement.Since the values for the gradient for the 75 pC/bunch measurements at both energies are within their respective standard error of the gradients for the 400 pC/bunch at both energies, the response can be considered to be independent of the instantaneous dose rate.The slight difference and larger standard error visible in table 1 between the two instantaneous dose rates at 160 MeV can likely be attributed to the associated changes of beam parameters that come with altering the charge per bunch for energy that is not nominal for CLEAR operation.
Most of the currently employed conventional radiotherapy monitor chambers do not provide beam profile measurements; instead they monitor the flatness and symmetry of the beam, to ensure it stays within predefined tolerances. However, given the added uncertainties and risks that are associated with delivering the prescribed dose at UHDR, pulse-by-pulse measurements of the beam profile would provide an additional level of quality assurance for dose delivery during FLASH radiotherapy. Furthermore, if PBS were to be used for the delivery of VHEE beams (either at conventional or ultrahigh dose rates), the FOFM could provide pulse-by-pulse beam position measurements in addition to profile measurements. The results of the beam profile measurements for single 1 nC and 10 nC electron pulses, shown in figures 11(a) and (b), were found to be in reasonable agreement with measurements on radiochromic film and the YAG screen. For the 1 nC pulse, both the measurement comparisons from the MD-V3 film and the YAG screen were within the errors of the FOFM beam size measurements. For the 10 nC pulse, the beam profile measurement with the HD-V2 film was still within the uncertainty range of the FOFM measurement, whereas the comparison between the FOFM and YAG screen measurement showed a larger difference and was outside of the FOFM uncertainty range. This larger beam size observed at high charges in the YAG scintillating screen for in-air measurements has also been reported previously at CLEAR, and further studies are underway to understand and characterise this effect (Rieker et al 2023). The beam profile measurements made with the FBSM for UHDR beam monitoring were reported to have fits that agree with those on Gafchromic EBT-XD films within 1.4% (Levin et al 2023). One of the limiting factors that cause the larger discrepancies seen with the FOFM profile measurements is the spatial resolution, which is limited to the diameter and spacing of the optical fibres, in this case 0.5-1 mm. In spite of this limitation, the measurements show that the FOFM is capable of providing pulse-by-pulse measurements of the beam profile with good accuracy. The final design of the FOFM will incorporate two optical fibre arrays to provide beam profile measurements in both the x and y axes.
The work presented in this paper served the purpose of proof-of-principle measurements, demonstrating that the FOFM is capable of providing real-time pulse-by-pulse beam profile measurements and give a linear response with dose deposited at a reference depth in water into the UHDR regime.In order to ensure that such a monitor would be feasible for use on a clinical VHEE machine, further measurements are necessary, however.The long-term stability of the monitor response (particularly for dose prediction and monitoring) will have to be verified, along with the characterisation of any ageing or deteriorations of the response of the fibres following irradiations with doses in the range of that expected throughout the lifetime of a typical monitor chamber.Radiation hardness tests of the silica fibres used in the experiment described in this paper are currently being carried out at the IRRAD facility at CERN, with 24 GeV protons up to doses on the order of MGy's and both the Cherenkov signal from the beam and light transmission from a bulb are being measured (Buchanan et al 2023).Similar tests will be carried out at the CLEAR facility investigating the stability of the response and of its linearity, following irradiations of large doses (up to hundreds of kGy).
Conclusions
To facilitate pre-clinical experiments and the eventual clinical translation of FLASH radiotherapy with VHEE, a new technological solution for real-time beam monitoring is required, given that saturation effects are present at UHDR in the transmission ionisation monitor chambers currently in use. In this paper, a device in which the Cherenkov light produced by high energy electrons traversing silica optical fibres, either single or arranged in an array (FOFM), is detected by different photodetectors is proposed as a possible solution. While recently published results in a similar area of research have shown advances in the development of beam monitoring technologies for UHDR using devices such as ICTs, SiC detectors and scintillating screens, the results on the FOFM presented in this article are the first in which such a technology is considered for UHDR online beam monitoring of VHEE beams. The initial results, obtained with 200 MeV and 160 MeV electron beams at the CLEAR facility at CERN, showed that the FOFM beam monitor with a CMOS camera as photodetector exhibited a linear response with DPP from 0.9 to 57.4 Gy/pulse, i.e. over the entire range over which it was tested. It was also demonstrated that the response did not depend on energy or instantaneous dose rate, both important factors for monitoring the delivery of the VHEE dose at UHDR. Furthermore, the FOFM was able to perform accurate pulse-by-pulse measurements of the beam profile, showing agreement with the same profile measurements performed using radiochromic film.
In addition, the use of either a SiPM or a PMT connected to a single fibre as part of the optical fibre array in the FOFM would be able to provide information about the instantaneous dose rate of the beam, even when operated under saturation conditions, due to the good time resolution of these photodetectors.Consequently, the work presented has shown that a fibre optic beam monitor, such as the one proposed in this paper, is a promising candidate for real-time beam profile and dose monitoring for FLASH radiotherapy with VHEE.
One current limitation of the study is using radiochromic films calibrated to 5.5 MeV electrons to provide dose-to-water characterisations for the fibre optic monitor.Whilst some studies point to there being little-to-no energy dependence across a large range of electron energies, such films do exhibit energy-dependence for other modalities and therefore further studies would benefit from using a radiochromic film calibration with VHEE beams once the procedure is available.Subsequently, using other detectors and passive dosimeters to provide such characterisations would add further reliability to the measurements.
Further work is currently ongoing to fully characterise the single optical fibre array and develop a full scale prototype of the FOFM, consisting of two separate arrays to provide profile measurements in both transverse dimensions of the beam.The full prototype will need to be tested with other radiation modalities at UHDR as well as being further characterised with VHEE beams at the CLEAR facility.The additional profile measurements should also be made with magnified uniform beams in addition to the VHEE Gaussian pencil beams used at the CLEAR facility.These studies will involve further measurements on the linearity of the response as well as beam profile comparisons, radiation hardness, stability of the response, and dose prediction.
While the majority of the research into UHDR real-time dosimetry has been focussed on detectors for reference dosimetry, a number of research groups have been investigating novel methods for UHDR beam monitoring, including the use of integrated current transformers (ICTs) (Oesterle et al 2021), air fluorescence (Trigilio et al 2022), SiC detectors (Romano et al 2023), and a scintillating screen named the 'FLASH Beam Scintillator Monitor' (FBSM) (Levin et al 2023). The use of optical fibre-based methods, i.e. scintillator-coupled optical fibres and fibre optic Cherenkov sensors, for real-time dosimetry at UHDR has been of particular interest for its favourable properties, such as dose-rate linearity and high spatial and temporal resolution, with numerous works demonstrating its applicability to real-time dosimetry and pulse monitoring for UHDR beams with electrons (Favaudon et al 2019, Jeong et al 2021, Ashraf et al 2022, Vanreusel et al 2022), protons (Kanouta et al 2023), and kVp x-rays (Hart et al 2022). A technique utilising scintillating fibres has been implemented on the secondary beamlines in the experimental areas at CERN for beam intensity and profile monitoring. The detector consists of an array of scintillating plastic fibres that are read out using multi-channel silicon photomultipliers (SiPMs) (Ortega Ruiz et al 2020). Similar devices have also been developed in recent years for beam fluence and profile monitoring for hadron therapy centres (Leverington et al 2018, Allegrini et al 2021); however, these devices have only been tested and optimised for PBS dose rates.
Figure 1. Pulse and bunch structure of the electron beam at the CLEAR facility.
Figure 2. Photographs of the single optical fibre measurement setup in the CLEAR in-air test stand. (a) The setup with SiPM and PMT photodetectors. (b) The setup with the transport fibre and CCD photodetector, which was positioned behind the beam dump and hence not visible in this image.
Figure 3. (a) Schematic of the FOFM assembly with the CMOS camera and silica optical fibre array, where the working distance (WD) between the edge of the lens and the fibres is 105 mm. (b) Schematic of the 3D printed optical fibre support displaying the vertical arrangement of the silica fibres. (c) Photograph of the optical fibre array of the FOFM, consisting of 28 fused silica optical fibres, installed in the in-air test stand at the CLEAR facility.
Figure 4. Schematic of the plan view of the experimental setup of the FOFM installed in the in-air test stand at the CLEAR facility.
Figure 7. (a) Vertical beam profile measurement for a 10 nC 200 MeV electron pulse with a Gaussian fit applied (red line), and with the vertical error bars corresponding to 1σ of the mean value measured from the 20 repeated measurements for each position. (b) The dose-per-pulse (DPP) response of the single optical fibre and CCD camera photodetector obtained by summing the pixel value at each vertical position (R² = 0.985, χ²ν = 9.49).
Figure 9. (a) The dose-per-pulse (DPP) response of the FOFM between 0.9 Gy/pulse and 6.1 Gy/pulse in low intensity DAQ mode (R² = 0.996, χ²ν = 0.813).
Figure 11. Vertical profile measurements made by the FOFM for (a) a 1 nC and (b) a 10 nC electron pulse at 200 MeV, with a Gaussian fit applied (red line).
Table 1. The gradient values of the linear fit and their corresponding standard errors for the measurements shown in figure 10. | 11,079.6 | 2024-03-13T00:00:00.000 | [
"Medicine",
"Engineering",
"Physics"
] |
IoT-based Integrated System Portable Prayer Mat and Daily Worship Monitoring System
Muslims have various difficulties in praying, such as difficulty memorizing the number of rak’ah they have been doing and determining the direction of the Qibla. In this research, we proposed a technological device for monitoring daily worship in Islam. We presented the IoT-based integrated system as a portable prayer mat serving as a rak’ah counter, Qibla direction finder
INTRODUCTION
Millions of devices are connected to the global network, causing the Internet to be inextricably interwoven with human life [1]. The Internet of Things (IoT) system merges numerous technologies into one, including sensors, internet connections, radio frequency identification (RFID), wireless sensor networks, and other technologies. The Internet of Things (IoT) idea was created to address these needs. The Internet of Things improves people's lives holistically by advancing the Internet to improve industrial processes and businesses [2]. The Internet of Things is a platform on which ordinary objects can become more innovative, processing can become more intelligent, and communication can become more informative so that daily activities become more accessible and more efficient [1], [3][4][5][6]. The Internet of Things enables things to be connected at any time and location through networks and Internet services [7].
The emergence of IoT technology has made many microcontrollers compatible with IoT technology, for example, Arduino. Arduino is an open-source microcontroller that can be used for flexible programming, customizable signal types, and easy adaptation to create interactive objects that stand alone or are connected to software on a computer [8][9][10]. Many kinds of Arduinos are on the market, including the Arduino Mega2560. The Arduino Mega2560 is a microcontroller board based on the ATmega2560 that can be used both online and offline and is programmed using the Arduino software. It is appropriate for applications that need a lot of input/output and memory since it has 54 digital I/O pins, 16 analog inputs, 4 UARTs, a USB connection, an ICSP header, a reset button, and a larger sketch area [11].
The IoT grows quickly and significantly influences many aspects of people's lives [12]. IoT innovations have been implemented in various fields, such as manufacturing, agriculture, telemedicine, and education [13][14][15][16][17][18][19][20][21]. IoT can also be used as supporting technology in the religious field. IoT can be a solution in the form of a device that helps Muslims worship. For Muslims, daily prayer (in Arabic: "salah") is an obligation for Muslims daily. Muslims performed prayer five times every day, at dawn (Fajr prayer), midday (Zhur prayer), late afternoon (Asr prayer), dusk (Maghrib prayer), and night (Isha prayer). Two rak'ah is performed at the Fajr prayer, four rak'ah at the Zhur prayer, Asr prayer, and Isha prayer, and three rak'ah at the Maghrib prayer.
Several things must be considered in performing prayers, such as the number of cycles of each prayer ("rak'ah") and facing the direction of the Kaaba, which is the Qibla of Muslims [22]. Some Muslims face various difficulties, such as difficulty determining the direction of prayer, especially when traveling, and memorizing the number of rak'ah they have done [23,24]. However, forgetting the number of rak'ahs that one has prayed does not automatically invalidate one's prayer. The practice here is to perform two prostrations of forgetfulness (in Arabic: "Sujud Sahwi") before the salam at the end of the prayer. One goes on the assumption that he has prayed the least number that he is certain of. For example, if a person is unsure whether he has prayed two or three rak'ahs, then he must assume that he has only prayed two rak'ahs, complete the prayer based on that assumption, and perform the two prostrations of sujud sahwi. An incomplete number of rak'ah otherwise means the prayer must be repeated from the beginning. For most elders, these conditions are due to problems with their cognition level (attention, working memory, and decision making) affected by age [25].
Regarding the difficulties faced by some Muslims, technological aid is needed to support their worship activity. Numerous researchers have contributed to research about prayer mats. Ismail et al. (2015) produced a smart prayer mat that assisted prayer activities for the elderly who experience cognitive impairment [24]. This tool had a sensor-based textile that emits a signaling sound to help complete the worship prayers. Another study, by Kasman and Moshnyaga (2017), resulted in a new technique that uses pressure sensing to identify posture on a smart prayer mat; in their research, the smart prayer mat could recognize postures in prayer activities with 100% accuracy [26]. Mansor (2021) researched a smart prayer mat as an effective solution for elderly Muslims. This prayer mat uses an infrared distance sensor and a pressure sensor to track the worshiper's movement to detect the prayer cycles (rak'ah), and it can also detect the direction of the Qibla [27]. Another device for counting rak'ah was developed using a piezoelectric sensor located in the upper part of the prayer mat; one prostration is counted when the forehead touches the button [28]. Similar to the intelligent prayer mat, Rishi Pal et al. (2023) studied an intelligent IoT yoga mat that can recognize yoga postures using a wearable device [29].
Other interesting research on a Qibla finder has been conducted by Dunque (2021), who designed a Qibla finder device placed in the Muslim headwear peci. The device functions not only as a Qibla finder but also as an obstacle detector [30]. It has a switch for controlling the function, and it is claimed to help visually impaired Muslims perform prayers independently.
Earlier research on a portable smart prayer mat concentrated on the function of a counter to the number of rak'ah in prayer. As a result, equipment was created as a prayer mat. However, several instruments lacked Qibla orientation and were not connected to a daily worship monitoring program. A daily worship monitoring application can be helpful for children, youth, and older Muslim people in monitoring their daily worship.
Some researchers have developed religious mobile applications. UMMA is a religious application that provides information about daily prayer, Quran recitation, articles about Islamic studies, and community features [31]. This kind of application indeed made youth Muslims interested. Another Islamic mobile application, which was developed for fasting (Shiyam) reminders, focuses on fasting and can provide specific fasting-related information and warnings during the time of predawn (imsak), iftar, and sahur, which can assist Muslims in carrying out their prayers [32].
As a proposed solution to improve the functions of the smart prayer mat, in this research, we focused on developing and integrating IoT-based systems: a portable smart prayer mat and a daily worship monitoring system. The device comprises an Arduino AT Mega 2560 powered portable prayer mat through a force-sensitive resistor sensor and an HMC 5883L compass module. Some features completed this integrated system, namely the rak'ah prayer counter, Qibla direction, prayer guideline, Hijri calendar, prayer schedule, worship reporting, and recapitulation of worship that has been done. The smart prayer mat was designed to be a portable device, so the application can be used to conduct daily worship anywhere. The prayer mat can also be connected to daily worship mobile applications so that all worship activities can be recorded in a database so that it can be a supporting device for Muslims in their daily worship.
The contribution of this paper is to design and develop a smart prayer mat that functions as a rak'ah counter and as a Qibla direction, which is integrated into a daily worship monitoring system with many features that help people remember and monitor their worship. The IoT-based system is integrated using Bluetooth and can be accessed using a smartphone.
This paper is organized as follows. Section 2 is about the method used for the hardware and system design and the testing scenarios. Section 3 describes the result and discussion of this research, and finally, in section 4, the conclusions are described.
Data
In this research, we used questionnaires for the initial study and for user acceptance tests. For the preliminary research, questionnaires were distributed to 100 respondents from the general Muslim population aged 17 to 50. It was found that 7% of respondents frequently forgot the number of rak'ah of the prayer being performed, and 70% needed a compass to determine the Qibla direction. We also learned that 61.6% of respondents experienced difficulties because no single installable smartphone application combines several prayer tools. These facts became the motivation for this research: Muslims need a tool to assist them when praying, especially for counting the number of rak'ah and determining the Qibla.
System Development
We used the Bluetooth IEEE 802.15.1 protocol to merge a portable smart prayer mat and a mobile daily worship system. A prototyping approach was used to produce a portable smart prayer mat, and Rapid Application Development was used to establish a mobile daily worship system. Prototyping is an iterative approach to system development. Information systems acquisition generally strives to improve information storage convenience, lower costs, save time, increase control, stimulate growth, boost productivity, and raise organizational profitability. Computer technology, hardware, and software development have recently accelerated [33].
According to [33], the stages of system development using the prototype approach begin with the communication stage, which establishes the device's purpose.
Step two is a quick plan that identifies the system needs. The following stage is a quick design involving a prototype design. Then, a prototype is constructed and evaluated (deployment, delivery, and feedback). If the system does not meet the requirements, the process loops back to the first stage, communication, until all needs are met (Figure 1: prototype method [33]). The communication stage of the prototype paradigm begins with gathering information on the research conducted, through an analysis of related earlier studies and questionnaires presented to respondents. Through the questionnaires, we discovered that Muslims have a range of difficulties, including recalling how many rak'ah of prayer they had offered and determining the Qibla direction. We then examined today's system needs as well as the hardware, software, techniques, and algorithms. From the quick plan stage, we used an Arduino Mega microcontroller and several sensors to build the smart prayer mat. We use the Arduino because it is low-cost and highly scalable; it can be used with many kinds of sensors and is easily extended [10]. We used a force-sensitive resistor to identify the rak'ah of prayer and an HMC 5883L module to calculate the Qibla direction. Using low-cost but reliable hardware will make it easier for this tool to be mass-produced and used by many people.
We developed the Android application and the system module for the smart prayer mat; these two components are designed to connect over Bluetooth. All modules from the previous stage were programmed in C. After the modules were built, the prototype was tested to ensure that every module worked as intended. After the prototype had been constructed, a few scenarios were used to evaluate the system.
Figure 2. Research flow
A daily worship monitoring system was integrated with the portable smart prayer mat. Rapid Application Development (RAD) was used to create this system, consisting of three main phases: requirement planning, design workshop, and implementation. This monitoring application includes several elements, including worship guidelines, a Hijri calendar, prayer reminders, and a recapitulated list of worship. This system was developed for Android smartphones.
Evaluation
We used several methods in the evaluation to ensure that all modules fulfill their requirements. We used black-box testing to evaluate system and device functionality. The evaluation was conducted on the modules below to find out whether each performs its respective function: 1) counting the cycles of prayer (rak'ah); 2) determining the Qibla direction; 3) accessing the database of obligatory prayer times; 4) sending the user data to the smartphone (application) when the prayer is finished; 5) displaying the obligatory prayer time, the number of prayer cycles, and the Qibla direction on the 2.8 TFT LCD; 6) choosing the features offered by the device; 7) integrating the device and the application; 8) choosing an activity from the application menu; 9) worship recapitulation; and 10) displaying the Hijri calendar.
RESULT AND ANALYSIS
3.1. Result
The scope of the proposed device is a smart prayer mat integrated with the daily worship monitoring system. The device functions are counting the rak'ah of prayer and determining the Qibla direction. Before using the smart prayer mat, the user identifies the Qibla direction using the HMC 5883L connected to the Arduino. A force-sensitive resistor is used as a sensor to detect the prostration pressure. Two detected prostrations are counted as one cycle (rak'ah), and the number of cycles is displayed on the 2.8 TFT LCD. When the user completes the prayer and the number of rak'ah is correct, the system sends the data to the smartphone for recapitulation. If the number of rak'ah is more than it should be, a beep notification sounds.
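The counting rule can be summarized in a few lines of code. The sketch below is an illustrative Python simulation of that logic, not the device firmware (which runs in C on the Arduino); the threshold, the expected number of rak'ah, and the sample readings are assumptions made for the example.

```python
# Illustrative simulation of the rak'ah-counting logic described above.
# The real firmware runs in C on the Arduino Mega 2560; the threshold and
# sample values here are placeholders, not the values used in the device.

PROSTRATION_THRESHOLD = 75  # assumed FSR reading treated as a prostration
EXPECTED_RAKAH = 4          # e.g., a four-rak'ah obligatory prayer

def count_rakah(fsr_readings):
    """Count rak'ah from a stream of FSR readings: two prostrations = one rak'ah."""
    prostrations = 0
    rakah = 0
    pressed = False
    for value in fsr_readings:
        if value >= PROSTRATION_THRESHOLD and not pressed:
            pressed = True            # rising edge: forehead placed on the mat
            prostrations += 1
            if prostrations % 2 == 0:
                rakah += 1
        elif value < PROSTRATION_THRESHOLD:
            pressed = False           # the user has risen from prostration
        if rakah > EXPECTED_RAKAH:
            print("beep: rak'ah count exceeds the expected number")
    return rakah

# Example: a reading stream containing eight prostrations yields 4 rak'ah.
readings = [5, 80, 6, 82, 4, 79, 3, 84, 2, 81, 5, 83, 4, 78, 6, 85, 3]
print(count_rakah(readings))
```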
The proposed prayer mat consists of a series of components, namely an Arduino Mega 2560, an HMC 5883L compass module, a Force-Sensitive Resistor (FSR) module, a Real-Time Clock, an HC-05 Bluetooth module, a push button, and a TFT LCD. The Arduino Mega is the brain of the smart prayer mat and gives instructions to all components. The HMC 5883L, connected to the Arduino, is used to determine the Qibla direction. The FSR is used to identify the rak'ah of prayer: it senses the pressure exerted when a person performs prostration, and its data are displayed on the 1.8 TFT LCD. After the user has finished praying, the HC-05 Bluetooth module sends the type of prayer and the time the user finished praying to the smartphone. The architecture of the proposed portable smart prayer mat is shown in Figure 3. After designing the architecture, we continued with the circuit design. Figure 4 shows the schematic connection between the Arduino and all components in the system circuit; each module is connected to the Arduino, as the brain of the device, through its pins, and the pin configuration between the Arduino and the other components is summarized in Table 1. The circuit design was finally implemented as a portable smart prayer mat: a prayer mat with the FSR inside serves as the place for prostration and is connected to a machine acting as the controller (Figure 5). Because of its small size, it is easy to use as a rak'ah counter and Qibla finder. It is also connected to the daily worship monitoring application by pairing the application with the ESP8266 module through Bluetooth HC-05. In the daily worship monitoring application, several activities are available, as seen in Figure 6; these activities were built to show the interaction between the user and the application. The smart prayer mat prototype was evaluated to see if it met the requirements. Several key points were assessed: the rak'ah counter, the Qibla direction, obligatory prayer time scheduling, data transmission to the smartphone, the obligatory prayer time display, and the number of prayers. All these points were evaluated using black-box testing and fulfill all requirements. We also assessed the smart prayer mat's features, including the sensor's sensitivity, the accuracy of the Qibla indicator, the Bluetooth connection delay, and user acceptance testing for the daily worship monitoring application.
The smart prayer mat uses the FSR to identify the prostrations of the user, and the system counts two prostrations as one rak'ah. To be recognized as a prostration, the prostration value was set to be above 10. Table 2 shows that the FSR identifies a prostration with an average value of 81.36 kΩ; the readings of six users over six tests were:

Table 2. FSR readings (kΩ) identified as prostrations
No  User    Test 1  Test 2  Test 3  Test 4  Test 5  Test 6
1   User 1  76      78      84      80      79      75
2   User 2  78      75      81      83      83      87
3   User 3  79      81      82      78      83      80
4   User 4  80      82      86      79      82      88
5   User 5  79      82      84      78      81      84
6   User 6  82      85      79      83      85      88

The HMC 5883L compass module is used as the Qibla direction indicator on the smart prayer mat. The HMC 5883L is able to read the coordinates of the place where the module is located. The calculation of the Qibla direction is based on the calculation provided at https://www.alhabib.info/arah-kiblat/ for the South Tangerang city area. The results obtained are latitude -6.28352, longitude 106.71129, and a Qibla direction of 295.2 degrees from north. Because the HMC 5883L module reads coordinates very quickly and the readings fluctuate, we decided to use the range 280 to 310 degrees as the Qibla direction displayed on the 1.8 TFT LCD. The smart prayer mat and the daily worship monitoring system are integrated through Bluetooth. Testing on the HC-05 module aims to verify data delivery from the smart prayer mat to the smartphone: the smart prayer mat transmits the number of rak'ah of prayers performed to a smartphone through the HC-05 Bluetooth module when the send button on the prayer mat is pressed, and the results are displayed on the smartphone screen. If the smart prayer mat data are read correctly on the smartphone, the Bluetooth communication between the smartphone and the smart prayer mat runs correctly. Out of 10 evaluations, two failed (Table 4). Table 4 also reports the measured delay of data transmission, counted from when the start command is given to the smart prayer mat until the microcontroller executes the instructions and the data are sent to the smartphone.
To evaluate the software, we carried out user acceptance testing (UAT), also known as beta, application, or end-user testing, in which the program is evaluated in the context in which its users intend to use it. Six persons participated in the test. The test focused on a number of features, including transferring data to a smartphone and displaying the prayer times and the Qibla direction.
Evaluation
This study was conducted in response to the Muslim community's daily worship issues. In the preliminary research, we found that most people have experienced forgetting the number of rak'ah while performing the prayer, and some people also need a tool to determine the Qibla direction. A device that counts the rak'ah of prayer and indicates the Qibla orientation was built as a solution. This device was integrated with a monitoring system, which can recapitulate the daily worship a person has carried out in terms of type, status, prayer time, and the number of rak'ah of compulsory prayer, and can remind the user of non-compulsory (sunnah) worship, for example, sunnah fasting. The integration of a smart prayer mat with a daily worship monitoring system distinguishes this work from similar research.
The Force-Sensitive Resistor (FSR) is the device used to identify the pressure when a person performs prostration, and the HMC 5883L module is used for the Qibla direction. A rak'ah is two prostrations performed within an adjacent period, and the system checks the validity of the rak'ah of prayer; if the number of rak'ah performed is more than it should be, the system responds with a "beep" sound. The FSR is sensitive enough to identify a prostration: its resistance varies with the pressure applied to the sensor, it is thin and therefore suitable for a smart prayer mat, and it is relatively cheap, easy to use, and reliable [34]. To be identified as a prostration, the FSR value must be in the range of 75 to 88; over the six tests, the average was 81.36.
For the Qibla direction, the HMC 5883L module successfully read the coordinates of the location where the device was placed. The Qibla direction was calculated based on the estimate provided by the website https://www.al-habib.info/arahkiblat/ for the Tangerang Selatan area. The accepted range of Qibla direction values was set to 280 to 310 degrees, because the module scans coordinates continuously and the readings fluctuate; within this range, the device successfully identified the Qibla direction. After reading the coordinates, the module sends the result to the 1.8 TFT LCD for display.
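For reference, the bearing to the Kaaba can also be computed directly from the device coordinates with the standard great-circle (initial bearing) formula. The Python sketch below is illustrative only: the Kaaba coordinates are approximate, the function names are ours, and the 280-310 degree acceptance window is the range used in this work for the South Tangerang area.

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # approximate coordinates of the Kaaba

def qibla_bearing(lat_deg, lon_deg):
    """Great-circle (initial) bearing from a location to the Kaaba, in degrees from north."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    lat2, lon2 = math.radians(KAABA_LAT), math.radians(KAABA_LON)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def heading_points_to_qibla(compass_heading_deg, low=280.0, high=310.0):
    """True if a (fluctuating) compass reading falls inside the accepted Qibla window."""
    return low <= compass_heading_deg % 360.0 <= high

print(round(qibla_bearing(-6.28352, 106.71129), 1))  # about 295 degrees for South Tangerang
print(heading_points_to_qibla(296.4))                # True
```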
Another component that is important for the integration of the two systems is Bluetooth. We use the HC-05 Bluetooth module, which has six connector pins, each with a different function. The HC-05 module makes wireless serial communication easy because it uses the 2.4 GHz radio frequency band without any particular driver, and it has a maximum signal range of about 30 meters [5]. For these reasons, the HC-05 Bluetooth module is the right choice for integrating the two systems.
Although data transfers well between the two systems, it is necessary to check the transmission delay; knowing the data transmission delay makes it possible to develop this system further using better data communication. Table 4 shows the result of the test: 20% of the tests failed, meaning the connection between the smart prayer mat and the daily worship monitoring application may occasionally fail. Possible causes of this failure are signal interference and the distance between the Bluetooth module and the smartphone (application) exceeding the coverage range.
CONCLUSION
The portable smart prayer mat, integrated with the daily worship monitoring application, can be an alternative solution to the Muslims' problem of keeping track of the rak'ah of prayer and determining the Qibla direction. The smart prayer mat helps people know the number of rak'ah they have performed so that they can perform the prayer without doubt, and it can also determine the Qibla direction because it carries an HMC 5883L compass. Users can also see the recapitulation of their prayers in the daily worship monitoring application, which is connected through Bluetooth and receives information from the device when the prayer is completed. The application also has many other features, so people can use it to monitor their daily non-compulsory worship, such as reminders for prayer times and recapitulation of their non-compulsory prayers. It will be fascinating to conduct further research to enhance the capabilities of the smart prayer mat so that it can function not only as a rak'ah counter and a Qibla direction finder but also as a tool that can determine whether the number of rak'ah is incorrect and recognize the proper prostration. | 5,164.6 | 2023-07-29T00:00:00.000 | [
"Computer Science"
] |
Using Resistivity Measurements to Determine Anisotropy in Soil and Weathered Rock
This study uses electrical resistivity measurements of soils and weathered rock to perform a fast and reliable evaluation of field anisotropy. Two test sites at New Concord, Ohio were used for the study. These sites are characterized by different landforms and slightly east-dipping limestone and siltstone formations of Pennsylvanian age. The measured resistivity ranged from 19 Ω·m to 100 Ω·m, and varied with depth, landform, and season. The anisotropy was determined by comparing resistance values along the directions of strike and dip. Measurements showed that the orientation of electrical anisotropy in the shallow ground may vary due to fluid connection, which is determined by the pore geometry in soil and rock, as well as by the direction of fluid movement. Results from this study indicated that a portable electrical resistivity meter is sensitive and reliable enough to be used for shallow ground fluid monitoring. Keywords: soil resistivity; resistance; soil anisotropy; resistivity measurements
INTRODUCTION
The hydrological anisotropy of soil and weathered rock beneath it is an important issue in many studies and applications. While the change of anisotropy may be expected in a vertical soil profile and rock stratification, the change of anisotropy along a horizontal layer of soil or rock is usually much less obvious. However, the horizontal variation of anisotropy is also a significant feature and has been demonstrated in several studies e.g. [1][2][3][4][5].
Measuring the electrical resistivity of the ground is a nondisturbing geophysical method that is commonly used to explore the properties of soil and rock. It has been extensively used in various environmental and engineering studies [6][7][8][9]. In [9], the authors demonstrated that if some ground variables can be monitored, then the on-field electric resistivity method is a good alternative to other much more expensive and difficult methods for quantitatively monitoring the moisture distribution of ground shallower than 10 meters. This method can be used to identify the direction of horizontal anisotropy in the soil by measuring and comparing the resistivity along profiles of different directions at the same locality. Moisture content is one of the major factors in determining ground resistivity [9,10].
Ground moisture condition depends on several other factors, such as weather, season, types of soil and rock [9,11,12]. As a consequence, the value of resistivity, particularly for shallow ground, may change quickly and significantly with time and ground conditions. Since the anisotropy is expressed by the ratio of resistivity values along different directions, common factors that will affect resistivity, such as moisture content, weather conditions, groundwater chemistry, and time will be cancelled out, and the difference on the ratios will only reflect the variation of controlling factors.
The purpose of this study is to use a portable resistivity meter to take measurements of soil resistivity, and to determine the nature of anisotropy of shallow ground up to a depth of 20 feet. Measurements of ground resistivity and anisotropy are described and interpreted.
A. Resistivity Meter and Measurements
Soil resistivity testing was performed using the 4-point Wenner array method [13], the most widely used test method for measuring soil resistivity in electrical grounding design. The Wenner array, illustrated in Figure 1, consists of a line of four equally spaced electrodes. Current is injected through the outer electrodes C1 and C2, and the potential is measured between the inner electrodes P1 and P2. The resistivity meter used is Model H-4385 made by Humboldt MFG© [14]. Based on [13], the pin separation should be approximately 20 times larger than the pin depth in the soil; pin depth is important for properly measuring the resistivity of deeper ground. Under this condition, the resistivity of soil using the Wenner array can be calculated using ρ = 2πa(V/I) (1), where a is the electrode separation, V is the difference in potential between P1 and P2, and I is the current flowing between C1 and C2. Using Ohm's law, V/I = R. The value of R is given by the meter and thus ρ can be calculated using (1).
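As a minimal sketch of equation (1), the helper below converts a meter reading into resistivity; the spacing and resistance values in the example are illustrative, not field data from this study.

```python
import math

def wenner_resistivity(spacing_m, resistance_ohm):
    """Soil resistivity from a Wenner array: rho = 2*pi*a*R (valid when pin depth << spacing)."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# Illustrative reading only: a 3.048 m (10 ft) spacing and a 3.5-ohm meter reading.
a = 3.048
R = 3.5
print(f"rho = {wenner_resistivity(a, R):.1f} ohm-m")  # about 67 ohm-m, within the 40-100 ohm-m range reported for site 1
```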
B. Testing Sites
Measurements of soil resistivity were taken at two different locations in New Concord, Ohio. Site 1 is a small flat area on the top of a hill. Site 2 is a lithology-controlled slope whose surface is tilted 30° toward the east. The soil profile at both sites received minimal disturbance; however, at site 1 there was an east-west oriented, 5-feet-deep utility trench, which was dug and backfilled about 12 years ago.
Soils over the test area are derived from rocks of the Upper Pennsylvanian Conemaugh formation. In general, the soil cover at site 2 is not deeper than 5 feet. The bedrock bedding has a N-S strike with a dip of a few degrees toward the east. The strata include the Ames Limestone and siltstone layers above and below the limestone. The Ames Limestone is a 3-feet-thick fresh crystalline packstone, which has no appreciable permeability; however, the slab-shaped limestone has been weathered through along joints and has become isolated slabs of 1 to 5 meters in diameter. At both sites, the limestone is located between weathered siltstone layers.
Soil resistivity at both sites was measured using the 1D Wenner array along both the NS and EW directions at selected localities. Resistivity at site 1 was measured in two different seasons, summer and winter, to acquire the seasonal contrast. For measurements taken during the summer, additional readings were taken along profiles of N45E and N45W. For each array, the average resistance to depths of 10, 15, and 20 feet was measured. The ground at both sites was seasonally dry in the summer; in winter, the ground was cold and moist, but not frozen.
A. Resistivity
The resistivity values calculated from the resistances measured at site 1 during winter are summarized in Figures 2 and 3, and the values measured at site 1 in different directions in summer are shown in Figure 4. The EW* direction means that measurements were taken where the profile runs along a dug and refilled utility trench 4 feet deep.
The resistivity at site 1 varied from 40 Ω·m to 100 Ω·m. This range of values is at the low end of that of common fresh sedimentary rocks reported in [15]. Considering that all the measured material is soil and weathered rock, resistivity at the low end of the normal range was expected.
A 1.524 m (5 feet) deep utility trench was dug and backfilled at site 1 in 2001. A sequence of measurements was taken around this trench; these are identified in Figure 4 as the EW* direction. The resistivity on the trench profile was measured at 1.524 m (5 feet) and 3.048 m (10 feet) depth. To provide a comparison, resistivity was also measured along the same direction, but with a 0.9144 m (3 feet) offset onto the undisturbed ground (EW direction in Figure 4). The data show that the resistivity along the disturbed ground is 6 Ω·m to 8 Ω·m lower than that of the undisturbed ground.
Measurements at site 2 were taken during the summer. The soil at site 2 is similar to the soil at site 1, but the site has a sloped surface tilted 30° toward the east. The measured resistivity values ranged from 19 Ω·m to 28 Ω·m, as shown in Figure 5. These data have a narrower variation, and the measured resistivity values are significantly lower than those at site 1. The values were taken during the dry season, but the lower resistivity suggests that the ground probably has a higher moisture content. Considering that site 2 is a 30° east-facing slope, it is where shallow groundwater accumulates and exits, and the significantly lower resistivity reflects the effect of a slope on surface moisture. Fresh carbonate rock has a resistivity of around 300 Ω·m [15], which is much higher than any resistivity measured in this study. The shallow dense Ames Limestone layer in the studied area does not raise the resistivity values beyond the range of the surrounding weathered siltstones. This could imply that the limestone is significantly fractured and therefore does not hinder the passage of electric current. This interpretation was confirmed by observations at many construction sites in the neighborhood of the study area: the excavations showed that the limestone layer has been weathered through and is broken into slabs approximately 2 by 3 meters or smaller.
B. Anisotropy
In this study, the degree of anisotropy is evaluated by comparing the resistivity along the strike of the formation to that along the dip of the formation. The attitude of the rock formation is the same at both sites, with a strike approximately NS and a dip of a few degrees toward the east. If the ratio is within ±10% of 1, it is taken as a lack of anisotropy; the further the ratio deviates from 1, the stronger the anisotropy. Absolute values of the degree of anisotropy were not assigned to the resistivity ratios. When the resistivity ratio is significantly larger than 1, the resistivity along the EW profile is lower; this is interpreted as the EW direction having better moisture connectivity of pore fluids, implying that either the pore spaces are better connected or the moisture is moving across the pore spaces along the EW direction. For site 1, the resistivity ratios are presented in Table I. The data show that when the pin separation is shorter than 6.096 m (20 feet), there is a lack of anisotropy down to 3.048-4.572 m (10-15 feet) deep at site 1 during the winter. When the pin separation is extended to 6.096 m (20 feet), the resistivity becomes lower, but the degree of anisotropy increases significantly, with higher resistivity along the NS direction. This suggests that the ground moisture becomes higher in the deeper ground, which reduces the resistivity, and that the moisture connection in pore spaces is better along the EW direction, which is the direction of the formation dip. This feature is interpreted as follows: in the winter (wet) season, the moisture in the shallow ground is stagnant, but the moisture is moving toward the east in the deeper ground. In the summer, the degree of anisotropy in the shallow ground becomes significant, as shown in Table II. The data further indicate that for both sites, in the winter and summer seasons, the anisotropy remained similar in direction in the deeper ground. The anisotropy in the shallow ground changed from insignificant in winter to EW-enhanced, which is the opposite direction to that in the deeper ground, during the summer.
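A minimal sketch of this ratio-based classification is given below; the ±10% isotropy cut-off follows the text above, while the numerical values in the example are illustrative rather than measurements from this study.

```python
def anisotropy(rho_strike_ns, rho_dip_ew, tolerance=0.10):
    """Classify anisotropy from the ratio of strike (NS) to dip (EW) resistivity."""
    ratio = rho_strike_ns / rho_dip_ew
    if abs(ratio - 1.0) <= tolerance:
        label = "no significant anisotropy"
    elif ratio > 1.0:
        label = "lower resistivity, i.e. better fluid connection, along EW (dip)"
    else:
        label = "lower resistivity, i.e. better fluid connection, along NS (strike)"
    return ratio, label

# Illustrative values only, not measurements from the study.
print(anisotropy(84.0, 70.0))   # ratio 1.20 -> EW-enhanced connectivity
print(anisotropy(72.0, 74.0))   # ratio ~0.97 -> effectively isotropic
```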
At site 1, the anisotropy shown in the summer dry season in the shallow ground indicates a better pore fluid connection along the direction of strike, which is the NS direction. This interpretation agrees with [16,17]. During the wetter winter season, more abundant pore fluid obscured this trend and made the shallow ground appear isotropic. On the other hand, the moisture in the deeper ground is always moving toward the east, which produces the EW anisotropy in both the wetter winter and the drier summer for the deeper ground; the higher amount of flow makes the anisotropy stronger during the winter. Also, the shallow ground along the trench lacks anisotropy (a ratio of 0.97), while the undisturbed ground shows significant anisotropy (0.84). These data suggest that the
material along the disturbed trench is more porous and homogenous than that on the undisturbed ground.
At site 2, in addition to the significantly lower resistivity, the anisotropy is also significant in both the shallow and the deep ground. The direction of anisotropy is not only opposite between the shallow ground and the deeper ground, but also opposite to that at site 1 at the same time (Table II). In the shallow ground, which is less than 1.524 m (5 feet) deep, resistivity is lower along the direction of dip (EW). At depths greater than 3.048 m (10 feet), resistivity becomes lower along the direction of strike (NS). This switch of anisotropy suggests that ground moisture is moving down the slope (and the dip) near the surface, but lacks movement in the deeper ground.
IV. CONCLUSIONS
This study measured the resistivity of soil and the underlying weathered rock down to 6.096 m (20 feet) deep, covering the unsaturated zone and the shallow saturated zone. The measurements were taken along the strike and dip directions of the rock formation. The ratio of resistivity along the two directions filters out common factors that change the value of resistivity but preserves the factors that affect the anisotropy. The abundance of moisture in the unsaturated zone and the direction of moisture movement in the saturated zone were the two major factors.
Measured data suggested that the electrical resistivity and anisotropy of shallow ground, less than 6.096 m (20 feet) deep, are easily changed by conditions of depth, landform, geological structure, and weather. Resistivity is also sensitive to the soil and rock conditions; the best illustration of this sensitivity is the comparison of resistivity along a 12-year-old utility trench and the adjacent undisturbed ground. As a result, the method used in this study is useful for indicating minute variations in moisture conditions due to various causes, and it can serve as a quick field method to check the contemporary hydrological conditions of shallow ground.
While the resistivity at site 1 is several times higher than that at site 2, the degree of anisotropy is approximately the same, since the anisotropy is measured by the ratio of resistivity values, even though the direction of anisotropy was reversed with depth during the time of testing. Due to the difference in landform, which affected the direction of moisture flow, the variation of anisotropy at the two sites more likely reflects the characteristics of the pore geometry and the movement of pore fluid.
This study shows that the anisotropy is clearly indicated along the bedding plane. Resistivity data show that when the pore fluid in soil and shallow rocks is stagnant, the moisture in the pore space is better connected along the strike direction than that along the dip direction. This result is in agreement with the study of pore space geometry [18][19][20]. However, when the pore fluid starts to move due to either pore saturation or hydraulic pressure from a slope, it will produce a better electric conductivity along the direction of water movement. In this case, it is the dip direction that has a key role. Since the degree of water saturation in the unsaturated zone is easily modified by the weather and season, the direction of electrical anisotropy may also be changed when moisture conditions are changed. | 3,469.2 | 2013-06-23T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Opinion Dynamics with Varying Susceptibility to Persuasion via Non-Convex Local Search
A long line of work in social psychology has studied variations in people's susceptibility to persuasion -- the extent to which they are willing to modify their opinions on a topic. This body of literature suggests an interesting perspective on theoretical models of opinion formation by interacting parties in a network: in addition to considering interventions that directly modify people's intrinsic opinions, it is also natural to consider interventions that modify people's susceptibility to persuasion. Motivated by this, we propose a new framework for social influence. Specifically, we adopt a popular model for social opinion dynamics, where each agent has some fixed innate opinion and a resistance that measures the importance it places on its innate opinion; agents influence one another's opinions through an iterative process. Under non-trivial conditions, this iterative process converges to some equilibrium opinion vector. In the unbudgeted variant of the problem, the goal is to select the resistance of each agent (from some given range) such that the sum of the equilibrium opinions is minimized. We prove that the objective function is in general non-convex. Hence, formulating the problem as a convex program, as in an early version of this work (Abebe et al., KDD'18), might have potential correctness issues. We instead analyze the structure of the objective function and show that any local optimum is also a global optimum, which is somewhat surprising given that the objective function might not be convex. Furthermore, we combine the iterative process and the local search paradigm to design very efficient algorithms that can solve the unbudgeted variant of the problem optimally on large-scale graphs containing millions of nodes. Finally, we propose and experimentally evaluate a family of heuristics for the budgeted variant of the problem.
Introduction
A rich line of empirical work in developmental and social psychology has studied people's susceptibility to persuasion. This property measures the extent to which individuals are willing to modify their opinions in reaction to the opinions expressed by those around them, and it is distinct from the opinions they express. Research in the area has ranged from adolescent susceptibility to peer pressure related to risky and antisocial behavior [4,18,20,43,48] to the role of susceptibility to persuasion in politics [22,39,41]. Individuals' susceptibility to persuasion can be affected by specific strategies and framings aimed at increasing susceptibility [10,11,12,34,38,47,46]. For instance, if it is known that an individual is receptive to persuasion by authority, one can adopt a strategy that utilizes arguments from official sources and authority figures to increase that individual's susceptibility to persuasion with respect to a particular topic.
Modifying network opinions has far-reaching implications including product marketing, public health campaigns, the success of political candidates, and public opinions on issues of global interest. In recent years, there has also been work in Human Computer Interaction focusing on persuasive technologies, which are designed with the goal of changing a person's attitude or behavior [21,30,34]. This work has shown that not only do people differ in their susceptibility to persuasion, but that persuasive technologies can also be adapted to each individual to change their susceptibility to persuasion. Despite the long line of empirical work emphasizing the importance of individuals' susceptibility to persuasion, to our knowledge theoretical studies of opinion formation models have not focused on interventions at the level of susceptibility. Social influence studies have considered interventions that directly act on the opinions themselves, both in discrete models (e.g., [19,35,1,6,29,33]) and more recently in continuous models [27,40].
In this work, we adopt an opinion formation model inspired by the work of DeGroot [17] and Friedkin and Johnsen [23], and we initiate a study of the impact of interventions at the level of susceptibility. In this model, each agent i is endowed with an innate opinion s_i ∈ [0, 1] and a parameter representing susceptibility to persuasion, which we will call the resistance parameter, α_i ∈ (0, 1]. The innate opinion s_i reflects the intrinsic position of agent i on a certain topic, while α_i reflects the agent's willingness, or lack thereof, to conform with the opinions of neighbors in the social network. We term α_i the agent's "resistance" because a high value of α_i corresponds to a lower tendency to conform with neighboring opinions. According to the opinion dynamics model, the final opinion of each agent i is a function of the social network, the set of innate opinions, and the resistance parameters, determined by computing the equilibrium state of a dynamic process of opinion updating. We study the following natural question: Problem 1. Given an opinion dynamics model and a set of agents, each of whom has an innate opinion that reflects the agent's intrinsic position on a topic and a range for the resistance parameter measuring the agent's propensity for changing their opinion, how should we set the agents' resistance parameters in order to minimize the total sum of opinions at equilibrium? Observe that the problem is trivial if the resistance of each agent can be picked from the closed interval [0, 1]: for minimizing the equilibrium opinions, it suffices to make the agent with the minimum innate opinion the most resistant (setting its resistance to 1) and everyone else totally compliant (setting their resistance to 0). Similarly, for the maximization problem it suffices to make the agent with the maximum innate opinion the most resistant and the rest of the nodes totally compliant. The problem is non-trivial if the resistance α_i of each agent i can only take values from some interval [l_i, u_i], where 0 < l_i < u_i < 1. We discuss the model and Problem 1 in greater detail in Section 3.
Our Contributions. In this work, we make the following key contributions.
• Opinion Dynamics with Varying Susceptibility to Persuasion. We introduce a novel framework for social influence that focuses on interventions at the level of susceptibility.
• Analysis of the unbudgeted problem structure. We prove that the objective function is in general neither convex nor concave. We analyze the mathematical structure of the problem in Section 5. Perhaps the most important technical insight in this paper is that we show (in Lemma 5.9) that if the current vector solution is not optimal, then there exists a coordinate that can be flipped such that the objective will be strictly improved. This shows that an optimal vector can be found by a simple local search algorithm.
• Local search with irrevocable updates. In general, local search could still take exponential time to find an optimal solution, for instance, the simplex algorithm for linear programming. For minimizing the sum of equilibrium opinions, we show (in Lemma 6.1) that starting from the upper bound resistance vector, then the local search algorithm will flip each coordinate at most once, which implies that an optimal vector can be found in polynomial time.
• Efficient Local Search on Large-Scale Graphs. Typically, in local search, the objective function needs to be evaluated at the current solution in each step. However, since the objective function involves matrix inverse, its evaluation will be too expensive when the dimension of the matrix is in the order of millions. Instead, we use the iterative process of the opinion dynamics model itself to approximate the equilibrium vector. We have developed several update strategies for local search.
For conservative or opportunistic updates, one always makes sure that the error of the estimated equilibrium vector is small enough before any coordinate of the resistance vector is flipped. For optimistic update, one might flip a coordinate of the resistance vector even before the estimated equilibrium vector is accurate enough. However, this might introduce mistakes which need to be corrected later. Nevertheless, experiments show that mistakes are rarely made by the optimistic update strategy. In any case, for all three update strategies, an optimal vector will be returned when the local search terminates.
Our approaches are scalable and can run on networks with millions of nodes. We report the experimental results in Section 8. In particular, using multiple threads, the optimistic update strategy can solve the problem optimally on networks with up to around 65 million nodes.
• Scalable Heuristics for the Budgeted problem. We provide a family of efficient heuristics for the budgeted version of our problem, and a detailed experimental evaluation on large-scale real-world networks.
Comparison with Previous Versions. A preliminary version [2] of this work presented the problem, but it was overlooked that the objective function might not be convex or concave. A subsequent work [9] rectified this issue, and showed that local search can be performed efficiently to reach the optimal solution, even if the objective function is non-convex. The current presentation combines results from the aforementioned two works [2,9]. We have also included a more detailed Section 7 on heuristic algorithms for the budgeted version of the problem, and the related experiments in Section 9.
Related work
To our knowledge, we are the first to consider an optimization framework based on opinion dynamics with varying susceptibility to persuasion. In the following we review briefly some work that lies close to ours.
Susceptibility to Persuasion. Asch's conformity experiments are perhaps the most famous study on the impact of agents' susceptibility to change their opinions [5]. This study shows how agents have different propensities for conforming with others. These propensities are modeled in our context by the set of parameters α. Since the work of Asch, there have been various theories on peoples' susceptibility to persuasion and how these can be affected. A notable example is Cialdini's Six Principles of Persuasion, which highlight reciprocity, commitment and consistency, social proof, authority, liking, and scarcity, as key principles which can be utilized to alter peoples' susceptibility to persuasion [10,11]. This framework, and others, have been discussed in the context of altering susceptibility to persuasion in a variety of contexts. Crowley and Hoyer [12], and McGuire [38] discuss the 'optimal arousal theory', i.e., how novel stimuli can be utilized for persuasion when discussing arguments.
Opinion Dynamics Models. Opinion dynamics model social learning processes. DeGroot introduced a continuous opinion dynamics model in his seminal work on consensus formation [17]. A set of n individuals in society start with initial opinions on a subject. Individual opinions are updated using the average of the neighborhood of a fixed social network. Friedkin and Johnsen [23] extended the DeGroot model to include both disagreement and consensus by mixing each individual's innate belief with some weight into the averaging process. This has inspired a lot of follow up work, including [3,8,14,26,27].
Optimization and Opinion Dynamics. Bindel et al. use the Friedkin-Johnsen model as a framework for understanding the price of anarchy in society when individuals selfishly update their opinions in order to minimize the stress they experience [8]. They also consider network design questions: given a budget of k edges, and a node u, how should we add those k edges to u to optimize an objective related to the stress? Gionis, Terzi, and Tsaparas [27] use the same model to identify a set of target nodes whose innate opinions can be modified to optimize the sum of expressed opinions. Musco, Musco, and Tsourakakis adopt the same model to understand which graph topologies minimize the sum of disagreement and polarization [40] .
Inferring opinions and conformity parameters. While the expressed opinion of an agent is readily observable in a social network, both the agent's innate opinion and conformity parameter are hidden, and this leads to the question of inferring them. Such inference problems have been studied by Das et al. [13,15]. Specifically, Das et al. give a near-optimal sampling algorithm for estimating the true average innate opinion of the social network and justify the algorithm both analytically and experimentally [15]. Das et al. view the problem of susceptibility parameter estimation as a problem in constrained optimization and give efficient algorithms, which they validate on real-world data [13].
Non-Convex Optimization. In general, optimizing a non-convex function under non-convex constraints is NP-hard. However, in many cases, one can exploit the structure of the objective function or constraints to devise polynomial-time algorithms; see the survey by Jain and Kar [31] on non-convex optimization algorithms encountered in machine learning. Indeed, variants of the gradient descent have been investigated to escape saddle points by Jin et al. [32], who also gave examples of problems where all local optima are also global optima; some examples are tensor decomposition [24], dictionary learning [45], phase retrieval [44], matrix sensing [7,42] and matrix completion [25]. However, all these problems involve some quadratic loss functions, whose structures are totally different from our objective functions which involve matrix inverse.
Hartman [28] considered the special case that the objective function is the difference of two convex functions. Strekalovsky devised a local search method to optimize such objective functions. Even though the objective functions in our problem are somewhere convex and somewhere concave (see Figure 5.1), it is not immediately clear if they can be expressed as differences of convex functions.
Model
Let G = (V, E) be a simple, undirected graph, where V = [n] is the set of agents and E is the set of edges. Each agent i ∈ V is associated with an innate opinion s_i ∈ [0, 1], where higher values correspond to more favorable opinions towards a given topic, and a parameter measuring the agent's susceptibility to persuasion α_i ∈ (0, 1], where higher values signify agents who are less susceptible to changing their opinion. We call α_i the resistance parameter. The opinion dynamics evolve in discrete time according to the following model, inspired by the work of [17,23]: z_i^{(t+1)} := α_i s_i + (1 − α_i) · (1/deg(i)) · Σ_{j ∈ N(i)} z_j^{(t)} (3.1). Here, N(i) = {j : (i, j) ∈ E} is the set of neighbors of i, and deg(i) = |N(i)| is the degree of node i. Equivalently, defining A = Diag(α) to be the diagonal matrix with A_{ii} = α_i, I the identity matrix, and P ∈ [0, 1]^{V×V} the row-stochastic matrix (i.e., each entry of P is non-negative and every row sums to 1) that captures the agents' interactions, we can rewrite Equation (3.1) as z^{(t+1)} := As + (I − A)P z^{(t)}. Equating z^{(t)} with z^{(t+1)}, one can see that the equilibrium opinion vector is given by z = [I − (I − A)P]^{−1} As, which exists under non-trivial conditions, such as every α_i > 0. In the rest of this paper, we always call P the interaction matrix.
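As a minimal illustration of this equilibrium computation (the graph, opinions, and resistances below are our own toy values, not data from the paper), the following Python/NumPy sketch evaluates z = [I − (I − A)P]^{−1} As by solving the corresponding linear system:

```python
import numpy as np

def equilibrium_opinions(P, s, alpha):
    """Equilibrium z = [I - (I - A)P]^{-1} A s for the opinion dynamics above."""
    A = np.diag(alpha)
    n = len(s)
    return np.linalg.solve(np.eye(n) - (np.eye(n) - A) @ P, A @ s)

# Toy 3-agent path graph (agent 1 is linked to agents 0 and 2); all values are illustrative.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])      # row-stochastic interaction matrix
s = np.array([0.9, 0.2, 0.6])        # innate opinions
alpha = np.array([0.3, 0.5, 0.4])    # resistance parameters in (0, 1]
z = equilibrium_opinions(P, s, alpha)
print(z, z.sum())
```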
We quantify Problem 1 as follows. The objective is to choose a resistance vector α to minimize the sum of equilibrium opinions ⟨1, z⟩ = 1^⊤ z. Observe that one can also consider maximizing the sum of equilibrium opinions; however, since the techniques are essentially the same, we will focus on the minimization variant of the problem.
Definition 3.1 (Opinion Susceptibility Problem) Given a set V of agents with innate opinions s ∈ [0, 1]^V and interaction matrix P ∈ [0, 1]^{V×V}, suppose for each i ∈ V, its resistance is restricted to some interval I_i := [l_i, u_i] ⊆ (0, 1]. The objective is to choose α ∈ I^V := ×_{i∈V} I_i ⊆ [0, 1]^V such that the following objective function is minimized: f(α) := 1^⊤ [I − (I − A)P]^{−1} As, where A = Diag(α). Observe that the assumption α > 0 ensures that the above inverse exists.
Unbudgeted vs Budgeted Variants. In Definition 3.1, we are allowed to modify the resistance of any agent, and this is known as the unbudgeted variant. We also consider the budgeted variant: given some initial resistance vector and a budget k, the resistance of at most k agents can be changed. In this paper, we focus on efficient algorithms that optimally solve the unbudgeted variant. In Section 4, we prove that the budgeted variant is NP-hard, and we propose efficient heuristics that scale to large networks. Designing algorithms with solid approximation guarantees for the budgeted variant is an interesting open problem.
Technical Assumption. To simplify our proofs, we assume that the interaction matrix P corresponds to an irreducible random walk. Irreducibility is satisfied if P arises from a connected graph.
NP-hard Budgeted Opinion Susceptibility Problem
We now consider the setting where there is a constraint on the size of the target set. That is, we want to identify a set T ⊆ V of size k such that changing the resistance parameters of the agents in T optimally maximizes (resp. minimizes) the sum of equilibrium opinions. We use α^{(0)} to denote the given initial resistance vector. For T ⊆ V, we define F(T) := max{ f(α) : α_i = α_i^{(0)} for all i ∉ T }; observe that F is defined with respect to the initial resistance vector α^{(0)}. The budgeted opinion optimization problem is to maximize F(T) subject to the budget constraint |T| ≤ k. Theorem (NP-hardness). The budgeted opinion optimization problem is NP-hard. Proof: We give a reduction from the vertex cover problem for regular graphs. Given a d-regular graph G = (V, E) and an integer K, the vertex cover problem asks whether there exists a set S of nodes of size at most K such that S is a vertex cover, i.e., every edge in E is incident to at least one node in S. For simplicity, we assume that √d is an integer.
We give a reduction from the above vertex cover problem to the decision version of the opinion optimization problem. In addition to a graph G′, the innate opinion vector s, the initial resistance vector α^{(0)}, and the budget k, an instance of the decision version of opinion maximization also has some threshold θ. The instance is a "yes" instance iff there exists some node set T in G′ with size at most k such that F(T) ≥ θ. To illustrate our ideas, we first give a reduction in which each agent's resistance parameter is in the range [0, 1]. Then, we show how to restrict the resistance to the range [ε, 1] for some small enough ε > 0.
Reduction Construction. Suppose we are given an instance of the vertex cover problem for regular graphs. We construct an instance of the decision version of the opinion optimization problem on a graph G′ = (V ∪ V′, E ∪ E′), where V and E come from the original vertex cover problem and V′ and E′ are additional nodes and edges. For i ∈ V, s_i = 1 and α_i^{(0)} = 0; for i ∈ V′, s_i = 0, and we give more details on their initial resistance parameters below. The additional nodes V′ and edges E′ are added as follows. Let σ = 2n²(√d + 1) (its role is specified later).
For each i ∈ V, we add (σ + 1)√d new nodes to V′: first, √d flexible nodes, each of which has degree 1 and is connected only to node i, with initial resistance parameter 0 in α^{(0)}; second, σ√d stubborn nodes, which form √d cliques, each of size σ, where in each clique exactly one node is connected to i, and all stubborn nodes have initial resistance parameter 1 in α^{(0)}. Observe that in G′, the degree of each node in V is d + 2√d. Finally, we set the budget k = K and choose the threshold θ according to the construction. To complete the reduction proof, we show that there exists a vertex cover of size k in G iff there exists some T ⊆ V ∪ V′ of size k such that F(T) ≥ θ.
Forward Direction. Suppose that in G there is some vertex cover T ⊂ V of size k. We show that in G′, F(T) ≥ θ: we set α_i = 1 for each i ∈ T, while the resistance parameters of all other nodes remain the same as in α^{(0)}. We next analyze the equilibrium opinion of each node. Observe that all stubborn nodes in V′ have equilibrium opinion 0.
For i ∈ T, node i has equilibrium opinion 1; moreover, all of its √d flexible neighbors in V′ will also have equilibrium opinion 1.
For j ∈ V \ T, we compute its equilibrium opinion z_j. Since T is a vertex cover, all d neighbors of j in V are in T and have equilibrium opinion 1. All √d flexible neighbors of j in V′ have the same equilibrium opinion z_j, while the √d stubborn neighbors have opinion 0. Therefore, z_j satisfies the equation z_j = (d · 1 + √d · z_j + √d · 0)/(d + 2√d), which gives z_j = √d/(√d + 1). Summing all these contributions shows that F(T) ≥ θ. Backward Direction. Suppose there exists some T ⊆ V ∪ V′ of size k such that F(T) ≥ θ. The goal is to show that there is a vertex cover of size k in G. Observe that the innate opinions of nodes in V are 1; hence, if we are allowed to change the resistance of a node i ∈ V, we should set α_i = 1 to maximize the total equilibrium opinion.
We consider the following two cases.
1. Case T ⊆ V, i.e., all vertices in T are from V. We prove that T is a vertex cover in G by contradiction.
Assume that there exists an edge {i, j} ∈ E such that both i, j ∉ T. We derive an upper bound z̄ on the equilibrium opinions of i and j. Observe that for node i, at most (d − 1) of its neighbors are in T. Hence, we have z̄ ≤ ((d − 1) + (1 + √d) z̄)/(d + 2√d), which gives z̄ ≤ (d − 1)/(d + √d − 1) < √d/(√d + 1). Observe also that for any node in V \ T, its equilibrium opinion is maximized when all its neighbors in V are in T.
2. Case T ⊄ V. In this case, we choose T of size k such that F(T) is maximized; if there is more than one such T, we arbitrarily pick one such that |T ∩ V| is maximized. For contradiction's sake, we assume that T \ V ≠ ∅ and that T ∩ V is not a vertex cover of G. (We actually just need the weaker condition that V \ T is non-empty.) We further consider the following cases.
(i) There is some flexible node u in T \ V. Suppose the degree-1 node u is connected to i ∈ V. If i ∉ T, then one can consider T′ := T − u + i; if i ∈ T, then pick any j ∈ V \ T and consider T′ := T − u + j. In either case, it follows that F(T′) ≥ F(T) and |T′ ∩ V| > |T ∩ V|, achieving the desired contradiction.
(ii) There is some stubborn node u in T \ V. Suppose u is in the clique associated with i ∈ T. Observe that at most k nodes in the clique are in T. Hence, it follows that the equilibrium opinion of any stubborn node is at most (k + 1)/σ. Pick any j ∈ V \ T and consider T′ := T − u + j. Note that the equilibrium opinions of the stubborn nodes in the clique of u can drop by at most k(k + 1)/σ < n²/σ in total, while the equilibrium opinion of every other node cannot decrease and that of j increases by strictly more than n²/σ. Hence, F(T′) ≥ F(T) and |T′ ∩ V| > |T ∩ V|, again a contradiction. This completes the reduction proof for the case in which the resistance parameter is chosen in the range [0, 1]. We next show how to modify the proof for the case in which the resistance value is chosen in the interval [ε, 1] for some sufficiently small ε > 0.
The key point is that when we view F(T) as a function of the resistance parameters in the network G′ constructed in the reduction, it is a continuous function. Define γ := θ − θ′ > 0, where θ and θ′ are as defined above.
One can choose ε > 0 small enough such that the following holds: in the above proof, if we replace any 0 resistance value with ε, then every relevant value of F(T) changes by less than γ. This completes the proof.
Structural Properties of Objective Function
In this section, we investigate the properties of the objective function f in Definition 3.1; we assume that the interaction matrix P and the innate opinion vector s are fixed, so that f is a function of the resistance vector α.
Non-convex Objective. Contrary to the claim in a preliminary version of this work (see [2]), the objective f in Definition 3.1 is in general not a convex function of α. In fact, the following example shows that it might be neither convex nor concave. Suppose we fix α_2 = α_3 = 0.1 and consider the objective as a function of α_1, namely g(α_1) = 1^⊤ [I − (I − A)P]^{−1} As. Then, the plot of g in Figure 5.1(a) shows that it is not convex. Moreover, suppose this time we fix α_1 = α_2 = 0.1 and consider the objective as a function of α_3, namely h(α_3) = 1^⊤ [I − (I − A)P]^{−1} As. Then, the plot of h in Figure 5.1(b) shows that it is not concave. Fortunately, we can still exploit some properties of the function. As we shall see, even when the function is not convex, every local optimum (which will be defined formally) is a global optimum. This enables us to use variants of the local search method to solve the problem optimally.
Marginal Monotonicity
As in [2], we show that when one chooses the resistance α_i for each agent i ∈ V, it suffices to consider the extreme points {l_i, u_i}. Our approach explicitly analyzes the partial derivative ∂f(α)/∂α_i, which plays a major role in the local search algorithm that we later develop.
Intuition: Guidance by Current Equilibrium Vector. Observe that given the innate opinion vector s and an irreducible interaction matrix P, for some resistance vector α ∈ (0, 1)^V, the equilibrium opinion vector is given by z(α) = [I − (I − A)P]^{−1} As, where A = Diag(α). For some i ∈ V, if the innate opinion s_i is larger than its equilibrium z_i(α), this suggests that by being more stubborn, agent i should be able to increase its equilibrium opinion. In other words, one would expect ∂z_i(α)/∂α_i and s_i − z_i(α) to have the same sign. However, what is surprising is that, as we shall see in Lemma 5.2, ∂z_j(α)/∂α_i has the same sign as s_i − z_i(α) even for any j ∈ V. Notation. Given K ⊆ V, we write α_{−K} for the vector in which the coordinates K of α are replaced with 0; similarly, A_{−K} := Diag(α_{−K}). Since we wish to analyze the effect on f(α) of changing only a subset of coordinates in α, the next lemma will be used for simplifying matrix arithmetic involving the computation of inverses. Its proof is deferred to Section 10.
Lemma 5.1 (Inverse Arithmetic) Given K ⊊ V and α ∈ (0, 1)^V, let A := Diag(α) and recall that P is the irreducible interaction matrix. Then, the inverse M = [I − (I − A_{−K})P]^{−1} exists, and every entry of M is positive. Moreover, for each k ∈ V, define a_k = 0 if k ∈ K, and a_k = α_k otherwise; then the following identity holds.
Lemma 5.2 (Sign of Partial Derivative)
In the Opinion Susceptibility Problem in Definition 3.1, given the innate opinion vector s and the irreducible interaction matrix P, recall that z(α) := [I − (I − A)P]^{−1} As, where A = Diag(α). Then, for any α ∈ (0, 1)^V and any i, k ∈ V, the two values ∂z_k(α)/∂α_i and s_i − z_i(α) have exactly the same sign in {−, 0, +}.
In particular, this implies that ∂f(α)/∂α_i also has the same sign as s_i − z_i(α).
Proof: By the definition of the inverse of a matrix B, we have BB^{−1} = I. Hence, the partial derivative of the inverse with respect to a variable t is ∂B^{−1}/∂t = −B^{−1} (∂B/∂t) B^{−1}. Applying this with B = I − (I − A)P and using Lemma 5.1 with K = ∅, so that every entry of M = B^{−1} is positive, the sign of ∂z_k(α)/∂α_i is the same as that of the scalar s_i − e_i^⊤ P z(α). Finally, since z_i(α) = α_i s_i + (1 − α_i) e_i^⊤ P z(α), this scalar has the same sign as s_i − z_i(α), which completes the proof.
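As a quick numerical sanity check of this sign property (reusing the equilibrium_opinions helper and the toy P, s, alpha from the earlier sketch; the perturbation size is an arbitrary choice of ours):

```python
import numpy as np

# Perturb alpha_i and confirm every coordinate of z moves in the direction of sign(s_i - z_i).
i, eps = 0, 1e-6
z = equilibrium_opinions(P, s, alpha)
alpha_pert = alpha.copy()
alpha_pert[i] += eps
z_pert = equilibrium_opinions(P, s, alpha_pert)
print(np.all(np.sign(z_pert - z) == np.sign(s[i] - z[i])))  # expected: True
```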
The next lemma shows that the sign of the partial derivative with respect to coordinate i is actually independent of the current value of α_i.
Local vs Global Optimum
As shown in Corollary 5.4, it suffices to choose the resistance vector α from the extreme points in Definition 3.1. Lemma 5.2 readily gives a method to decide, given a current choice of α, whether the objective f can be decreased by changing the resistance of some agent. In Lemma 5.9, we show that if α is not a global minimum, then such an agent must exist. As we shall see, this implies that a local search method can find a global minimum.
Notation. When we wish to consider the effect of changing the resistance of only two agents i ≠ k ∈ V, we write f(α) = f(α_i, α_k), assuming that α_{−{i,k}} is fixed.
Lemmas 5.6 and 5.7 give some technical results involving changing the resistance of two agents. Their proofs are deferred to Section 10.
In particular, the quantity in Lemma 5.3 can be rewritten in terms of the matrix R defined in Lemma 5.6. The following lemma gives the key insight for why local search works. Intuitively, it shows that there does not exist any discrete "saddle point". Even though its proof is technical, we still include it here because of its importance.
Lemma 5.8 (Switching Lemma)
Recall that f is defined in Definition 3.1 with an irreducible interaction matrix P, and assume |V| ≥ 3. Suppose α, β ∈ (0, 1)^V are such that ∆(α, β) = {i, k} for some i ≠ k. Moreover, suppose further that f(β) < f(α). Then, switching one of the two coordinates of α to that of β strictly decreases f. Proof: We prove the lemma by contradiction. Suppose that neither single-coordinate switch strictly decreases f. We remark that it is important to distinguish between strict and non-strict inequalities. We use the notation f_i to denote the partial derivative with respect to coordinate i.
Combining the assumptions above with the fact that f is marginally monotone (Lemma 5.3) and the fact that f_i(x, α_k) has the same sign in {−, 0, +} for all x ∈ (0, 1), we obtain a first set of the inequalities (5.1) to (5.4). On the other hand, from the strict inequality f(β_i, β_k) < f(β_i, α_k), we know that the partial derivative f_k(β_i, y) must have the same non-zero sign in {−, +}, again by Lemma 5.3. Therefore, we obtain the remaining inequalities among (5.1) to (5.4).
From (5.5) and (5.8) we have If c i c k < 0, then the above expression will be positive, because g k (·) ≥ 0 (we shall see that later). Hence, we conclude that c i and c k have the same sign.
From (5.6) and (5.7) we have Rearranging (5.9) and (5.10), we have: Note that every entry of R is positive by Lemma 5.1 and we can easily prove that g_i(·) and g_k(·) are both strictly increasing functions in [0, 1]. Since α, β ∈ (0, 1)^V, the above two inequalities imply that where R_ik/R_kk ≤ 1 and R_ki/R_ii ≤ 1 are from Lemma 5.7. Notice that we get 0 < c_i/c_k < 1 and 0 < c_k/c_i < 1, which is a contradiction. Hence, the proof is completed. In other words, by switching one coordinate (corresponding to i) of α to that of β, the objective function f decreases strictly.
We consider the inductive step with |∆(α, β)| = q, for some q ≥ 2. Given a list S of coordinates from ∆(α, β), we use α [S] to denote the resulting vector obtained from switching coordinates S of α to those of β.
For contradiction's sake, we assume that for all j ∈ ∆(α, β), switching the single coordinate j of α to that of β does not strictly decrease f, i.e., f(α^{[j]}) ≥ f(α). Next, starting from α, we shall fix all coordinates in V except i and k, and we write the objective f(x, y) as a function of only these two coordinates.
Observe that we have already assumed that which, together with (5.11), will contradict Lemma 5.8.
, which contradicts the choice of i ∈ ∆(α, β) to minimize f (α [i] ). This completes the inductive step and also the proof of the lemma.
Corollary 5.10
For the function f in Definition 3.1, every local minimizer is a global minimizer.
Efficient Local Search
In Section 5, we conclude in Corollary 5.4 that it suffices to consider the extreme points of the search space of resistance vectors. Moreover, Corollary 5.10 states that every local minimizer is a global minimizer. Since we know how to compute the sign of the partial derivative with respect to each coordinate using Lemma 5.2, we can design a simple local search algorithm to find a global minimizer.
However, it is possible that O(2^n) extreme points are encountered before a global minimizer is reached. Fortunately, in this section, we will explore further properties of the objective function, and design a local search algorithm that encounters at most O(n) extreme points before finding a global minimizer.
Irrevocable Updates
Local Search Strategy. We shall start with the upper bound resistance vector, i.e., for each i ∈ V , α i = u i . This induces the corresponding equilibrium opinion vector z(α). According to Lemma 5.2, if there is some agent i such that α i = u i and s i − z i (α) > 0, then we should flip α i to the lower bound l i .
The following lemma shows that each α_i will be flipped at most once. Essentially, we show that we will never encounter the situation that there is some agent k such that α_k = l_k and s_k − z_k(α) < 0, in which case we would have to switch α_k back to u_k.
Proof: We first show that for each agent k ∈ V, the quantity s_k − z_k(α) cannot decrease when α is modified according to the local search strategy. According to the strategy, α is modified because there is some agent i such that α_i = u_i and s_i − z_i(α) > 0. By Lemma 5.2, ∂z_k/∂α_i > 0 for each k ∈ V. Hence, after α_i is switched from u_i to l_i, z_k(α) decreases, and the quantity s_k − z_k(α) increases.
Observe that if a coordinate α k is ever flipped from u k to l k , this means that at that moment, we must have s k − z k (α) > 0, which, as we have just shown, will stay positive after α is subsequently updated.
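As a concrete illustration of this strategy, the following minimal Python sketch (ours; it uses an exact linear solve rather than the approximation discussed in the next subsection, and all names are illustrative) performs the irrevocable updates until no agent qualifies:

import numpy as np

def local_search(P, s, l, u):
    # Start every agent at its upper resistance bound.
    alpha = u.copy()
    n = len(s)
    while True:
        # Equilibrium z(alpha) = [I - (I - A) P]^{-1} A s, computed exactly here.
        z = np.linalg.solve(np.eye(n) - np.diag(1.0 - alpha) @ P, alpha * s)
        flips = [i for i in range(n) if alpha[i] == u[i] and s[i] - z[i] > 0]
        if not flips:
            return alpha      # no qualifying agent remains
        for i in flips:
            alpha[i] = l[i]   # by the discussion above, this flip never needs to be undone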
Approximating the Equilibrium Vector
Observe that in our local search algorithm, we need to compute the equilibrium opinion vector z(α) = [I − (I − A)P]^{−1} As for the current resistance vector α, where A = Diag(α). However, computing a matrix inverse is an expensive operation. Instead, we approximate z(α) using the recurrence z^{(0)} ∈ [0, 1]^V and z^{(t+1)} := As + (I − A)P z^{(t)}. The following lemma gives an upper bound on the additive error for each coordinate.
Lemma 6.2 (Approximation Error) Suppose that for some ε > 0 and for all i ∈ V, α_i ≥ ε. Then, for every
Proof: Using the Neumann series . We next prove, by induction, that for any The base case j = 0 is trivial because every coordinate of x is between 0 and 1. For the inductive step, assume that for some j ≥ 0, every coordinate of y = [(I − A)P]^j x has magnitude at most (1 − ε)^j.
Since P is a row stochastic matrix, it follows that ‖P y‖_∞ ≤ (1 − ε)^j; finally, since α_i ≥ ε for all i ∈ V, we have ‖(I − A)P y‖_∞ ≤ (1 − ε)^{j+1}, completing the induction proof.
Finally, observing that both Σ_{j=t}^∞ [(I − A)P]^j As and [(I − A)P]^t z^{(0)} have non-negative coordinates, we obtain the claimed bound, as required.
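As an illustration of the recurrence (a sketch assuming dense numpy arrays; the error expression only records the (1 − ε)^t-type decay suggested by Lemma 6.2, not its exact constant):

import numpy as np

def approximate_equilibrium(P, s, alpha, t_max):
    # Iterate z <- A s + (I - A) P z instead of forming a matrix inverse.
    z = np.ones_like(s)              # any starting point in [0, 1]^V
    eps = alpha.min()                # every resistance is at least eps
    for _ in range(t_max):
        z = alpha * s + (1.0 - alpha) * (P @ z)
    decay = (1.0 - eps) ** t_max     # order of the remaining additive error
    return z, decay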
Local Search Algorithm
Based on Lemmas 6.1 and 6.2, we give a local search framework in Algorithm 1. Observe that in line 1, we perturb the innate opinions s slightly to ensure that, for each resistance vector α encountered, no coordinate of s coincides with the corresponding coordinate of z(α).
The while loop in line 4 combines the local search that updates α with the estimation of the equilibrium vector z(α). Here are two general update strategies, which are both captured by the non-deterministic step in line 7: • Conservative Update. The opinion vector z is iteratively updated in line 5 until all coordinates of z and s are sufficiently far apart. Then, for every coordinate α_i such that α_i = u_i and z_i < s_i, we flip α_i to the lower bound l_i.
After we update α, we reset t to 0, and continue to iteratively update z. Whenever we update α and set t to 0, we say that a new phase begins; we use the convention that the initial phase is known as phase 0.
• Opportunistic Update. Instead of waiting for the approximation error of every coordinate of z to be small enough, we can update a coordinate α_i as soon as α_i = u_i and z_i ≤ s_i − err(t). However, there is some tradeoff between waiting for the errors of all coordinates to be small enough and updating coordinates of α that are ready sooner. In Section 8, we will empirically evaluate the different update strategies.
Optimistic Update. In both conservative and opportunistic updates, a coordinate α i is flipped only when we know for sure that the current estimate z i has small enough error with respect to the equilibrium z i (α); hence, no mistake in flipping any α i is ever made. However, our insight is that as the algorithm proceeds, the general trend is for every z i to decrease.
The first intuition is that if we start with some z (0) such that every coordinate of z (0) is at least its equilibrium coordinate of z(α), then z (t) should converge to z(α) from above. The second observation is that every time we flip some α i , this will not increase any coordinate of the equilibrium vector z(α), thereby preserving the condition that the current estimate z (t) ≥ z(α). Hence, without worrying about the accuracy of the current estimate z, we will simply flip coordinate α i to l i when z i drops below s i . However, it is possible that we might need to flip α i back to u i , if z i increases in the next iteration and becomes larger than s i again. We shall see in Section 8 that this scenario is extremely rare. Specifically, in line 8 of Algorithm 2, the set J is (almost) always empty.
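A minimal sketch of one optimistic iteration (ours; variable names are illustrative):

import numpy as np

def optimistic_step(P, s, alpha, z, l, u):
    # One update of the estimate, then (re)set coordinates by the sign of s_i - z_i.
    z = alpha * s + (1.0 - alpha) * (P @ z)
    flipped_back = []                 # corresponds to the set J in Algorithm 2
    for i in range(len(s)):
        if z[i] < s[i] and alpha[i] != l[i]:
            alpha[i] = l[i]           # usual case: move to the lower bound
        elif z[i] > s[i] and alpha[i] != u[i]:
            alpha[i] = u[i]           # rare correction of a premature flip
            flipped_back.append(i)
    return alpha, z, flipped_back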
Algorithm 1: Local Search Framework
Input: Innate opinions s ∈ [0, 1]^V; interaction matrix P; for each agent i ∈ V, upper u_i and lower l_i bounds for resistance.
Output: Optimal resistance vector α ∈ ×_{i∈V} {l_i, u_i}.
1 (Technical step.) Randomly perturb each coordinate of s slightly.
2 Initially, for each agent i, set α_i ← u_i to its upper bound; denote α := min_{i∈V} α_i.
3 Pick arbitrary z ∈ [0, 1]^V, and set t ← 0; denote err(t) :
7 (Non-deterministic step.) Pick arbitrary L ⊆ V (that can be empty) such that for each i ∈ L, z_i ≤ s_i − err(t) and α_i = u_i.
8 if L ≠ ∅ then
9   for each i ∈ L do
10    Set α_i ← l_i to its lower bound (and update α).
Algorithm 2: Optimistic Local Search
Input: Innate opinions s ∈ [0, 1]^V; interaction matrix P; for each agent i ∈ V, upper u_i and lower l_i bounds for resistance.
Output: Optimal resistance vector α ∈ ×_{i∈V} {l_i, u_i}.
1 (Technical step.) Randomly perturb each coordinate of s slightly.
2 Initially, for each agent i, set α_i ← u_i to its upper bound; denote α := min_{i∈V} α_i.
  if L ∪ J ≠ ∅ then
10   for each i ∈ L do
11     Set α_i ← l_i to its lower bound (and update α).
12   for each i ∈ J do
13     Set α_i ← u_i to its upper bound (and update α).
14   t ← 0.
15 return Resistance vector α.
Marginal Greedy
We propose the Marginal Greedy in Algorithm 3, which has a similar framework to the greedy heuristic in [2] but employs the optimistic update strategy to approximate the equilibrium opinion vector z(α).
Batch Gradient Greedy
We also give a gradient-based heuristic, called Batch Gradient Greedy (BGG), in Algorithm 4. The while loop in line 6 employs the optimistic update strategy to approximate the equilibrium opinion vector z(α) (as well as r(α)) until it is sufficiently far from s to enter the following procedures.
Observe that in Line 17, we introduce the batch approach to accelerate the algorithm. When dealing
Algorithm 3: Marginal Greedy
Input: Innate opinions s ∈ [0, 1]^V; initial resistance vector α^(0); budget k; interaction matrix P; for each agent i ∈ V, upper u_i and lower l_i bounds for resistance.
Output: The optimal set T of agents with changed resistance and the corresponding resistance vector α with ∀i ∈ T: α_i ∈ {l_i, u_i} and ∀i ∈ V \ T: α_i = α_i^(0).
   for each v ∈ V \ T do
6    Set α^(j) ← α^(j−1) and α^(j)_v ← u_v to its upper bound (and update α).
7    Set z ← (1, 1, . . . , 1) and t ← 0.
12   if L ∪ J ≠ ∅ then
13     for each i ∈ L do
14       Set α^(j)_i ← l_i to its lower bound (and update α).
15     for each i ∈ J do
16       Set α^(j)_i ← u_i to its upper bound (and update α).
17     t ← 0.
18   if f > Σ_{i∈V} z_i then
19     Set f ← Σ_{i∈V} z_i.
20     Update the selected agent v* ← v and the corresponding resistance vector α* ← α^(j).
21 Update the set of selected agents T ← T ∪ {v*} and the corresponding resistance vector α^(j) ← α*.
22 return The set of agents T and resistance vector α^(k).
with a large-scale network and a large budget, we can set the batch size proportional to the budget, e.g., 1% of the budget, to limit the number of times the outer while loop runs.
From lines 18 to 24, we consider a measure δ_i for each agent i, namely the partial derivative times the change of resistance, to decide which agents to include in the batch in order to maximize the decrease of the equilibrium. Since we are using the optimistic update strategy to approximate z and r, we need to estimate lower and upper bounds on δ_i. Then in lines 25 to 28, we try to pick a subset of agents such that their minimal measure lower bound is greater than the maximal measure upper bound of the remaining agents in V \ T, i.e., to make sure that we select a batch of agents with the greatest measure. Otherwise, we discard the subset and do one more update of r and z until we can find such a subset.
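For illustration only, the bound-comparison test just described could look like the following sketch (the names delta_lo and delta_hi for the lower and upper bounds on δ_i, and the ranking rule, are ours; the paper's exact bookkeeping is not reproduced):

def pick_batch(delta_lo, delta_hi, candidates, batch_size):
    # Rank candidates by the lower bound of their measure and tentatively take the top b.
    ranked = sorted(candidates, key=lambda i: delta_lo[i], reverse=True)
    batch, rest = ranked[:batch_size], ranked[batch_size:]
    # Accept only if every agent in the batch provably beats every agent outside it.
    if not rest or min(delta_lo[i] for i in batch) > max(delta_hi[i] for i in rest):
        return batch
    return None  # bounds still too loose: refine z and r once more, then retry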
Approximating the Derivative Vector
In Algorithm 4, we compute the partial derivative vector d(α) = ×_{i∈V} ∂f(α)/∂α_i.
Algorithm 4: Batch Gradient Greedy
Input: Innate opinions s ∈ [0, 1]^V; initial resistance vector α^(0); budget k; batch size b; interaction matrix P; for each agent i ∈ V, upper u_i and lower l_i bounds for resistance; precision ρ.
Output: The optimal set T of agents with changed resistance and the corresponding resistance vector α with ∀i ∈ T: α_i ∈ {l_i, u_i} and ∀i ∈ V \ T: α_i = α_i^(0).
1 (Technical step.) Randomly perturb each coordinate of s slightly.
2 Initialize the resistance vector α ← α^(0) and the set of agents T ← ∅.
3 Denote α := min_{i∈V} α_i and err(t) :
  Set z ← (1, 1, . . . , 1); r ← (1, 1, . . . , 1); t ← 0.
6 while ∃i ∈ V : |s_i − z_i| ≤ err(t) do
7   r ← 1 + P(I − A)r and z ← As
10  if L ∪ J ≠ ∅ then
11    for each i ∈ L do
12      Set α_i ← l_i to its lower bound (and update α).
13    for each i ∈ J do
14      Set α_i ← u_i to its upper bound (and update α).
      Set α_i ← u_i to its upper bound (and update α).
34 Update the set of selected agents T ← T ∪ T′.
35 Run the same code as Lines 5 to 15, without updating r.
36 return The set of agents T and resistance vector α.
Proof: We use the Neumann series. The base case j = 0 is trivial.
For the inductive step, assume that for some j ≥ 0, 1ᵀ[P(I − A)]^j 1 ≤ n(1 − ε)^j. Since P is a row stochastic matrix, it follows that P1 = 1. Hence, where the inequality holds because every entry of the row vector 1ᵀ[P(I − A)]^j is non-negative. The inductive step is completed by using the induction hypothesis.
Finally, we have as required.
Experiments
Experimental Setup. Our experiments run on a server with a 2.1 GHz Intel Xeon Gold 6152 CPU and 64GB of main memory. The administrator limits the server to at most 24 active threads. The real network topologies we use in our experiments are shown in Table 1; we interpret each network as an undirected graph. The number n of nodes in the dataset networks ranges from about 1 million to 65 million; in each network, the number m of edges is around 2n to 30n.
Input Generation. For each dataset, we utilize the network topology and generate the input parameters as follows. The innate opinion s_i of each agent i is independently generated uniformly at random from [0, 1]. For each edge {i, j} in the network, we independently pick w_ij uniformly at random from [0, 1]; otherwise, we set w_ij = 0. For (i, j) ∈ V × V, we normalize P_ij := w_ij / Σ_{k∈V} w_ik. From Lemma 6.2, one can see that approximating the equilibrium opinions is more difficult when the resistance is low. However, since we still want to demonstrate that the resistance of each agent can have varied lower and upper bounds, we set the lower bound l_i of each agent i independently such that with probability 0.99, l_i equals 0.001, and with probability 0.01, it is picked uniformly at random from [0.001, 0.1]. Similarly, each upper bound u_i is independently selected such that with probability 0.99, it equals 0.999, and with probability 0.01, it is chosen uniformly at random from [0.9, 0.999].
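The following dense-matrix sketch (ours; it would not scale to the million-node networks above, and it assumes every node has at least one edge) shows one way to realize this input generation:

import numpy as np

def generate_input(adjacency, rng):
    # adjacency: symmetric 0/1 matrix of the undirected network.
    n = adjacency.shape[0]
    s = rng.uniform(0.0, 1.0, size=n)                  # innate opinions
    W = np.triu(rng.uniform(0.0, 1.0, size=(n, n)), 1) # one weight per unordered pair
    W = (W + W.T) * (adjacency > 0)                    # keep only actual edges, symmetrically
    P = W / W.sum(axis=1, keepdims=True)               # P_ij = w_ij / sum_k w_ik
    l = np.where(rng.random(n) < 0.99, 0.001, rng.uniform(0.001, 0.1, size=n))
    u = np.where(rng.random(n) < 0.99, 0.999, rng.uniform(0.9, 0.999, size=n))
    return s, P, l, u

For example, rng = np.random.default_rng(0) provides the random generator.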
Update Strategies Comparison
We compare the following three update strategies described in Section 6: conservative, opportunistic and optimistic. For the three smaller networks (com-Youtube, com-LiveJournal, LiveJournal), we apply all three update strategies. For the largest network (com-Friendster), we only report the performance of the optimistic update strategy, as the other two update strategies are not efficient enough for such a large dataset.
Experimental Setup. For fair comparison among the update strategies, we always initialize z = (1, 1, ..., 1). To compare their performances, we plot a curve for each strategy. The curves have a common x-axis, which corresponds to the number of times that the vector z has been updated so far, i.e., the number of times line 5 (in both Algorithms 1 and 2) has been executed. Since line 5 is the most time-consuming part of the algorithms, it will be a suitable common reference. We use the term iteration to refer to each time z is updated. For each update strategy, we explain what is plotted for the y-axis.
• Conservative Strategy. We run Algorithm 1 such that in line 7, L is non-empty only if for all i ∈ V , |s i − z i | > err(t), in which case, we pick L to be the collection of all i's such that α i = u i and z i ≤ s i − err(t).
For the y-axis, we plot the ratio of agents i for which currently α i = l i , or we know definitely that α i should be switched to l i , i.e., currently α i = u i and z i ≤ s i − err(t).
In Algorithm 1, the iterations (referring to each time z is updated) are grouped into phases, where a non-empty L in an iteration marks the end of a phase. Observe that at the end of a phase, for all i ∈ L, α_i is set to l_i and t is reset to 0. Hence, in the next iteration, no coordinate α_i is certain to be switched. As a result, the curve has a step-like shape, where each plateau occurs after the end of each phase.
Observe that initially α = min_i u_i ≥ 0.9. Hence, it takes very few iterations to satisfy ∀i ∈ V : |s_i − z_i| > err(t); we call this phase 0. At the end of phase 0, we set some α_i = l_i and α decreases significantly. Hence, subsequent phases have many more iterations.
Observe that we can stop the iterative process when, for all i ∈ V, |s_i − z_i| > err(t), but there is no i ∈ V such that z_i < s_i and α_i = u_i. This marks the end of the curve.
In each phase, we pick L of line 7 as the collection of all i's such that z i ≤ s i − err(t) only when ∀i ∈ V : |s i − z i | > err(t) (otherwise, we pick L = ∅). Then, we set α i = l i for each i ∈ L and t = 0. We call such a phase a conservative phase.
• Opportunistic Strategy. We run Algorithm 1 similarly as before. Phase 0 is the same as the conservative strategy; we call a phase conservative, if it follows the conservative update strategy.
Starting from phase 1, we can perform it in an opportunistic manner as follows. Recall that at the beginning of a phase, t has just been reset to 0. At the t-th iteration of that phase, we use L(t) to denote the collection of i's such that z_i ≤ s_i − err(t). For every 1000 iterations, we compute an estimate slope(k) := (|L(1000k)| − |L(1000(k−1))|)/1000, and let k_m denote the maximum slope computed so far. After some iteration, if the estimated slope drops below some factor (we use 0.1 in our experiments) of k_m, then we end this phase. Intuitively, each additional iteration then flips only a small number of coordinates α_i, and hence, one would like to end this phase. We call such a phase opportunistic.
Since typically the total number of phases is around 8, we run phases 1 to 6 opportunistically, after which we run the remaining phases conservatively to make sure that all coordinates α_i that need to be changed will be flipped.
As in the conservative update strategy, for the y-axis we plot the ratio of coordinates α_i for which currently α_i = l_i, or which we know for sure should be switched to l_i, i.e., α_i = u_i and z_i ≤ s_i − err(t).
• Optimistic Strategy. We implement Algorithm 2, where in each iteration after z is updated, a coordinate α i is (re)set to l i if z i < s i , and (re)set to u i if z i > s i . For the y-axis, we plot the ratio of coordinates that currently take their lower bounds. The curve ends when enough iterations are performed after some coordinate of α is last updated, in order to ensure that the estimate z is close enough to the equilibrium vector according to Lemma 6.2.
Experiment Results. Each of Figures 8.1, 8.2 and 8.3 shows the plots for the three strategies in the corresponding network (com-Youtube, com-LiveJournal or LiveJournal). Figure 8.4 shows the plot of the optimistic strategy in the com-Friendster network, where the other two strategies are not efficient enough for such a large network. As expected, the opportunistic strategy is slightly better than the conservative strategy. From the positions of the plateaus, we can see that the initial opportunistic phases end sooner than their conservative counterparts. Hence, overall the opportunistic strategy performs slightly better than the conservative strategy; in increasing sizes of the three tested networks, the numbers of iterations taken by the opportunistic strategy are 79.2%, 77.9% and 71.5%, respectively, of those taken by the conservative strategy.
On the other hand, the optimistic strategy can achieve the optimal resistance vector with far fewer iterations than the other two strategies. In increasing order of size of the three smaller networks, the numbers of iterations taken by the optimistic strategy are only 12.8%, 13.4% and 12.4%, respectively, of those taken by the conservative strategy. Moreover, the optimistic strategy makes very few mistakes; in increasing order of size of the four networks, the numbers of times coordinates are flipped from lower bounds back to upper bounds are 1, 0, 13 and 168, which are negligible for networks with millions of nodes.
Running Time with Multiple Threads
We compare the actual running time using different numbers of threads for the optimistic strategy on only the three smaller networks, since the largest network takes too long using only one thread. Using all 24 available threads, running the optimistic strategy on the com-Friendster network already takes around 50 hours.
The three bar graphs in Figure 8 show, for the three smaller networks, how the running time varies with the number of threads. Since updating z (line 5 of Algorithm 2) is the most time-consuming part of the algorithm, the fact that it is readily parallelizable supports the empirical result that using multiple threads can greatly reduce the running time, where the effect is more prominent for larger networks.
Experiments for Budgeted Variant
Experimental Setup. We conduct the experiments on a server with 3.4 GHz Intel(R) Core(TM) i5-3570 CPU and 16GB of main memory. The server can activate at most 4 threads. The real network topologies we use are also shown in Table 1.
Input Generation. In each instance of the following experiments, we generate a setup of s, P, u, l and α (0) randomly in a similar way to that in Section 8. Particularly, the initial resistance α (0) i of each agent i is independently selected uniformly at random from [l i , u i ] after l i and u i are generated.
Agent Selection Strategies Comparison
As shown in Algorithms 3 and 4, we have two heuristic strategies, Marginal Greedy and Batch Gradient Greedy (BGG), to select new agents into T. We also run experiments on a trivial random node selection strategy as a baseline. To compare their performance, we give the average equilibrium opinion z_avg obtained by these three agent selection strategies on small networks in Figure 9.1. For a fair comparison, we use the same setup of s, P, u, l and α^(0) on the same network. In each graph, one curve represents one strategy. The curves share the same x-axis, which corresponds to the ratio of agents selected in T.
Observe that Marginal Greedy and Batch Gradient Greedy with batch size 1 have almost the same z_avg as the ratio of agents in T changes, which implies that they share similar performance, while the random node selection strategy performs the worst among them. We also run experiments using Batch Gradient Greedy with different constant batch sizes. When the batch size is small enough relative to the number of agents in the network, Batch Gradient Greedy has similar performance to Marginal Greedy. We will show more results on choosing different batch sizes in the next section.
Batch Size Comparison
When the budget is proportional to the number n of agents, if we use a constant batch size, the number of times a batch is picked would be O(n), which can be too many on a large-scale network. One way to solve this problem is to choose the batch size proportional to the number of agents (or the budget), e.g., 1% of n.
Then the number of times a batch is picked becomes O(1). Figure 9.2 gives the average equilibrium opinion when using Batch Gradient Greedy (BGG) with different batch sizes on large graphs. We can see that batch sizes from 1% to 10% achieve similarly good performance. The result for batch size 20% is slightly worse but acceptable, while batch size 50% performs the worst among them and is not a good choice. Thus, it is recommended to select 10% of the number of agents (or the budget) or less as the batch size to balance speed and performance.
Running Time Comparison
We compare the actual running time of Marginal Greedy and Batch Gradient Greedy using different methods to compute (or approximate) the equilibrium on only the small networks, since the results on larger networks are too time-consuming to collect. For each network, we run 30 different random setups of s, P, u, l and α^(0). Then we record the running time to select the first batch for different batch sizes in each setup, and give the average and standard deviation in the bar graphs. Observe that the matrix inverse is faster for networks with fewer than a few thousand agents. But roughly starting from Chess, the matrix inverse requires more time to select a new batch than the random walk recurrence, which implies that the complexity of the matrix inverse is much higher.
In Figure 9.3(e), we show the running time results using Marginal Greedy. We run only one random setup of s, P, u, l and α^(0) on the same network and collect the running time to select the first 30 agents. Then we report the average and standard deviation of the running time to select a new agent. Together with Figures 9.3(a) to (d), we see that Marginal Greedy is significantly more time-consuming than Batch Gradient Greedy, since Marginal Greedy has to compute the equilibrium opinion vector for each candidate before selecting the best one. Considering that they have similar performance, as shown in Section 9.1, Batch Gradient Greedy with a proper batch size is much more efficient than Marginal Greedy.
Resistance Generation From Power Law Distribution
We run experiments with the initial resistance vector α^(0) generated from a power law distribution instead of the uniform distribution, to see how the heuristic algorithms respond to different distributions. Particularly, each coordinate α^(0)_i is independently generated from [l_i, u_i] with probability density function f(x) = Ax^{−2}, where A is the normalization constant. Observe that the initial resistance vector generated in this way would have most of its coordinates being low values. Figure 9.4(a) shows the results using the above setup (denoted as power law distribution, low) on Hamsterster, where s, P, u, l are unchanged. We also run experiments using a resistance vector with most of its coordinates having high values; for each coordinate, the value is again drawn with probability density function f(x) = Ax^{−2} and α^(0)_i is then computed from it. The corresponding results (denoted as power law distribution, high) on Hamsterster are given in Figure 9.4(b). We can see that Marginal Greedy and Batch Gradient Greedy with batch size 1 still have similar performance under different resistance distributions. Figure 9.5 gives the average equilibrium opinion on Google+ with the initial resistance vector α^(0) generated from different distributions. We only run experiments with batch size 1%, since the performance is good enough. We can conclude that under the same s, P, u, l and budget, the obtained average equilibrium opinion would be higher if the agents of the network tend to have higher resistance.

Proof: Observe that P corresponds to an irreducible random walk. Hence, (I − A_{−K})P represents a diluted random walk, where at the beginning of each step, the measure at nodes i ∉ K will suffer a factor of 1 − α_i ∈ (0, 1). The irreducibility of the random walk P means that every state is reachable from any state. Hence, starting from any measure vector, eventually the measure at every node will tend to 0. This means that (I − A_{−K})P has eigenvalues with magnitude strictly less than 1. Therefore, we can consider the following Neumann series of a matrix: which implies that the inverse M exists, and every entry of M is positive; in particular, for every k ∈ V, M_kk > 1.

Observe that e_i^T P M e_i = (P M)_ii and (M e_i e_i^T P M)_ij = M_ii (P M)_ij for each j ∈ V. Then, by Lemma 5.1 with K = {i}, we have and for j ≠ i, By Lemma 5.2, we know ∂f(α)/∂α_i and s_i − z_i(α) have the same sign in {−, 0, +}. Recall that z(α) = [I − (I − A)P]^{−1} As = XAs. Applying the above results, we have Since α_i ∈ (0, 1), we conclude that (1 − α_i)/(1 − α_i + α_i M_ii) > 0. Thus ∂f(α)/∂α_i, s_i − z_i(α) and s_i − Σ_{j≠i} M_ij α_j s_j have the same sign. In particular, the quantity in Lemma 5.3 can be rewritten as follows:

Proof: Using the Sherman-Morrison formula, we have M = [I − (I − A_{−{i,k}})P + α_k e_k e_k^T P]^{−1} = R − (α_k / (1 + α_k e_k^T P R e_k)) R e_k e_k^T P R. We can compute that e_k^T P R e_k = (P R)_kk and (R e_k e_k^T P R)_jh = R_jk (P R)_kh for j, h ∈ V. Then we have M_jh = R_jh − α_k R_jk (P R)_kh / (1 + α_k (P R)_kk).
By Lemma 5.1, we obtain M_jh = R_jh − α_k R_jk R_kh / (1 + α_k R_kk − α_k) for j, h ∈ V and h ≠ k, as required.
R_ji − (1 − α_j) Σ_{h∈V} P_jh R_hi = 0, for j ≠ i, k.
After rearranging, we have R_ki = Σ_{h∈V} P_kh R_hi and R_ji = (1 − α_j) Σ_{h∈V} P_jh R_hi, for j ≠ i, k.
Now it suffices to show that for j ≠ i, k, the above R_ji cannot be the maximum among them, and that R_ki ≤ R_ii.
First, we show that R_ji cannot be the maximum. Since Σ_{h∈V} P_jh = 1 and α_j ∈ (0, 1), we have R_ji = (1 − α_j) Σ_{h∈V} P_jh R_hi < max_{h∈V} R_hi. Thus, R_ji cannot be the maximum.
Next, we show that R_ki ≤ R_ii by contradiction. Suppose R_ki > R_ii; then R_ki is the unique maximum in the i-th column of R. Since Σ_{h∈V} P_kh = 1 and R_ki = Σ_{h∈V} P_kh R_hi, it must be the case that P_kk = 1. This means P corresponds to a random walk with absorbing state k, which contradicts that P is irreducible. Therefore, we have R_ki ≤ R_ii, and hence R_ii = max_{h∈V} R_hi.
Observe that we already know R_ji < R_ii for j ≠ i, k, and R_ki = Σ_{h∈V} P_kh R_hi. Hence, R_ki = R_ii implies that P_kk + P_ki = 1.
Conversely, P kk + P ki = 1 implies that R ki = P kk R ki + P ki R ii . As argued above, we must have P kk = 1, which implies R ki = R ii .
Conclusion and Future Work
In this work we have introduced a novel formulation of social influence that focuses on interventions at the level of susceptibility, using a well-established opinion dynamics model. We give a solid theoretical analysis of the unbudgeted variant of the opinion susceptibility problem, and design scalable local search algorithms that can solve the problem optimally on graphs with millions of nodes. We also prove that the budgeted variant is NP-hard, and provide scalable heuristics that we evaluate experimentally. We believe that our techniques for the unbudgeted variant will lead to insights for the analysis of the budgeted variant of the problem. We leave the task of providing theoretical guarantees for greedy algorithms on the budgeted variant as future work.
"Mathematics"
] |
Comment on Self-Consistent Model of Black Hole Formation and Evaporation
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, consistently including the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Introduction
The information loss paradox [1] has intrigued theoretical physicists since Hawking's discovery that black holes evaporate. The unitarity of quantum mechanics demands that Hawking radiation not be thermal, and that it carry the information of the matter inside the horizon. However, any proposed mechanism to transfer information from the collapsed matter to Hawking radiation faces various theoretical obstacles, such as the no-hair theorem, the no-cloning theorem, the monogamy of entanglement, as well as the causality and locality of semiclassical effective theories. An important advance was the proof of the small-correction theorem [2], which states that the information cannot be recovered by anything less than an order-one correction at the horizon. See Ref. [2] for a more precise explanation of the information loss paradox.
As Hawking radiation originates from the horizon, the collapsing matter has to leave all of its information behind before entering the horizon (unless the information can travel nonlocally from the inside to the outside of the horizon at a later time). On the other hand, it is commonly believed that nothing special should happen to an infalling observer at the horizon, and so the information should be carried into the horizon. Black-hole complementarity [3,4] was thus proposed as a resolution of the puzzle.
The puzzle can be presented through the Penrose diagram for the conventional model of the formation and evaporation of a black hole. (See Fig. 1.) An object falling in through the horizon ends up at the singularity at r = 0, while all of its information must come out of the horizon through Hawking radiation to a distant observer to avoid information loss. The puzzle about information arises essentially because there is a region (the region behind the horizon) from which outgoing light cones cannot reach future infinity.
On the other hand, a different model of the (non-)formation and evaporation of a black hole was presented in the work of Kawai, Matsuo and Yokokura [5], which can be directly taken as a resolution of the information loss paradox. We will refer to their model [5] as the KMY model. It is the purpose of this article to illuminate the implications of their work on problems about the information loss paradox.
The key message of the KMY model is that the back reaction of Hawking radiation must be taken into consideration before the black hole forms. 1 The formation and evaporation of a black hole happen at the same time as a single process. Hawking radiation appears before there is a horizon, and as it takes away energy from the system, its back reaction makes it harder for the horizon to emerge. In fact, it was shown [5] that no horizon ever appears (unless it is already there in the initial state), not only from the viewpoint of a distant observer, but also from the viewpoint of an infalling observer. The collapsing matter never completely falls inside the Schwarzschild radius. As a result, the conventional assumption of the empty horizon of a black hole is replaced by the presence of collapsing matter around the Schwarzschild radius, and the information loss paradox is resolved. Similar viewpoints were taken in [6,7], although the reasoning and arguments are different.
Since there will never be a horizon, strictly speaking there is no black hole. In the literature, a to-be-formed black hole is called an incipient black hole, which will sometimes also be simply called a black hole in this note. 2 In this note, we will point out the implication of the KMY model that there is no horizon regardless of the nature of the collapse of matter. We will give a proof of no horizon based on semiclassical calculations. We will also point out another implication of the KMY model, which reduces the firewall [8] to a weak radiation.
Incidentally, one may wonder what happens to a static black hole. A static black hole can be constructed through an adiabatic process in a heat bath, with the heat inflow balanced by Hawking radiation, and it was found that there is still no horizon [5,9]. We will not discuss this case in this note.
Geometry Outside Collapsing Shell
It is well known that, from the viewpoint of an observer outside the black hole, the motion of an infalling particle gets slower as it gets closer to the horizon, and it can never cross the horizon. Due to Hawking radiation, the black hole evaporates and disappears within a finite amount of time. Therefore, for a distant observer, the black hole evaporates before the infalling particle reaches the horizon. This does not, however, imply that an infalling object cannot cross the horizon within finite proper time. According to the Penrose diagram of the traditional view of the black-hole space-time (Fig. 1), an infalling object can pass through the horizon in finite proper time. But to a distant observer it never reaches the horizon until the horizon shrinks to zero, so that it touches the singularity at the origin at the same moment when the black hole is completely evaporated.
The KMY model claims, however, that if the effect of the back reaction of Hawking radiation is included from the very beginning, the Penrose diagram has to be significantly modified. An infalling observer can never pass the horizon, as there is no horizon.
The KMY model describes the collapse of a spherical matter shell, and the corresponding space-time geometry. Both the collapsing shell and the Hawking radiation are approximated by spherical configurations. It also ignores massive particles in the Hawking radiation. But it includes the back-reaction of Hawking radiation on the geometry, and thus the time-dependence of the Schwarzschild radius.
In this section, we study the space-time geometry outside the shell in the semiclassical approximation. The energy-momentum tensor in the Einstein equation is given by the expectation value of the quantum energy-momentum operator of the matter fields, which is identified with the Hawking radiation outside the collapsing shell. It is assumed that there is nothing but Hawking radiation outside the spherical matter shell. Under these assumptions, the most general solution to Einstein's equation for the outside of a collapsing spherical shell is the outgoing Vaidya metric [10]. In the units G = c = 1, the Bondi mass for this metric is M(u) = a(u)/2, and the only non-vanishing component of the Einstein tensor corresponds to an energy-momentum tensor with a single non-vanishing component. The weak energy condition demands that ȧ < 0. The outgoing Vaidya metric has been known as a solution to Einstein's equation for a spherical star emitting null dust. Referring to the outer radius of the collapsing shell as R(u), the metric (2) should be valid for the outside of a spherical collapsing shell for all r ≥ R(u), assuming that there are only massless particles in Hawking radiation [5].
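In standard form (our sketch of the displayed formulas, consistent with the surrounding statements; conventions may differ slightly from the original references), the outgoing Vaidya metric and its only non-vanishing Einstein-tensor component read

\[
ds^2 = -\Bigl(1-\frac{a(u)}{r}\Bigr)du^2 - 2\,du\,dr + r^2 d\Omega^2,
\qquad
G_{uu} = -\frac{\dot a(u)}{r^2},
\qquad
T_{uu} = \frac{G_{uu}}{8\pi} = -\frac{\dot a(u)}{8\pi r^2},
\]

so the weak energy condition T_uu ≥ 0 indeed requires ȧ ≤ 0.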
Recall that the geometry outside a static mass at the origin is given by the Schwarzschild solution, where the Schwarzschild radius is determined by the total mass M. It can also be rewritten in terms of the outgoing Eddington-Finkelstein coordinates through a change of coordinates. This metric is in the form of the outgoing Vaidya metric (2), and we will refer to a(u) in (2) as the Schwarzschild radius. As energy is radiated away through Hawking radiation, the Bondi mass M(u) decreases with time, and so does the Schwarzschild radius a(u).
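The standard expressions presumably intended here (our sketch, with a_0 = 2M the Schwarzschild radius) are

\[
ds^2 = -\Bigl(1-\frac{a_0}{r}\Bigr)dt^2 + \Bigl(1-\frac{a_0}{r}\Bigr)^{-1}dr^2 + r^2 d\Omega^2,
\qquad
u = t - r_*, \quad r_* = r + a_0\ln\Bigl(\frac{r}{a_0}-1\Bigr),
\]

which bring the static solution to the form ds² = −(1 − a_0/r) du² − 2 du dr + r² dΩ², i.e., the outgoing Vaidya metric (2) with constant a.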
Let us emphasize here that, despite the connection between the outgoing Vaidya solution and the Schwarzschild solution, whether there is a horizon for the outgoing Vaidya metric (2) is still a question to be answered. The initial state of the collapsing shell is assumed to satisfy R(u 1 ) > a(u 1 ) at the initial time u = u 1 . Since the shell continues collapsing, and the energy flux in Hawking radiation is tiny for a large initial mass M (u 1 ), it is natural to expect that at some later point a horizon would appear when the shell shrinks to a size smaller than the Schwarzschild radius. Our task is to examine carefully whether this naive expectation is really what Einstein's equation tells us through the outgoing Vaidya metric.
If there really were a horizon, at least one of the following two things must happen: (i) R(u) < a(u) at a later time u > u_1; or (ii) the outgoing Vaidya metric (2) is geodesically incomplete, so that a horizon can hide behind the infinity u = ∞. We will show in the following that neither of the two things happens, so there is no horizon, contrary to the naive expectation.
Evaporation
One may wonder if Hawking radiation would still appear if there is no horizon. We argue that, as the collapsing shell gets very close to the Schwarzschild radius, the geometry just outside the shell is indistinguishable from the near horizon region of a real black hole and thus Hawking radiation is expected to appear, although its spectrum can be modified. This view is supported by the literature [13].
In fact, Hawking radiation has been calculated in this context in [5], in the absence of a horizon. Hawking radiation appears on the shell as well as outside the shell. The energy flux in Hawking radiation determines the rate of change of the mass, Ṁ(u) = ȧ(u)/2. In 4 dimensions, the Hawking temperature T_H(u) is roughly 1/a(u), and the energy flux in the radiation of massless particles follows from it [5]. Thus, in the semiclassical, spherically symmetric approximation, the evolution of a(u) is governed by an equation involving some constant C (C ≡ N/(48π), which is proportional to the number N of species of massless particles), and the solution vanishes at some constant u_0, when the matter shell is completely evaporated. The only information we need from this calculation is that the Schwarzschild radius a(u) decreases monotonically to zero in a finite elapse of u, and after that the spacetime becomes Minkowskian. The argument below for the absence of a horizon will not rely on the details of this solution (11). Of course, the semiclassical analysis of Hawking radiation is not expected to be valid all the way to the Planck scale. In this work we are not really concerned with the fate of the collapsing shell when it is reduced to a Planck size. Our aim is to understand the semiclassical physics involved in explaining the formation and evaporation of an astronomical black hole. We will say that a macroscopic black hole is completely evaporated even when there are Planck-scale remnants (see e.g. [14]), whose entropy is negligible compared with that of an astronomical massive object. In this sense we assume that an incipient black hole evaporates completely within finite u.
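Schematically (a sketch; the precise numerical coefficient depends on conventions not reproduced here), the evaporation equation and its solution (11) are of the form

\[
\frac{da}{du} \simeq -\frac{C}{a^2(u)}
\qquad\Longrightarrow\qquad
a(u) \simeq \bigl[\,3C\,(u_0-u)\,\bigr]^{1/3},
\]

so that a(u) decreases monotonically and reaches zero at a finite time u = u_0, which is the only property used below.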
Notice that it is generally assumed that a macroscopic black hole evaporates away in finite time t. At large r (r ≫ a(u)), we have dt ≃ du + dr, so for the proper time t of a distant observer located at a fixed r, an elapse of finite t means the same as finite u.
Proof of No Horizon
To prove that there is no horizon, we will show the following two things: (i) R(u) > a(u) for all u > u_1 as long as R(u_1) > a(u_1) at an initial time u = u_1, until the collapsing shell is completely evaporated; and (ii) when the collapsing shell is completely evaporated, R(u) > 0 and extends into the Minkowski space, which is geodesically complete, so that a horizon cannot hide behind the infinity u = ∞.
(Figure 2 caption: The matter shell is completely evaporated and the spacetime is Minkowskian for u ≥ u_0. The dashed curve represents a(u), and the solid curves R(u) for different initial conditions. The two figures differ in the trajectory of a(u).)
It is a robust feature that R(u) remains at a finite value at u 0 when a(u) goes to zero.
First we sketch a proof by contradiction to prove that the outer radius of the collapsing shell is always outside the Schwarzschild radius, in the context of semiclassical theories with minimal assumptions.
Assume that R(u) eventually falls inside a(u). The outer radius would have to pass through the Schwarzschild radius, with the velocity of the outer surface (r = R(u)) larger (moving faster towards the origin) than the velocity of the Schwarzschild radius (r = a(u)) at a certain instant u = u′ when R(u′) = a(u′). The outgoing Vaidya metric (2) would be applicable to the point r = a(u) for u ≥ u′, and the trajectory of the Schwarzschild radius (r = a(u)) would be space-like, because ȧ < 0. As the trajectory of a point on the outer surface of the shell (r = R(u)) is either light-like or time-like, it is impossible for the outer surface of the shell to move faster towards the origin than the Schwarzschild radius, and thus we have a contradiction. In other words, if R(u) > a(u) for an initial state at u = u_1, it is impossible to have R(u) < a(u) at a later time u > u_1. The radius R(u) of the collapsing shell can never catch up with the Schwarzschild radius a(u), and the horizon can never emerge, at least while the outgoing Vaidya metric (2) is still valid.
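Explicitly, along the curve r = a(u) in the metric (2) one has dr = ȧ(u) du, so

\[
\left. ds^2\right|_{r=a(u)}
 = -\Bigl(1-\frac{a(u)}{a(u)}\Bigr)du^2 - 2\,\dot a(u)\,du^2
 = -2\,\dot a(u)\,du^2 > 0 \quad\text{for } \dot a(u)<0,
\]

so the trajectory of the Schwarzschild radius is indeed space-like whenever it is shrinking.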
The outgoing Vaidya metric remains valid, as it turns into the Minkowski space metric when a(u) = 0 after the shell disappears at u = u_0. In Fig. 2, we demonstrate the trajectories of R(u) for different initial conditions, assuming that the outer radius of the matter shell shrinks at the highest speed possible, the speed of light, which is still unable to catch up with a(u).
Hence, for arbitrary initial conditions, R(u 0 ) > 0 and it extends into the Minkowski space smoothly. As the Minkowski space has no horizon and is geodesically complete, the trajectory of R(u) is geodesically complete without crossing the Schwarzschild radius anywhere. At a speed lower than or equal to light, an infalling observer originally outside the collapsing shell would therefore also have a geodesically complete trajectory without crossing a horizon (or hitting any singularity). We conclude that there can be no horizon at all for a collapsing matter shell no matter how fast it collapses.
A few assumptions were made in the argument above. The use of the Vaidya metric has to do with the fact that the emission of massive particles in Hawking radiation is ignored. But it is a robust feature that the trajectory r = a of the Schwarzschild radius is space-like when ȧ < 0. This is because the trajectory r = a_0 is null-like when a = a_0 is constant. If we had used the metric (5) and replaced a_0 by a(t) with ȧ(t) < 0, we could still show that, for any finite |ȧ(t)| ≠ 0, the trajectory r = a(t) + ε of a point just outside the Schwarzschild radius is space-like for sufficiently small ε > 0. Furthermore, the inclusion of massive particles in Hawking radiation would lead to a faster evaporation, making it even harder for R(u) to catch up with a(u).
In addition to spherical symmetry, we have also assumed that ȧ < 0 due to Hawking radiation until a(u_0) = 0, and that the shell evaporates completely at a finite value u = u_0. The value of |ȧ| can otherwise be arbitrarily small.
Note that we have not made any assumption about the constituents of the matter shell, except that its trajectory is time-like or light-like, unlike other works with a similar conclusion [15].
Geometry of Collapsing Shell
Let us give more details for the simple example of a collapsing matter shell composed of massless dust [5]. The special case of null dust is interesting because if even a shell collapsing at the speed of light cannot form a horizon, a generic time-like collapsing shell most certainly cannot, either.
For spherically symmetric configurations, the light-cone directions for the outgoing Vaidya metric (2) are given by two null conditions. The solutions to the first equation, du = 0, give the trajectories of the outgoing massless particles in Hawking radiation. The solutions to the second equation then describe the trajectories of infalling massless particles, including those at the outer surface of the shell of null dust. Therefore, the outer radius R(u) of the collapsing spherical shell satisfies the light-cone equation (14) for dR(u)/du [5].
(Figure 3 caption: Trajectories of R(u) obtained from (14) for different initial conditions are given, in comparison with that of a(u) given by (11) as a dashed curve. Each trajectory of R(u) starts from its initial value on the left and approaches a(u) quickly. The middle part is described by (15) when R(u) is very close to a(u). When a(u) goes to 0, R(u) is approximated by (16); this regime was the focus of Fig. 2(a), although with different initial conditions.)
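For concreteness, the two null conditions follow from setting ds² = 0 in the metric (2) (our sketch; it reproduces the behaviour of (14) described next):

\[
0 = -\Bigl(1-\frac{a(u)}{r}\Bigr)du^2 - 2\,du\,dr
\;\;\Longrightarrow\;\;
du = 0
\quad\text{or}\quad
\frac{dr}{du} = -\frac{1}{2}\Bigl(1-\frac{a(u)}{r}\Bigr),
\]

so that the outer surface of the null shell obeys dR(u)/du = −(1/2)(1 − a(u)/R(u)), which tends to 0 as R → a.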
According to this equation, dR/du → 0 as R → a, and so R can never get inside the Schwarzschild radius. Samples of the null-like trajectories of R(u) for different initial conditions at u_1 < u_0 are illustrated in Fig. 2 and Fig. 3. The fact that R(u) remains outside the Schwarzschild radius a(u) is a robust feature that does not rely on the explicit dependence of a(u) on u, but only on the fact that ȧ < 0. In general, since the trajectory of r = R(u) is null-like but that of r = a(u) is space-like, R(u) can never catch up with a(u) before u_0, regardless of the initial condition. The trajectory of R(u) can always be smoothly extended into the Minkowski space, and thus the trajectory of R(u) is geodesically complete.
For a sufficiently large collapsing shell, typically R(u) gets very close to the Schwarzschild radius for a long period of time (see Fig. 3). During the stage when R(u) is very close to a(u), to first order the solution of R(u) to the equation above is given by (15) [5]. This is a good approximation if |ȧ| ≪ 1, when the outer radius of the shell gets close to the Schwarzschild radius (|R − a| ≪ a), before the matter shell evaporates towards the Planck scale when a → 0.
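A plausible first-order form consistent with this description (our sketch, obtained by setting dR/du ≈ ȧ(u) in the light-cone equation above and expanding for |ȧ| ≪ 1; it should not be read as the exact expression (15)) is

\[
R(u) \;\approx\; \frac{a(u)}{1+2\dot a(u)} \;\approx\; a(u) - 2\,a(u)\,\dot a(u),
\]

which stays strictly outside a(u) for ȧ < 0.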
At the final stage of evaporation (u ∼ u 0 but u < u 0 ), with the Schwarzschild radius a(u) ∼ 0, the trajectory of R(u) can be solved from the light-cone condition (14) in the limit u → u − 0 (when a(u) R(u)) by for some constant c > 0 determined by the initial condition. (See Fig. 2.) Hence the matter shell remains a finite size when it completely evaporates.
Since the trajectory of R(u) is null-like and is geodesically complete, it is impossible for a time-like or null-like geodesic originated from the region outside the shell to pass through the Schwarzschild radius, as we commented in the previous section.
Geometry of Full Space-Time
In the above we have discussed the geometry of the region outside the matter shell. The space enclosed by the collapsing shell is by assumption in vacuum (with vanishing energy momentum tensor), hence it has to be the Minkowski space, as required by Birkhoff's theorem.
To understand the geometry of the region occupied by the collapsing matter, one can divide the collapsing shell of finite thickness into infinitely many infinitesimally thin layers, separated by infinitesimal gaps [5]. For the inner most layer of dust, the space inside is Minkowskian. Assuming that the dust cannot pass through itself, the first layer comes in at the speed of light and piles up at the origin. The second layer comes in after that, slowing down at the Schwarzschild radius defined by the first layer. Then the third layer falls in, and so on. This description can also be applied to a collapsing solid sphere as well.
One can treat the trajectory of each layer in a way similar to what was done for R(u) in the above [5]. For a sufficiently large and dense body of collapsing matter, eventually the radius of each layer approaches to its own Schwarzschild radius defined for the total mass enclosed by that layer, until the last stage when the evaporation is approaching the Planck scale.
The ultimate fate of the collapsing shell at the Planck scale (whether it evaporates completely) and the resolution of the singularity at the origin (if any) by a UV-complete theory are outside the scope of this article. Semiclassical considerations of the final stage of the evaporation can be found in e.g. [5,16,17]. The purpose of this article is to resolve puzzles about black holes at the semiclassical level as much as possible. It does not exclude the potential relevance of Planck-scale physics such as string theory to black holes (e.g. through the idea of the fuzzballs [18]) at a more detailed level.
According to our discussions above, the whole space-time can be divided into four regions: 1. the Minkowskian region inside the shell with the metric 2. the region occupied by the collapsing matter shell.
3. the region outside the shell with the outgoing Vaidya metric (2).
4. the Minkowskian region after the shell completely evaporates (a(u) = 0 for u > u 0 in the Vaidya metric). Put together, they constitute a geodesically complete spacetime. The corresponding Penrose diagram is depicted in Fig. 4(b), along with the Penrose diagram for the conventional view in Fig. 4(a) for comparison.
Infalling Observer
In principle, since the distant observer's view is already complete, there is no need to consider the viewpoint of an infalling observer. Nevertheless, we repeat here the conventional story about how an infalling observer passes through the horizon, and point out the reason why it breaks down in the KMY model. The (static) Schwarzschild solution can be expressed in the ingoing Eddington-Finkelstein coordinates as where v = t + r * (r).
The light-like trajectories are given by the null condition of this metric. An infalling trajectory with dr/dv < 0 moves at a speed slower than light as long as dr/dv is finite. The proper time it takes an infalling observer to reach the Schwarzschild radius is finite as long as |dr/dv|^{−1} is finite.
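In the standard ingoing Eddington-Finkelstein form (a sketch consistent with v = t + r_*), the static metric and its radial null directions are

\[
ds^2 = -\Bigl(1-\frac{a_0}{r}\Bigr)dv^2 + 2\,dv\,dr + r^2 d\Omega^2,
\qquad
0 = ds^2 \;\Rightarrow\; dv = 0 \;\text{ or }\; \frac{dr}{dv} = \frac{1}{2}\Bigl(1-\frac{a_0}{r}\Bigr),
\]

and the two Eddington-Finkelstein times are related by u = v − 2r_*.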
Here u = v − 2r_*, where r_* is given in (8), so that a point on the horizon r = a_0 at finite v corresponds to u = ∞. This is why a distant observer can never see the infalling observer pass the horizon of a static black hole. It is conventionally assumed that, for a very small Hawking radiation, there is no reason for the infalling observer to have a dramatically different experience. The infalling observer is still expected to be able to pass through the horizon within finite proper time.
In the KMY model, the geometry outside the collapsing shell looks just like that of a black hole. The effect of a small Hawking radiation also makes only a small difference to a distant observer. However, the matter shell evaporates away in finite time and the space is Minkowskian for u > u_0. A coordinate transformation of the form (22) implies that the infalling observer should fall into the Minkowski space before r reaches the Schwarzschild radius a(u), because r → a implies r_* → −∞, which in turn implies u → ∞ for a finite v, and we have a(u) = 0 for large u.
Therefore, in agreement with the viewpoint of the distant observer, the infalling observer can never pass through the Schwarzschild radius. From the viewpoints of both the distant observer and the infalling observer, the collapsing shell as well as the infalling observer stay outside the Schwarzschild radius at all times. They both see the collapsing shell evaporate completely, and the space-time becomes Minkowskian again after all of the Hawking radiation dissipates to the infinity.
Information and Firewall
From the viewpoint of a distant observer, as a collapsing matter shell gets closer to the Schwarzschild radius, the shell gets dimmer and looks more like a black hole. While the matter never falls inside the horizon, Hawking radiation is created in the neighborhood of the matter. It is thus natural to assume that Hawking radiation carries the information about the details of the matter shell, and the information loss paradox is resolved.
It used to be a common belief that, for a large black hole, an infalling observer can pass through the horizon without feeling anything dramatically different from the ordinary Minkowski space in vacuum. We find that, on the contrary, if the effect of Hawking radiation is consistently taken into account before the horizon appears, as it is in the KMY model, the surface of the collapsing matter stays outside the Schwarzschild radius, so it is impossible for an infalling observer to mistakenly assume that the space around the Schwarzschild radius is in vacuum.
In fact, it is proven as a theorem [2] that the unitarity of quantum mechanics is inconsistent with the assumption that nothing happens at the horizon, and fuzzballs [18] were proposed to replace the horizon in vacuum. A more recent proposal for an eventful horizon was that of the firewall [8]. It states that, in order for the Hawking radiation to be a pure state (when the collapsing matter comes in a pure state), there has to be a high energy flux -the so-called "firewall" -near the horizon of a sufficiently old black hole.
The firewall is essentially the blue-shifted Hawking radiation generated near the horizon, before it propagates to a distance with a large red shift. For the KMY model, we can check how much energy is there in the blue-shifted Hawking radiation. The question is whether the presence of the collapsing shell, which cuts off the near horizon region at a certain distance from a(u), would reduce the firewall to a cold shower.
At the outer surface of the collapsing shell, the energy-momentum tensor of the Hawking radiation is where dx + is the local outgoing light-cone coordinate normalized by ds 2 = dx + 2 along an outgoing null trajectory. If the radius R(u) of the shell could get arbitrarily close to the Schwarzschild radius a(u), the energy-momentum tensor T ++ can be arbitrarily large at the collapsing shell due to an arbitrarily large blue shift (1 − a/R) −1 . However, the formula (23) is valid only for r ≥ R(u), and R(u) always lags behind a(u). Hence R(u) may never be sufficiently close to a(u) for T ++ to be incredibly large. Indeed, when the shell radius R(u) is very close to a(u), it can be approximated by the expression (15), so that which is very small for a large mass. For a shell collapsing at a speed slower than light, the radius R(u) would be even farther away from a(u), and T ++ at r = R(u) would be even smaller. Our conclusion is thus that there is nothing that can be justified to be called a firewall.
Generalization
The basic ideas involved in the discussions above allow straightforward generalizations to higher dimensions. The outgoing Vaidya metric in D-dimensional space-time is [19] where dΩ 2 D−2 is the metric for a (D − 2)-dimensional unit sphere. In all dimensions higher than 4, this metric can be used to describe the region outside a collapsing null shell as we did above in 4 dimensions. The Schwarzschild radius a(u) = 2M (u)/(D − 3) has a space-like trajectory (ds 2 > 0) as long as the mass M (u) decreases with time, as it should due to Hawking radiation.
The only non-vanishing component of the energy-momentum tensor is which can be interpreted as that of the Hawking radiation. We expect that the basic ingredients of the arguments above for no horizon can be readily applied to black holes in higher dimensions, and to generalized Vaidya metrics [20], as well as other metrics including the effect of Hawking radiation.
Summary
The traditional view of the formation and evaporation of a black hole depicted in Fig. 1 is obtained as a composition of two Penrose diagrams: the Penrose diagram for the formation of a black hole without the back reaction of Hawking radiation, and that for the asymptotically flat spacetime with Hawking radiation. The formation of a black hole and its evaporation are treated separately as independent processes.
The KMY model, on the other hand, consistently includes the effect of the back reaction of Hawking radiation from the very beginning to the very end. All infalling time-like or light-like geodesics originating from the outside of the collapsing shell can be extended to be geodesically complete without encountering any horizon. The Penrose diagram for the KMY model in Fig. 4(b) shows no horizon and no information loss at a macroscopic scale.
Furthermore, the conventional assumption about an empty horizon is replaced by the collapsing matter outside the Schwarzschild radius. There is no firewall because the blueshift of the Hawking radiation is cut off by the radius of the collapsing shell.
For a distant observer, it takes longer for light to reach him or her from the collapsing star as its outer radius gets closer to the Schwarzschild radius, hence the star looks darker. Therefore the fact that there is no horizon is not in contradiction with the observational evidence for phenomenological black holes.
Despite the fact that we have ignored non-spherical and massive contributions to Hawking radiation, the qualitative features presented above should be valid for a generic incipient black hole, and provide us with a comprehensive semiclassical understanding of the information loss paradox, as well as related problems such as the firewall.
"Physics"
] |
A Simplified and High Accuracy Algorithm of RSSI-Based Localization Zoning for Children Tracking In-Out the School Buses Using Bluetooth Low Energy Beacon
To avoid problems related to a school bus service, such as kidnapping or children being left in a bus for hours with fatal consequences, it is important to have a reliable transportation service that ensures students' safety along their journeys. This research presents a high-accuracy child monitoring system for locating students inside or outside a school bus using the Internet of Things (IoT) via Bluetooth Low Energy (BLE), which is well suited to a received signal strength indication (RSSI) algorithm. The in/out-bus child tracking system alerts a driver as to whether a child has been left on the bus. The distance between devices is analyzed for decision making to assign each child's current position to a zone. A simplified, high-accuracy least mean squares (LMS) machine-learning algorithm is used in this research with model-based RSSI localization techniques. The distance is calculated on a grid of 0.5 m × 0.5 m, similar in size to an actual seat of a school bus, using two zones (inside or outside the school bus). The averaged signal strength is proposed in this research, rather than the raw signal strength used in typical works, providing a robust position-tracking system with high accuracy while maintaining the simplicity of the classical trilateration method, leading to precise classification of each student into a zone. A test was performed to validate the effectiveness of the proposed tracking strategy, which precisely shows the position of each student. The proposed method can therefore be applied to future autopilot school buses, where students' home locations can be securely stored in the system and used as references to transport each student home without a driver.
Introduction
Today, children using school buses have been found to encounter a number of problems, such as the kidnapping of students, or the case in China of a student who became stuck and died in a school bus. With these problems, no one can be sure that students will arrive safely at home or school. Therefore, safety-conscious parents are looking for ways to track the real-time locations of a school bus and of their children to ensure their safety when using the school bus. Location tracking systems consist of two types: outdoor and indoor positioning systems. Real-time location tracking of children using the school bus involves both: tracking the school bus location, which is an outdoor positioning problem, and tracking the location of students who are still stuck in the vehicle, which is an indoor positioning problem.
Location tracking is used to track the location of children using the school bus, to prevent and detect cases in which a child is still stuck in the vehicle after the school bus has left the bus stop, for example because the child has fallen asleep. Nowadays, there are several approaches to indoor positioning, such as infrared, ultrasound [1][2][3][4], radio frequency [2], magnetic [5], microelectromechanical systems [6,7], vision-based [8], audible sound [9,10], WiFi-based, Bluetooth Low Energy (BLE), Radio Frequency Identification (RFID)-based and Global Positioning System (GPS) methods. Among general indoor positioning technologies, no single technology balances cost, accuracy, performance, robustness, complexity, and limitations [11].
Location tracking of children in the school bus using Bluetooth Low Energy (BLE) technology is growing in popularity and use due to improvements in accuracy and reductions in cost. BLE is a low-power wireless technology developed by the Bluetooth Special Interest Group (SIG). The special feature of Bluetooth 6.0 is support for next-generation wireless devices with low power and low latency. It can be used over short distances (not more than 50 m or 160 feet). BLE is based on the IEEE 802.15.1 standard. The main function of the BLE central device is to scan for and connect to BLE peripheral devices. Moreover, BLE hardware is small, low-cost and consumes little power. BLE technology is widely used in several applications, for example, occupancy tracking in office spaces [12], building emergency management [13], identifying occupant-appliance interaction patterns [14], and smart energy management [15].
A summary of BLE characteristics is shown in Table 1. BLE positioning algorithms are divided into two methods: the received signal strength indication (RSSI) distance method and wireless fingerprint positioning. Most past pioneering work simply used the BLE received signal strength indication (RSSI), which made BLE widely used in various indoor environments. In general, the RSSI distance method receives the received signal strength (RSS) from the primary Bluetooth reference point to determine distance-loss patterns, and then estimates the user's position through some algorithm. This method can find a precise distance when the received signal strength is accurate; however, in practical tests the accuracy was still only average. Several pieces of research have addressed this accuracy problem, for example by using a back-propagation neural network (BPNN) to build an RSS-distance model for estimation, which improves the accuracy of the distance-loss model; however, this method requires a lot of data to train the neural network model accurately [16]. A Kalman filter has also been used to process the initially collected RSSI, after which an RSSI-distance model is created by fitting curves and positions with a weighted least squares algorithm [17]. However, RSSI noise continues to fluctuate in both methods, since RSSI is affected by complex noise, and the Gaussian assumption behind the Kalman filter may not be clearly optimal. Moreover, either method may improve the accuracy of distance estimation to some extent for a constant transmit power of Bluetooth beacons at one position; in practice, transmission power fluctuates from time to time, resulting in incorrect distance-loss models. Therefore, to solve this problem, this research proposes a real-time modified RSSI method to mitigate the influence of this volatility. This paper is organized into five sections as follows. Section 2 presents the location-tracking method in which the received signal strength indication (RSSI) is used to analyze the distance between devices; the RSSI value is then processed to increase the accuracy and positioning ability of the device, and the proposed system includes the RSSI distance model algorithm. Section 3 presents the experiments using BLE to track students on the school bus and the analysis of the test results.
RSSI Distance Model
The technologies of greatest interest for RSSI location tracking of children on school buses are WiFi and BLE, both of which are classified as indoor location tracking technologies, while GPS is often used for outdoor location tracking. Both WiFi and BLE operate in the 2.4 GHz band. WiFi has over 50 sub-bands, while BLE has 40 channels. The WiFi received signal strength indicator (RSSI) reflects the total value of all channel information; for this reason, WiFi is not useful for indoor applications. BLE is becoming the focus of current indoor positioning technologies because it takes advantage of low power consumption and easy deployment [18]. Bluetooth RSSI is measured under constant transmit power. Complex noise greatly affects positioning accuracy and results in time-varying behaviour of the RSSI. In addition, indoor electromagnetic environments, multipath fading and other noise, as well as RSSI differences between Bluetooth beacons made by different companies, all affect the accuracy of the Bluetooth RSSI measured in the system [19][20][21][22]. In order to reduce random RSSI fluctuations and power consumption and improve positioning accuracy, this research presents a student-tracking smartwatch device that measures signal strength with RSSI to determine whether a child is still inside or has got off the school bus; a notification is then sent to the carpool teacher and the driver.
The propagation path loss over distance shows that the channel fading characteristic follows a log-normal distribution. Thus, instantaneous RSSI distance measurement generally uses the logarithmic-distance path-loss model, a propagation model that expresses the relationship between distance and RSSI, given as Equation (1) [16][17][18].
Here RSSI is the dependent variable of the received signal strength indication, D is the estimated distance between the transmitter and the receiver, and n is a path-loss parameter related to the specific wireless transmission environment: the more obstacles there are, the larger n will be. A is the RSSI at distance D_0 from the transmitter, which is a constant value, and X_σ is a Gaussian random variable with mean 0 and variance σ², where σ characterizes the shadowing noise. For convenience of calculation, D_0 usually takes a constant value. Since X_σ has zero mean, the distance-loss model can be written as in Equation (2), where d_k is the distance from the unknown transmitter node to the kth receiver node, and A_k and n_k are the model parameters of the kth receiver node. A_k is the measured RSSI when the receiving node is at a fixed reference distance from the transmitting node. From (2) it follows that the parameters A_k and n_k should be accurately estimated to improve the accuracy. The n_k parameter is related to the wireless transmission environment and can be obtained by optimizing over many experimental measurements. A_k depends on the transmitting power of the Bluetooth beacon; ideally, A_k should be determined by specifying one of the Bluetooth signals, but the transmission power of Bluetooth varies with time.
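Since the displayed Equations (1) and (2) do not survive in the text above, the sketch below assumes the standard log-distance path-loss form that matches the parameter definitions just given (A at reference distance D_0, path-loss exponent n, zero-mean Gaussian shadowing X_σ); the numerical parameter values are illustrative only, not the paper's.

```python
import math
import random

def rssi_from_distance(d, A=-59.0, n=2.5, d0=1.0, sigma=2.0):
    """Standard log-distance path-loss model (assumed form of Eq. (1)):
    RSSI(d) = A - 10*n*log10(d/d0) + X_sigma,
    where A is the RSSI at reference distance d0, n is the path-loss
    exponent, and X_sigma is zero-mean Gaussian shadowing noise."""
    return A - 10.0 * n * math.log10(d / d0) + random.gauss(0.0, sigma)

def distance_from_rssi(rssi, A=-59.0, n=2.5, d0=1.0):
    """Invert the noise-free model (assumed form of Eq. (2)):
    d = d0 * 10 ** ((A - RSSI) / (10*n))."""
    return d0 * 10.0 ** ((A - rssi) / (10.0 * n))

if __name__ == "__main__":
    # Example: simulate a reading at 2 m and estimate the distance back.
    reading = rssi_from_distance(2.0)
    print(f"simulated RSSI at 2 m: {reading:.1f} dBm")
    print(f"estimated distance:    {distance_from_rssi(reading):.2f} m")
```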
Accurately calculating the relationship between RSSI and distance using the logarithmic distance-loss model is extremely difficult in a complex indoor environment. Therefore, there are several methods for modelling RSSI against distance as accurately as the application system requires. However, several applications do not require high-accuracy localization but need area-based localization; similarly, this article does not require the actual position of the tracked object, but rather needs to ensure that the object is within a certain area. Rather than having an accurate position of the object, users are more interested in knowing with high certainty that the tracked object is within a certain area. Therefore, this research presents a simplified method to locate tracked objects in zones with high certainty and reliability, separating only the zone inside the vehicle from the zone outside the school bus, in order to track whether students, especially young children, are still stuck in the school bus. Thus, the least mean squares (LMS) algorithm is used to fit the parameters by gathering RSSI values for Bluetooth beacons at different distances. Suppose that the distance estimation is based on M samples RSSI_(k,i), where RSSI_(k,i) is the ith RSSI sample measured by the kth receiver node. To obtain good performance, the median value of RSSI_(k,i) is used to obtain the distance estimate, where RSSI_k denotes the median RSSI value measured by the kth receiver node. To characterize the RSSI model in the indoor environment, measurements have been carried out; the experiment was performed on a school bus.
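A minimal sketch of how the LMS step described above could look, assuming the same standard log-distance model: the parameters A_k and n_k are fitted by least squares to calibration pairs (distance, RSSI), and the distance is then estimated from the median of M RSSI samples. The calibration numbers are invented for illustration and are not the values in Table 2.

```python
import math
import statistics

def fit_path_loss(calibration):
    """Least-squares fit of RSSI = A - 10*n*log10(d) to calibration
    pairs [(d_i, rssi_i), ...], with d in metres and d0 = 1 m assumed.
    This is ordinary linear regression in x = log10(d)."""
    xs = [math.log10(d) for d, _ in calibration]
    ys = [rssi for _, rssi in calibration]
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx               # equals -10*n
    A = mean_y - slope * mean_x     # fitted RSSI at 1 m
    return A, -slope / 10.0

def estimate_distance(samples, A, n):
    """Distance estimate (m) from the median of M RSSI samples (dBm)."""
    rssi_med = statistics.median(samples)
    return 10.0 ** ((A - rssi_med) / (10.0 * n))

# Hypothetical calibration data (distance in m, measured RSSI in dBm).
calibration = [(0.5, -52), (1.0, -59), (1.5, -63), (2.0, -66),
               (2.5, -69), (3.0, -71), (3.6, -73)]
A, n = fit_path_loss(calibration)
print(f"A = {A:.1f} dBm, n = {n:.2f}")
print(f"d ~ {estimate_distance([-65, -67, -64, -66, -68], A, n):.2f} m")
```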
RSSI Mapping and Experimental Setup
RSSI mapping is a testing layout arranged by a set of devices in the indoor/outdoor space of the school bus, as shown in Figure 1a. The position of each device is precisely defined and modelled as a grid, as shown in Figure 1b, where the distance between adjacent intersection points of the grid is about 0.5 m (the size of each grid cell is 0.5 m × 0.5 m). The testing location is covered by the antenna of the access point (AP), a Raspberry Pi 3 Model B+, shown as the blue point in Figure 1. The AP is used to receive the Bluetooth signal from the smartwatch. To test the signal strength, there are 30 positions across zones A and B, located inside and outside the school bus. Obstacles in the working environment cause signal degradation between the central device and the Bluetooth transmitter; therefore, each position is revisited and tested in a loop as the smart wristband moves. Each position in zones A and B is tested 40 times to obtain an accurate RSSI value.
Tracking and Calculating a Location from Signal Strength
When students get on the school bus, the device starts to detect the BLE signal sent from a smart wristband. The distance is then calculated from the BLE signal strength. In order to find the distance using the signal strength, RSSI values are collected in a loop to find the average value. The average RSSI value is used to detect whether the students are inside or outside the bus. BLE reports the signal strength in dBm. The calculation of distance from the averaged signal strength uses the trilateration algorithm, which is a basic positioning method. The trilateration algorithm was chosen in this study because a precise alignment between a reference point (a transmitter node) and smartwatches (unknown nodes) is not required and only distances between nodes are needed, which is suitable for the studied application, where a school bus and each student are continuously moving during a journey. In this article, the positions of the transmitter nodes are assumed to be known. The relationship between the unknown node's position and the transmitter node positions can be expressed as in Equation (5), where (x, y) are the coordinates of the unknown node and (x_k, y_k) are the coordinates of the kth transmitter node.
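Equation (5) is not reproduced above; a common way to realize the trilateration step it refers to is to subtract one circle equation (x − x_k)² + (y − y_k)² = d_k² from the others and solve the resulting linear system by least squares. A minimal sketch under that assumption, with hypothetical anchor positions:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate (x, y) of an unknown node from anchor positions
    [(x_k, y_k), ...] and estimated ranges d_k (same units).
    Subtracting the first circle equation from the others gives a
    linear system A @ [x, y] = b, solved here by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])          # rows: [2(xk-x0), 2(yk-y0)]
    b = (d[0] ** 2 - d[1:] ** 2
         + anchors[1:, 0] ** 2 - x0 ** 2
         + anchors[1:, 1] ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchors (e.g. APs at known points on the 0.5 m grid).
anchors = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
distances = [1.41, 1.41, 1.41]                    # ranges from the path-loss model
print(trilaterate(anchors, distances))            # close to (1.0, 1.0)
```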
Description of the System
Any commercially available brand of smartwatch worn by students on a school bus can be set up for connection with the BLE system. The Bluetooth MAC address is registered to specify the ID of each targeted smartwatch. The Raspberry Pi is used to scan for nearby Bluetooth devices with the AP and to measure the RSSI signal strength of the BLE from every targeted node. The signal strength from each targeted node is compared with the reference value, set at −60 dBm. If the signal strength is less than the reference value (<−60 dBm), the system identifies that a student is in an "indoor" state (A-Zone), and if the signal strength is greater than the reference value (>−60 dBm), the system identifies that a student is in an "outdoor" state (B-Zone). At every scan, the system counts the existing nodes, which represent the number of students on the school bus, and stores that number until the last scanning round when the bus reaches its destination. If an existing node is still recorded in the system, two further rounds of scanning are performed to confirm whether any student remains on the bus. If a node is still present, a sound and message notification is sent to the chosen application (the LINE application in this study) on the smartphones of the driver and of the person responsible for the child. The sound and message notifications were coded in Python using the systemctl and btmgmt command-line tools, and the overall system architecture comprises smartwatches, an RSSI detector and processing. A smartphone used for receiving notifications is presented in Figure 2.
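A minimal sketch of the zone decision and end-of-journey confirmation logic described above. The scan function, MAC addresses and notification call are hypothetical placeholders; only the −60 dBm threshold and the repeated confirmation scans follow the description.

```python
RSSI_THRESHOLD_DBM = -60.0   # below threshold -> inside bus (A-Zone)

def classify_zone(avg_rssi_dbm):
    """Zone decision from the averaged RSSI of one smartwatch."""
    return "A (inside)" if avg_rssi_dbm < RSSI_THRESHOLD_DBM else "B (outside)"

def check_bus_empty(scan_fn, registered_macs, confirm_rounds=2):
    """At the end of a journey, re-scan to confirm no student remains.
    scan_fn() is a hypothetical function returning {mac: avg_rssi_dbm}
    for the devices currently visible to the Raspberry Pi."""
    remaining = []
    for _ in range(confirm_rounds):
        readings = scan_fn()
        remaining = [mac for mac in registered_macs
                     if mac in readings
                     and classify_zone(readings[mac]).startswith("A")]
        if not remaining:
            return []          # bus confirmed empty
    return remaining           # still-present students after all rounds

def notify_driver(remaining):
    """Placeholder for the LINE sound/message notification."""
    if remaining:
        print(f"ALERT: {len(remaining)} student(s) still on the bus: {remaining}")

# Example with a fake scanner and two hypothetical MAC addresses.
fake_scan = lambda: {"AA:BB:CC:DD:EE:01": -72.5, "AA:BB:CC:DD:EE:02": -48.0}
still_on_bus = check_bus_empty(fake_scan, ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"])
notify_driver(still_on_bus)
```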
Results
Practical channel modelling separates zones A and B. Numerous measurements were performed at different locations by placing the receiver device in the manner shown in Figure 1b, tested outdoors. The RSSI receiver was positioned at each point, with an equal spacing of 0.5 m between access points; the measurement at each point was repeated 30 times and the distance to the AP averaged. A large amount of RSSI data was obtained by changing the distance from 0.5 m to 3.6 m in intervals of 0.5 m and calculating the distance vector according to Equation (5). The RSSI result at each point, d_k, calculated with Equation (5), is presented in Table 2. Based on the measured RSSI data, the median RSSI was calculated for each distance. Figure 3 shows the median RSSI as a function of distance; as expected, the median RSSI decreases with distance. From these results, the parameters in Equation (2) were estimated separately for zones A and B. Meanwhile, the standard deviation of the noise for each distance can be estimated from the spread of the samples RSSI_(k,i) about their mean value RSSI_k. The obtained results are shown in Figure 4. From the experimental results, the standard deviation of the noise, for distances from 0.5 to 2.5 m in zone A and from 2.0 to 3.6 m in zone B, can be expressed by the fitted relations shown in Figure 4.
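The explicit per-distance noise expressions are not reproduced above; the sketch below assumes the usual sample statistics, taking the median RSSI and the sample standard deviation of the repeated readings at each calibration distance. The readings are illustrative, not the data of Table 2 or Figure 4.

```python
import statistics
from collections import defaultdict

def summarize_by_distance(measurements):
    """measurements: iterable of (distance_m, rssi_dbm) pairs from the
    repeated readings at each grid point. Returns, per distance, the
    median RSSI (used for the model fit) and the sample standard
    deviation of the readings (the noise estimate plotted vs. distance)."""
    groups = defaultdict(list)
    for d, rssi in measurements:
        groups[d].append(rssi)
    return {d: (statistics.median(vals), statistics.stdev(vals))
            for d, vals in sorted(groups.items())}

# Illustrative readings: 5 repeats at 0.5 m and 5 repeats at 3.6 m.
data = [(0.5, r) for r in (-51, -53, -52, -50, -54)] + \
       [(3.6, r) for r in (-72, -75, -73, -71, -74)]
for d, (med, sd) in summarize_by_distance(data).items():
    print(f"d = {d} m: median RSSI = {med} dBm, std = {sd:.2f} dBm")
```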
Discussion and Conclusions
Using BLE to locate position is of great interest, and several applications have thus embedded BLE into electronic devices. In this research, smartwatches were also run with BLE to determine whether a student is inside or outside a school bus, which helps the driver and the person responsible for transporting students to/from school or to their homes to be sure that no student is left on the bus on each journey. Although BLE is widely used in several applications, its signal strength varies due to obstructions and bus movement during a journey, which compromises the location accuracy in the student-tracking application. Therefore, in this research, the averaged signal strength is compared with a known location, resulting in improved accuracy.
The positioning test calculated from the signal strength at each position (A and B) was performed, and it shows that the raw RSSI values vary considerably, which can be mitigated by collecting the values multiple times and using their average instead. The tested positions inside the bus (zone A) give averaged RSSI values between −68.2 and −83.0 dBm, while those outside the bus (zone B) give averaged RSSI values between −40.6 and −57.0 dBm. The averaged signal strength proposed in this research can therefore be used for positioning with high accuracy, leading to the precise classification of each student into zone A or zone B. The high-accuracy positioning method proposed in this research can be further applied to a smarter student transportation bus system, where students' home locations can be added to the proposed system and linked to each student's individual smartwatch. When the school bus approaches a student's home, notifications can be sent to the driver, the person in charge, and the student. As a result, drivers can operate flexibly with minimal risk of skipping students' homes. Moreover, the proposed smart system may be used with unmanned school buses in the future.
Funding: This study did not receive grants from any funding agencies in the public commercial, or not-for-profit sectors.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
"Computer Science",
"Engineering"
] |
A FRAMEWORK FOR ENHANCING THE USE OF INDIGENOUS KNOWLEDGE SYSTEM IN TECHNOLOGY DEVELOPMENT AND UTILIZATION IN DEVELOPING ECONOMY
The productivity of agriculture in Nigeria over the years has relied extensively on scientific innovations transferred to farmers from research institutes via extension services. Thus, technology development has most often been based on a scientific research approach, with little or no collaboration with the indigenous people. This is based on the argument that indigenous people are ignorant, fatalistic and have nothing to offer as solutions to their problems. Incidentally, the imposition of a scientific technology development framework, without the incorporation of the indigenous knowledge systems of local communities, is believed to have contributed to a large extent to failures in sustainable resource use and to the erosion of biodiversity. It is evident that indigenous societies have profound and detailed knowledge of the systems, environments and species (plants and animals) with which they have been in contact for generations, and have developed strategies based on their own indigenous technical knowledge to solve their own problems. Therefore, full recognition of the indigenous knowledge system of the "supposed users" of technology, their local traditions and their technology endowment capabilities is central to the issue of sustainable and equitable technology development and utilization. This paper critically examines technology development processes and suggests a framework for enhancing the integration and use of indigenous knowledge systems in technology development.
INTRODUCTION
In many African production systems, researchers have blamed the retrogression in agricultural production, particularly food production, on the non-utilization and non-adoption of modern technology developed from modern scientific knowledge (Tripp 1985).
Unfortunately, agricultural production technologies were designed as if all farm families in Africa shared the same economic, social, cultural and ecological conditions. This mistaken assumption has led to technology meant for rural farmers being designed outside the users' immediate environment, resulting in negative effects on the users. Lack of an effective research approach for tackling the complex problems of adapting available technology to the highly diverse conditions of small farmers has been identified as one major reason for the non-adoption of scientifically approved technologies (Ashby, 1998). In spite of many efforts by both private and public agencies and organizations towards agricultural development, it has been established that the technology development process has been based on scientific research with little or no collaboration with the local people. Rajasekaran (1993) observed that the scientific systems of technology development as commonly used, especially in the developed economies, do not really integrate indigenous knowledge systems and do not carry along the entire farm family, as they are based on scientific control processes, which underscores the roles, values and means of integrating local knowledge into technology development. In recent times, many researchers have observed that full recognition of local knowledge systems is central to the issue of sustainable development. No society can ever hope to achieve the long-term goal of sustainable development unless it builds upon its own knowledge, traditions, ethical foundations and technological endowments (Gupta, 1992).
Thus, indigenous knowledge has been found to be very relevant in resource conservation because indigenous societies of different groups tend to have profound and detailed knowledge of the systems and species with which they have been in contact for generations.
Indigenous knowledge systems, which comprise the knowledge, innovations and practices of indigenous people, are unique to a given culture or society because they optimally utilize available resources, explore and exploit existing diversities, take into account the instability of the environment and promote livelihoods and the sustained use of productive resources (Warren 1991; Ajibade 1999; Warren et al. 1989). In recent times, concern for agricultural and sustainable development has drawn increasing attention to the question of the indigenous knowledge and practices of local people, as well as their participation in the process of development. However, indigenous knowledge, which is found to exist in every region of sub-Saharan Africa, is said to be overlooked in development efforts, particularly in the area of technology design and development (Phillips and Titilola 1995). Notwithstanding the potential contributions of indigenous knowledge to uplifting the livelihood of local people, some people still believe that it is backward, conservative, inefficient, inferior and based largely on myths (Kolawole 2001; Titilola 2003). Even though there is ample ethno-scientific information at present detailing the relevance of indigenous practices in agriculture within the African continent, indigenous knowledge is still largely ignored by scientists and designers of production technologies. Hence, the technology development approaches utilized over the years have not taken into consideration unique resource endowments such as ecosystem fragility, skills, preferences and the knowledge base of society. The wrong notion that local people, especially non-literate rural dwellers in developing countries, have nothing to offer as solutions to their problems has to a large extent contributed to the neglect of the indigenous knowledge systems of local people in the process of technology development. The common practice adopted in initiating technology development has remained the researcher or scientist taking full control in developing policies, strategies and methodologies of research, while the initiatives of the indigenous people are not sought or utilized (Roling and Pretty 1995). Efforts at solving local people's socio-cultural problems or addressing the problems of agricultural production have not systematically involved small farmers as active participants in the planning, execution and evaluation of research. Instead, technology has been designed with little or no consideration of the role of local people's knowledge in the entire process. Given the fact that local people have continued to use traditional technologies in food production, there is a need to develop strategies capable of tapping or retrieving their existing knowledge and incorporating it into the development of sustainable production systems. But the pertinent questions that confront us have been: What is the process of technology development commonly adopted by researchers in sub-Saharan African countries? What is the value of indigenous knowledge in technology development, and how can it be harnessed, used and integrated into technology development? It has therefore become necessary for scientists and other development practitioners to harness community-based initiatives through collaborative research, in which jointly designed activities will not only empower communities but also generate sustainable strategies to conserve local environments and revitalise traditional cultures.
This paper is therefore designed to review the methods and techniques adopted in technology development with the aim of identifying their shortcomings; to determine the value and relevance of indigenous knowledge and local people's participation in technology development; and finally to propose a framework for the integration of indigenous knowledge for sustainable agricultural development.
MODELS/METHODS/APPROACHES UTILISED IN TECHNOLOGY DEVELOPMENT
There has been growing awareness that the socio-economic and agro-ecological conditions of resource-poor farmers are complex, diverse and risk-prone, and efforts are being focused mostly on increasing the involvement of farmers in technology development and transfer. An enormous variety of methodologies have been developed and used by thousands of professionals over the years in the process of developing technologies. In recent times, a number of approaches and methodologies have been tried and utilised to facilitate farmers' participation in technology development, and a good number of researchers have started to apply them. Below is a review of some of the approaches, models and methods tried and utilised in technology development.
CONVENTIONAL TECHNOLOGY DEVELOPMENT APPROACH
In Nigeria and other developing countries, scientists, particularly those involved in agricultural matters, have until now been trained to view agricultural innovation as a process of vertical transfer: bringing in foreign technologies, adapting them and transferring them to farmers. However, the indigenous technical knowledge embodied in farmers' experimentation, a valuable resource, remains untapped and unused in the technology development process. The neglect of farmers' expertise is linked to formal methodological approaches which provide scientists with techniques for conducting research and implementing its results in ways that do not enable small farmers to utilize their expert knowledge of local conditions, their skills, or their capacity for self-help. Under this approach the farmer is seen as a user or beneficiary of technology outcomes and has no input into technology development. Instead, it is the scientists' task to identify, analyse and solve farmers' technical problems by developing solutions at the research station, without taking into serious consideration the distinctive economic, social and cultural traits of farm families in different regions, and then transferring the results as messages to farmers via extension workers, whose role is to assist farmers in putting the ready-made technology into practice. In other words, the only linkage between the scientist and the farmer in this process is extension, which is the technology transfer medium. In this model, technology development was based on the work of intellectuals, comprising individuals with formal education in different areas of agricultural technology. These intellectuals formed the basis for designing various levels of technologies meant for farmers' utilization.
While these scientists were actively involved in the development of the technologies, the farmers, on the other hand, were regarded as mere recipients of such technologies.
The reliance of scientists on this model was based on the assumption that technology developed from scientists' knowledge and delivered to farmers by extension would address the problems of the rural area. In other words, this model assumes that farmers' problems can be solved by people and institutions that have the custody of modern knowledge (Anon, 1998). Inadvertently, the intellectual, cultural and social gap between the professional scientists, who are confined to experimental stations, and the farmers in the rural communities grows wider and wider, and this makes it difficult for scientists to see how semi-literate, bare-foot farmers could participate in research, and hence to voluntarily involve them in the intellectual process of defining problems, setting priorities and identifying potential solutions (Ashby, 1998). This model, which has been described as top-down, rigid, hierarchical and devoid of feedback, has one obvious disadvantage: the long time usually required for feedback from farmers to get to scientists and back to them. Thus, the deficiencies associated with this approach to the technology development process have been linked to the failure to bring about the desired increase in food production and the adoption of technology by farmers, vis-à-vis sustainable national development (Tripp, 1985). From this argument, it is clear that conventional on-farm research did not institutionalize farmer-scientist collaboration in the planning, testing and evaluation of technologies from the onset. As a result, research activities anchored on these conventional methods and approaches were found to constitute more of a problem than a solution to farmers' needs.
TRANSFER OF TECHNOLOGY (TOT)
In most developing countries the Transfer of Technology (ToT) model has been the common practice for developing and disseminating technologies. In this model, technological knowledge is believed to be generated by research organisations and then transferred by extension services to the farmers who utilise it. It is based on the assumption that technologies developed by scientists and transferred to farmers will trigger agricultural development and solve the farmers' problems identified by scientists through their varied scientific approaches. The model assumes that while the role of agricultural research institutes and their researchers is to generate knowledge and technologies, the role of extension services is to handle the subsequent dissemination of technologies and to provide a link between researchers, policy makers and farmers. The farmers, for their part, rather than being seen as potential initiators of solutions to their problems, are viewed as mere passive receivers of messages transmitted from the research organisations. Like the earlier conventional methodologies, this approach disregarded the fact that the majority of small-scale farmers, who form the bulk of farm producers in developing countries, did not have the economic resources to embrace technologies developed by scientists. Also, little account was taken of local knowledge and value systems. Thus, the farmers' response to the new technical messages developed by scientists and transferred by extension was much less successful than predicted, and adoption rates were usually very low. Rather than investigating the reasons for farmers' passive resistance to the technologies developed, a number of researchers rationalised the farmers' behaviour as a sign of traditionalism, ignorance and lack of flexibility. Also, farmers who could not accept the technologies from research stations were often labelled by extension workers as laggards who lacked the right attitude and capabilities. The linear top-down flow of information which characterises this model creates a rigid hierarchy that discourages interaction and feedback of information. The model therefore does not really provide an opportunity for researchers, extension staff and farmers to work together. Technologies developed under this model tend to be prescriptive and uniform, and such technologies do not pay attention to particular environments, conditions, opportunities and the local knowledge of the receiving audience. In spite of this, both scientists and extension workers conventionally expect unquestioning and universal acceptance from farmers of the technologies promoted.
FARMING SYSTEM RESEARCH
With the lapses noticed in the transfer of technology model, a new approach to agricultural research which emphasized the participation of farmers, the farming systems research approach, emerged in developing countries in the mid-seventies.
In this approach, trials were conducted together with farmers to identify the constraints of existing production systems, leading to the production of new improved technology packages. Under this arrangement, the farmer's role was to provide land and labour, act as an experimental control by farming on an adjacent plot with his standard practices and later respond to the results of the experimental treatments. In this process, the extension worker demonstrates the technologies developed by the scientists on farmers' farms for them to observe and see what, why, when and how to carry out the technology packages, while the research scientist provides leadership in designing the research (Baker 1996). In the farming systems research approach, extension specifically worked with farmers to identify their problems and, with the help of researchers, solutions were found. Although there is some element of participation in the approach, the farmer is still seen as a passive participant in technology development, with no input of any kind into the development of the technology package. Rather than helping farmers to solve their own problems through direct involvement in the development process, the approach fostered reliance on the extension workers and their resource base to solve the problems identified by the farmer. Also, little account is taken of the local knowledge and value systems of the benefiting audience. The implication is that technologies developed by scientists without the active involvement of the users are usually much less successful.
TRAINING AND VISIT APPROACH
In the late seventies and early eighties, the training and visit (T&V) system was introduced. This approach was utilized by the extension service to involve farmers in the process of technology design and utilization. It was seen as a modification of the top-down transfer of technology model because there is room for feedback from the farmers (see Figure 1). The specific role of extension in this model is the provision of basic information, advice and training (Christoplos et al., 2001). It ensures that extension agents visit farmers regularly, transmit messages relevant to their production needs and identify problems faced by farmers, which are quickly fed back to the scientists for solution or further investigation (Benor et al., 1984). This system, which some researchers have judged to be practically effective, is believed to facilitate research development through the extension workers and contact farmers who work with other farmers. The general expectation is that the Training & Visit system has the ingredients to facilitate farmers' participation in research development through the feedback mechanism. However, some elements of rigidity are associated with it because farmers are generally seen as recipients of technology and not as co-researchers in the development of scientific methods and practices. Besides, the approach promoted the development of uniform recommendations, which largely disregarded the low-potential, highly diverse farming activities of resource-poor farmers (Gitta, 2001). The major criticisms of the T&V model as it concerns technology development and utilisation are that the model insufficiently adapts the general recommendations released from the research agencies to farm and farmer conditions, which in turn leads to low adoption levels and poor performance of technologies, and that it pays too little attention to local knowledge and practices, cultural values and power relations within the recipient communities.
FARMER FIRST (FF) APPROACH
In the late eighties, another paradigm emerged, known as Farmer First, promoting active participation, empowerment and poverty alleviation (Chambers et al. 1989; Gitta 2001). This approach identified the starting point of technology development as an active and equitable partnership between rural people, as key partners, and the researchers and extension workers. Within this period, development practitioners increasingly perceived farmers as key players and partners in technology development and transfer. This is based on the understanding that farmers have the capacity to collaborate as partners in the development of technologies (participatory technology development) and also have the capacity to diffuse new technologies among themselves (farmer-to-farmer approaches).
These insights culminated in what is now known as the Farmer First approach, which involves some level of participation (Chambers 1989; Hagmann et al. 1998). The Farmer First approach argues that the strategy and methods of Transfer of Technology, which have been adopted over the years, do not fit the resource-poor farming systems of indigenous agriculture, which are complex, diverse, resource-poor and risk-prone (Chambers, Pacey & Thrupp 1989), especially in sub-Saharan Africa. In contrast to technology-driven agriculture, with its standardizing packages of practices, the Farmer First approach stresses and recognises the abilities of resource-poor farmers to experiment, adapt and innovate. Most importantly, the Farmer First approach pays attention to the technical side of local knowledge systems. However, this approach has been subjected to intense criticism. Despite the emphasis on farmer participation and the recognition given to farmers' indigenous knowledge, the approach has been accused of not addressing the issue of farmers' active contribution to technology design. The basic view underlying this criticism is that in most cases the technology development process still presents a framework whereby technology is generated by researchers, who indeed make all the major decisions. In past decades, scientists and researchers in the developed world cared little about the relevance of indigenous knowledge, even though it was reported that early efforts at agricultural technology generation were based on exploiting the knowledge of the best farmers and on promoting a process of horizontal transfer, i.e. farmer-to-farmer innovation. The prevailing attitude then was that western science, with its powerful analytical tools, had nothing to learn from indigenous knowledge. Western scientists perceived indigenous knowledge as a social product closely linked, or even restricted, to a cultural and environmental context (Sabine and Rischksky, 2001). However, the threat posed to humanity by ever-increasing environmental degradation and the impacts of scientific technology on the populations of developing countries have somewhat increased interest in indigenous knowledge. Some scientists are beginning to recognise that the world is losing an enormous amount of basic research material as indigenous knowledge disappears, and are now working towards promoting indigenous knowledge as a key to sustainable development (Brokensha et al., 1980). In recent times a wealth of information on indigenous knowledge pertaining to soils, plants, animals, and local people's innovative capabilities has been compiled. Indigenous knowledge is seen as the cultural knowledge of rural people, promoting understanding and identity among members of a farming community, whose local technical knowledge and skills are inextricably linked to non-technical issues (Gitta 2001). Indigenous knowledge is native and unique to a group of people sharing particular cultures and traditions. In terms of its mode of acquisition, it is passed down from generation to generation by oral (undocumented) communication. Apart from being the local technology base, it incorporates cultural, social and economic components of rural living. This means that indigenous knowledge is dynamic, developing as the collective experience of specific social groups in interaction with their environment.
The place of indigenous knowledge systems in overall agricultural production is very vital, but the vast storehouse of this knowledge is yet to be explored and utilized in sustaining a reasonable level of development. Contrary to what researchers may assume, a number of recent studies have highlighted the fact that local knowledge is not totally powerless in the face of outside (scientific) knowledge (Okali et al., 1994). The socio-economic appropriateness of indigenous knowledge in agriculture and the years of experience of farmers have produced successful techniques in mixed cropping patterns, water and soil management, seed selection, pest management, food processing and storage, and other adaptations to the environment (OTA, 1986). Therefore, a better understanding of indigenous farming systems based on indigenous technical knowledge is considered essential for the successful development of new technology. Chambers (1979) stated that one of the strongest contributions of indigenous knowledge to agricultural research is its systems of classification of the biophysical environment. He stated, however, that indigenous knowledge is not meant to replace scientific knowledge but to complement it; indeed, there is much overlap between indigenous knowledge systems and scientific systems. Although some scholars have capitalized on the differentiated nature and structure of indigenous knowledge to question the value of ethno-scientific models, the fact remains that local farming practices and environmental knowledge can offer starting points for developing farming methods which can increase the productivity and sustainability of local resources. In the light of farmers' own experience and understanding, local farming knowledge can supply missing ecological links which may help scientists to develop alternative farming techniques. Many innovations either originate from farmers or are modified by farmers to adapt them better to their situation. In addition, it is recognized that farmers can play an important role in technology development, dissemination and adoption through the integration of their knowledge system. Therefore, much effort should be tailored towards integrating indigenous knowledge systems with scientific knowledge systems in designing and developing technology for successful adaptation and adoption. Although there is agreement among a wide variety of individuals on the need for farmer participation and the integration of farmers' knowledge systems in the research process, there is no explicit statement or definition of the nature or level of their participation. Grumman (1993) noted that some approaches which incorporate farmers' knowledge and participation in the technology development process only focus on using farmers' fields, or on using farmers to help researchers identify problems and set priorities during diagnostic surveys, while leaving them as recipients of the researchers' products. Biggs (1989) developed a framework describing the relationship between research partners and the recognition given to local opinions and practices (see also Okali et al., 1994). In this framework, he classified participation, in terms of the level of involvement of the people and the extent to which their knowledge, opinions and practices are given relevance in research activities, into four categories: contractual, consultative, collaborative and collegiate.
(See Table 1.) A thorough and careful examination of these categories reveals some critical issues as they concern the integration of farmers' knowledge and their participation in technology development.
CONTRACTUAL PARTICIPATION:
Here, little interest is shown in farmers' knowledge. There is limited dialogue between the farmer and the scientists in the research process. The farmer's role in this process is a passive one: he is involved in research as a collaborator with the scientists by contributing land and labour for on-farm trials that are often designed and managed entirely by researchers, who also derive conclusions from the trials without attempting to interact with farmers about their responses to the technology. This approach is widely criticized because of its top-down nature, as it depicts the technology development process as moving technologies from experimental stations to farmers' fields.
COLLABORATIVE PARTICIPATION:
At this level of participation, the scientists recognize the importance of local information and resources and hence adopt a diagnostic research approach involving informal interactions between researchers and farmers to identify the problems to be addressed in technology design. Farmers participate as informants and as a source of ideas for problem diagnosis. Scientists at this point collaborate with farmers in determining priorities among problems, as well as in planning and designing farming patterns suitable for a given farm environment.
CONSULTATIVE PARTICIPATION:
Although dialogue is carried out in coordination, action can be taken independently by either party, and individual agendas often dominate the relationship. The consultative and collaborative forms of farmer participation, widely supported by farming systems research, have been identified with a major shortcoming relating to the stage at which the involvement of farmers is activated. Also, the time-lag between identifying problems, designing potential technical solutions to farmers' problems and obtaining farmers' reactions to the developed technology during validation in farmer-implemented trials can be prolonged.
COLLEGIATE PARTICIPATION:
Collegiate participation puts emphasis on strengthening and supporting informal research processes by building on existing skills and knowledge. The farmer, seen as an active participant, plays the role of a colleague in the research process. The researcher, through mutual learning with the farmers, taps their knowledge about local conditions and innovations to discover new opportunities. In effect, the farmers take part in making decisions about the technology through participation in diagnosis, planning, design and experimentation. There has been a radical rethinking of the roles of farmers and professionals in agricultural research and extension activities, which has led to a virtual revolution in the agricultural sector that scholars have even termed a 'paradigm shift' (Scoones and Thompson 1994). Having established that the conventional approaches to technology development have not been attuned, in developing countries like Nigeria, to meeting farmers' agricultural needs, and having also noted that most recent practices and methods designed to facilitate farmers' participation in research development have not systematically involved local farmers as active participants in the planning, execution and evaluation of research, this paper therefore makes a case for a participatory approach to integrating farmers' local knowledge and practical involvement in technology development (PTD). PTD approaches are based on the effective participation of rural communities. Farmers' participation in technology design has come to be seen as an integral part of agricultural research development. The focus of PTD is to promote greater involvement of farmers in rural communities in planning and implementing agricultural development activities, and to enhance capacity building, social mobilisation, experiential learning and empowerment, which are major elements of participatory technology development approaches. In this model, farmers are encouraged to take the initiative and work with research and extension staff on equal terms in testing and implementing appropriate solutions.
Regardless of which name is used, the participatory approach is essentially a process of purposeful and creative interaction between rural people and outside facilitators. Through this process, scientists and researchers try to increase their understanding of the main traits of local farming systems, and based on ideas and experiences derived from both local knowledge and modern science, the best options for addressing agricultural problems are selected and experimented on collectively (see Fig. 2). This approach, which has the farmer as an active participant in research design, is ideal for the researcher investigating ecological, low-input and sustainable production systems through the eyes of the farmer, who knows his own needs and problems. The approach has been described as a people-centred process of purposeful and creative interplay between local people and communities on the one hand and outsiders with formal scientific knowledge on the other. In other words, it provides an opportunity to build better linkages between the various actors so that they can learn from each other.
[Fig. 2 source: author's modification.] The participatory approach as a framework for enhancing the use of indigenous knowledge in technology development is anchored on two main assumptions. First, involving farmers in research design through a participatory approach rests on the assumption that many farmers are actively engaged in ongoing, everyday research into new or improved crop planting materials, varieties, production techniques, methods of protecting crops from pests and diseases, and livelihood options. These farmers, referred to by some writers as 'research-minded' farmers, are seen as generators of new information and as sources of understanding of how technologies operate in practice. The second assumption is that there are core elements within local farming systems, and within the larger contexts in which they exist, that have not been observed or examined by formal research but are understood by farmers themselves because of their years of exposure to them. In other words, there are hidden local resources, skills and knowledge that are yet to be exploited within the various local cultures. It is through a careful examination of these elements, based on the knowledge and understanding of both farmers and scientists working through a participatory approach, that sustainable technologies and solutions can be developed. Therefore, the participatory approach is unarguably considered the most practicable and sustainable way to facilitate the use and integration of indigenous knowledge in the process of technology development. It affords the different partners the opportunity to work together in designing and refining technologies released to farmers for use, and it strengthens the existing experimental capacities of indigenous people (Essers 1994; Jiggins 1992). In effect, farmers as 'insiders', with their wealth of knowledge and practical abilities, interact with researchers and extension workers as 'outsiders' to identify, develop, test and apply new technologies and practices. The participatory methodological process thus tends to reinforce the existing creativity of indigenous people and helps them keep pace with the process of generating innovations for sustainable use. The inclusion of indigenous knowledge in rapid appraisal methodologies provides a basis for the incorporation of local needs in technology development. This paper identifies the following potential implications of applying the participatory approach to technology design, especially for agricultural development:
IMPLICATIONS
(i) Incorporating indigenous knowledge in technology planning allows scientists to understand how culture and beliefs act as major determinants of technology acceptance and utilization. Failure to recognize the importance of farmers' indigenous knowledge may leave scientists finding solutions for which there are no problems, since solutions are best found within the framework of people's own local knowledge system once it is recognized. (ii) The integration of indigenous knowledge into technology development using participatory approaches provides scientists, extension workers and farmers the opportunity to work together on the same issue. Apart from exchanging knowledge and experiences with farmers, scientists reach consensus with farmers on what is most needed. As a result of this collaboration, farmers become more confident that scientists and extension workers can help them without imposing their own knowledge or solutions on them. (iii) For the production of appropriate and acceptable technology, the participatory approach is considered most suitable because it allows a research team to quickly and systematically collect information for the general analysis of a specific problem, for needs assessment, or for identifying priorities for solutions. As an approach that involves a working team whose members have different skills and backgrounds, it permits problems to be approached holistically in an informal and flexible manner. Learning takes place not just at the research station but in the farmers' fields, where on-the-spot assessment can be done. (iv) The participatory approach is not only appropriate for working with the rural poor; it is also essential for working in areas that might be considered inaccessible, difficult or out of touch for most researchers.
(v) The participatory approach can be tailored to fit the needs of almost any community in terms of both community dynamics and local preferences. It allows participants in research development to listen to the views of local people on different issues and to learn their indigenous skills. (vi) A participatory approach to technology development enables scientists to understand the main characteristics and dynamics of the agro-ecosystem within which a community operates, based on ideas and experiences derived from indigenous knowledge and informal science. Thus, scientists, in collaboration with the farmers themselves, can develop options that meet the farmers' needs, thereby facilitating the use of indigenous knowledge as an integral part of the technology development process. (vii) Finally, a participatory approach to technology development is a practical process that brings farmers' knowledge and practical abilities to bear in testing technologies and enables them to interact with researchers as colleagues in identifying, developing, testing and applying new technologies and practices.
CONCLUSION
Cropping systems in most developing communities are complex and diverse, and therefore require a systems approach for both analysis and improvement. Thus, an alternative approach to the development of technologies is being called for, one that can cope with ecological uncertainty and diversity and that recognises and incorporates the input of indigenous people in designing practices. This call has become pertinent because many official systems that have implemented highly sophisticated models and approaches toward technology generation and utilization, such as transfer of technology (TOT), training and visit (T&V) and farming systems research (FSR), have not substantially improved food security or helped to support sustainable farming practices. In addition, small-scale farmers still have considerable information deficits on technical, economic, marketing and environmental issues, while many are unable to adapt the available stock of scientific knowledge to their farming environment because it is not economically feasible, socially acceptable or environmentally adapted. Given the shortcomings of the conventional approach to technology development, a search for a more comprehensive approach that not only accommodates local people but also recognizes the usefulness of their knowledge is inevitable. As indicated earlier in this paper, a wealth of information on indigenous knowledge pertaining to soils, plants and animals has been compiled by researchers, and the literature abounds with examples of innovative discoveries by local people. It is therefore very important that scientists in Nigeria and other developing countries give special consideration to the attributes of indigenous knowledge built into traditional practices, in order to have a sound technological base for the future improvements needed to overcome the often unpredictable food crises prevalent in sub-Saharan Africa. | 7,855.4 | 2019-08-01T00:00:00.000 | [
"Economics"
] |
Intra-Individual Comparison of 18F-PSMA-1007 and 18F-FDG PET/CT in the Evaluation of Patients With Prostate Cancer
Purpose 18F-labelled PSMA-1007 has shown promising results in detecting prostate cancer (PC), although some pitfalls exist. The aims of the present study were to perform an intra-individual comparison of 18F-FDG and 18F-PSMA-1007 in patients with prostate cancer and to analyze the pitfalls of 18F-PSMA-1007 PET/CT in imaging these patients. Methods and Material Twenty-one prostate cancer patients underwent 18F-PSMA-1007 PET/CT as well as 18F-FDG PET/CT before treatment. All positive lesions were recorded on both 18F-PSMA-1007 PET/CT and 18F-FDG PET/CT, and PC metastases were then differentiated from benign lesions. The SUVmax, SUVmean and TBR of lesions were recorded for up to 10 metastases and 10 benign lesions per patient (5 bone and 5 soft-tissue metastases). The distribution of positive lesions was analyzed for both modalities. Detection rates, SUVmax, SUVmean and TBR on 18F-PSMA-1007 PET/CT and 18F-FDG PET/CT were compared. The optimal cut-off values of SUVmax and SUVmean for metastases vs. benign lesions were determined from the areas under the ROC curves for 18F-PSMA-1007. Results The detection rate of primary lesions on 18F-PSMA-1007 PET/CT was higher than that of 18F-FDG PET/CT (100% (21/21) vs. 67% (14/21)). For extra-prostatic lesions, 18F-PSMA-1007 PET/CT revealed 124 positive lesions, 49 (49/124, 40%) of which were attributed to a benign origin; 18F-FDG PET/CT revealed 68 positive lesions, 14 (14/68, 21%) of which were attributed to a benign origin. The SUVmax, SUVmean and TBR of the primary tumor on 18F-PSMA-1007 PET/CT were higher than on 18F-FDG PET/CT (15.20 vs. 4.20 for SUVmax; 8.70 vs. 2.80 for SUVmean; 24.92 vs. 4.82 for TBR, respectively). The SUVmax, SUVmean and TBR of metastases on 18F-PSMA-1007 PET/CT were higher than on 18F-FDG PET/CT (10.72 vs. 4.42 for SUVmax; 6.67 vs. 2.59 for SUVmean; 13.3 vs. 7.91 for TBR). For 18F-FDG PET/CT, the SUVmax and SUVmean of metastases were higher than those of benign lesions (4.42 vs. 3.04 for SUVmax, 2.59 vs. 1.75 for SUVmean, respectively). Similarly, for 18F-PSMA-1007 PET/CT, the SUVmax and SUVmean of metastases were significantly higher than those of benign lesions (10.72 vs. 3.14 for SUVmax, 6.67 vs. 1.91 for SUVmean, respectively); ROC analysis suggested that SUVmax = 7.71 and SUVmean = 5.35 might be the optimal cut-off values for metastases vs. benign lesions. Conclusion This pilot study suggested that 18F-PSMA-1007 is superior to 18F-FDG because of its high detection rate of PC lesions and excellent tumor uptake. Non-tumor uptake of 18F-PSMA-1007 may lead to misdiagnosis; recognizing these pitfalls and careful analysis can improve diagnostic accuracy.
INTRODUCTION
Prostate cancer is the second most common cancer in men (1). Early detection and accurate staging lead to improved clinical decision making. In contrast to traditional imaging (e.g., computed tomography (CT), magnetic resonance imaging (MRI)), whole-body coverage is an advantage of PET/CT. Recently, studies of prostate-specific membrane antigen (PSMA) ligands have been growing and suggest impressive results in the diagnosis and staging of prostate cancer (2)(3)(4).
Recently, with the extensive application of PSMA-targeted tracers, the pitfalls of PSMA-targeted PET have been reported increasingly (12)(13)(14)(15). Uptake of PSMA ligands in other malignant and benign pathologies (e.g., celiac and other ganglia, fractures, degenerative changes) poses challenges to clinical diagnosis. Recognizing these limitations is essential.
The aim of the present study was to perform an intra-individual comparison of 18F-FDG and 18F-PSMA-1007 in the evaluation of patients with prostate cancer. We then analyzed the pitfalls that may appear on 18F-PSMA-1007 PET/CT in order to reduce the probability of misdiagnosis.
MATERIALS AND METHODS Patients
A total of 21 patients (median age, 66 y; range, 50-82 y) with pathologically diagnosed prostate cancer underwent 18F-PSMA-1007 PET/CT and 18F-FDG PET/CT before treatment. Eighteen (86%) of these patients were diagnosed with prostate cancer by perineal prostate biopsy; two (9%) patients were confirmed by biopsy of a pelvic lymph node and of the ala of the ilium, respectively; one (5%) patient was diagnosed by cystoscopic biopsy. The Gleason score was available for 16 patients; the median Gleason score was 9 (range 7-10). The treatment of the patients was as follows: eight (38%) received androgen deprivation therapy (ADT) alone, two (10%) received ADT after docetaxel chemotherapy, four (19%) were treated with radical prostatectomy only, and seven (33%) were treated with ADT after radical prostatectomy.
The study was ethically approved by the Institutional Ethics Committee (Ethics Committee of Sichuan Cancer Hospital, JS-2017-01-02) and conducted in accordance with the local regulations of China. All patients signed a written informed consent form. The patients' characteristics are listed in Table 1.
Radiosynthesis and Quality Control
18F-PSMA-1007 was synthesized by a one-step method using an automated radiosynthesizer (Sumitomo, Japan) as previously described (16). 18F− was obtained from the 18O(p,n)18F nuclear reaction on H218O and then trapped on a quaternary methylammonium column (Waters, USA). After elution with 0.75 ml of tetrabutylammonium hydrogen carbonate (TBAHCO3) solution (ABX, Radeberg, Germany), it was transferred into the reactor, 0.4 ml of anhydrous acetonitrile (Sigma, USA) was added, and water was removed at 95°C. Then 1.2 ml of dimethyl sulfoxide (ABX, Radeberg, Germany) containing the PSMA-1007 precursor (ABX, Radeberg, Germany) was added to the reactor, and the fluorination reaction was performed at 85°C for 10 min. The mixture was then diluted with 6 ml of 5% ethanol and loaded onto PS-H+ and C18ec cartridges (ABX, Radeberg, Germany), which were washed with 4 ml of 30% ethanol. The final product was eluted with 4 ml of 30% ethanol, mixed with 0.1 ml of 100 mg/L vitamin C solution and 36 ml of 0.9% NaCl, and sterilized through a 0.22 μm filter (Millipore, USA). High-performance liquid chromatography (HPLC, Shimadzu, Japan) was performed to test chemical purity, and further quality control (appearance, color, clarity, pH, and radionuclidic purity) was carried out in compliance with current pharmacopoeias. The synthesis of 18F-FDG was performed as reported by Gallagher et al. (17).
Imaging Procedures
To reduce mutual interference between the two radiotracers, the two scans were carried out on different days; a median of 6.5 days (range 1.0-34.0) passed between 18F-PSMA-1007 and 18F-FDG imaging. Patients fasted for at least 6 h prior to injection of 18F-FDG, and blood glucose was confirmed to be below 15 mg/L. The injected activity of 18F-FDG was 388 ± 55 MBq (range 281-503 MBq), with scanning performed 60 min after injection; the injected activity of 18F-PSMA-1007 was 348 ± 52 MBq (range 266-458 MBq), and, following Giesel et al. (5), imaging began 180 min after injection. All scans were obtained on a Biograph mCT-64 PET/CT scanner (Siemens). A non-enhanced low-dose (1.3-1.5 mSv) CT scan was performed (140 kV, 42 mA, section width 8 mm, pitch 0.8), and the CT data were used for attenuation correction. PET was acquired in 3D FlowMotion mode with an acquisition time of 2 min per bed position. Both scans were performed from the vertex to the mid-thigh. Images were reconstructed with an ordered-subset expectation-maximization iterative reconstruction algorithm (three iterations, 21 subsets).
Image Analysis and Quantification
All images were evaluated by two double board-certified nuclear medicine physicians. Volumes of interest (VOI) were drawn around lesions using a maximum standardized uptake value (SUVmax) isocontour threshold of 42% (18). Intra-prostatic lesions were defined as positive if the tracer uptake was focal and higher than that of the surrounding prostate tissue (19). Other soft-tissue and bone metastases were judged as positive when there were obvious morphological changes and the corresponding lesions showed radiotracer uptake above normal surroundings (20). Benign lesions were recognized based on typical pitfalls (e.g., ganglia, fractures, degenerative changes, and unspecific lymph nodes) in PSMA-ligand PET imaging and on information from CT (14). All PET-positive lesions were counted and grouped into: (a) local tumor growth, (b) soft-tissue metastases [including lymph node (LN) metastases and other soft-tissue metastases (e.g., lung, liver)], (c) bone metastases, and (d) benign lesions. In accordance with previous studies, the obturator muscle was chosen as background and a VOI was drawn around it (19,21). The tumor-to-background ratio (TBR) was defined as the SUVmax of the lesion divided by the SUVmax of the obturator muscle. The SUVs (SUVmax and SUVmean) and TBR of the primary tumor and of up to 10 metastases per patient were recorded (five bone and five soft-tissue metastases); the SUVs (SUVmax and SUVmean) of up to 10 benign lesions per patient were also recorded.
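As a minimal illustration of the TBR definition above, the following sketch divides per-lesion SUVmax values by the obturator-muscle SUVmax; all numbers are hypothetical placeholders, not data from this study.

```python
# Hypothetical SUVmax values; TBR = lesion SUVmax / obturator muscle SUVmax,
# as defined in the text.
background_suvmax = 0.9  # obturator muscle VOI (assumed value)
lesion_suvmax = {"primary": 15.2, "ln_metastasis": 10.7, "bone_metastasis": 9.8}

tbr = {name: suv / background_suvmax for name, suv in lesion_suvmax.items()}
for name, value in tbr.items():
    print(f"{name}: TBR = {value:.2f}")
```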
Statistical Analysis
Statistical analysis was performed using SPSS software, version 24.0 (IBM Corp.). The nonparametric Mann-Whitney U test for two independent samples was used to compare the SUVs and TBR of the lesions. For 18F-PSMA-1007 PET/CT, areas under receiver operating characteristic (ROC) curves were calculated, and the optimal cut-off values of SUV for metastases vs. benign lesions were determined using Youden's index. P < 0.05 was considered significant.
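A hedged sketch of this analysis pipeline, assuming per-lesion SUVmax values grouped into metastases and benign lesions (the arrays below are synthetic, not the study data); it uses scipy for the Mann-Whitney U test and scikit-learn for the ROC curve, and picks the cut-off that maximizes Youden's index (sensitivity + specificity − 1).

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_curve, auc

# Synthetic SUVmax values for illustration only.
suv_metastasis = np.array([10.7, 8.3, 12.1, 9.4, 7.9, 11.6])
suv_benign = np.array([3.1, 2.4, 4.0, 2.8, 3.6, 1.9])

# Two-sided Mann-Whitney U test between independent samples.
u_stat, p_value = mannwhitneyu(suv_metastasis, suv_benign, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# ROC analysis: label metastases 1, benign 0, score = SUVmax.
labels = np.concatenate([np.ones_like(suv_metastasis), np.zeros_like(suv_benign)])
scores = np.concatenate([suv_metastasis, suv_benign])
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Youden's index picks the threshold maximizing sensitivity + specificity - 1.
youden = tpr - fpr
best_cutoff = thresholds[np.argmax(youden)]
print(f"Optimal SUVmax cut-off (Youden) = {best_cutoff:.2f}")
```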
Local Lesion Finding and Uptake
Among these 21 prostate cancer patients, 18F-PSMA-1007 PET/CT detected the primary tumor in all patients (100%), and eight (38%) of these cases showed obvious multifocality. Fourteen of 21 cases (67%) were identified by 18F-FDG PET/CT, and none of them showed multifocality (Table 1, Figure 1). The SUVmax and SUVmean of suspicious metastases were significantly higher than those of probably benign lesions, and ROC analysis yielded optimal SUV cut-off values for metastases vs. benign lesions (Tables 2 and 3, Figure 4). Lesions attributed to a benign origin and lesions attributed to metastases were recorded for 18F-FDG PET/CT and 18F-PSMA-1007 PET/CT, respectively. The detection rate for local lesions on 18F-PSMA-1007 PET/CT was higher than on 18F-FDG PET/CT (100% (21/21) vs. 67% (14/21)), and 18F-PSMA-1007 PET/CT was more likely to find multifocal lesions (eight cases for 18F-PSMA-1007 PET/CT, none for 18F-FDG PET/CT). This might be explained as follows. Firstly, PSMA is a type II transmembrane glycoprotein that is strongly overexpressed in PCa cells (both primary tumor and metastases) and expressed at low levels in benign prostate tissue (22)(23)(24), making 18F-PSMA-1007 PET/CT a promising technique for detecting and locating prostate cancer. Furthermore, hepatobiliary elimination appears to be another advantage of 18F-PSMA-1007, whereas 18F-FDG is mainly excreted via the urinary tract (17,25); the low bladder/ureter activity on 18F-PSMA-1007 PET/CT makes it possible to differentiate the primary tumor and pelvic lymph node metastases from bladder urinary activity (26). In the present study, we found that 18F-PSMA-1007 PET/CT was more likely to detect metastases (both LN and bone metastases) than 18F-FDG PET/CT (75 lesions for 18F-PSMA-1007 PET/CT, 54 for 18F-FDG PET/CT). However, a recently published study found no significant difference when comparing the detection rates of 18F-PSMA-1007 and 18F-DCFPyL (27). The fact that both tracers belong to the same family of PSMA ligands and are labelled with the same radioisotope (18F) may explain this (28).
Consistent with previous studies, we found that 18F-PSMA-1007 PET/CT shows high tumor-to-background ratios (TBR) in prostate cancer lesions (6), with a median TBR of 24.92 in local lesions and a median TBR of 13.30 in lymph node and bone metastases. The corresponding median TBRs for 18F-FDG PET/CT were 4.82 in local lesions and 7.91 in lymph node and bone metastases. At 3 h after injection of 18F-PSMA-1007, the uptake of the radiotracer in prostate cancer lesions increases remarkably, improving the tumor-to-background ratio (5) and making tumor lesions more visible on 18F-PSMA-1007 PET/CT than on 18F-FDG PET/CT.
Recent studies suggest that PSMA-targeted PET shows some pitfalls in clinical application, especially with 18F-labelled PSMA ligands, since PSMA may be expressed in other malignant and benign pathologies and even in some normal tissues (6,13,14,29); these findings are in line with ours. We found that some PSMA-positive lesions were attributable to a benign origin (e.g., benign lymph nodes, ganglia, and skeletal fractures), and the reason for this phenomenon is not yet clear. To our knowledge, non-prostate tissues such as the salivary glands, liver and gallbladder show uptake on 18F-PSMA-1007 PET (5,7), which makes PSMA uptake in benign lesions plausible. Moreover, the presence of PSMA in both peri-tumoral capillaries and inflammation-associated neovasculature may explain the uptake in benign lesions (14). These pitfalls increase false-positive findings and pose diagnostic challenges; differentiating suspicious metastases from these potential diagnostic pitfalls is therefore of increasing importance.
In the present study, we found that the uptake of the PSMA-ligand tracer in probable PCa metastases was significantly higher than in benign lesions (10.72 vs. 3.14 for SUVmax, 6.67 vs. 1.91 for SUVmean), consistent with previous studies (15). ROC analysis showed that lesions with SUVmax ≥ 7.71 were more likely to be PCa metastases (AUC = 0.795, P < 0.001) than lesions with SUVmax < 7.71, and lesions with SUVmean ≥ 5.35 were more likely to be PCa metastases (AUC = 0.791, P < 0.001) than lesions with SUVmean < 5.35. These findings make it possible to differentiate suspicious metastases from benign lesions. Furthermore, lesions attributed to a benign origin can be identified on CT (80% of benign lesions, 96% of coeliac ganglia) (14,30), owing to their typical shapes and locations. More importantly, the clinical medical records (e.g., other imaging data, history of fracture, inflammation) provide essential information when evaluating benign lesions.
In accordance with previous studies (13,14), we found that the most prevalent pitfall on 18F-PSMA-1007 PET/CT was nonspecific radiotracer uptake in ganglia, with 17 (35%) lesions attributed to ganglia (including cervical, coeliac, or sacral ganglia). A recent publication by Krohn et al. demonstrated that up to 94.0% of prostate cancer patients undergoing PSMA PET/CT show intense PSMA-ligand uptake in at least one coeliac ganglion (15). In the current study, the distribution of radiotracer uptake in other benign lesions on 18F-PSMA-1007 PET/CT was as follows: unspecific lymph nodes, fractures, degenerative changes, unspecific soft tissues, and focally increased PSMA-ligand uptake with no clear correlate on CT images, in line with previous studies (12)(13)(14). Recently, one study reported a high number of foci of PSMA-ligand uptake in the ribs without corresponding morphological changes on CT (14), which differed from our finding [only two (28%) such lesions]. The small population in the present study may explain this difference. The lack of histopathological verification of the PSMA-positive lesions is the major limitation of the present study; however, the uptake of the lesions, the CT images and the clinical medical records make it possible to identify benign lesions. Additionally, the small patient population is another limitation, and larger comparison trials will be needed in future studies.
CONCLUSION
The study demonstrated that 18F-PSMA-1007 is superior to 18F-FDG in detecting PCa lesions (both primary and metastatic) and that uptake in benign lesions is more likely to be found with 18F-PSMA-1007. Awareness of the known pitfalls and evaluation of the PET and CT images together with the clinical medical records make it possible to avoid misdiagnosis on 18F-PSMA-1007 PET/CT.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available.
Requests to access the datasets should be directed to<EMAIL_ADDRESS>
ETHICS STATEMENT
The study was ethically approved by the Sichuan Cancer Hospital Ethics Committee and conducted in accordance with the local regulations of China. All patients signed a written informed consent form. | 3,385.2 | 2021-02-01T00:00:00.000 | [
"Medicine",
"Physics",
"Biology"
] |
GM-CSF/IL-3/IL-5 receptor common beta chain (CD131) expression as a biomarker of antigen-stimulated CD8+ T cells.
BACKGROUND
Upon Ag-activation, cytotoxic T cells (CTLs) produce IFN-gamma, GM-CSF and TNF-alpha, which simultaneously deliver pro-apoptotic and pro-inflammatory signals to the surrounding microenvironment. Whether this secretion affects the CTLs themselves in an autocrine loop is unknown.
METHODS
Here, we compared the transcriptional profile of Ag-activated, Flu-specific CTLs stimulated with the Flu M1:58-66 peptide to that of convivial CTLs expanded in vitro in the same culture. PBMCs from 6 HLA-A*0201-expressing donors were expanded for 7 days in culture following Flu M1:58-66 stimulation in the presence of 300 IU/ml of interleukin-2 and then sorted by high-speed sorting to high purity, gating CD8+ T cells according to staining with Flu M1:58-66 tetrameric human leukocyte antigen complexes.
RESULTS
Ag-activated CTLs displayed higher levels of IFN-gamma, GM-CSF (CSF2) and the GM-CSF/IL-3/IL-5 receptor common beta-chain (CD131) but completely lacked expression of IFN-gamma receptor II and IFN-stimulated genes (ISGs). This observation suggested that Ag-activated CTLs, in preparation for the release of IFN-gamma and GM-CSF, shield themselves from the potentially apoptotic effects of the former while entrusting their survival to GM-CSF. In vitro phenotyping confirmed the selective surface expression of CD131 by Ag-activated CTLs and their increased proliferation upon exogenous administration of GM-CSF.
CONCLUSION
The selective responsiveness of Ag-activated CTLs to GM-CSF may provide an alternative explanation for the usefulness of this cytokine as an adjuvant for T cell-directed vaccines. Moreover, the selective expression of CD131 by Ag-activated CTLs suggests CD131 as a novel biomarker of Ag-dependent CTL activation.
Background
In vivo animal models suggest that the activation of CD8-expressing cytotoxic T cells (CTLs) follows a linear pattern in which an expansion phase occurring within the first week after Ag stimulation rapidly evolves into a contraction phase in which surviving memory CTLs resume a quiescent phenotype [1,2]. During the expansion phase, Ag-activated CTLs boast a robust enhancement of effector functions, including the activation of cytotoxic mechanisms and the production of pro-inflammatory cytokines such as interferon (IFN)-γ. It is believed that such activation occurs through signaling associated with the Ag-specific triggering of the T cell receptor (TCR) combined with other co-stimulatory signals. In summary, naïve and, to a certain degree, long-term memory CTL activation and expansion depend upon three types of stimulation [3]. The first is the direct interaction between the TCR and the major histocompatibility complex (MHC)/epitope complex; this interaction determines the specificity of the activation. However, TCR triggering is not sufficient by itself to sustain a forceful activation and expansion of CTLs, and it may lead to unresponsiveness if other stimulatory signals are not provided simultaneously. A second signaling requirement is fulfilled by cell-to-cell interactions involving co-stimulatory molecules expressed on the surface of Ag-presenting cells. This interaction may sustain a few cell divisions but is insufficient to induce clonal expansion and full activation of effector functions. Thus, a third signal is needed, which is provided by immune-modulatory cytokines released by Ag-presenting cells, helper T cells or other immune cells in response to pro-inflammatory signals provided by pathogens or other environmental conditions. This third signal can be modeled experimentally by the exogenous administration of pro-inflammatory cytokines such as interleukin (IL)-2 [4].
Recombinant human IL-2 has been extensively used for the selective in vitro expansion of CTLs naturally exposed in vivo to Ag such as tumor infiltrating lymphocytes [5] or vaccine-induced circulating lymphocytes [6]. The in vitro expansion of CTLs exposed to Ag in vivo, strictly requires cytokine stimulation (as exemplified by IL-2); furthermore, in vitro stimulation in the presence of IL-2 leads not only to selective expansion of Ag-specific CTLs but also to the activation of their effector functions [4] paralleling the expansion phase described in other experimental models [1,7].
Segregating the respective contributions of Ag-specific signaling and environmental co-stimulation within the same microenvironment may provide useful insights into the mechanisms involved in the selective activation of Ag-exposed CTLs within a T cell population, and shed light on the requirements for full activation of CTL effector functions in the target organ during distinct immune reactions including tumor regression following immunotherapy [8,9], acute allograft rejection [10], clearance of viral infection [11] and flares of autoimmunity [12].
In a simplified in vitro model of human CTL activation, we previously observed that neither Ag stimulation in the presence of signal two nor the presence of signal three alone could induce in vitro expansion and activation of Ag-exposed CTLs, and only the combination of the three could induce effective CTL responses [4]. Analysis of the transcriptional patterns associated with the complete activation of effector CTL responses suggested that proliferation and effector function were both dependent upon the combined presence of the three signals. However, further dissection of the transcriptional patterns induced by the administration of IL-2 to peripheral blood mononuclear cells (PBMC) or to non-Ag-activated CD4 and CD8 T cell sub-populations suggested that the effects of IL-2 on T cell signaling are powerful but non-specific in the absence of TCR triggering [13]. Thus, to discriminate the individual contribution of direct TCR triggering to CTL activation, we compared the transcriptional profile of Ag-exposed CTLs to that of non-Ag-exposed, non-proliferating CTLs sharing identical environmental conditions. The model evaluated the kinetics of proliferation of HLA-A*0201-restricted CTLs specific for the Flu matrix protein epitope M1:58-66; seven days after in vitro Ag stimulation with M1:58-66 and culture in 300 IU of human recombinant IL-2 (Novartis-Chiron Co., Emeryville, CA), we used tetrameric Flu-specific human leukocyte antigen complexes (tHLA) to separate proliferating CTLs from their companion CD8-expressing T cells (convivial CTLs).
Transcriptional characteristics of stimulated versus resting CD8-expressing T cells
In vitro sensitization (IVS)
PBMCs were obtained by leukapheresis from HLA-A*0201-expressing normal volunteers and frozen after Ficoll separation. HLA-A*0201 expression was documented by sequence-based typing [14]. PBMCs from 6 donors were thawed and plated in complete Iscove medium (Life Technologies, Grand Island, NY) supplemented with 10% heat-inactivated human AB serum, 10 mM HEPES buffer, 100 U/ml penicillin-streptomycin, 0.5 mg/ml amphotericin B and 0.03% glutamine, at a density of 10^6 cells/well in 48-well plates. After overnight panning, cells were pulsed at day 1 with 1 μM Flu M1:58-66 peptide (Princeton Biomolecules, Langhorne, PA), and the following day human recombinant IL-2 at 300 IU/ml (rHuIL-2, Chiron Co., Emeryville, CA) was added. IL-2 was added every two days. At day 1, T cells were stained with carboxyfluorescein succinimidyl ester (CFSE) to monitor their proliferation. PBMC cultures were continued for 7 days until sorting.
Cell sorting
On the eighth day in culture, CD8-expressing T cells were enriched by negative selection using magnetic beads (Miltenyi Biotec, Auburn, CA) on an autoMACS separator before sorting. The median purity of CD8 T cells eluted from the columns was 85%. Sorting was then performed by high-speed flow cytometry (FACSVantage SE, BD); a logical gate was applied on SSC and live/dead staining with DAPI to check the viability of sorted cells. Sorting was always done in Normal-R mode, which optimizes for cell purity, as confirmed by re-analysis of the sorted populations. Sorting was based on the level of tHLA-Flu and CFSE staining of CD8-expressing T cells, segregating the tHLA-Flu+/CFSE− proliferating cells from the tHLA-Flu−/CFSE+ non-proliferating CTLs. The median purity of Flu-specific and non-Flu-specific CTLs was above 95% in all experiments (Figure 1A). For the various sorting procedures, the following monoclonal antibodies were used: CD8-PE or CD8-FITC and CD3-PerCP (all from BD Biosciences Pharmingen, San Diego, CA). As a negative control, cells were stained with FITC- or PE-conjugated IgG matching the respective antibody's isotype. Cells were analyzed by FACS (BD Bioscience), gating on living CD3-expressing lymphocytes.
RNA handling for transcriptional profiling
Total RNA was isolated with RNeasy minikits (Qiagen, Valencia, CA) and amplified into anti-sense RNA as previously described [15,16]. First strand cDNA synthesis was accomplished in 1μl SUPERase·In (Ambion, Foster City, CA) and ThermoScript RT (Gibco-Invitrogen, Carlsbad, CA) in 2 μg bovine serum albumin. RNA quality was verified by Agilent technologies (Palo Alto). Anti-sense RNA was labeled with Cy5-dUTP (Amersham, Piscataway, NJ) and co-hybridized with reference pooled normal donor peripheral blood mononuclear cells (PBMC) labeled with Cy3-dUTP to custom made 17K-cDNA array platform (UniGene cluster) printed at the Infectious Disease and Immunogenetics Section, DTM, CC, NIH with a configuration of 32 × 24 × 23 and contained 17,500 elements. Clones used for printing included a combination of the Research Genetics RG_HsKG_031901 8 k clone set and 9,000 clones selected from the RG_Hs_seq_ver_070700 40 k clone set. The 17,500 spots included 12,072 uniquely named genes, 875 duplicated genes and about 4,000 expression sequence tags.
Arrays were scanned on a GenePix 4000 (Axon Instruments) and analyzed using BRB-ArrayTools Version: 3.3, Cluster and Tree View software.
FACS analysis
Harvested cells were washed with buffer and stained with t-FLU-PE (FLU M1 iTAg MHC Tetramer, Beckman Coulter, Miami, FL), mouse IgG1k anti CD8 PE-Cy5 (Becton Dickinson, Franklin Lakes, NJ), mouse IgG2a anti GM-CSF-R (by Millipore, Billerica, MA, detected by using a secondary antibody against mouse IgG2a Alexa 647 conjugated, from Invitrogen, Carlsbad, CA), mouse IgG1k anti IFN-γ Receptor β chain (Abcam, detected by using a secondary antibody against mouse IgG1k Alexa 488 conjugated, from Invitrogen). In order to avoid the reaction of the secondary antibody used for IFNγ detection with the Fc of the IgG1k anti CD8, the staining for IFNγ receptor was performed separately and anti CD8 was added after washing as last step.
FACS analysis was performed using a FACScalibur by BD Pharmingen.
Statistical analysis
Transcriptional profiling
The raw data were filtered to exclude low-intensity spots by arbitrarily setting a minimum intensity requirement of 300 in both fluorescence channels. If the fluorescence intensity of one channel was above 300 and that of the other below 300, the fluorescence of the low-intensity channel was arbitrarily set to 300. Spots with diameters < 25 μm and flagged spots were excluded from the analysis. The filtered data were then normalized using the lowess smoother correction method. All statistical analyses were performed using the log2-based ratios, normalized across each array to zero.
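The filtering and floor rules above can be sketched as follows; the example assumes a plain two-channel intensity table and uses a simple lowess smoother from statsmodels as a stand-in for the normalization actually performed in BRB-ArrayTools (array layout, spot diameters, and flags are omitted, and all numbers are synthetic).

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Synthetic Cy5 (test) and Cy3 (reference) spot intensities.
rng = np.random.default_rng(0)
cy5 = rng.lognormal(mean=7, sigma=1, size=1000)
cy3 = rng.lognormal(mean=7, sigma=1, size=1000)

# Exclude spots below 300 in both channels; floor the low channel at 300
# when only one channel is below threshold.
keep = ~((cy5 < 300) & (cy3 < 300))
cy5, cy3 = np.maximum(cy5[keep], 300), np.maximum(cy3[keep], 300)

# Intensity-dependent (lowess) correction of the log2 ratio (M) against
# average log intensity (A), centering the corrected ratios around zero.
m = np.log2(cy5 / cy3)
a = 0.5 * np.log2(cy5 * cy3)
trend = lowess(m, a, frac=0.4, return_sorted=False)
m_normalized = m - trend
print(f"median log2 ratio after normalization: {np.median(m_normalized):.3f}")
```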
Validation and reproducibility were measured using an internal reference concordance system based on the expectation that results obtained through hybridization of the same test and reference material in different experiments should correlate perfectly. The level of concordance was measured by periodically re-hybridizing the A375 melanoma cell line (American Type Culture Collection, Rockville, MD) against the reference samples consisting of pooled PBMCs, as previously described [17]. This analysis demonstrated a concordance level higher than 95%. Non-concordant genes were excluded from subsequent analysis.
Supervised class comparison utilized BRB-ArrayTools [18], developed at the NCI Biometric Research Branch, Division of Cancer Treatment and Diagnosis. Paired samples were compared with a two-tailed paired Student t test. Unpaired samples were tested with a two-tailed unpaired Student t test assuming unequal variance or with an F test, as appropriate. All analyses used a univariate significance threshold set at a two-tailed p-value < 0.005. Gene clusters identified by the univariate t test were challenged with two additional tests, a univariate permutation test (PT) and a global multivariate PT. The multivariate PT was calibrated to restrict the false discovery rate to 10%. Genes identified by the univariate t test as differentially expressed (two-tailed p-value < 0.005) with a PT significance < 0.05 were considered truly differentially expressed. Gene function was assigned based on the Database for Annotation, Visualization and Integrated Discovery (DAVID) and Gene Ontology. Multiple dimensional scaling was performed using the BRB-ArrayTools.
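A simplified sketch of the class-comparison step: a per-gene two-sample t test filtered at the stated significance threshold, followed by a crude label-permutation test on the number of significant genes. This stands in for, and does not reproduce, the BRB-ArrayTools procedures; the expression matrix is random noise used purely for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Synthetic expression matrix: genes x samples, two groups of 6 arrays each.
n_genes, n_per_group = 2000, 6
group_a = rng.normal(size=(n_genes, n_per_group))
group_b = rng.normal(size=(n_genes, n_per_group))

def n_significant(a, b, alpha=0.005):
    """Count genes passing a two-tailed unpaired t test at the given threshold."""
    _, p = ttest_ind(a, b, axis=1, equal_var=False)
    return int(np.sum(p < alpha))

observed = n_significant(group_a, group_b)

# Global permutation test: shuffle sample labels and recount significant genes.
data = np.hstack([group_a, group_b])
perm_counts = []
for _ in range(200):
    cols = rng.permutation(data.shape[1])
    perm_counts.append(n_significant(data[:, cols[:n_per_group]],
                                     data[:, cols[n_per_group:]]))
p_global = np.mean([c >= observed for c in perm_counts])
print(f"observed significant genes: {observed}, permutation p = {p_global:.3f}")
```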
Functional studies
The fold increase in CD8+ Flu+ T cells was based on the absolute number of CD8-expressing T cells at day 1, day 6 and day 12. The fold increase (FI) was calculated by dividing the number at day 6 and day 12 by the starting number at day 1. The same assessment was done for Flu-positive and Flu-negative CTLs. Averages and standard errors of the mean (SEM) are presented as appropriate. A paired Student t test was applied to calculate the level of significance.
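As a small arithmetic illustration of the fold-increase calculation (the cell counts below are invented, not the study's data):

```python
# Absolute CD8+ Flu+ T-cell counts at day 1, 6 and 12 (hypothetical numbers).
counts = {"day1": 1.0e4, "day6": 4.5e4, "day12": 1.2e5}

# Fold increase = count at the later time point / count at day 1.
fi_day6 = counts["day6"] / counts["day1"]
fi_day12 = counts["day12"] / counts["day1"]
print(f"FI day 6 = {fi_day6:.1f}, FI day 12 = {fi_day12:.1f}")
```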
Global Differences between Ag-stimulated, IL-2-activated and quiescent CD8 expressing T cells
There were extensive differences between quiescent CD8-expressing T cells analyzed ex vivo and their counterparts maintained in in vitro culture in the presence of 300 IU IL-2 and Ag stimulation. A univariate F test with a random variance model identified 4,702 clones out of the 16,726 present on the array as differentially expressed at a p-value < 0.001. The probability of randomly obtaining this number of genes at the selected level of significance (p < 0.005) if there were no real differences among groups was calculated as 0 by the multivariate permutation test (Table 1). To assess possible effects of the in vitro culture conditions, we maintained PBMCs in culture for 24 hours in the absence of IL-2 and sorted CD8-expressing T cells before transcriptional profiling. This control demonstrated high similarity of the in vitro maintained CD8 T cells to the profile of ex vivo analyzed CD8 T cells. Thus, the functional genomic changes observed in in vitro stimulated CD8 T cells are specific to the stimulation and not simply due to in vitro culture artifacts. Multiple dimensional scaling based on the global 16,726 cDNA clone data set clearly separated the ex vivo and in vitro quiescent populations from the in vitro activated populations (Figure 1B).
Analysis of the individual experimental conditions against each other demonstrated that the largest differences in transcriptional patterns were present between the Flu-specific CTLs stimulated in vitro and the quiescent ex vivo CD8 T cells (three-way t test), with the smallest differences noted between the Flu-specific CTLs and the convivial non-Flu-specific CTLs from the same culture.
Transcriptional patterns shared by Ag-specific and convivial CTLs compared to quiescent ex vivo analyzed CD8-expressing T cells
The transcriptional pattern of in vitro stimulated CTLs, whether exposed to Ag (Flu-specific CD8 T cells) or only to IL-2, was similar relative to that of ex vivo isolated or in vitro maintained quiescent CD8 T cells (Figure 1C). Thus, CTLs maintained in the same culture demonstrate similar transcriptional patterns independent of their exposure to Ag-specific stimulation.
Among the genes similarly up-regulated in both subgroups of stimulated T cells were perforin, granzyme A, TNF-α and the IL-2 receptor α chain. Overall, the transcriptional profile of the genes concordantly expressed by in vitro stimulated CTLs was similar to that of our previously reported analysis [13].
Transcriptional patterns specific to Ag-specific activation
The aim of this study was to identify the signatures determined by the long-term effects of Ag stimulation independent of other co-existing factors that may influence the activation and function of CD8+ T cells. For this reason, we focused our analysis on genes that were differentially expressed between CFSE-high, Flu/tHLA-positive CD8 T cells and CFSE-low, Flu/tHLA-negative CD8 T cells. An unpaired t test identified 1,727 genes as differentially expressed between the two populations at a two-tailed p-value < 0.001 (Table 1). It should be clarified that several of these genes were concordantly differentially expressed in both subpopulations compared with quiescent ex vivo isolated CD8-expressing T cells; however, the degree to which expression was altered in the two subsets was sufficiently different to result in significant differences between the two populations. The differences identified were significant according to the multivariate permutation test (p-value = 0). Multiple dimensional scaling analysis based on the complete data set confirmed the separation of the two populations (Figure 1D).
Among the genes differentially expressed by the two populations 644 were up-regulated in Ag-specific CTLs compared to convivial CTLs. The rest (1,083) were downregulated. The annotations related to biological functions derived through gene ontology suggested that the genes that were predominantly up-regulated in Ag-specific CTLs belong to several categories. Although some categories appeared particularly enriched, they contained a relatively small number of genes, while the categories with the largest absolute number of genes included: cell cycle and cell division (111 genes), response to endogenous stimulus (45 genes) and cytokine production (17 genes). Gene Ontology analysis suggested, therefore, that even seven days after the original stimulus the predominant differences between Ag-exposed Flu-specific CTLs and their culture companions were related to a broader activation of pro-proliferative stimuli, signaling and cytokine production in the former.
Gene Ontology was also applied to identify genes associated with immunological functions; this analysis identified 58 such genes (expected number 52.8; observed-over-expected ratio = 1.10) up-regulated in the Flu-specific CTLs and 212 out of 988 (expected number 80.3; observed-over-expected ratio = 2.64) down-regulated relative to the convivial CTLs (Fisher test, two-tailed p-value < 0.001).
The immunologically related genes most expressed by Ag-activated or convivial CTLs are shown in Table 2. The transcript most abundant in Ag-specific CTLs was IFN-γ (Figure 2). Conversely, IFN-γ receptor 2 (β-chain) ranked highest among the immune genes up-regulated by convivial CTLs, which paralleled the over-expression of interferon-stimulated genes (ISGs) selectively in these cells, while ISGs were completely shut off in Ag-activated CTLs. The lack of expression of ISGs by Ag-activated CTLs was associated with lack of expression of the IFN-γ receptor α and β chains and the IFN-γ receptor accessory factor AF-1. This observation suggested that Ag-activated, terminally differentiated CTLs shelter themselves from the potentially harmful autocrine effects of IFN-γ. It also supports the lack of responsiveness of Ag-specific T cells to IFN-γ during the expansion phase, which is slowly regained during the contraction phase, as previously reported in an experimental animal model [19]. On the other hand, Ag-stimulated CTLs expressed higher levels of the CSF2 receptor β chain (the GM-CSF/IL-5/IL-3 receptor common β-chain, CSF2RB, CD131), chemokine (C-X-C motif) receptor 6 (CXCR6), C-C chemokine receptor 1 (CCR1) and the IL-2R α- and β-chains. In addition, Ag-stimulated CTLs strongly down-regulated chemokine (C-C motif) receptor 7, in accordance with their effector T cell differentiation. Interestingly, the over-expression of CD131 was associated with very high expression of its ligand, colony stimulating factor 2 (CSF2, GM-CSF), suggesting that this autocrine loop may play an important role in promoting their survival. Moreover, several cytokines were produced, including chemokine (C-C motif) ligand 3 (CCL3, MIP-1α), ligand 4 (CCL4, MIP-1β) and ligand 18 (CCL18, PARC).
Ag-activated CTLs also expressed higher levels of granzyme B and, to a lesser degree, granzyme A, while their convivial counterparts expressed high levels of granzyme K. As previously observed, perforin was strongly and equally up-regulated in both populations compared to quiescent CD8-expressing T cells studied ex vivo or in vitro [4].
Finally, Ag-activated CTLs expressed higher levels of the co-stimulatory molecules CD80 and CD86.
GM-CSF effects on Ag-specific CTLs
The high levels of IFN-γ and GM-CSF transcript (but not protein) expression, together with the up-regulation of the GM-CSF receptor β chain (CD131) and the down-regulation of the IFN-γ receptors I and II by Ag-activated CTLs, suggested the intriguing possibility of a bipolar relationship of Ag-activated CTLs with the potential autocrine effects of the two cytokines. It appears that Ag-activated CTLs shield themselves from the potentially harmful effects of IFN-γ [20] while accepting the potential proliferative support of GM-CSF, a growth factor without known pro-apoptotic functions [21]. This bipolar behavior may explain the selective survival and expansion of Ag-stimulated CTLs in vitro (and potentially in vivo).
To test this hypothesis, we analyzed CD131 expression by Flu-specific CTLs. As shown in Figure 3, at days 6 and 12 Flu-specific CTLs consistently expressed CD131, which was totally absent in the convivial CTLs. IVS of HLA-A*0201-expressing PBMCs with 1 μM Flu M1:58-66 peptide in the presence of IL-2, GM-CSF, IFN-γ or a combination of them with IL-2 demonstrated that GM-CSF (1,000 U/ml) selectively enhances the expansion of Flu-specific CTLs after 12 days in culture when combined with IL-2 (300 IU/ml), whereas IFN-γ (500 U/ml) has no effect, consistent with their lack of expression of the IFN receptors (Figure 3C). Neither the combination of IL-2 + GM-CSF nor any other cytokine combination exerted any effect on the proliferation of convivial CTLs (data not shown).
Figure 2. Gene expression profile: comparison between antigen-specific and convivial non-specific CD8+ T cells. A) Selection of genes among the 1,727 genes differentially expressed between Flu-specific (green bar) and convivial (red bar) CD8 T cells at a t test two-tailed p-value < 0.001 whose annotation includes the word "interferon". The table displays the gene symbol, its name and the level of differential expression (Δ) as the Cy5/Cy3 ratio of Flu-specific versus convivial in green, and vice versa in red.
In conclusion, this preliminary study suggests that, at least during IVS, the preferential survival/expansion of Ag-activated CTLs may be partially mediated through a bipolar regulation of their sensitivity to the autocrine secretion of cytokines; IFN-γ and GM-CSF appear to play a dominant role at this junction, as the other two cytokines known to be produced by activated CTLs (TNF-α and IL-2) were similarly highly expressed at the transcriptional level by both Ag-activated and convivial CTLs. The confirmation of the selective expression of CD131 on the surface of Ag-activated CTLs and its likely functional association with the selective response of Ag-activated CTLs to exogenous GM-CSF suggests a previously unreported positive feedback autocrine loop that may stimulate CTL growth in response to further Ag stimulation [21]. In addition, the positive role that GM-CSF may play in the proliferation of CD131-expressing Ag-activated CTLs may explain the beneficial effects of this cytokine used as a vaccine adjuvant, which have so far been attributed exclusively to its role in activating and maturing antigen-presenting cells [22]. Finally, the selective expression of CD131 by Ag-activated CTLs may qualify this surface marker as a non-Ag-specific biomarker of Ag-specific CTL activation. Although extensive in vivo and in vitro validation is required to support such hypotheses, we believe that the novelty and the potential biological implications of these findings warrant a preliminary disclosure. | 5,068 | 2008-04-15T00:00:00.000 | [
"Biology"
] |
COMP-angiopoietin-1 ameliorates inflammation-induced lymphangiogenesis in dextran sulfate sodium (DSS)-induced colitis model
Alterations in the intestinal lymphatic network are pathological processes related to inflammatory bowel disease (IBD). In this study, we demonstrated that reduction of inflammation-induced lymphangiogenesis ameliorates experimental acute colitis. COMP-Ang1, a soluble and stable angiopoietin-1 (Ang1) variant, possesses anti-inflammatory and angiogenic effects. We investigated the effects of COMP-Ang1 in an experimental colonic inflammation model. Experimental colitis was induced in mice by administering 3% dextran sulfate sodium (DSS) via the drinking water. We determined body weight, disease activity indices, histopathological scores, lymphatic density, anti-ER-HR3 staining, and the expression of members of the vascular endothelial growth factor (VEGF) family and various inflammatory cytokines in the mice. The density of lymphatic vessel endothelial hyaluronan receptor 1 (LYVE-1)- and VEGFR-3-positive lymphatic vessels increased in mice with DSS-induced colitis. We observed that COMP-Ang1-treated mice showed less weight loss, fewer clinical signs of colitis, and longer colons than Ade-DSS-treated mice. COMP-Ang1 also significantly reduced the density of LYVE-1-positive lymphatic vessels and the disruption of colonic architecture that is normally associated with colitis, and repressed the immunoregulatory response. Further, COMP-Ang1 treatment reduced both M1 and M2 macrophage infiltration into the inflamed colon, which involved inhibition of VEGF-C and VEGF-D expression. Thus, COMP-Ang1, which acts by reducing inflammation-induced lymphangiogenesis, may be used as a novel therapeutic for the treatment of IBD and other inflammatory diseases. In summary: COMP-Ang1 decreases inflammation-induced lymphangiogenesis in experimental acute colitis; COMP-Ang1 improves the symptoms of the DSS-induced inflammatory response; COMP-Ang1 reduces the expression of pro-inflammatory cytokines in the inflamed colon; COMP-Ang1 reduces the expression of VEGFs in the inflamed colon; and COMP-Ang1 prevents infiltration of macrophages in a DSS-induced colitis model.
Introduction
Ulcerative colitis (UC) is an inflammatory disease that affects the colon and the small intestine. Clinical findings include hematochezia, passage of mucus, distorted crypt architecture, and crypt abscesses in patients with UC [1]. Numerous animal models of colonic inflammation with several features of UC exist, which require administration of specific concentrations of colitis-inducing chemicals such as dextran sulfate sodium (DSS) [2]. (Electronic supplementary material: the online version of this article (https://doi.org/10.1007/s00109-018-1633-x) contains supplementary material, which is available to authorized users.)
Inflammation induces inflammatory lymphangiogenesis, and remodeling of lymphatic vessels in inflamed conditions is important for tissue homeostasis and immune response [3]. In fact, lymphatic vessel density in the inflamed colonic mucosa of patients with UC increases with progression of the disease [4]. Reports show that blockade of angiogenesis, the growth of new blood vessels, could be a new therapeutic approach in experimental colitis models [5].
Angiopoietin-1 (Ang1) was identified as a secreted protein ligand of the Tie2 receptor tyrosine kinase with important roles in vascular development [6,7]. Ang1 possesses anti-inflammatory effects and reduces vascular permeability [8]. The N-terminal portion of Ang1 was replaced with the short coiled-coil domain of cartilage oligomeric matrix protein (COMP) to generate COMP-angiopoietin-1, a soluble, stable, and potent Ang1 variant [9]. Previous studies show that COMP-angiopoietin-1 (COMP-Ang1) promotes wound healing by enhancing angiogenesis and lymphangiogenesis in a diabetic experimental model [10]. Furthermore, serum angiopoietin-1 levels are elevated in patients with UC and can be used as a factor for studying the progression of inflammatory bowel disease (IBD) [11,12].
Macrophages are essential for the pathogenesis of IBD. Inflammatory macrophages in the colon act as initiators and protectors of immune responses of IBD-related disorders of the epithelial barrier [13,14]. Polarized macrophage phenotype is classified into two functional types, namely, classically activated macrophages (M1) and alternatively activated macrophages (M2) [15]. M1 macrophages secrete proinflammatory cytokines and contribute to the initiation of DSS-colitis, along with innate immune cells, neutrophils, and dendritic cells. On the contrary, M2 macrophages produce anti-inflammatory cytokines, which exhibit protective roles in the development of IBD [16].
In this study, we investigated the mechanism underlying the effect of COMP-Ang1 on colitis symptoms and changes in lymphatic vessel density in acute colitis. The effect of COMP-Ang1 on activated macrophages in colitis was also investigated to determine whether they are involved in the colonic immune response.
Animal experiments
Seven-week-old male C57BL/6 mice (Charles River Korea, Seoul, South Korea) were used as experimental animals. All animal studies were reviewed and approved by the Institutional Animal Care and Use Committee of Chonbuk National University. The animals were randomly assigned to the following four groups of ten mice each: a control group injected intravenously (i.v.) through the tail vein with adenovirus diluted in sterile 0.9% NaCl (without DSS; Ade-cont), a COMP-Ang1-virus-injected control group (without DSS; comp-cont), an adenovirus-injected 3% DSS administration group (Ade-DSS), and a COMP-Ang1-virus-injected 3% DSS administration group (comp-DSS). The colitis model was induced by adding filtered 3% DSS (molecular weight 36-50 kDa; MP Biochemicals, Aurora, OH, USA) to the drinking water for 7 days; mice in the control groups received tap water without DSS. Mice were monitored daily for body weight, stool consistency, fecal occult blood, and survival [17]. The disease activity index (DAI) was determined on the basis of the mean scores for weight change, stool parameters, and fecal occult blood, as described previously [18,19].
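The DAI as described is the mean of three sub-scores; the exact rubric comes from the cited references [18,19], so the 0-4 scales assumed in this small sketch are illustrative only.

```python
def disease_activity_index(weight_loss_score: int,
                           stool_consistency_score: int,
                           occult_blood_score: int) -> float:
    """DAI as the mean of the three sub-scores (assumed 0-4 scales per [18,19])."""
    return (weight_loss_score + stool_consistency_score + occult_blood_score) / 3.0

# Example: moderate weight loss, loose stools, positive occult blood.
print(disease_activity_index(2, 2, 3))  # -> 2.33...
```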
Histopathological analysis
The colon was fixed using 4% paraformaldehyde and embedded in paraffin. Five-micrometer-thick sections were sliced from the paraffin block and stained with hematoxylin and eosin (H&E). Colon damage was assessed as previously described; the evaluation parameters were extent of injury, wall edema, leukocyte infiltration, and crypt abscesses [20]. The degree of inflammation was scored on a scale of 0-3 (0, negative; 1, mild; 2, moderate; 3, severe), as was the extent of injury (0, negative; 1, mucosal; 2, mucosal and muscularis mucosal; 3, transmural); damage to the crypt architecture was scored on a scale of 0-4 (0, negative; 1, 0-30% damage to epithelium; 2, 31-65% damage to epithelium; 3, structurally defective epithelium; 4, loss of crypts and epithelium destruction). Each section was also graded for the amount of involvement on a scale of 1-4 (1, 0-25%; 2, 26-50%; 3, 51-75%; 4, 76-100%). At least five sections per slide were examined to derive each score. The scoring system was designed to yield a minimum of 0 and a maximum of 40.
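How the sub-scores combine into the 0-40 composite is not spelled out here; a common convention that is consistent with the stated maximum ((3 + 3 + 4) × 4 = 40) is to sum the three sub-scores and multiply by the involvement grade. The sketch below uses that assumption.

```python
def colitis_histology_score(inflammation: int, extent: int,
                            crypt_damage: int, involvement: int) -> int:
    """Assumed composite score: (inflammation 0-3 + extent 0-3 + crypt damage 0-4)
    multiplied by the involvement grade (1-4); maximum = (3+3+4)*4 = 40."""
    assert 0 <= inflammation <= 3 and 0 <= extent <= 3
    assert 0 <= crypt_damage <= 4 and 1 <= involvement <= 4
    return (inflammation + extent + crypt_damage) * involvement

print(colitis_histology_score(2, 2, 3, 3))  # -> 21
```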
Immunohistochemistry
Colon sections were fixed using 4% paraformaldehyde and embedded in paraffin. The paraffin block was cut into 4 μm sections, deparaffinized with xylene and rehydrated with ethanol. After blocking for 1 h, the colon tissue was incubated overnight at 4°C with anti-mouse lymphatic vessel endothelial hyaluronan receptor 1 (LYVE-1) (Angiobio, Del Mar, CA, USA), anti-vascular endothelial growth factor receptor-3 (VEGFR-3) (R & D systems, Minneapolis, MN, USA), and anti-ER-HR3 antibodies (BMA, Augst, Switzerland). The sections were treated with AEC substrate-chromogen (DakoCytomation, Glostrup, Denmark) to visualize the immunocomplexes. Immunohistochemical staining was visualized under a Nikon Eclipse 80i light microscope (Nikon Instruments Inc., Melville, NY, USA). The densities of LYVE-1-positive and VEGFR-3-positive areas were measured in 12 randomly selected fields at a magnification of ×400 using the ImageJ software.
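For illustration, a toy sketch of quantifying the fraction of positively stained area per field, analogous to the ImageJ measurement described above, is given below; the threshold value and the synthetic image data are assumptions.

```python
# A toy sketch of measuring the fraction of positively stained area in an image
# field, analogous to the ImageJ quantification described above. The threshold
# and the synthetic random "fields" are purely illustrative.
import numpy as np

def positive_area_fraction(field: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of pixels above an intensity threshold in one field."""
    return float((field > threshold).mean())

rng = np.random.default_rng(0)
fields = [rng.random((512, 512)) for _ in range(12)]      # 12 randomly selected fields
densities = [positive_area_fraction(f, 0.8) for f in fields]
print(np.mean(densities), np.std(densities))
```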
Quantitative real-time PCR
Total RNA from the colon was isolated using the RNeasy mini kit (Qiagen, Hilden, Germany), and the first strand of cDNA was synthesized using a Transcriptor First Strand cDNA synthesis kit (Roche, Mannheim, Germany). Real-time qPCR was performed using iTaq universal SYBR Green Supermix
Enzyme-linked immunosorbent assay
The levels of IL-1β and IL-6 in colon tissue were measured using a DuoSet sandwich ELISA kit (Enzo Life Sciences, Farmingdale, NY, USA).
Statistical analysis
Data are expressed as means ± SD. Differences among groups were examined using one-way analysis of variance (ANOVA), followed by individual comparisons using Tukey's post hoc test; a P value < 0.05 was considered to indicate a statistically significant difference.
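For illustration, a minimal sketch of this comparison on hypothetical group data is given below, using one-way ANOVA (scipy) followed by Tukey's post hoc test (statsmodels); the group values are placeholders.

```python
# Minimal sketch of one-way ANOVA followed by Tukey's HSD post hoc test on
# hypothetical group measurements (the data below are random placeholders).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "Ade-cont": rng.normal(1.0, 0.2, 10),
    "comp-cont": rng.normal(1.1, 0.2, 10),
    "Ade-DSS": rng.normal(3.0, 0.4, 10),
    "comp-DSS": rng.normal(2.0, 0.4, 10),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise group comparisons
```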
Inflammation-induced lymphangiogenesis in experimental colitis
Inflammation remodels the lymphatic network by a process known as lymphangiogenesis. During disease onset, intestinal lymphatics are generated from existing vessels, and extremely dilated lymphatic vessels are observed in DSS-induced experimental colitis mice. We observed that LYVE-1-positive lymphatic vessels were distributed in the submucosal layers of the colon in untreated control mice, whereas the DSS-induced mice had enlarged LYVE-1-positive lymphatic vessels with a significantly higher density compared to the untreated mice (Fig. 1a, b). In addition, LYVE-1-expressing monocytes were also observed (indicated by yellow arrowhead, Fig. 1a). Similarly, the levels of VEGFR-3, a representative lymphatic marker, were also increased in the inflamed colon (Fig. 1c, d).
Systemic delivery of COMP-Ang1 ameliorates body weight loss, DAI, and colon shortening in mice with colitis
We examined signs of disease such as body weight loss, DAI, and colitis-induced shortening of the colon 7 days after administration of 3% DSS-supplemented drinking water. We observed that body weight in Ade-DSS-treated mice and COMP-DSS-treated mice showed no significant difference (Fig. 2a). However, the DAI was markedly decreased in the COMP-DSS-treated mice compared to the Ade-DSS-treated mice (Fig. 2b). COMP-Ang1 treatment recovered colon length in COMP-DSS-treated mice by day 7 (Fig. 2c).
COMP-Ang1 decreases inflammation-induced lymphangiogenesis in a DSS-induced colitis model
Recent studies showed that lymphatic vessels (LVs) are functionally important for the resolution of the inflammatory response [1]. To investigate the changes in lymphatic density in the DSS-induced colitis model, we performed immunohistochemical staining for LYVE-1, a marker of lymphatic vessel endothelial cells. The colon sections of Ade-Cont-treated mice showed only narrow, thin LVs in the lamina propria and submucosa (Fig. 3a). However, after 7 days of DSS treatment, the mucosa was inflamed in Ade-DSS-treated mice, accompanied by an increase in the density of enlarged LYVE-1-positive lymphatic vessels and colon submucosal edema, indicating decreased lymphatic vessel function. Surprisingly, systemic delivery of COMP-Ang1 significantly reduced the density of lymphatic vessels in the DSS-induced colitis model (Fig. 3a, c). The Ade-DSS-induced inflammatory changes, which progressed to multifocal erosions, crypt loss, infiltration of leukocytes, and increased submucosal edema, were improved by COMP-Ang1 treatment (Fig. 3a, b).
COMP-Ang1 reduces the expression of VEGF-A, VEGF-C, and VEGF-D in mice with colitis
Reports show that inflammation-induced lymphangiogenesis correlates with the expression of VEGF family members. We quantified mRNA to determine the expression levels of VEGF family members in colon tissue. The results showed that the expression levels of VEGF-A (20.5-fold), VEGF-C (3.4-fold), and VEGF-D (3.9-fold) were increased in Ade-DSS mice, whereas those of VEGF-C and VEGF-D were markedly suppressed in COMP-DSS mice (Fig. 3d-f). VEGF-A expression was decreased in COMP-DSS mice compared to that in Ade-DSS mice, although the difference was not statistically significant (Fig. 3d).
COMP-Ang1 reduces the expression of pro-inflammatory cytokines in a DSS-induced colitis model
Secreted pro-inflammatory cytokines may cause inflammation-induced lymphangiogenesis in mice with colitis. To investigate changes in pro-inflammatory cytokine levels in the DSS-induced colitis model, we measured IL-1β, IL-6, and TNF-α at the mRNA (qPCR) and protein (ELISA) levels in the colon. Quantitative real-time PCR (qPCR) analysis showed that the upregulation of IL-1β, IL-6, and TNF-α in Ade-DSS mice was significantly decreased in COMP-DSS mice (Fig. 4a-c). In addition, ELISA showed the same trend for IL-1β, IL-6, and TNF-α levels (Fig. 4d-f). [Figure legend fragment: VEGF-D levels were determined by quantitative real-time PCR; expression was normalized to that of GAPDH; bars represent means ± SD of three independent experiments; ***P < 0.001 vs. Ade-Cont; ##P < 0.01 and ###P < 0.001 vs. Ade-DSS.]
COMP-Ang1 prevents infiltration of macrophages in a DSS-induced colitis model
Activated macrophages may have critical roles in the regulation of inflammatory processes. Understanding of inflammation-induced lymphangiogenesis, and of the involvement of polarized, classically activated (M1) and alternatively activated (M2) macrophages, has improved considerably in recent years. We measured the number of ER-HR3-positive macrophages in the inflamed colon by immunohistochemistry to further examine the inhibitory effects of COMP-Ang1 on the infiltration of inflammatory macrophages (Fig. 5a, b). Treatment with COMP-Ang1 reduced the DSS-induced increase in infiltration of ER-HR3-positive macrophages in colitis tissue. Next, we evaluated the expression of M1 and M2 macrophage-related factors after treatment with IFN-γ and IL-4, respectively; qPCR was used to investigate iNOS and CD80 levels as M1 macrophage-related factors, whereas arginase-1 and CD206 were investigated as M2 macrophage-related factors. Compared to Ade-DSS, COMP-Ang1 significantly decreased the expression of both M1 and M2 macrophage-related factors (Fig. 5c-f).
Discussion
DSS administration induces typical signs of colitis such as an increased DAI, body weight loss, and shortening of the colon [21]. Presence of occult blood and diarrhea are usually the earliest features, and the inflammation fully develops within 7-10 days. Macroscopic features include a shortened, edematous colon with areas of hemorrhage and ulceration on H & E staining. Infiltration of inflammatory cells, such as macrophages, plasma cells, and a few lymphocytes, was observed in the mucosa and submucosa. Mucosal edema, goblet cell loss, and crypt destruction were followed by crypt shortening [21]. COMP-Ang1 administration alleviated these symptoms, which indicated that COMP-Ang1 reduced DSS-induced colitis; furthermore, COMP-Ang1 may possess therapeutic potential. IBD is a complex process involving most types of immune cells and the microvasculature. It causes chronic inflammation via leukocyte recruitment, angiogenesis, and lymphangiogenesis, which results in tissue remodeling. IBD is the result of dysfunctional immunoregulation, which is evident from the production of mucosal cytokines that contribute to the increase in blood and lymphatic vessel density in IBD [22]. However, the precise mechanism of inflammation-induced lymphangiogenesis is still unknown. Ran et al. [23] asserted that induction of the NF-κB pathway by inflammatory stimuli activates the transcription factor Prox-1 (Prospero homeobox protein 1), which is a specific marker of the lymphatic endothelium. NF-κB and Prox-1 activate the VEGFR-3 promoter and enhance the response of the lymphatic endothelium to the VEGFR-3 binding factors, VEGF-C and VEGF-D [24]. Reports show that lymphatic vessel density and VEGF-C/VEGFR-3 signaling are increased in the colon of IBD patients [25].
Lymphangiogenesis is occasionally present during inflammation [26]. Lymphangiogenesis under inflammatory conditions, including colitis, is involved in the physiology of inflammation. It may directly influence mucosal edema or immune cell infiltration in inflamed tissues [3]. During inflammation, inflammatory mediators pass through the LVs, which play an important role in maintaining fluid homeostasis by absorbing tissue fluid [27]. Inflammation-induced lymphangiogenesis correlates with the expression of VEGF family members, because of the lymphangiogenic role of CD11b macrophages [28]. Infiltrated macrophages expressing VEGF-C and VEGF-D are observed during intestinal inflammation [29,30], which might contribute to inflammation-induced lymphangiogenesis. COMP-Ang1 treatment reduced the increase in the number of ER-HR3-positive macrophages infiltrating the kidney in ischemia-reperfusion-induced renal injury [31]. In this study, COMP-Ang1 drastically diminished the levels of inflammatory cytokines and reduced macrophage infiltration in DSS-induced colitis. Therefore, COMP-Ang1 may be considered a candidate for anti-inflammatory therapy to reduce inflammation-induced lymphangiogenesis-related inflammatory cytokine levels and the number of infiltrated macrophages.
When macrophages are activated, the M1 and M2 types of macrophages elicit different responses. Recently, it has been reported that the mechanism of M1 or M2 polarization may regulate VEGF production by macrophages [32]. Indeed, the expression of VEGF-C is increased in both M1 and M2 type macrophages in obstructed renal inflammation [33]. Our results showed that COMP-Ang1 decreased the expression of the macrophage polarization-related genes CD80, iNOS, CD206, and Arg-1, as well as of VEGF-C and VEGF-D (Fig. 5c-f; the expression levels of the macrophage-associated genes CD80, iNOS, CD206, and Arg-1 were examined by real-time PCR and normalized to GAPDH; data represent means ± SD of three independent experiments; **P < 0.01 and ***P < 0.001 vs. Ade-Cont; ##P < 0.01 and ###P < 0.001 vs. Ade-DSS). COMP-Ang1 also affected macrophage polarization, leading to a decrease in VEGF-C production.
IBD pathogenesis involves impaired clearance of foreign material, leading to sustained activation of innate immune cells and compensatory induction of the adaptive immune response [34]. Although not assessed in our study, it would be of interest to investigate in vivo effects of COMP-Ang1, such as systemic elimination of macrophages or antigen clearance, on the colonic epithelial barrier.
In conclusion, our observations support the use of COMP-Ang1 as a novel therapeutic that might reduce inflammation-induced lymphangiogenesis in IBD and other inflammatory diseases.
Compliance with ethical standards
All animal studies were reviewed and approved by the Institutional Animal Care and Use Committee of Chonbuk National University.
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 3,873.8 | 2018-04-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
Physics Informed Token Transformer for Solving Partial Differential Equations
Solving Partial Differential Equations (PDEs) is the core of many fields of science and engineering. While classical approaches are often prohibitively slow, machine learning models often fail to incorporate complete system information. Over the past few years, transformers have had a significant impact on the field of Artificial Intelligence and have seen increased usage in PDE applications. However, despite their success, transformers currently lack integration with physics and reasoning. This study aims to address this issue by introducing PITT: Physics Informed Token Transformer. The purpose of PITT is to incorporate the knowledge of physics by embedding partial differential equations (PDEs) into the learning process. PITT uses an equation tokenization method to learn an analytically-driven numerical update operator. By tokenizing PDEs and embedding partial derivatives, the transformer models become aware of the underlying knowledge behind physical processes. To demonstrate this, PITT is tested on challenging 1D and 2D PDE neural operator prediction tasks. The results show that PITT outperforms popular neural operator models and has the ability to extract physically relevant information from governing equations.
Introduction
Partial Differential Equations (PDEs) are ubiquitous in science and engineering applications.
While much progress has been made in developing analytical and computational methods to solve the various equations, no complete analytical theory exists, and computational methods are often prohibitively expensive. Recent work has shown the ability to learn analytical solutions using bilinear residual networks [1] and bilinear neural networks [2-4], where an analytical solution is available. [12-14] While mesh optimization generally allows for using traditional numerical solvers, current methods only improve speed or accuracy by a few percent, or require many simulations during training. Methods for super resolution improve speed, but often struggle with generalizing to data resolutions not seen in the training data, with more recent work improving generalization capabilities [15]. Surrogate modeling, on the other hand, has shown a good balance between improved performance and generalization.
Neural operator learning architectures, specifically, have also shown promise in combining super resolution capability with surrogate modeling due to their inherent discretization invariance [16]. Recently, the attention mechanism has become a popular choice for operator learning.
The attention mechanism first emerged as a promising model for natural language processing tasks [17-20], especially the scaled dot-product attention [18]. Its success has been extended to other areas, including computer vision tasks [21] and biology [22], and attention has since been increasingly applied to PDE and operator learning problems [23-30]. Kovachki et al. [31] propose a kernel integral interpretation of attention. Cao [23] analyzes the theoretical properties of softmax-free dot-product attention (also known as linear attention) and further proposes two interpretations of attention, such that it can be viewed as the numerical quadrature of a kernel integral operator or a Petrov-Galerkin projection. OFormer (Operator Transformer) [24] extends the kernel integral formulation of linear attention by adding relative positional encoding [32] and using cross attention to flexibly handle discretization, and further proposes a latent marching architecture for solving forward time-dependent problems. Guo et al. [29] introduce attention as an instance-based learnable kernel for the direct sampling method and demonstrate superiority on boundary value inverse problems. LOCA (Learning Operators with Coupled Attention) [33] uses attention weights to learn correlations in the output domain and enables sample-efficient training of the model. GNOT (General Neural Operator Transformer for Operator Learning) [25] proposes a heterogeneous attention architecture that stacks multiple cross-attention layers and uses a geometric gating mechanism to adaptively aggregate features from query points. Additionally, encoding physics-informed inductive biases has also been of great interest because it allows incorporation of additional system knowledge, making the learning task easier. One strategy to encode the parameters of different instances for parametric PDEs is to add a conditioning module to the model [34,35]. Another approach is to embed governing equations into the loss function, known as Physics-Informed Neural Networks (PINNs) [36]. PINNs have shown promise in physics-based tasks, but have some downsides: namely, they show a lack of generalization and are difficult to train. Complex training strategies have been developed in order to account for these deficiencies [37]. While many existing works are successful in their own right, none so far have incorporated entire analytical governing equations. In this work we introduce an equation embedding strategy as well as an attention-based architecture, Physics Informed Token Transformer (PITT), to perform neural operator learning using equation information that utilizes physics-based inductive bias directly from governing equations (the main architecture of PITT is shown in figure 1). More specifically, PITT fuses the equation knowledge into the neural operator learning by introducing a symbolic transformer on top of the neural operator.
We demonstrate through a series of challenging benchmarks that PITT outperforms the popular Fourier Neural Operator (FNO) [13], DeepONet [14], and OFormer [24], and is able to learn physically relevant information from only the governing equations and system specifications.
Methods
In this work, we aim to learn the operator G_θ : A → U, where A is our input function space, U is our solution function space, and θ are the learnable model parameters. We use a combination of novel equation tokenization and numerical method-like updates to learn model operators G_θ. Our novel equation tokenization and embedding method is described first, followed by a detailed explanation of the numerical update scheme.
Equation Tokenization
In order to utilize the text view of our data, the equations must be tokenized as input to our transformer. Following Lampe et al. [38], each equation is parsed and split into its constituent symbols. The tokens are given in table 1. All of the tokens are then compiled into a single list, where each token in the tokenized equation is represented by the index at which it occurs in this list. For example, the time derivative of u(x, t) has the tokenization [Derivative, (, u, (, x, ,, t, ), ,, t, )] = [6, 0, 3, 15, 0, 16, 33, 14, 1, 33, 1]. After each equation has been tokenized, the target time value is appended in tokenized form to the equation, and the total equation is padded with a placeholder token so that each text embedding is the same length. Sampled values are truncated at 15 digits of precision.
Data handling code is adapted from PDEBench [39].
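For illustration, a minimal sketch of this tokenization scheme is given below. The vocabulary, indices, and padding length are placeholders rather than the exact table 1 vocabulary.

```python
# Illustrative sketch of the equation-tokenization scheme described above:
# map each pre-split symbol to its index in a global vocabulary, append the
# tokenized target time, then pad to a fixed length. The vocabulary below is
# a small placeholder, not the full token set of Table 1.
VOCAB = ["(", ")", "u", "x", "t", ",", "Derivative", "+", "*", "sin",
         "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", ".", "<pad>"]
TOKEN_TO_ID = {tok: i for i, tok in enumerate(VOCAB)}
PAD_ID = TOKEN_TO_ID["<pad>"]

def tokenize(symbols, target_time, length=50):
    """Map a pre-split symbol list plus a target-time value to padded indices."""
    ids = [TOKEN_TO_ID[s] for s in symbols]
    ids += [TOKEN_TO_ID[c] for c in f"{target_time:.2f}"]  # time appended digit by digit
    ids += [PAD_ID] * (length - len(ids))                  # pad to a fixed length
    return ids[:length]

# du/dt written in its parsed form, as in the example above
example = ["Derivative", "(", "u", "(", "x", ",", "t", ")", ",", "t", ")"]
print(tokenize(example, target_time=4.0)[:15])
```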
Physics Informed Token Transformer
The Physics Informed Token Transformer (PITT) utilizes tokenized equation information to construct an update operator F_P, similar to numerical integration techniques: x_{t+1} = x_t + F_P(x_t). As shown in figure 1, PITT takes in the numerical values and grid spacing, similar to operator learning architectures such as FNO, as well as the tokenized equation and the explicit time differential between simulation steps. The tokenized equation is passed through a Multi Head Attention block, seen in figure 1a; in our case we use self attention [23]. The tokens are shifted and scaled to be between -1 and 1 upon input, which significantly boosts performance. This latent equation representation is then used to construct the keys and queries for a subsequent Multi Head Attention block that is used in conjunction with output from the underlying neural operator to construct the update values for the final input frame. The time difference between steps is encoded, allowing the use of arbitrary timesteps.
Intuitively, we can view the model as using a neural operator to pass through the previous state as well as to calculate the update, as in numerical methods. The tokenized information is then used to construct an analytically driven update operator that acts as a correction to the neural operator state update. This intuitive understanding of PITT is explored with our 1D benchmarks.
Two different embedding methods are used for the tokenized equations. In the first method, the token attention block first embeds the tokens, T, as keys, queries, and values with learnable weight matrices. We use a single layer of self-attention for the tokens. The update attention blocks seen in figure 1b then use the token attention block output as queries and keys, and the neural operator output as values, and embed them using trainable weight matrices. The output is passed through a fully connected projection layer to match the target output dimension. This update scheme mimics numerical methods and is given in algorithm 1.
Algorithm 1: PITT numerical update scheme. Require: V_0, T_h1, T_h2, time t, and L layers; for l = 1, 2, ..., L, the attention blocks are applied layer by layer. A standard, fully connected multi-layer perceptron is used to calculate the update after concatenating the attention output with an embedding of the fractional timestep. This block uses softmax-free linear attention (LA) [23].
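For illustration, a minimal PyTorch-style sketch of this update scheme is given below. The dimensions, the single-head attention blocks, and the stand-in linear "operator" are assumptions made for brevity and do not reproduce the published architecture.

```python
# A minimal PyTorch sketch of a PITT-style update x_{t+1} = x_t + F_P(x_t):
# a self-attention block embeds the equation tokens, the latent tokens supply
# queries/keys for a second attention block whose values come from the neural
# operator output, and an MLP turns the result into a correction that is added
# to the operator's passthrough. All sizes and the toy "operator" are
# illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class ToyPITT(nn.Module):
    def __init__(self, n_tokens=50, d_model=64, n_grid=100):
        super().__init__()
        self.token_embed = nn.Embedding(n_tokens, d_model)
        self.token_attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.update_attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.operator = nn.Linear(n_grid, d_model)       # stand-in for FNO/DeepONet
        self.time_embed = nn.Linear(1, d_model)
        self.mlp = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, n_grid))

    def forward(self, x_t, tokens, dt):
        # x_t: (batch, n_grid), tokens: (batch, seq), dt: (batch, 1)
        e = self.token_embed(tokens)
        e, _ = self.token_attn(e, e, e)                   # latent equation representation
        v = self.operator(x_t).unsqueeze(1)               # neural-operator output as values
        upd, _ = self.update_attn(e, e, v.expand(-1, e.shape[1], -1))
        upd = torch.cat([upd.mean(dim=1), self.time_embed(dt)], dim=-1)
        return x_t + self.mlp(upd)                        # passthrough + learned correction

model = ToyPITT()
x_next = model(torch.randn(4, 100), torch.randint(0, 50, (4, 20)), torch.full((4, 1), 0.04))
print(x_next.shape)  # torch.Size([4, 100])
```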
Data Generation
In order to properly assess performance, multiple data sets that represent distinct challenges are used. In the 1D case, we have the Heat equation, which is a linear parabolic equation, Burgers' equation, and the KdV equation, all special cases of equation 1. The forcing term is given by δ(t, x) = Σ_{j=1}^{J} A_j sin(ω_j t + 2π l_j x / L + ϕ_j), and the initial condition is the forcing term at time t = 0: u(0, x) = δ(0, x). The parameters in the forcing term are sampled as follows: A_j ~ U(−0.5, 0.5), ω_j ~ U(−0.4, 0.4), l_j ∈ {1, 2, 3}, ϕ_j ~ U(0, 2π). The parameters (α, β, γ) of equation 1 can be set to define different, famous equations: when γ = 0 and β = 0 we have the Heat equation, when only γ = 0 we have Burgers' equation, and when β = 0 we have the KdV equation. Each equation has at least one parameter that we modify in order to generate large data sets.
For the Heat equation, we generated 10,000 simulations from each β value for 60,000 total samples.For Burgers' equation, we used advection values of α ∈ {0.01, 0.05, 0.1, 0.2, 0.5, 1}, and generated 2,500 simulations for each combination of values, for 90,000 total simulations.
For the KdV equation, we used an advection value of α = 0.01, with γ ∈ {2, 4, 6, 8, 10, 12}, and generated 2,500 simulations for each parameter combination, for 15,000 total simulations. The 1D equation text tokenization is padded to a length of 500. Tokenized equations are long here due to the many sampled values.
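For illustration, a short sketch of sampling the forcing term and building the initial condition u(0, x) = δ(0, x) is given below; the number of modes J and the domain length L are assumed values chosen for the example.

```python
# A sketch of sampling the 1D forcing term delta(t, x) described above, with
# A_j ~ U(-0.5, 0.5), omega_j ~ U(-0.4, 0.4), l_j in {1, 2, 3}, phi_j ~ U(0, 2*pi).
# J and the domain length L are illustrative assumptions.
import numpy as np

def sample_forcing(J=5, L=16.0, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.uniform(-0.5, 0.5, J)
    omega = rng.uniform(-0.4, 0.4, J)
    l = rng.integers(1, 4, J)          # l_j in {1, 2, 3}
    phi = rng.uniform(0.0, 2 * np.pi, J)

    def delta(t, x):
        x = np.asarray(x)[..., None]   # broadcast over the J terms
        return np.sum(A * np.sin(omega * t + 2 * np.pi * l * x / L + phi), axis=-1)
    return delta

delta = sample_forcing()
x = np.linspace(0.0, 16.0, 100)
u0 = delta(0.0, x)                     # initial condition u(0, x) = delta(0, x)
print(u0.shape)                        # (100,)
```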
Navier-Stokes Equation
In 2D, we use the incompressible, viscous Navier-Stokes equations in vorticity form, given in equation 2. Data generation code was adapted from Li et al. [13].
where u(x, t) is the velocity field, w(x, t) = ∇ × u(x, t) is the vorticity, w 0 (x) is the initial vorticity, f (x) is the forcing term, and ν is the viscosity parameter.We use viscosities
Steady-State Poisson Equation
The last benchmark we perform is on the steady-state Poisson equation given in equation 3.
where u(x, y) is the electric potential, −∇u(x, y) is the electric field, and g(x, y) contains boundary condition and charge information.The simulation cell is discretized with 100 points in the horizontal direction and 60 points in the vertical direction.Capacitor plates are added with various widths, x and y positions, and charges.An example of input and target electric field magnitude is given in figure 2.
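For illustration, a generic sketch of solving such a steady-state problem on the 100 × 60 grid with two capacitor-like plates is given below. Jacobi iteration, the plate positions, and the plate potentials are purely illustrative; the actual data-generation solver is not specified here.

```python
# A generic sketch of solving the steady-state Poisson problem on the
# 100 x 60 grid described above, with two capacitor-like plates imposed as
# Dirichlet conditions. Jacobi iteration is used purely for illustration.
import numpy as np

nx, ny = 100, 60
u = np.zeros((ny, nx))
plates = [((20, slice(30, 70)), +1.0),    # (row, column range), potential
          ((40, slice(30, 70)), -1.0)]

for _ in range(5000):                     # fixed iteration budget for simplicity
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    for (row, cols), v in plates:         # re-impose plate potentials each sweep
        u[row, cols] = v

ey, ex = np.gradient(-u)                  # electric field E = -grad(u)
print(np.hypot(ex, ey).max())             # peak field magnitude
```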
Results
We now compare PITT with both embedding methods against FNO, DeepONet, and OFormer on our various data sets. † indicates our novel embedding method and * indicates standard embedding. All experiments were run with five random splits of the data. Reported results and shaded regions in plots are the mean and one standard deviation of each result, respectively. Experiments were run with a 60-20-20 train-validation-test split. Early stopping is also used, where the epoch with the lowest validation loss is used for evaluation. Note: the parameter count represents the total number of parameters. In some cases PITT variants use a smaller underlying neural operator and have a lower parameter count than the baseline model.
Hyperparameters for each experiment are given in the appendix.
1D Next-Step Prediction
Our 1D case is trained by using 10 frames of our simulation to predict the next frame.The data is generated for four seconds, with 100 timesteps, and 100 grid points between 0 and 16.
The final time is T = 4 s. Specifically, the task is to learn the operator G_θ : a(·, t_i)|_{i≤n} → u(·, t_j)|_{j=n+1}, where n ∈ [10, 100]. The effect of the neural operator and token transformer modules in PITT can be easily decomposed and analyzed by returning the passthrough and update separately, instead of their sum (Figure 1c). Using the pretrained PITT FNO from above, a sample is predicted for the 1D Heat equation. We see the decomposition in figure 3, where PITT shows both lower final error and improved total error accumulation. The novel embedding error accumulation plot is given in the appendix in figure 9. In the rollout experiments, we used the models trained in the next-step fashion from our 1D benchmarks. We start with the first 10 frames from each trajectory in the test set for the 1D data sets, and only the initial condition for the 2D test data set, and autoregressively predict the entire rollout.
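For illustration, a minimal sketch of the autoregressive rollout protocol is given below; the placeholder model stands in for any trained next-step operator.

```python
# A minimal sketch of autoregressive rollout: start from the first 10 frames
# of a test trajectory and repeatedly feed the model's own prediction back in.
# The trivial "model" below is a placeholder so the snippet runs on its own.
import numpy as np

def rollout(model, history, n_steps):
    """history: (10, n_grid) initial frames; returns (n_steps, n_grid)."""
    window = list(history)
    preds = []
    for _ in range(n_steps):
        nxt = model(np.stack(window[-10:]))   # predict from the latest 10 frames
        preds.append(nxt)
        window.append(nxt)                    # feed the prediction back autoregressively
    return np.stack(preds)

toy_model = lambda w: w[-1] * 0.99            # placeholder next-step "operator"
traj = rollout(toy_model, np.random.rand(10, 100), n_steps=90)
print(traj.shape)                             # (90, 100)
```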
Appendix Experimental Details
Training and model hyperparameters are given here. In all cases, 3 encoding and decoding convolutional layers were used for FNO.
1D Next-Step Training Details
All models were trained for 200 epochs on all of the 1D data sets.
1D Rollout Comparison
In the 1D rollout, we see that the PITT variants of FNO and DeepONet match the ground truth values much better for the Heat and Burgers' simulations, and maintain their shape closer to the ground truth for the KdV equation when compared to FNO. Darker lines correspond to longer times in the rollout, up to a time of 4 seconds.
Figure 1 :
Figure 1: The Physics Informed Token Transformer (PITT) uses standard multi-head self-attention to learn a latent embedding of the governing equations. This latent embedding is then used to perform numerical updates using linear attention blocks. The equation embedding acts as an analytically driven correction to an underlying data-driven neural operator.
ν ∈ {10⁻⁸, 2·10⁻⁸, …, 10⁻⁵} and forcing term amplitudes A ∈ {0.001, 0.002, 0.003, …, 0.01}, for 370 total parameter combinations. 120 frames are saved over 30 seconds of simulation time. The initial vorticity is sampled according to a Gaussian random field. For each combination of ν and A, 1 random initialization was used for the next-step and rollout experiments and 5 random initializations were used for the fixed-future experiments. The tokenized equations are padded to a length of 100. Simulations are run on a 1×1 unit cell with periodic boundary conditions. The space is discretized with a 256×256 grid for numerical stability that is evenly downsampled to 64×64 during training and testing.
Figure 2 :
Figure 2: Example setup for the 2D Poisson equation. a) Input boundary conditions and geometry. b) Target electric field output.
Figure 5 :
Figure 5: Rollout results for 2D Navier Stokes using our novel embedding method.
Attention Weight Response to Parameter Change
Figure 7 :
Figure 7: PITT attention weight response to changing equation tokens. a) PITT attention weights change as we modify the input tokens. From our Navier-Stokes equation, we see the attention weights change differently when we modify the viscosity and forcing term amplitude. This demonstrates that PITT is able to learn equation parameters from the tokenized equations. b) PITT attention weights change as we modify the input token target time. From our Navier-Stokes equation, we see the attention weights change differently when we modify the target time. This demonstrates that PITT is able to learn time evolution from the tokenized equations.
Attention Weight Response to Parameter Change
Figure 8 :
Figure 8: PITT attention weight response to changing equation tokens. a) PITT attention weights change as we modify the input tokens. From our Navier-Stokes equation, we see the attention weights change differently when we modify the viscosity and forcing term amplitude. This demonstrates that PITT is able to learn equation parameters from the tokenized equations. b) PITT attention weights change as we modify the input token target time. From our Navier-Stokes equation, we see the attention weights change differently when we modify the target time. This demonstrates that PITT is able to learn time evolution from the tokenized equations.
Figure 9 :
Figure 9: Error accumulation for rollout experiments. PITT variants have less error accumulation at long rollout times for every benchmark when compared to the baseline models.
Figure 10 :
Figure 10: Comparison of FNO to ground truth data for autoregressive rollout on our 1D data sets.
Figure 11 :
Figure 11: Comparison of PITT FNO using our novel embedding to ground truth data for autoregressive rollout on our 1D data sets.
Figure 12 :
Figure 12: Comparison of PITT FNO using standard embedding to ground truth data for autoregressive rollout on our 1D data sets.
Figure 13 :
Figure 13: Comparison of DeepONet to ground truth data for autoregressive rollout on our 1D data sets.
Figure 14 :
Figure 14: Comparison of PITT DeepONet using our novel embedding to ground truth data for autoregressive rollout on our 1D data sets.
Figure 15 :
Figure 15: Comparison of PITT DeepONet using standard embedding to ground truth data for autoregressive rollout on our 1D data sets.
Figure 19 :
Figure 19: Comparison of Poisson predicition error between PITT variants and baseline models using standard embedding.
Table 1 :
Collection of all tokens used in tokenizing governing equations, sampled values, and system parameters.
∂, Σ, j, A_j, l_j, ω_j, ϕ_j, sin, t, u, x, y, +, −, *, /. The initial condition, sampled values, and output simulation time are all separated because each component controls distinct properties of the system. The 2D equations are tokenized so that the governing equations remain intact, because some of the governing equations, such as the continuity equation, are self-contained.
Table 2 :
Mean Absolute Error (MAE) ×10⁻³ for 1D benchmarks. Bold indicates best performance. The task is to learn G_θ : a(·, t_i)|_{i≤n} → u(·, t_j)|_{j=n+1}, where n ∈ [10, 100]. A total of 1,000 sampled equations were used in the training set, with 90 frames for each equation. Data was split such that samples from the
same equation and forcing term did not appear in the training and test sets. We see PITT significantly outperforms all of the baseline models across all equations for both embedding methods. Although the lower error often resulted in unstable autoregressive rollout, PITT variants have also outperformed their baseline counterparts when simply trained to minimum error. Additionally, PITT is able to improve performance with fewer parameters than FNO, and a comparable number of parameters to both OFormer and DeepONet. Notably, PITT uses a single attention head and a single multi-head attention block for the multi-head and linear attention blocks in this experiment.
Figure 3: PITT FNO prediction decomposition for the 1D Heat equation. Left: the FNO module of PITT predicts a large change to the final frame of input data. Middle: the numerical update block corrects the FNO output. Right: the combination of FNO and numerical update block output very accurately predicts the next step. Interestingly, the underlying FNO has learned to overestimate the passthrough of the data in both cases. The token attention and numerical update modules have learned a correction to the FNO output, as expected.
1D Fixed-Future Prediction
In this 1D benchmark, each model is trained on all three equations simultaneously, and performance is compared against training on single equations. Results are shown in table 3 and figure 6 in the appendix. The first 10 frames of each equation are used as input to predict the last frame of each simulation. In total, 5,000 samples from each equation were used for both single-equation and multiple-equation training. Models trained on the combined data sets are then tested on data from each equation individually. For PITT FNO and PITT OFormer, we see that training on the combined equations using our novel embedding method has the best performance across all data sets. Additionally, for PITT FNO and PITT DeepONet, training using our standard embedding method achieves the best performance across all data sets. Interestingly, we also see improvement in FNO and OFormer when training using the combined data sets. This shows PITT is able to improve neural operator generalization across different systems.
Table 3 :
Mean Absolute Error (MAE) ×10⁻³ for 1D benchmarks. Bold indicates best performance. The final time is T = 30 s. Similar to the 1D case, we are learning the operator G_θ : a(·, t_i)|_{i=n} → u(·, t_j)|_{j=n+1}, where n ∈ [0, 119]. The 2D benchmarks provided here provide a wider array of settings and tests for each model. In the next-step training and rollout test experiment, we used 200 equations, a single random initialization for each equation, and the entire 121-step trajectory for the data set. This benchmark is especially challenging for two reasons. First, there are viscosity and forcing term amplitude combinations in the test set that the model has not trained on. Second, rollout is done starting from only the initial condition, and models are trained to predict the next step using a single snapshot. This limits the time evolution information available to models during training. Although the baseline models perform comparably to PITT variants in terms of error, we note that PITT shows improved accuracy for all variants, and in many cases lower error led to unstable rollout, as in the 1D cases. Despite this, PITT has much better rollout error accumulation, seen in table 6. Further analysis of PITT FNO attention maps from this experiment is given in the appendix in figures 7a, 7b, 8a, and 8b. The attention maps show PITT FNO is able to extract physically relevant information from the governing equations. For the steady-state Poisson equation, for a given set of boundary conditions we learn the operator G_θ : a → u, with boundary conditions u(x) = g(x), ∀x ∈ ∂Ω₀ and n·∇u(x) = f(x), ∀x ∈ ∂Ω₁. The primary challenge here is in learning the effect of boundary conditions. Dirichlet boundary conditions are constant, only requiring passing through initial values at the boundary for accurate prediction, but Neumann boundary conditions lead to boundary values that must be learned from the system. Standard neural operators do not offer a way to easily encode this information without modifying the initial conditions, while PITT uses a text encoding of each boundary condition, as outlined in the equation tokenization. PITT is able to learn boundary conditions through the text embedding, and performs approximately an order of magnitude better, with the standard embedding improving over our novel embedding by an average of over 50%. 5,000 samples were used during training with random data splitting. All combinations of boundary conditions appear in both the train and test sets. Prediction error plots for our models on this data set are given in the appendix in figures 18 and 19.
Table 4 :
Mean Absolute Error (MAE) ×10⁻³ for 2D benchmarks. Bold indicates best performance. Although PITT variants have overlapping error bars with the base model in the Navier-Stokes benchmark, the PITT variant had lower error on all but one random split of the data for PITT FNO, and on every random split for PITT DeepONet. Lastly, similar to experiments in both Li et al. [13] and Li et al. [24], we can use our models to use the first 10 seconds of data to predict a fixed, future timestep. Including the initial condition, we use 41 frames to predict a single, future frame. In this case, we predict the system state at 20 and 30 seconds in two separate experiments. For this experiment, we are learning the operator G_θ : u(·, t)|_{t∈[0,10]} → u(·, t)|_{t=20,30}. We shuffle the data such that forcing term amplitude and viscosity combinations appear in both the training and test set, but initial conditions do not appear in both. Our setup is more difficult than in previous works because we are using multiple forcing term amplitudes and viscosities. The results are given in table 5, where we see PITT variants outperform the baseline model for both embedding methods. Example predictions are given in the appendix in figures 16 and 17.
exception of PITT DeepONet using our novel embedding method when compared to the baseline model on KdV. Error accumulation is shown in figure 4 for standard embedding,
Table 6 :
Final Mean Absolute Error (MAE) for rollout experiments. Bold indicates best performance when comparing base models to their PITT version. OFormer is omitted due to instability during rollout. Standard embedding PITT DeepONet is bolded here because it outperforms DeepONet for every random split of the data.
does not match small-scale features well. Similarly, PITT DeepONet is able to approximately match large-scale features in magnitude (lighter color), whereas DeepONet noticeably differs even at large scales. 1D rollout comparison plots are given in the appendix.
FNO and DeepONet used a step schedule for the learning rate during training. PITT variants and OFormer used a one-cycle learning rate schedule during training. All models used the Adam optimizer and the L1 loss function for all experiments.
Table 7 :
Training Hyperparameters for 1D Next-Step Experiments
Table 8 :
FNO Hyperparameters for 1D Next-Step Experiments
Table 9 :
OFormer Hyperparameters for 1D Next-Step Experiments. Column headers: Model, Data Set, Hidden Dim., Numerical Layers, Heads, Input Embedding Dim., Output Embedding Dim., Encoder Depth, Decoder Depth, Latent Channels, Encoder Resolution, Decoder Resolution, Scale.
Table 11 :
Training Hyperparameters for 1D Fixed-Future Experiments
Table 13 :
OFormer Hyperparameters for 1D Fixed-Future Experiments. Column headers: Model, Data Set, Hidden Dim., Numerical Layers, Heads, Input Embedding Dim., Output Embedding Dim., Encoder Depth, Decoder Depth, Latent Channels, Encoder Resolution, Decoder Resolution, Scale.
Table 15 :
Training Hyperparameters for the 2D Navier-Stokes Next-Step Experiment
Table 16 :
Model Hyperparameters for the 2D Navier-Stokes Next-Step Experiment
Table 17 :
Model Hyperparameters for the 2D Navier-Stokes Next-Step Experiment
Table 18 :
Model Hyperparameters for the 2D Navier-Stokes Next-Step Experiment
Table 19 :
Training Hyperparameters for the 2D Navier-Stokes Fixed-Future Experiment
Table 20 :
Model Hyperparameters for the 2D Navier-Stokes Fixed-Future Experiment
Table 21 :
Model Hyperparameters for the 2D Navier-Stokes Fixed-Future Experiment
Table 22 :
Model Hyperparameters for the 2D Navier-Stokes Fixed-Future Experiment
All models were trained for 1000 epochs on the 2D Poisson steady-state data.
Table 23 :
Training Hyperparameters for the 2D Poisson Experiment
Table 24 :
Model Hyperparameters for the 2D Poisson Experiment
Table 25 :
Model Hyperparameters for the 2D Poisson Experiment
Table 26 :
Model Hyperparameters for the 2D Poisson Experiment
"Physics",
"Computer Science"
] |
Kinase Activity Profiling of Pneumococcal Pneumonia
Background: Pneumonia represents a major health burden. Previous work demonstrated that although the induction of inflammation is important for adequate host defense against pneumonia, an inability to regulate the host’s inflammatory response within the lung later during infection can be detrimental. Intracellular signaling pathways commonly rely on activation of kinases, and kinases play an essential role in the regulation of the inflammatory response of immune cells. Methodology/Principal Findings: Pneumonia was induced in mice via intranasal instillation of Streptococcus (S.) pneumoniae. Kinomics peptide arrays, exhibiting 1024 specific consensus sequences for protein kinases, were used to produce a systems biology analysis of cellular kinase activity during the course of pneumonia. Several differences in kinase activity revealed by the arrays were validated in lung homogenates of individual mice using western blot. We identified cascades of activated kinases showing that chemotoxic stress and a T helper 1 response were induced during the course of pneumococcal pneumonia. In addition, our data point to a reduction in WNT activity in lungs of S. pneumoniae infected mice. Moreover, this study demonstrated a reduction in overall CDK activity implying alterations in cell cycle biology. Conclusions/Significance: This study utilizes systems biology to provide insight into the signaling events occurring during lung infection with the common cause of community acquired pneumonia, and may assist in identifying novel therapeutic targets in the treatment of bacterial pneumonia. Citation: Hoogendijk AJ, Diks SH, van der Poll T, Peppelenbosch MP, Wieland CW (2011) Kinase Activity Profiling of Pneumococcal Pneumonia. PLoS ONE 6(4): e18519. doi:10.1371/journal.pone.0018519. Copyright: 2011 Hoogendijk et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was supported by a grant from The Netherlands Organization for Scientific Research to C.W. Wieland. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist.
Introduction
Due to its unique relationship with the environment, the lung must defend itself from infection by numerous inhaled microorganisms. Although in general the lung is successful in doing so, bacterial pneumonia remains a major health burden. The Gram-positive bacterium S. pneumoniae is the main causative pathogen in community-acquired pneumonia (CAP), responsible for an estimated ten million deaths annually worldwide [1,2,3]. Increasing resistance of this common pathogen to antibiotics is a great concern [4,5,6].
Recognition of invading bacteria by the host is considered to occur mainly through toll-like receptors (TLRs). After interacting with their ligands, TLRs signal via adaptor proteins and kinases to ultimately activate Nuclear factor-kB (NF-kB) inducing inflammatory responses [7]. However, the interactions between bacteria and host cells are not confined to TLRs and ongoing intracellular signaling cascades may be much more extensive and complex than generally thought. Many studies on host pathogen interactions concentrate mainly on isolated pathways [8,9,10,11]. Although elegant in emphasizing the importance of these single pathways, such studies do not address the synergy of the multitude of signal-cascades, activated upon recognition of pathogens. Systems biology provides tools to enable understanding of such complex matters.
Kinases comprise an important part of the intracellular responses mediated by a variety of receptors. Although it is highly likely that kinases mediate lung inflammation during pneumonia, knowledge about the activation of kinases during pneumonia is limited. Microarray-based kinome profiling approaches have been developed over the last years and are an interesting tool for studying signaling events in an integrated manner [12,13,14]. Unraveling the complexities of the host-pathogen interactions during pneumococcal pneumonia can be of great value in finding new targets of therapy. Here we use a radio-kinome substrate array to determine kinase activities in the lungs during S. pneumoniae pneumonia in mice and furthermore attempt to elucidate complex interactions occurring during the course of the infection. To our surprise, we did not detect signaling pathways belonging to the TLR signaling cascades. In contrast, we detected pathways that induce chemotoxic stress and promote the T helper 1 (Th1) response. In addition, we found an overall reduction in WNT signaling. Canonical WNT signaling, named after the homology of WNT genes with int-1 and wingless in Drosophila, is important in developmental signaling [15,16]. However, more roles of this signaling cascade have emerged (e.g. development of cancer) [17].
Moreover, we found a reduction in cell cycle activity during the course of S. pneumoniae pneumonia. This study is the first to apply kinome profiling using kinomics chip arrays in infectious diseases.
Bacterial pneumonia
First, we determined the course of bacterial infection. After instillation of S. pneumoniae, bacterial loads remained similar at 3 and 6 hours (Figure 1a). Between 6 and 24 hours, bacterial loads in the lung increased exponentially (up to a 5-log increase). At this time an apparent maximum number of bacteria had been reached in the lung compartment, as no further increase was detected at 48 hours. The induction of lung inflammation was illustrated by increases in the pulmonary levels of all measured cytokines (Tumor necrosis factor-a (TNF-a), Interleukin (IL)-1b, IL-6) and chemokines (cytokine-induced neutrophil chemoattractant (KC), Macrophage inflammatory protein (MIP)-2) during the course of bacterial pneumonia (Figures 1b-f).
Kinome profile overview
To determine the relation between each data-set of obtained kinome profiles we performed hierarchical clustering according to Johnson (Figure 2a) [18]. During S. pneumoniae infection the distance to the control increased throughout the course of infection and with increasing bacterial loads, indicating increased divergence from the initial kinase activity profile. Interestingly, 6 hours after infection, the kinome profile resembled that of the control more than the profile at 3 hours. This suggested that strong changes occurred in phosphorylation patterns early in infection. As the infection progressed more changes in the kinome profile were detected, creating a greater distance in the cluster. The greatest distance to the control was found at 24 and 48 hours, which cluster outside of the control and 3 and 6 hours group.
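For illustration, a generic sketch of hierarchically clustering kinome profiles is given below; the random 1024-substrate profiles, the Euclidean metric, and average linkage are assumptions, since the exact settings used in the study are not restated here.

```python
# A generic sketch of hierarchically clustering kinome profiles, analogous to
# the analysis described above. The random 1024-substrate profiles, the
# Euclidean distance, and average linkage are placeholders/assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

labels = ["control", "3h", "6h", "24h", "48h"]
profiles = np.random.default_rng(1).random((5, 1024))   # one profile per time point

Z = linkage(pdist(profiles, metric="euclidean"), method="average")
tree = dendrogram(Z, labels=labels, no_plot=True)
print(tree["ivl"])    # leaf order reflects how profiles group by similarity
```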
Overall, a total of 153 kinases were found to be activated significantly different from uninfected control lungs. In Figure 2b a multi-dimensional Venn diagram represents distribution of phosphorylation events in time. When studied at separate time points the following patterns were observed: 53 spots were changed compared to control at 3 hours, 27 spots at 6 hours, 67 spots at 24 hours and 52 spots at 48 hours. Only protein kinase A (PKA) and mitogen-activated protein kinase (MAPK) activation differed from uninfected lungs at all time points studied. Of note, the MAPK spot (960) is a MAPK_group phosphorylation site, thus does not indicate which pathway is implicated.
S. pneumoniae impact on signaling in the lung
Infection with S. pneumoniae induced a multitude of changes in kinome activity. Figure 3 shows an overview of ongoing processes during infection. Kinome chip analysis revealed that during pneumonia a Th1 associated signaling was induced as illustrated by the reduced activity of B-cell receptor (BCR, spot 68) at 3 hours, nuclear factor of activated T-cells (NFAT, spot 89) at 3 and 48 hours [19] and v-abl Abelson murine leukemia viral oncogene homolog 1 (ABL, spots 660 and 672) at 3, 24 and 48 hours [20]. Furthermore, increased activities of Ataxia telangiectasia mutated (ATM, spot 36) and DNA-dependent protein kinase (DNApK, spots 495 and 555) at 6 hours revealed the emergence of chemotoxic stress. ATM was upregulated likely due to presence of reactive oxygen species and DNA damage [21,22]. DNApK activation occurs in response to DNA damage signals [23,24].
In contrast to induction of chemotoxic stress, downstream insulin-receptor (INS-R) signaling was inhibited, as demonstrated by the reduced activity of pyruvate dehydrogenase kinase (PDK, spot 226) at 6, 24 and 48 hours, AMP-activated protein kinase (AMPK, spot 572) at 48 hours and the dynamic profiles of AKT (spots 46, 105, 361, 397, 557 and 981), which has events at all time points, and glycogen synthase kinase-3b (GSK-3b, spots 2, 116, 187 and 959) at 3 and 24 hours with both an increase and decrease in activity early during infection and a subsequent decrease/ stabilization in the late phases of pneumococcal pneumonia.
The observed initial activation and subsequent reduction of GSK-3b could also be attributed to WNT signaling. In line with this, we found overall reduced activity of ephrin (EHP4, spot 12) at 3 hours and of b-catenin (spots 187 and 321) at 3, 24 and 48 hours.
Western blots of selected kinases confirm kinomics chip kinome profile data
In order to validate the phosphorylation state of AMPK-a, both phosphorylated and unphosphorylated AMPK-a were detected in lung homogenates from S. pneumoniae-infected animals (Figure 4b).
When comparing AMPK-a activity on the kinomics chip ( Figure 4a) with the results from the western blots, a similar pattern emerged: decreased phosphorylation of AMPK-a at later time points of infection ( Figure 4c).
To investigate if phosphorylation status of substrates on the kinomics chip mirrors that of more upstream kinases the phosphorylation state of Ser9, an inhibitory site of GSK-3b was measured. GSK-3b phosphorylates the substrate glycogen synthase 1 (GS-1). However inhibition of GSK-3b at Ser9 results in reduced GS-1 phosphorylation. Indeed, GS-1 on the kinomics chip showed reduced phosphorylation at all time points except for 24 hours after infection. As demonstrated by Figure 5a, GSK-3b activity was enhanced during the first 6 hours of infection. Activation pattern of the substrate GS-1 matched the activity of the upstream kinase closely: the ratio of the inverted p-GSK-3b signal closely resembled the ratio of the kinomics chip signal of GS-1.
In general, CDK activity was decreased over time during pneumonia (Table 1). Since antibodies for the different CDK substrates (cyclins) are not available, we set out to validate these findings by using a pan CDK p-substrate western blot. By this approach we can detect activity of all CDKs at once. Semiquantitative analysis of the blots validated the results obtained by the kinomics chip kinome profiles (Figure 6a, b): overall CDK activity in S. pneumoniae infection was decreased.
Upon phosphorylation, b-catenin is rapidly degraded, reflecting the off state of WNT [27]. Unsurprisingly, only b-catenin levels were detectable by western blot assay (Figure 7). The total b-catenin amounts decreased over time. This, indirectly, represents a decrease in potentially active b-catenin.
Discussion
Current mass spectrometry techniques and novel proteomics approaches like antibody microarrays determine protein phosphorylation levels rather than the enzymatic activity resulting from it, while measurement of kinase activity using the peptide microarray provides a direct view on the extent of enzymatic activity leading to specific signal transduction. We here utilized kinome substrate arrays to obtain insight in alterations in kinase activity in the lungs during respiratory tract infection caused by the most common causative agent in CAP, S. pneumoniae. The most strongly affected pathways identified during pneumococcal pneumonia were associated with enhanced chemotoxic stress, a developing Th1 response, reduced WNT signaling and down regulated cell cycle activity.
The infectious dose of S. pneumoniae chosen causes lethality in mice from 48 hours onward with an overall mortality rate of 90%- 100% after 4 to 6 days [27,28]. In accordance, fulminant pneumonia was induced, as demonstrated by a profound bacterial outgrowth and a strong induction of inflammatory cytokines and chemokines. We obtained lung samples between 3 and 48 hours after infection, thus presenting the entire dynamics of the host response during pneumococcal pneumonia from shortly after infection until shortly before death. As such, kinases activities found in the kinomics analysis are representative of severe pneumococcal pneumonia.
The kinome profile dendogram (Figure 2a) generated from lung samples harvested 6 hours after infection was more closely related to control than the kinome profile obtained after 3 hours. We hypothesize that this is the effect of early and rapid changes occurring in response to the introduction of the pathogen. Of note, cells migrating into lung tissue in response to S. pneumoniae entering the airways likely affect kinome profiling patterns, since these were generated from whole lung homogenates. Kinome studies on specific leukocyte subsets purified from lung tissue at several time points after induction of pneumonia might circumvent this shortcoming. However, cell isolation procedures potentially influence phosphorylation states and thus we chose to utilize total lung homogenates, generated from snap frozen material. One also has to consider that, when searching for new therapies, it is the whole lung that will be exposed to a potential drug, e.g. administered via nebulization. Therefore, we chose to determine kinase profiles of total lung lysates without isolating different cell types.
Analysis of our chip data revealed several significant ongoing processes in pneumonia. Much to our surprise specific MAPKs and other kinases classically related with the immune response, like those involved with TLR signaling (e.g. inhibitory kb (IKK)-a/ IKKb, TANK-binding kinase 1 and MAPK-kinase 1 (MEK-1)) were not prominently present in the results obtained from the kinomics chip, not even at the earliest time point. A general MAPK substrate however was shared between time points, but no links to its specific contexts could be made. Ex vivo stimulation of bone marrow derived macrophages with serotype 2 S. pneumoniae gave rise to p38MAPK/c-Jun N-terminal kinase and extracellular signal-regulated kinase (ERK) phosphorylation [30]. Moreover, in a murine model of pneumococcal pneumonia p38MAPK inhibition resulted in enhanced bacterial loads [31], thus indicating that MAPKs may play a role in pneumococcal pneumonia.
PKA was also shared in all time points. This kinase is involved in regulation of proliferation and differentiation, microtubule dynamics, chromatin condensation and decondensation, nuclear envelope disassembly and reassembly, as well as regulation of intracellular transport mechanisms and ion fluxes [32]. Thus, its overall presence is not surprising.
In context of an immune response to bacterial infection, the Th1 response signaling was the only prominent pathway that appeared in our analysis. Th1 responses were mirrored by the decreased activity of BCR and NFAT. De-phosphorylation of NFAT (as a substrate) enables its translocation to the nucleus inducing IL-2 transcription [33]. Contrary to NFAT and BCR, ABL activation was decreased reducing its Th1 inducing responses [34]. Although in this model of acute respiratory tract infection, the innate immune response plays an important role, T cell immune function is important for generating an adaptive immune response and memory building.
Moreover, in recent studies CD8 knockout mice or CD8 + T-cell depleted wild type mice displayed an increased susceptibility to serotype 3 pneumococcal pneumonia, while adoptive transfer of CD8 + T-cell to knockout mice improved survival [35]. Our Th1 results combined with this study demonstrate that T cells play a role in serotype 3 pneumoccocal pneumonia.
A variety of CDKs and CDK associated kinases displayed a reduced activity during the course of pneumococcal pneumonia. These kinases regulate the progression through the cell cycle [36]. However, recent studies demonstrate that utilizing a CDK inhibitor reduced cerebrospinal fluid leukocyte count, hemorrhagic events and improved recovery in a mouse model of antibiotic treated S. pneumoniae meningitis [37]. It is proposed that the inhibition of CDKs in effect facilitates induction of caspase dependent apoptosis via myeloid cell leukemia sequence 1 (MCL-1) [38,39]. The functional role of CDKs during S. pneumoniae pneumonia remains to be established.
Several kinases not primarily associated with inflammation or infection were abundant in our findings. Nonetheless, most of these kinases have been implicated to also contribute to an inflammatory response. AMPK-α and GSK-3β are mainly known in a metabolic context, specifically in insulin and glucose signaling [40,41], but also have strong connections with inflammation. AMPK activation results in reduced TLR4-dependent lung inflammation [42]. GSK3 inhibition reduced IL-12p40, IL-6 and TNF-α in a mouse model of tularemia [43]. Furthermore, TLR-induced pro-inflammatory cytokine production was reduced in monocytes by GSK-3β inhibition [44]. TGF-β is known to have strong anti-inflammatory effects [45]. Both GSK-3β and TGF-β, via CK2, are implicated in WNT signaling [46]. Our kinome analysis demonstrated that WNT signaling is overall reduced during pneumonia. To our knowledge, the role of WNT signaling in pneumococcal pneumonia has not been clearly defined yet, although β-catenin signaling has been associated with MAPK signaling and modulation of NF-κB function [46,47]. WNT5 was described to contribute to the inflammatory response of human macrophages [48].

[Figure 4 caption (partial): ... AMPK-α (b, c). During the course of infection the ratio of pAMPK/AMPK decreased; a similar activity pattern was found on the kinomics chip (a). The bar graph shows quantification of the relative amounts of phospho-AMPK-α corrected for total AMPK-α. Data are presented as mean ± SD of n = 3. doi:10.1371/journal.pone.0018519.g004]

[Figure 5. Early increase and later decrease in GSK-3β activity. Western blot of total GSK-3β and GSK-3β phosphorylation at Ser9 (a: blot, b: quantification of phospho corrected for total GSK-3β). Phospho-GSK-3β at Ser9 inhibits phosphorylation of its substrate GS1 as depicted in Figure 5c; these data were derived from the kinomics chip. Data are presented as mean ± SD of n = 3. doi:10.1371/journal.pone.0018519.g005]
It should be noted, however, that differences in pneumococcal serotype can elicit differential immune responses. Serotype 11A (M10) murine pneumococcal pneumonia resulted in an early rise in local levels of TNF-α and IL-6 in the pulmonary compartment, whereas serotype 3 (ATCC 6303) displayed a more delayed response, which remained until death occurred [49]. These serotype 3 data are in accordance with the induction of IL-6 observed here, which was detected after 24 hours of infection. Thus, the immune system seems to react in a delayed fashion to the serotype 3 pneumococcus.
Notably, kinome profiles were determined in homogenates prepared from right (whole) lungs. Since the extent of pneumonia and the associated inflammation is likely not equally distributed across all lung segments in our model of intranasal infection, our data are not representative of specific areas of infection. In addition, analyses of both (whole) lungs would have provided more definitive information about pulmonary kinome profiles, since this approach would have excluded bias due to unequal left-right distribution of the infection.
Here we demonstrate the use of a systems biology approach on kinome activity in the lung during pneumococcal pneumonia. We uncovered pathways that induce chemotoxic stress and promote the Th1 response. Moreover, we found an overall reduction in WNT signaling and reduction in cell cycle activity during the course of severe S. pneumoniae infection. These data may pave the way to future drug interventions seeking to interfere with specific signaling pathways e.g. activating WNT signaling or reducing CDK activity.
Ethics Statement
This study was carried out in accordance with the Dutch Experiment on Animals Act. The Animal Care and Use Committee of the University of Amsterdam approved all experiments (Permit number: DIX100121).
Induction of pneumonia/inflammation
Pneumonia was induced as previously described [28,29]. S. pneumoniae serotype 3 (ATCC 6303) was grown to mid-logarithmic phase at 37°C in Todd-Hewitt broth supplemented with 0.5% yeast extract (both Difco, Detroit, MI). Bacteria were harvested by centrifugation at 4000 rpm for 15 minutes, washed twice and resuspended in sterile saline at a concentration of 5×10⁴ colony forming units (CFU)/50 μl. Mice were inoculated intranasally with 50 μl of bacterial suspension under isoflurane inhalation anesthesia (Upjohn, Ede, The Netherlands).
Determination of bacterial load
Three, six, 24 and 48 hours after infection, 3 mice per time point were sacrificed by cardiac puncture under Domitor (Pfizer Animal Health Care, Capelle aan der IJssel, The Netherlands; active ingredient medetomidine) and Nimatek (Eurovet Animal Health, Bladel, The Netherlands; active ingredient ketamine) anesthesia. Left lungs were harvested and homogenized in 4 volumes of sterile saline with a tissue homogenizer (ProScience, Oxford, CT, USA). CFUs were determined from serial dilutions of samples, plated on blood agar plates and incubated at 37°C for 16 hours before colonies were counted.
Kinomics chip profiling
Right lungs were snap frozen in liquid nitrogen, after which they were homogenized in three volumes of lysis buffer (MPER (Pierce, Rockville, WI, USA) enriched with 1 mM MgCl₂, 1 mM glycerophosphate, 1 mM Na₃VO₄, 1 mM NaF, 1 μg/ml leupeptin, 1 μg/ml aprotinin, and 1 mM phenylmethylsulphonyl fluoride). Homogenates were centrifuged at 14,000 rpm for 5 minutes and pellets were discarded. As a control, right lungs were harvested from 3 mice administered sterile isotonic saline 3 hours earlier and treated as described above. For the kinomics chip kinome profile assay, samples were pooled and diluted to a protein concentration of 1 mg/ml. Of these lysates, 80 μl was added to 12 μl of activation mix (70 mM MgCl₂, 70 mM MnCl₂, 400 mg/ml PEG 8000 and 880 kBq [γ-³³P]ATP) and this mixture was applied to a kinomics chip (Pepscan Presto, Lelystad, The Netherlands) per pool. The chips were incubated at 37°C for 2 hours. The arrays were washed twice in 2 M NaCl (1% TWEEN 20), once in PBS (1% SDS) and rinsed twice in demineralised H₂O. Subsequently, the chips were air-dried and exposed to a phosphor imager plate for 72 hours. Radioactive signals were measured using a phosphor imager (Storm™, Amersham Biosciences, Uppsala, Sweden).
Western blot
Results from the kinome analysis were validated by performing phospho-specific Western blots for major kinases. Blots were done for AMPK-α/pAMPK-α, GSK-3β/pGSK-3β, pan-CDK phospho-substrate, and β-catenin (all antibodies from Cell Signaling Technology, Boston, MA, USA), as well as β-actin (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Samples for Western blotting were boiled at 95°C for 5 minutes in Laemmli buffer and loaded onto SDS-PAGE gels. After electrophoresis, the content of the gels was transferred onto Immobilon-PVDF membranes (Millipore, Billerica, MA, USA). The membranes were blocked in 5% BSA (Roche, Basel, Switzerland) in TBS-T at room temperature for 60 minutes. All primary antibodies were diluted 1:500, with the exception of β-actin, which was diluted 1:4000. The membranes were incubated overnight at 4°C. Next, the membranes were incubated for 60 minutes with a 1:1000 anti-rabbit-HRP conjugated secondary antibody (Cell Signaling Technology), and blots were imaged using LumiLight Plus ECL (Roche, Basel, Switzerland) on a GeneGnome chemiluminescence imager (Syngene, Cambridge, UK).
"Medicine",
"Biology"
] |
Use of Virtual Reality in the Reduction of Pain After the Administration of Vaccines Among Children in Primary Care Centers: Protocol for a Randomized Clinical Trial
Background: Pain and anxiety caused by vaccination and other medical procedures in childhood can result in discomfort for both patients and their parents. Virtual reality (VR) is a technology that is capable of entertaining and distracting the user. Among its many applications are the improvement of pain management and the reduction of anxiety.
Introduction
Pain is defined as a complex and multidimensional sensory experience that comprises cognitive, behavioral, and psychological elements [1,2]; it is usually associated with unpleasant and subjective experiences and involves an adaptive function that allows for the initiation of protective responses [2]. Fear and stress influence the perception of pain, and the failure to apply appropriate pain reduction techniques exposes patients to unnecessary negative experiences, which can lead to long-term consequences, such as needle phobia or anxiety about medical procedures [2][3][4].
Pediatric patients seen in primary care settings frequently experience negative reactions, such as fear, anxiety, pain, or feelings of aggression, during routine invasive procedures, such as vaccinations, blood draws, wound sutures, or nursing care.
According to the World Health Organization, the possible adverse reactions to vaccination that are linked to procedural stress include fainting, hyperventilation, vomiting, and seizures [5]. Further, 19% of patients aged 4 to 6 years have needle phobia [6], with the average age of onset being 5.5 years [7]. Additionally, 7% of children experience syncope (fainting) during the administration of a medical injection, and 5% avoid treatment [8]. In a study on the impact of the fear of needles on adherence to vaccination, it was estimated that fear was the cause of nonadherence in 8% of children [9].
For all of these reasons, different mechanisms have been developed to try to minimize or eliminate adverse reactions to vaccination [2]. In the clinical practice guidelines for reducing pain during the administration of vaccines in the pediatric age group [10,11], the following main recommendations are made: breastfeeding during the procedure, offering sweet solutions before vaccine administration to those older than 1 year if they cannot be breastfed, and comfortably positioning patients if they can be held by their parents or using a rapid vaccination technique without aspiration.
Distractions help to reduce anxiety because they prevent painful stimuli from being transmitted as effectively to the thalamus (ie, to the limbic system) or to the sensory cortex [12,13], thereby helping one to focus attention on external and internal stimuli rather than on nociceptive stimuli [14,15]. Distractions sometimes surpass local anesthetics' ability to control the pain and discomfort associated with medical interventions [12]. Distractions can be active (immersive), which involves patient participation through the manipulation of the environment, or passive (nonimmersive), which involves observation only [13].
Current technological advances, especially in the field of virtual reality (VR), have resulted in new types of distractions, which can be used along with traditional distractions [14] to achieve better pain control in pediatric patients for procedures such as vaccination, according to the Protocol of Preventive and Health Promotion Activities in Pediatric Patients: Healthy Childhood [15].
VR is a term that was proposed in the mid-1980s by Jaron Lanier, and it refers to a computer technique that allows for the creation of a simulated environment by means of a device with sensors that can be connected to a computer, mobile device, or tablet [1,3]. Its effectiveness is based on the psychological theory of "presence" [2,3]. This theory posits that people interact with their environments via the following three types of components: auditory, visual, and tactile components [1,2]. VR technology redirects one's attention to a more pleasant environment [2] by replacing real stimuli with virtual stimuli. This activates users' higher cognitive and emotional brain regions, resulting in the dissociation of pain [2,3].
The idea behind VR-generated analgesia was probably inspired by the intercortical modulation of pain matrix signaling pathways via attention, emotion, memory, and other senses [12,13].
In a systematic review published by the Cochrane Library in 2019 on VR distraction for reducing acute pain in children, it was concluded that there was only low-certainty evidence for the benefits of VR. However, in that review, due to the limited amount of data available, no conclusions could be drawn about the side effects of VR, satisfaction with VR, the level of parental anxiety, or the cost of VR use [12].
Many aspects remain to be clarified; however, the use of VR for pain management has shown great benefit in hospital procedures. Our study seeks to introduce VR as an analgesic tool in pediatric primary care services and daily procedures, such as vaccination, thereby combining new technologies with traditional concepts, such as distraction [1][2][3].
Trial Design
We will conduct a randomized, single-center, open, parallel, and controlled clinical trial with 2 assigned groups (intervention group and control group).
Scope and Period of the Study
The study population will include children aged 3 to 6 years who are included in the patient registry and are being seen in a primary care center of the Catalan Institute of Health in Central Catalonia.
The study will be conducted during the period of January 2022 to January 2023. If the minimum sample size is not achieved, the study period will be extended until it is achieved.
Inclusion Criteria
The patients who will take part in the study will be those from the pediatric population (ie, those aged 3 to 6 years) in the register of patients from a primary care center of the Catalan Institute of Health in Central Catalonia who, according to the vaccination schedule, are due to receive 1 of the following 2 vaccinations: (1) the triple viral (measles-mumps-rubella)+varicella vaccine at 3 years of age and (2) the hepatitis A+diphtheria-tetanus-pertussis vaccine at age 6.
Exclusion Criteria
The exclusion criteria will include the following: (1) patients who have already received 1 of the 2 vaccines to be administered; (2) patients and accompanying persons who do not understand and speak Catalan or Spanish; (3) patients with physical or mental illnesses, as well as those with blindness or deafness; (4) patients with a known history of epileptic episodes or severe motion sickness; (5) patients with any infections, burns, or injuries to the face, head, or neck that may interfere with the placement of the VR device; and (6) the absence of legal guardians for signing the informed consent form.
Intervention
The intervention group will use the Pico G2 VR goggles (Pico Interactive Inc) during the administration of the two vaccines, together with an Android AOYODKG tablet, which will be connected to the goggles as a controller.
The control group participants will receive traditional distractors, such as being held by the parent or guardian who accompanies them to the appointment, receiving stickers at the end of the appointment, or receiving rewards that the parent or guardian has prepared from home.
Eligible patients will be invited to participate in the study on an ongoing basis, and the assignment to study groups will be randomized.
Sequence Generation
Randomization will be carried out by using the RandomizedR computer system, a randomization tool for the RStudio program.
Implementation
The participants will be selected from a patient diary register. Both the sequence and the allocation of participants to the interventions will be generated by using the RandomizedR computer system.
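As a purely illustrative sketch of a 1:1 randomized allocation (this is not the RandomizedR tool itself, whose interface is not described here; the group labels and function name are ours):

```python
import random

def allocate(n_participants, seed=2022):
    """Illustrative 1:1 random allocation to the VR (intervention) or control group.
    The study itself will use the RandomizedR tool within RStudio."""
    rng = random.Random(seed)
    groups = ["VR"] * (n_participants // 2)
    groups += ["control"] * (n_participants - n_participants // 2)
    rng.shuffle(groups)  # random order of group assignments
    return groups

print(allocate(10))
```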
Masking
Due to the nature of the study, it will not be possible to mask patients or health care professionals. Therefore, the trial will be open or unblinded. However, a blind evaluation by third parties will be carried out, as the person in charge of data analysis will not be involved with the intervention.
Sample Size Determination
To detect a 1-point difference between the two groups on the pain level scale, a sample of 150 boys and girls in each group is required, assuming an SD of 3 points, an α risk of 5%, a power of 80%, and an estimated loss to follow-up rate of 5% [16].
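As a transparency check, the normal-approximation formula for comparing two independent means reproduces this figure; the inputs (1-point difference, SD of 3 points, α risk of 5%, power of 80%, 5% losses) are taken from the paragraph above, while the function itself is our own illustration and not part of the study protocol.

```python
import math
from scipy.stats import norm

def n_per_group(delta=1.0, sd=3.0, alpha=0.05, power=0.80, loss=0.05):
    """Sample size per group for a two-sided comparison of two means,
    inflated for the expected loss to follow-up."""
    z_alpha = norm.ppf(1 - alpha / 2)  # about 1.96 for a two-sided 5% test
    z_beta = norm.ppf(power)           # about 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2  # about 141 before losses
    return math.ceil(n / (1 - loss))

print(n_per_group())  # about 149, in line with the 150 children per group stated above
```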
Recruiting Patients to Participate in the Study
The procedure will be carried out by 1 pediatric team consisting of a pediatrician and a nurse of the primary care center in Súria (Spain). The families will be contacted by telephone to schedule the patients for the checkup at 3 and 6 years of age, as is currently done. The nurse will administer the vaccines and will continue to be part of the Well-Child checkup team.
Information on the purpose, risks, and benefits of the study will be provided, and any queries will be answered. In addition to verbal information, an information document about the study will be provided.
Prior to the start of the study, training on the use of the devices will be given to all of the health care personnel involved.
If a family agrees to be included in the study, the informed consent form must be signed by at least 1 of the parents or legal guardians. The signing parent or guardian will agree to inform the other parent or guardian.
Data Collection, Sources of Information, and Intervention
Study data collection will begin once informed written consent has been obtained from the legal guardian. For patients' assignment to a study group, randomization will be carried out by using the RandomizedR computer system. The person responsible for data collection will indicate the patients' age, gender, and study group and the type of intervention performed. Prior to administering the vaccine, the patients' condition and heart rate on arrival at the clinic will be recorded, regardless of their assigned study group.
For patients in the intervention group, it will be explained to them that they will be able to use the VR device; they will be assisted in fitting the device and will be given a brief explanation of the content that will be played. The Leia's World (VRPharma Immersive Technologies SL) content, which was specially designed for vaccination, will be used, and data will be collected before and after the procedure for each of the first 2 vaccinations.
The Wong-Baker Faces Pain Rating Scale, which ranges from 0 to 10, and the Children's Fear Scale, which ranges from 0 to 4, will be used to evaluate the reduction of pain and anxiety. The data collected from the control group will be compared with those collected from the intervention group (heart rate, the level of pain perception, the level of distress and fear, and the length of visits) [17][18][19]. In addition, to understand the perceptions of the parents or legal guardians on the use of VR, a satisfaction survey will be conducted [20].
The aforementioned data will be collected by the nursing or pediatric professional via a web questionnaire generated by the Microsoft Forms tool on the tablet and will be hosted in a computer server of the Institut Català de la Salut de la Catalunya Central.
Statistical Analysis
An intention-to-treat analysis will be performed; the subjects will be analyzed according to the group to which they were initially assigned and not the group in which they finally participated.
The data will be obtained through Microsoft Forms (an application included in Office 365 [Microsoft Corporation] that allows one to create customized questionnaires, surveys, and records) and analyzed with R software (version 4.0.3) [21,22]. Categorical variables will be described with absolute frequencies and percentages, and continuous variables will be described with means and SDs or medians and quartiles. A 2-tailed t test will be used to compare the values related to pain, anxiety, and satisfaction across the two groups. The correlations between pain perception and anxiety values reported by the children and those reported by their caregivers and nurses will be evaluated by means of a Pearson correlation. The significance level will be set at 5%, and all CIs will be set at 95%.
The data will be stored in a database. The Pearson chi-square test will be used to assess the statistical significance of associations between categorical variables.
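To illustrate the planned comparisons (this is not the study's analysis code, which will be written in R; the data frame and its column names are hypothetical):

```python
import pandas as pd
from scipy import stats

# Hypothetical records, one row per child; column names are illustrative only.
df = pd.DataFrame({
    "group":       ["VR", "control", "VR", "control"],
    "pain_child":  [2, 6, 4, 8],   # Wong-Baker Faces Pain Rating Scale, 0-10
    "fear_child":  [1, 3, 2, 4],   # Children's Fear Scale, 0-4
    "pain_parent": [3, 5, 4, 7],
})

vr = df[df.group == "VR"]
ctrl = df[df.group == "control"]

# Two-tailed t test comparing pain scores between the two groups
t_stat, p_t = stats.ttest_ind(vr.pain_child, ctrl.pain_child)

# Pearson correlation between child- and parent-reported pain
r, p_r = stats.pearsonr(df.pain_child, df.pain_parent)

# Pearson chi-square test for an association between group and a categorical outcome
table = pd.crosstab(df.group, df.fear_child > 2)
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(p_t, r, p_chi)
```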
Ethics Approval
The University Institute for Research in Primary Health Care Jordi Gol i Gurina (Barcelona, Spain) ethics committee approved the trial study protocol (approval code: 21/233). Written informed consent will be requested from all parents or legal guardians participating in the study.
Results
The study is scheduled to begin in January 2022 and to end in January 2023, which is when the statistical analysis will begin. As of March 2022, a total of 23 children have been recruited, of whom 13 have used VR during the vaccination process. In addition, all of the guardians have found that VR helps to reduce pain during vaccination.
We hope that sufficient evidence can be obtained to demonstrate that the use of VR is effective in reducing anxiety and pain. In this context, the Catalan health system could introduce the use of VR in usual practices and extend its use to other potentially painful processes. Statistically significant differences in heart rate and decreased pain perception that are in favor of the intervention group will be considered a satisfactory result.
Discussion
Our study aims to demonstrate that the use of VR goggles reduces the pain reported by children aged 3 to 6 years during the administration of 2 vaccines. The use of VR goggles may also reduce anxiety after such a procedure and thus result in greater satisfaction among the parents or legal guardians.
There are studies that have already used VR with children for painful procedures and during the administration of vaccines [12][13][14]. However, to date, there is no literature describing studies that focus on the population of children aged 3 to 6 years who are administered 2 vaccines at the same visit and also record the adverse effects of the use of VR goggles. In this context, our study may provide further support for the use of VR in the management of pediatric patients. Obtaining favorable results could lead to the use of VR as a standard practice for painful procedures performed in the primary care centers of Catalonia. The use of VR for pain reduction is likely to result in a decrease in visit duration. To demonstrate the efficacy of the use of VR, the professionals will note the length (in minutes) of the visits in order to evaluate whether VR reduces or increases the duration of the visits.
Our study has several limitations. The main limitation of the study is the care workload and time management in the pediatric consultations of the Well-Child Programme. The time available for caring for a patient is limited. The time required for recruiting the participants and obtaining consent and questionnaire data is estimated to be 5 minutes. This means that if the burden of care is high, the quality of care will have to be prioritized, and the recruitment of patients who are eligible to participate in the study will have to be paused.
Another limitation is that parents (or legal guardians) and minors who do not speak Catalan or Spanish cannot participate in the study. The use of VR could be highly advantageous if the VR content is translated into the native languages of these children, because they are usually more fearful and anxious when they cannot understand the pediatrician or nurse.
There are other limitations associated with the study design, patient recruitment, and the inclusion and exclusion criteria. First, patients receiving vaccines from their private pediatricians will be excluded. Second, the study population will be limited to a specific age range. Third, evaluations will only be conducted for vaccination and not for other invasive procedures. Fourth, patients who have already received 1 of the 2 scheduled vaccines will be excluded, since they would have previously presented the related pathology (eg, patients who have already been vaccinated for chickenpox). Fifth, parents or legal guardians who do not agree with vaccination in general, or who are influenced by the lack of masking, may introduce bias when they report their experiences in the survey.
Data Availability
The principal study researchers will have access to the full data set, and the data generated and analyzed during the study will be available from the corresponding author. The results obtained are expected to be published in peer-reviewed journals and presented at national and international conferences.
"Medicine",
"Computer Science"
] |
Boundary asymptotics of non-intersecting Brownian motions: Pearcey, Airy and a transition
We study n non-intersecting Brownian motions, corresponding to the eigenvalues of an n × n Hermitian Brownian motion. At the boundary of their limit shape we find that only three universal processes can arise: the Pearcey process close to merging points, the Airy line ensemble at edges and a novel determinantal process describing the transition from the Pearcey process to the Airy line ensemble. The three cases are distinguished by a remarkably simple integral condition. Our results hold under very mild assumptions, in particular we do not require any kind of convergence of the initial configuration as n→∞ . Applications to largest eigenvalues of macro- and mesoscopic bulks and to random initial configurations are given.
Introduction and Statement of Results
Non-Intersecting Brownian Motions are arguably among the most important dynamical models of eigenvalues of random matrices, sharing many characteristics with other eigenvalue processes and random growth models. For the definition, let (M(t))_{t≥0} be a Brownian motion in the space of n × n Hermitian matrices, starting with a deterministic Hermitian matrix M(0). Such a Hermitian Brownian motion is determined by requiring that the diagonal entries of (M(t) − M(0))_{t≥0} are independent standard real Brownian motions and the entries above the diagonal are independent standard complex Brownian motions, independent from the real Brownian motions on the diagonal. The stochastic process (X(t))_{t≥0} of the n real eigenvalues of (n^{−1/2} M(t))_{t≥0}, ordered increasingly for definiteness, is then called Non-Intersecting Brownian Motions (NIBM) because it has the same law as n independent real Brownian motions (with diffusion factor n^{−1/2}) conditioned not to intersect for all times [25]. It is also known as Dyson's Brownian motion due to Dyson being the first one to study the eigenvalues of a Hermitian Brownian motion [21]. Random Matrix Theory is mostly about the understanding of the behavior of the random eigenvalues and eigenvectors in the limit of the matrix size n → ∞. The theory's success in manifold applications is boosted by the property of universality, which is also fascinating from a mathematical perspective. Universality here means that certain asymptotic behavior of spectral quantities, especially eigenvalues, depends only on very few properties of the underlying matrix model. For example, the distributions of a random matrix's entries typically have very little influence on certain important eigenvalue asymptotics, like their spacings, while different symmetries of the matrix usually lead to different universality classes.
For NIBM we can observe, as indicated by Figure 1, that for n → ∞ certain regions in space-time become dense with a bulk of eigenvalues present, while other regions remain void (with high probability). We will in this article study the correlations of eigenvalues around the boundaries of dense regions, deriving a full classification of the possible asymptotic behavior at non-vanishing times in the absence of outliers. At an edge of the boundary between a dense and a void region, for n → ∞ in an appropriate scaling, typically the Airy line ensemble arises, being a determinantal stochastic process described in terms of the extended Airy kernel K^{Ai}_{τ_1,τ_2}(u,v), where τ_1, τ_2 ∈ R are time and u, v ∈ R are spatial arguments. The contour Σ_Ai consists of the two rays from ∞e^{−iπ/3} to 0 and from 0 to ∞e^{iπ/3}, and Γ_Ai consists of the two rays from ∞e^{−i2π/3} to 0 and from 0 to ∞e^{i2π/3}. Note that this definition of the kernel differs from the one used in [26,10,35] on random growth models by a conjugation and a shift of variables. We use the same definition as in [20]. Statistics of the Airy line ensemble have been found for large classes of random matrices and related models, see e.g. [37,34,11,23,28,30]. It also appears in a number of studies of interacting particle systems belonging to the KPZ universality class, see e.g. [18,38,29] and references therein.
As can be seen in Figure 1, at certain critical points in space-time two dense regions merge into one, and this is where other interesting correlations appear. This situation has so far been mainly associated with the Pearcey process, another determinantal process, see e.g. [39,22,13,12,9,31,32,24,33,14]. This process is given in terms of the extended Pearcey kernel (1.2), where Γ_P consists of four rays, two from the origin to ±∞e^{−iπ/4} and two from ±∞e^{iπ/4} to the origin. In our main result, Theorem 1.2 below, we will see that apart from the (extended) Airy and Pearcey kernels there is one more kernel that can appear at the boundary of the dense regions. We call this kernel the Pearcey-to-Airy transition kernel with parameter a, in the following called transition kernel, given for τ_1, τ_2, u, v ∈ R and a ≥ 0 by (1.3), where the contour Γ_P is the X-shaped contour of the Pearcey kernel. This kernel interpolates between the Pearcey kernel (a = 0) and the Airy kernel (a → ∞), as can be seen from the following proposition.
Proposition 1.1. We have K_0 = K_P and for any u, v, τ_1, τ_2 ∈ R,
lim_{a→+∞} a^{1/3} K^{a}_{2a^{2/3}τ_1, 2a^{2/3}τ_2}(a^{1/3}u, a^{1/3}v) = K^{Ai}_{τ_1,τ_2}(u, v). (1.4)
We will first sketch a non-technical version of our main result. It is formulated in terms of the initial empirical spectral distribution, which is the only input to our model, and we recall that X_1(0), ..., X_n(0) are the deterministic initial eigenvalues. At time t > 0, the random empirical spectral distribution of X(t) is for n large very close to the non-random free convolution µ_n ⊞ σ_t of µ_n and the semicircle distribution σ_t with support [−√(2t), √(2t)]. It is also called the deterministic equivalent and is absolutely continuous for t > 0. The boundary of the support of µ_n ⊞ σ_t, considered in space-time (x, t), provides a non-random shape around which to study NIBM. It can be parameterized conveniently in terms of an initial point x*_n ∈ R \ {X_1(0), ..., X_n(0)} through the linear evolution (1.5). This evolution is the initial stage of a more general evolution [15,17] motivated by Biane's deep study of the free convolution [8]. The crucial fact about x*_n(t) is that there is a critical time t_cr(x*_n), defined in (1.6), for which we have, with ψ_{n,t} denoting the density of µ_{n,t} for t > 0,
ψ_{n,t}(x*_n(t)) = 0 for all 0 < t ≤ t_cr(x*_n),
and for all t > t_cr(x*_n) sufficiently close to t_cr(x*_n) we have ψ_{n,t}(x*_n(t)) > 0; see e.g. [17, pp. 1504-1505] for details. We thus see that (x*_n(t))_{t≥0} traces an initial point x*_n until it gets absorbed into a dense region at time t_cr(x*_n). Varying x*_n allows us to express every boundary point of the support of µ_n ⊞ σ_t as (x*_n(t_cr), t_cr), where t_cr = t_cr(x*_n) (we elaborate on this following (2.1) below).
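Purely as a reminder of a standard fact that is consistent with the support stated above (this display is ours, not a quotation of the paper's formulas): a semicircular distribution supported on [−R, R] has density (2/(πR²))√(R² − x²), so with R = √(2t) one obtains
\[
\sigma_t(dx) \;=\; \frac{1}{\pi t}\,\sqrt{2t - x^2}\;\mathbf{1}_{[-\sqrt{2t},\,\sqrt{2t}]}(x)\,dx .
\]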
Our main theorem will use the following two assumptions. Assumption 1: A significant portion of the initial eigenvalues stays in a compact set. There is some n-independent constant L > 0 such that lim inf_{n→∞} µ_n([−L, L]) > 0.
This natural assumption prevents too many starting points X j (0) going to ±∞ as n → ∞.
Assumption 2: Behavior of the initial eigenvalues around x*_n. Let (x*_n)_{n∈N} be a bounded sequence of real numbers and (µ_n)_{n∈N} be such that there exists an n-independent constant C > 0 for which the bound (1.7) holds for n large enough.
Assumption 2 guarantees two things. Firstly, it prevents a fast accumulation of starting points X_j(0) around x*_n, in particular implying a non-vanishing critical time t_cr(x*_n). While the case of vanishing t_cr is interesting as well and, say, edge universality has been shown in many typical situations for vanishing times, e.g. in [11], the formation of limits relies on a more delicate interplay between the initial configuration µ_n and the boundary point, which typically makes these situations less universal. To see this, note that in regions with high density of initial eigenvalues we will in general expect sine kernel correlations, not boundary correlations. The time until such sine kernel correlations can be observed strongly depends on µ_n, see [16, Theorems 1.2 and 1.3] and [23] and references therein, showing that in the case of vanishing t_cr less universality is to be expected.
Secondly, Assumption 2 forbids initial eigenvalues to be too close to x*_n; more precisely, we have an asymptotic gap of mesoscopic size around x*_n: for any δ > 1/5 we have, for all n large enough, µ_n([x*_n − n^{−δ}, x*_n + n^{−δ}]) = 0. This is important as too close initial eigenvalues would turn into outliers close to x*_n(t_cr) at the critical time. Such outliers would lead to perturbations of the universal kernels; we will elaborate further below.
We can now formulate our main theorem in a nutshell. Define I_n as in (1.8). Then, under Assumptions 1 and 2, the correlation kernel of NIBM (to be defined below), locally rescaled in the vicinity of the boundary points (x*_n(t_cr), t_cr), converges to the extended Airy kernel if |I_n| → ∞, to the transition kernel if I_n is bounded away from 0 and infinity, and to the extended Pearcey kernel if I_n → 0. This result completely characterizes the boundary behavior of NIBM for non-vanishing times and in the absence of outliers. It establishes the transition kernel as the link between Pearcey and Airy universality.
Let us now define the more technical terms to give a proper statement of our main result. We first note that we will, as is usual, study NIBM not as a process of n distinguishable particles (X_1(t), ..., X_n(t))_{t≥0} in R^n, but instead as the time-dependent point process (Σ_{j=1}^n δ_{X_j(t)})_{t≥0} of n indistinguishable particles. Passing to the point process reveals the determinantality of NIBM, meaning that the space-time correlation functions of (Σ_{j=1}^n δ_{X_j(t)})_{t≥0} can be written as determinants of matrices built from a single kernel.
To make this precise, take times 0 < t_1 < ... < t_k; the joint density of the process at these times can be written as the integral (1.9), with the notation explained in (1.10). The correlation functions of NIBM can be obtained as symmetrized marginals and can be written as determinants (1.11) (see e.g. [39,27]), the matrix on the right-hand side being of size (Σ_{j=1}^k m_j) × (Σ_{j=1}^k m_j), where ((s, x), (t, y)) → K_{n,s,t}(x, y) is the correlation kernel (1.12). Here g_{µ_n} is the log-transform (1.13), where we use the principal branch of the logarithm, x_0 + iR is a vertical line, and Γ is a closed curve with positive orientation encircling X_1(0), ..., X_n(0) which has no intersection with the line x_0 + iR.
Interestingly, all decisive quantities can be expressed through the Stieltjes transform defined in (1.14). As a consequence of Assumptions 1 and 2, the coefficients G^{(j)}_n appearing below are bounded, and the relevant ones are also bounded away from zero.
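As a reminder of the standard conventions behind this notation (the following is the usual definition from the literature; in particular the expression for the coefficients G^{(j)}_n is our reading of the notation and not a quotation of (1.14)):
\[
G_{\mu_n}(z) \;=\; \int_{\mathbb{R}} \frac{d\mu_n(y)}{z-y}, \qquad
G^{(j)}_n \;=\; \int_{\mathbb{R}} \frac{d\mu_n(y)}{(x^*_n-y)^{j}}, \quad j \ge 1 .
\]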
We recall the linear evolution x*_n(t) from (1.5) and the critical time t_cr(x*_n) from (1.6).
The crucial quantity determining the limiting correlations is I_n from (1.8).
We will only consider the case of asymptotically non-negative I_n below, the non-positive case following by obvious modifications. We need two sets of scaling coefficients, depending on whether I_n is bounded or not. As we will see later, the bounded case corresponds to boundary points close to a merging, the divergent case to points at the edge. With that in mind, we define the scaling coefficients in (1.15) and (1.16) ("E" and "M" standing for edge and merging, respectively). The constants c_2 and c_3 are related to the shape of ψ_{n,t_cr} around x*_n(t_cr), see Remark 1.3 below. We remark that c_2 and c_3 in general depend on n; in particular, c_2 can tend to infinity as n → ∞. Finally, we note that a conjugation of the kernel K_n in (1.12) by suitable gauge functions f_n, leading to the kernel K̃_{n,s,t}(x, y) in (1.17), does not change the determinant (1.11), and thus K̃_n generates the same determinantal process as K_n. Choosing the particular gauge factors will lead us to a converging kernel, allowing us to state many of our results in terms of the kernel directly instead of correlation functions.
The following is our main result in full detail.
Theorem 1.2. Let (µ_n)_{n∈N} and (x*_n)_{n∈N} be such that Assumptions 1 and 2 hold, and recall the quantities introduced above. (1) If I_n → ∞ as n → ∞, we have the convergence (1.19) to the extended Airy kernel as n → ∞ for any σ ≥ 0, uniformly with respect to τ_1, τ_2 belonging to an arbitrary compact subset of R. If I_n ≫ n^γ for some γ > 0, the error term can be improved to O(e^{−σ(u+v)} n^{−ε}) and is uniform with respect to u, v ∈ [−M, M n^ε] for some ε > 0 and an arbitrary constant M > 0. If 1 ≪ I_n ≪ n^γ for all γ > 0, the error holds uniformly in u, v ∈ [−M, M s_n], where s_n is an arbitrary sequence growing sufficiently slowly. (2) If C_1 ≤ I_n ≤ C_2 for all n large enough and some n-independent constants C_1 < C_2, then the rescaled kernel converges to the transition kernel K_a with an appropriate parameter a ≥ 0 determined by I_n (and, in particular, to the extended Pearcey kernel when I_n → 0).
Remark 1.3. (1) The generality of the results in terms of arbitrary and n-dependent x*_n and µ_n is remarkable: neither µ_n nor x*_n are required to converge as n → ∞. This is a very strong form of universality in the starting points of NIBM. (2) We believe this result to be optimal up to the exponent 5 in (1.7) and refinements of error terms. The exponent 5 can be improved to 4+ε in the Pearcey case and up to 3+ε in the Airy case with I_n ≫ n^γ by another, more technical argument, but for the sake of readability of the proof of Theorem 1.2 we omit this further complication and use one condition that fits all cases. It is clear that for t_cr to be positive, the exponent cannot be smaller than 2. If the exponent is between 2 and 3, then the quantities G^{(j)}_n require additional control. (3) The critical space-time points as well as the rescalings of the kernel in Theorem 1.2 are described in terms of the initial objects µ_n and x*_n: an initial point x*_n is picked and the time t = t_cr is then determined by x*_n. In the literature it is more customary to first fix the time t > 0 and then to look at an edge or merging point x for this fixed time. It is also customary to express the rescaling factors c_2 and c_3 in terms of the density ψ_{n,t} of µ_n ⊞ σ_t around x. For instance, to have Airy universality at a given edge point (x, t), one usually needs a square root vanishing at x of the form ψ_{n,t}(y) ~ c̃_2 √(x − y) for all y < x small enough and some constant c̃_2 > 0. Biane has shown in [8, p. 715ff] that this pre-factor c̃_2 can be expressed in terms of our c_2, and similarly for c_3, and thus our rescaling is compatible with the rescalings used in the literature. However, it is worth noting that computing c_2 in (1.15) and c_3 in (1.16) is simpler (see (1.14)) than e.g. determining the pre-factor of the square root vanishing of the density of the free convolution µ_n ⊞ σ_t at some point x.
Interpretation of Theorem 1.2 in terms of edge and merging points. Theorem 1.2 identifies the universality classes based on the behavior of the quantity I_n in (1.8) without specifying the nature of the considered boundary points. Let us call a boundary point (x, t) with t > 0 a merging point with respect to µ_n ⊞ σ_t if for the density ψ_{n,t} of µ_n ⊞ σ_t we have ψ_{n,t}(x) = 0 and ψ_{n,t}(x') > 0 for all x' ≠ x in some neighborhood of x, which is allowed to depend on n. A boundary point (x, t) with t > 0 that is not a merging point will be called an edge point with respect to µ_n ⊞ σ_t. For edge points we have ψ_{n,t}(x) = 0 and ψ_{n,t}(x') > 0 either for all x' > x small enough, or for all x' < x small enough, but not for both. It is easy to relate edge and merging points to the three cases of Theorem 1.2 via the following geometric argument: for x*_n not being one of the starting points, the derivative of x ↦ t_cr(x) at x*_n has the same sign as I_n(x*_n). If I_n(x*_n) > 0, this means that starting the evolution from x > x*_n small enough leads to a boundary point with a larger critical time than starting from x*_n. This is equivalent to ψ_{n,t_cr}(x) = 0 for x > x*_n(t_cr) small enough. As (x*_n(t_cr), t_cr) is a boundary point and ψ_{n,t_cr} is continuous, we must then have ψ_{n,t_cr}(x) > 0 for x < x*_n(t_cr) small enough, hence (x*_n(t_cr), t_cr) is an edge point at the upper edge of some bulk. Similar arguments show that I_n(x*_n) = 0 corresponds to a merging point (x*_n(t_cr), t_cr). We can determine the unique x*_n leading to a merging point as follows: it lies in an interval (X_j(0), X_{j+1}(0)) for some j and fulfills condition (1.20). This is useful and intuitive as it means that a merging point is the boundary point of a space-time gap in the spectrum that extends furthest in time.
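To make the description "largest critical time in the gap" concrete, suppose that the critical time is given by the standard expression t_cr(x) = (∫ (x−y)^{−2} dµ_n(y))^{−1} (an assumption on our part, since the display (1.6) is not reproduced in this excerpt, but one that is consistent with the example value t_cr,µ = 5/2 computed further below). Maximizing t_cr over the gap then gives the first-order condition
\[
\frac{d}{dx}\int \frac{d\mu_n(y)}{(x-y)^{2}} \;=\; -2\int \frac{d\mu_n(y)}{(x-y)^{3}} \;=\; 0
\qquad \text{at } x = x^*_n,
\]
that is, the merging point corresponds to the zero of the third inverse moment inside the gap.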
Starting from an x*_n leading to a merging point, i.e. with (1.20), we can describe the transition from Pearcey to Airy statistics as follows. Consider, for c_n > 0, initial points shifted from x*_n by c_n n^{−1/4}, and assume x*_n ∈ (X_j(0), X_{j+1}(0)) is such that Assumption 2 holds for x*_n; then we observe Pearcey statistics for c_n → 0, Airy statistics for c_n → ∞ and transition kernel statistics for c_n bounded away from 0 and infinity. It is straightforward to check that an initial deviation of order n^{−1/4} from the point x*_n results in a spatial deviation of order O(G_{µ_n}(x*_n) n^{−1/2} + n^{−3/4}) from the merging point, and a temporal deviation of order O(n^{−1/2}). The quantity G_{µ_n}(x*_n) is zero if µ_n is symmetric around x*_n but will in general be non-zero, e.g. for the merging of the upper two bulks in Figure 1, showing that the transition from Pearcey to Airy statistics in general takes place at edge points that are further away from the merging point than the usual n^{−3/4} Pearcey scaling.
The transition described by Theorem 1.2 is a transition from the extended Pearcey kernel to the extended Airy kernel. A transition in the opposite direction, i.e. from Airy to Pearcey, has been found in [2,4]. It can be achieved by arranging for r outliers at an edge of the spectrum of NIBM in a precisely defined way. This leads to the r-Airy kernel, which is a perturbation of the extended Airy kernel and has been shown to interpolate from the Airy kernel (r = 0 outliers) to the Pearcey kernel (r → ∞ outliers) [2,3]. Note that due to the Airy and Pearcey kernels emerging from different scalings of NIBM, it can be expected that a transition from one kernel to the other can only be observed by sending an interpolating parameter to infinity, and that a transition from Airy to Pearcey will not be the same as a transition from Pearcey to Airy. The transition from the Pearcey process to the Airy line ensemble has been studied in [1] and [7]. In [1] an asymptotic relation between the Pearcey and the Airy kernel has been found and used to study gap probabilities. This has been extended in [7] via a novel Riemann-Hilbert problem to show that the probability of a large gap around the merging for the Pearcey process asymptotically factorizes into two Airy line ensemble gap probabilities. Interestingly, these results do not involve an interpolating kernel like the r-Airy kernel for the Airy-to-Pearcey transition. Essentially it is shown in [1] with a clever PDE approach that (in a different parameterization) the asymptotic relation (1.21) holds. The meaning of (1.21) is not obvious; however, utilizing the transition kernel K_a we can explain (1.21) as follows. The difference between the transition kernel (1.3) and the Pearcey kernel (1.2) is the presence of cubic terms in the phase function of the former kernel. As is well known, for any polynomial p(x) of degree k it is always possible to find an h ∈ R such that the x^{k−1} term in p(x + h) has coefficient 0 (a simple instance of the Tschirnhaus transformation). Applying this to the polynomial in the phase function of the transition kernel and absorbing the resulting changes in the quadratic and lower order terms by modifying all quantities τ_1, τ_2, u, v shows that the transition kernel can be masked as the Pearcey kernel with changed parameters and a conjugation; this is the content of (1.22). Combining (1.22) with the interpolation property (1.4) then gives the asymptotic relation (1.21). It is worth mentioning that the expression (1.22) also shows that the transition process generated by the transition kernel has the same regularity properties as the Pearcey process.
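The variable shift invoked above is elementary; written out for a generic monic polynomial (this is a generic computation, not a quotation of the paper's (1.22)), one has
\[
p(x) = x^{k} + c_{k-1}x^{k-1} + \dots
\quad\Longrightarrow\quad
p(x+h) = x^{k} + (kh + c_{k-1})\,x^{k-1} + \dots,
\]
so the choice h = −c_{k−1}/k removes the x^{k−1} term. Applied to the phase function of the transition kernel, the shift and the induced changes in the lower-order terms are absorbed into τ_1, τ_2, u, v, which is exactly the mechanism behind (1.22).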
Correlations around the boundary of µ ⊞ σ_t. In our results above we considered the boundary of the support of µ_n ⊞ σ_t as a deterministic shape around which to study correlations of NIBM. We did not make any assumption about convergence of µ_n as n → ∞. If, however, the sequence (µ_n)_{n≥1} does have a weak limit µ as n → ∞, then it is a natural question to ask about the correlations of NIBM around the boundary of the support of µ ⊞ σ_t, which is in this case the weak limit of µ_n ⊞ σ_t as n → ∞.
There are some interesting differences between using µ_n ⊞ σ_t and using µ ⊞ σ_t as a deterministic shape. For example, in the µ_n ⊞ σ_t setting, bulks consisting of o(n) eigenvalues can be visible and their edge and merging statistics can be studied, while such bulks are not visible using µ ⊞ σ_t. Another notable difference is that µ ⊞ σ_t has a richer structure than µ_n ⊞ σ_t in terms of the boundary of the support and the behavior of the density at boundary points. For instance, it is easy to show that the boundary of µ_n ⊞ σ_t always evolves non-linearly in time, and each boundary point of µ_n ⊞ σ_t for t > 0 is critical in the sense that it can be written as (x*_n(t_cr), t_cr) for some x*_n (see the arguments following (2.1) below). In contrast to this, the boundary of µ ⊞ σ_t can evolve linearly in time, and in this case there are pre-critical boundary points. To see this, we note that, similarly to above, we can describe any boundary point of the support of µ ⊞ σ_t as (x*_µ(t), t) for some x* ∈ R and t ≤ t_cr,µ, where the evolution and the critical time t_cr,µ(x*) are defined in (1.23), provided the first integral exists. Note that we use the subscript µ to distinguish this evolution and critical time from the previous evolution (1.5) and critical time (1.6). Note also that we consider x* as being solely determined by µ and thus n-independent. Now let x* be a boundary point of the support of µ with (1.24), i.e. t_cr,µ(x*) > 0. Then x* evolves linearly as a boundary point of µ ⊞ σ_t up to the critical time. Any point (x*_µ(t), t) with t < t_cr,µ(x*) is then pre-critical in the sense that it cannot be written as (x_µ(t_cr,µ(x)), t_cr,µ(x)) for any x. Moreover, a point x* with (1.24) may even be an interior point of the spectrum, like x* = 0 for µ with density ψ(x) = (8/(15√π)) x^6 e^{−x²}, x ∈ R. It is a feature of using µ ⊞ σ_t that special points x* can be easily identified and followed in time until they become truly part of a bulk, for example the point x* = 0 above with t_cr,µ = 5/2. The behavior of the density ψ_t of µ ⊞ σ_t at boundary points is also richer than that of ψ_{n,t}. It has been shown in [5,6] that the density ψ_{n,t} of µ_n ⊞ σ_t has a square root vanishing at edge points and a cubic root vanishing at merging points, suggesting Airy and Pearcey universality at these boundary points. However, for pre-critical internal boundary points of µ ⊞ σ_t as in our example above, typically vanishing of order higher than 1 takes place, while at merging points one may have square root vanishing from one side and a higher power vanishing from the other side [15]. This leads to being able to observe Pearcey or Airy correlations at a merging point (defined using µ ⊞ σ_t) depending on the behavior of the density at that merging point [17].
Finally, let us mention that computing all quantities I_n, t_cr, c_2, c_3 approximately can be much easier using µ ⊞ σ_t than µ_n ⊞ σ_t. For example, in Figure 2 the starting points are chosen as quantiles, slightly modified to fulfill Assumption 2, of a µ with a piecewise polynomial density. All quantities in terms of µ are explicitly computable and may be used as approximations of the harder-to-compute quantities in terms of µ_n.
In the following we will give an analog of Theorem 1.2 about correlations of NIBM around boundary points of µ ⊞ σ_t. For Theorem 1.2 the decisive quantity is I_n, leading to the three cases of |I_n| → ∞, |I_n| → 0 and |I_n| bounded away from 0 and infinity. In contrast to that, working with the limit µ ⊞ σ_t, the correct analog is a quantity I_µ. Assuming existence of I_µ and, as above, restricting to non-negative I_µ, only two cases can occur: either I_µ > 0 or I_µ = 0. In contrast to Theorem 1.2, using an n-independent rescaling in terms of µ instead of µ_n requires control over the rate of convergence of µ_n to µ. To this end, let d(µ_n, µ) := sup_{x∈R} |F_n(x) − F(x)| denote the Kolmogorov distance of µ_n and µ, where F_n and F are the distribution functions of µ_n and µ, respectively. Moreover, we define the corresponding µ-dependent scaling quantities, assuming existence of these expressions.
Corollary 1.4. (1) Suppose that I_µ(x*) > 0 and that d(µ_n, µ) tends to zero sufficiently fast; then the suitably rescaled correlation kernel converges to the extended Airy kernel, where the convergence is uniform for τ_1, τ_2 and u, v in compacts of R.
We remark in passing that the condition lim inf n→∞ dist(x * , supp(µ n )) > 0 is a convenient replacement of Assumption 2 avoiding further technical assumptions on the behavior of µ n and µ at x * .
Analogously to above, edge points in the present setting (defined using µ ⊞ σ_t) correspond to initial points x* with I_µ(x*) > 0. Moreover, merging points correspond to initial points x* maximizing their critical time within a gap of the support of µ: we speak of µ having the gap (α, β) if α < β are such that µ((α, β)) = 0, µ((α − δ, α]) > 0 and µ([β, β + δ)) > 0 for all δ > 0. The x* ∈ [α, β] leading to the merging point of the two bulks left and right of the gap is characterized, again, as the x* with the largest critical time in the gap. In contrast to the µ_n-dependent setting above, we do not necessarily have I_µ(x*) = 0, which may be the case if x* is α or β. While we do not consider this here, [17] has shown in a situation with α = β that Airy correlations can arise at the µ ⊞ σ_t-merging point. This is the case if one bulk dominates the other, effectively pushing it away. In [17] it was found that in such a case a mesoscopic one-sided gap in the spectrum of order larger than n^{−2/3} is present with high probability, effectively making the dominated bulk negligible to the dominating one as far as edge scaling is concerned. Part 3 of Corollary 1.4 shows that the transition kernel can make an appearance in the µ ⊞ σ_t setting right at a merging point. However, it is less universal than in the µ_n ⊞ σ_t setting as now the limit retains more information about the particular sequence (µ_n)_n (displayed by the constant b in the kernel's arguments). The proof, to be found in Section 5, is constructive, giving a general geometric procedure to construct examples in which this happens.
Largest eigenvalue of a bulk. Theorem 1.2 implies that the space-time correlation functions of the appropriately rescaled NIBM converge to the ones of the Airy line ensemble, the Pearcey process or an infinite-dimensional stochastic process governed by the transition kernel, respectively. All three can be seen as a collection of infinitely many layers. In contrast to the other two, the Airy line ensemble has with probability one a largest layer, which is called the Airy_2 process. It is of independent interest as it describes the typical fluctuations of the largest eigenvalue of NIBM as well as the fluctuations of height functions in certain random growth models belonging to the KPZ universality class. For NIBM, this has been shown to various extents in situations where µ_n → µ for some µ, most notably in the form of universality of its one-dimensional marginals, which have the Tracy-Widom distribution (with parameter β = 2). Theorem 1.2 allows us to give a strong version of this statement, valid not just for the largest eigenvalue overall but for the largest eigenvalue of a bulk that might only have a mesoscopic distance to another bulk. Moreover, we do not require the initial configuration µ_n to have any large-n limit at all. The Airy_2 process (A(τ))_{τ∈R} [36] is defined by its finite-dimensional distributions, which are Fredholm determinants involving the extended Airy kernel and the projection functions r(τ_j, x) := 1_{(a_j,∞)}(x), j = 1, ..., m. The Airy_2 process is stationary with one-dimensional marginals that have the (β = 2) Tracy-Widom distribution.
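For orientation we recall the standard form of these objects as they appear in the literature (stated in a common convention; the paper's own display may differ by the conjugation and shift of variables mentioned in the introduction):
\[
\mathbb{P}\big(A(\tau_1)\le a_1,\dots,A(\tau_m)\le a_m\big)
= \det\big(\mathbf{1} - r K^{\mathrm{Ai}} r\big)_{L^2(\{\tau_1,\dots,\tau_m\}\times\mathbb{R})},
\qquad
F_{\mathrm{TW}}(s) = \det\big(\mathbf{1} - K_{\mathrm{Ai}}\big)_{L^2((s,\infty))},
\]
where K_{Ai} denotes the one-time Airy kernel and F_TW the (β = 2) Tracy-Widom distribution function.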
Corollary 1.5. Let µ_n and x*_n satisfy Assumptions 1 and 2 and assume that I_n → ∞. Then there is some ε > 0 such that, for any sequence (s_n)_n with 1 ≪ s_n ≪ n^ε, the largest eigenvalue ξ(τ) of the bulk near x*_n(t^E_n(τ)), after rescaling, converges to the Airy_2 process (A(τ))_{τ∈R}, to be understood as weak convergence of the finite-dimensional distributions. In particular, for any τ ∈ R, we have weak convergence of c_2 n^{2/3}(ξ(τ) − x*_n(t^E_n(τ))) to the Tracy-Widom distribution as n → ∞.
The corollary is a strong generalization of [17, Theorem 1.4], where convergence was shown for the largest eigenvalue of a bulk around a mesoscopic gap close to the merging, under strong assumptions on the convergence of µ_n to some specific µ having a density with an isolated zero of order higher than 3. We see here that none of these assumptions are necessary. As the proof of [17, Theorem 1.4] applies with minor changes, we will omit it here, but note in passing that the proof requires both the strong estimate on the decay of the remainder in u and v in (1.19) as well as the convergence statement (1.19) being valid in a growing range of u and v.
Application to random starting points.So far the initial configuration µ n has been deterministic.It is however also common to consider Markov processes like NIBM with random starting points and from a random matrix point of view it is very natural to consider a random matrix as a starting point of a Hermitian Brownian motion.In fact, proving universality for NIBM with random starting points for very short times is at the heart of many universality proofs of different random matrix ensembles using the so-called three-step approach (see e.g.[23] and references therein).
Given the minimal assumptions on the (deterministic) starting points in the previous results, it should come as little surprise that we can extend these results to random starting points as long as Assumptions 1 and 2 are satisfied.In fact we can in our previous results simply allow µ n to be random and then start NIBM independently from the starting points.Note that with random starting points NIBM are in general not determinantal but their space-time correlation functions can be defined using (1.9) and (1.10).Note also that for such statements the necessary rescaling quantities x * n , c 2 , c 3 are in general random as well, much like in self-normalizing limit theorems.Stating such a result would however be highly repetitive compared to the previous results, thus we just mention this possibility and focus instead on a version of Corollary 1.5 which should be of interest in its own right.It shows that under mild assumptions the largest eigenvalue of any Hermitian random matrix with sufficiently bounded spectrum exhibits Tracy-Widom fluctuations provided that it has a sufficiently large Gaussian component.
Corollary 1.6. For each n ∈ N, let A be an n × n Hermitian random matrix and let µ_n denote the empirical distribution of its eigenvalues. Assume that there are non-random, n-independent a < b such that, as n → ∞, there is no eigenvalue of A larger than b and µ_n([a, b]) ≥ c for some n-independent c > 0, both with probability 1 − o(1). Let x_max denote the largest eigenvalue of A + √t G, where G is a random matrix from the Gaussian Unitary Ensemble with diagonal entries of variance n^{−1}, independent of A. Then for any fixed t > (b − a)²/c, x_max has Tracy-Widom fluctuations in the limit n → ∞. More precisely, there are real random variables b_n and k_n > 0, independent of G, such that for any s ∈ R we have P(k_n(x_max − b_n) ≤ s) → F_TW(s) as n → ∞, where F_TW is the distribution function of the (β = 2) Tracy-Widom distribution.
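As a purely numerical illustration of this corollary (not part of the paper: the choice of A, the matrix size, the number of trials and all function names are ours, and the true centering b_n and scaling k_n are not computed), one can sample A + √t G and inspect the largest eigenvalue:

```python
import numpy as np

def gue(n, rng):
    """GUE matrix normalized so that diagonal entries have variance 1/n,
    as in the statement of Corollary 1.6."""
    d = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
    off = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0 * n)
    upper = np.triu(off, 1)
    return np.diag(d) + upper + upper.conj().T

def largest_eig_samples(n=200, t=4.0, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    # Deterministic A with spectrum in [a, b] = [0, 1], so (b - a)^2 / c = 1 < t.
    A = np.diag(np.linspace(0.0, 1.0, n))
    return np.array([
        np.linalg.eigvalsh(A + np.sqrt(t) * gue(n, rng))[-1]  # largest eigenvalue
        for _ in range(trials)
    ])

x = largest_eig_samples()
print(x.mean(), x.std())  # empirical location and spread of the fluctuations
```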
The remainder of this paper is organized as follows. Theorem 1.2 will be proved via a careful asymptotic analysis of the double contour integral (1.12). The main asymptotic contribution of the double contour integral will come from a neighborhood of x*_n, whose size ranges from n^{−1/3} for I_n ≫ n^γ for some γ > 0, up to n^{−1/4+ε} for I_n bounded. The slow Airy case of small growth 1 ≪ I_n ≪ n^γ for all γ > 0, as it is called later, is particularly delicate and requires working on two scales simultaneously. Section 2 prepares for the choice of the contour Γ in (1.12). This requires a good understanding of the behavior of the density ψ_{n,t_cr} around the merging point, which will be provided in Proposition 2.2. In Section 3, Theorem 1.2 is proven for the fast Airy case I_n ≫ n^γ for some γ and the slow Airy case; the Pearcey and transition cases of Theorem 1.2 are proven in Section 4. Proofs of Proposition 1.1, Corollary 1.4 and Corollary 1.6 are given in Section 5.
Acknowledgements:
The authors would like to thank Torben Krüger for valuable discussions. Partial support by the DFG through the CRC 1283 "Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications" is gratefully acknowledged.
Preliminaries
The proof of Theorem 1.2 relies on an asymptotic analysis of the rescaled correlation kernel Kn defined in (1.17) based on the representation as a double complex contour integral given in (1.12).In order to study the asymptotic behavior of the integral, it is crucial to make appropriate choices for both contours of integration x 0 + iR and Γ in (1.12), being particularly important in the neighborhood of critical points of the integrand which provide the main contributions to the limits.
For the choice of the w-integration contour Γ, up to a local modification around the points x*_n, we will use the graph of the function y_{t,µn} : R → R, defined by y_{t,µn}(x) := inf y > 0 : for t = t^E_n(τ_1) or t = t^M_n(τ_1), viewed as a curve in the complex plane, joined with its complex conjugate. The significance of this function stems from the fact that it is a multiple of the density ψ_{n,t} of the measure µ_n ⊞ σ_t under a change of variable. More precisely, Biane [8] has shown that where for every x and t > 0 the function x ↦ x̃_n(t) is a homeomorphism from R to R (for any t > 0 fixed). We note that x̃_n(t) coincides with the linear evolution x*_n(t) from (1.5) for x = x*_n if y_{t,µn}(x) = 0. Moreover, using these notions we can see that every boundary point of the support of µ_n ⊞ σ_t can be expressed as (x*_n(t_cr), t_cr) with t_cr = t_cr(x*_n). To this end, we recall that such a boundary point, say b, is characterized by ψ_{n,t}(b) = 0 together with ψ_{n,t} taking positive values on any neighborhood of b. Using the homeomorphism (2.2) we can write b = x̃_n(t) for some initial point x = x*_n, which implies y_{t,µn}(x*_n) = 0 and, using (2.1), we then have t = t_cr(x*_n). For our analysis of the double contour integral, we need to understand the behavior of the function y_{t,µn} in small neighborhoods of the points x*_n, or equivalently of the density ψ_{n,t} at x*_n(t), where t = t^E_n(τ_1) or t = t^M_n(τ_1). For the Pearcey and transition cases, in which we have t = t^M_n(τ_1), it suffices to know where the graph of y_{t,µn} enters the disk around x*_n of radius n^{-1/4+ε} for some small ε > 0, whereas for the Airy case, in which we have t = t^E_n(τ_1), we need to study the behavior of y_{t,µn} on the boundary of the disk around x*_n of radius slightly larger than (nG_n^{(2)})^{-1/3}. To address the problem of separation of the scales (nG_n^{(2)})^{-1/3} and n^{-1/4}, we will also need information on y_{t^E_n(τ_1),µn} on the larger scale n^{-1/4+ε}, where we recall that for the statement and the proofs of the first and the last part of Theorem 1.2 we focus on the case of positive values of G_n^{(2)}. Before we turn our attention to these points in depth in Proposition 2.2 below, for the convenience of the reader we first state the following elementary lemma giving the expansion of the Stieltjes transform G_µn in shrinking disks in the plane, which we will use frequently and which readily follows from standard arguments. For the definition of the coefficients G_n^{(j)} we refer to (1.14), and we consider Assumptions 1 and 2 valid throughout the section. Lemma 2.1. For every 0 < ε < 1/20 we have and 0 < ε < 1/20 we have We infer We now turn to a proposition providing information on the behavior of the density-describing function y_{t,µn} on small disks centered at the points x*_n. This information will be crucial for choosing convenient integration contours for our analysis later on. We will have to distinguish between a fast and a slow Airy case, depending on the speed of divergence of 2I_n to infinity. In the latter we need detailed information about the behavior of y_{t,µn} close to x*_n on two scales. More precisely, we will study the location of the graph of y_{t,µn} with respect to parts of the boundaries of two differently sized disks centered at x*_n, defined in the proposition below as S_j for j = 1, 2.
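Since the displayed definition of y_{t,µn} and the identity from Biane [8] did not survive into the text above, the following Python sketch spells out the standard characterization as an assumption: y_{t,µ}(x) = inf{y > 0 : ∫ dµ(s)/((x − s)² + y²) ≤ 1/t}, with x̃(t) = x + t Re G_µ(x + i y_{t,µ}(x)) and density y_{t,µ}(x)/(πt) of µ ⊞ σ_t at x̃(t). The atoms below are hypothetical.

```python
import numpy as np

def y_t(x, atoms, t, y_max=50.0, iters=60):
    # Assumed Biane-type definition: inf{ y > 0 : mean_j 1/((x - s_j)^2 + y^2) <= 1/t }.
    # The left-hand side is strictly decreasing in y, so bisection finds the infimum.
    f = lambda y: np.mean(1.0 / ((x - atoms) ** 2 + y ** 2)) - 1.0 / t
    if f(1e-12) <= 0:
        return 0.0                      # the constraint already holds near y = 0
    lo, hi = 1e-12, y_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return hi

def pushforward_and_density(x, atoms, t):
    # Point x_tilde(t) and the density of mu_n free-convolved with sigma_t there.
    y = y_t(x, atoms, t)
    re_G = np.mean((x - atoms) / ((x - atoms) ** 2 + y ** 2))
    return x + t * re_G, y / (np.pi * t)

atoms = np.concatenate([np.linspace(-3, -1, 50), np.linspace(1, 3, 50)])  # hypothetical mu_n
print(pushforward_and_density(0.0, atoms, t=4.0))
```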
We will see that for large values of n the graph of y_{t^E_n(τ),µn}(x) lies below S_1^{1/3} and above S_2^{1/3}, whereas the graphs of y_{t^M_n(τ),µn}(x) and y_{t^E_n(τ),µn}(x) enter the disk D_n near the boundary point with argument π/3 relative to x*_n and leave it near the boundary point with argument 2π/3, whenever the growth of the sequence is sub-polynomial; see Figure 3 below for a visualization.
(1) (Fast and slow Airy cases) Assume that we have n → +∞, as n → ∞, and let (r n ) n be a sequence of positive real numbers with (nG n ⊂ C be the closed disk centered at x * n with radius r n .For δ > 0 let S 1 3 1 (δ) be the subset of the boundary ∂D 2 (δ) be the subset of the boundary ∂D n consisting of all points w with Then for any δ > 0 small enough, n large enough and uniformly with respect to τ in compact subsets of R, the graph of the function x → y t E n (τ ),µn (x) (considered as a subset of the complex plane) lies for x ∈ R ∩ D 1 (δ) be the subset of ∂D n consisting of all points w with and let S 1 4 2 (δ) be the subset of ∂D n consisting of all points w with Then for every δ > 0 small enough, n large enough and uniformly with respect to τ in compact subsets of R, the graphs of the functions x → y t E n (τ ),µn (x) and x → y t M n (τ ),µn (x) lie for x ∈ R ∩ D n , embedded in C, above S 1 4 1 and below S 1 4 2 .In the statement on y t E n (τ ),µn we assume positive values of G n .
Proof of Proposition 2.2. In the first part of statement (1) we claim that the graph of y_{t^E_n(τ),µn} lies below the arc S_1^{1/3}. To show this, by (2.1) it is sufficient to show for all points x + iy ∈ S_1^{1/3} that the inequality holds for τ coming from a compact subset, if n is chosen large enough. In the other cases we follow a similar reasoning.
We begin with the proof of the first statement by observing the identity for y > 0 (see (2.1)) For a subsequent expansion we observe using the definitions of t cr , c 2 and t E n (τ ) from (1.6) and (1.15) that −G (1) n = as n → ∞, uniformly with respect to δ < θ < π 2 − δ and τ in compact subsets.It follows directly from our assumptions that as n → ∞ and thus we can conclude as n → ∞, uniformly with respect to δ < θ < π 2 − δ and τ in compact subsets.Hence, for every fixed 0 < δ < π 2 and τ coming from a compact subset, if n is chosen large enough we have for all points x + iy lying in S 1 3
In an analogous fashion we obtain for π 2 + δ < θ < π − δ the inequality , which is valid for all τ in a compact subset and all points x + iy lying in S 1 3 2 (δ) if n is large enough.Now, the statement about the graph of y t E n (τ ),µn follows directly from the definition of y t E n (τ ) in (2.1), proving part 1 of the proposition.For part 2, we first consider the behavior of the graph of n sin 3θ n sin 3θ as n → ∞, uniformly with respect to τ in a compact subset and in θ coming from the above union of intervals.It follows from the assumptions that
Proof of Theorem 1.2 for the Airy case
In this section we start the proof of Theorem 1.2.The first part deals with the Airy case, where we have to investigate the rescaled correlation kernel Here Kn,s,t is the gauged kernel Kn,s,t (x, y) = K n,s,t (x, y) exp(f n (t, y) − f n (s, x)), where the gauge factors f n are defined in (1.18), the kernel K n,s,t is given in (1.12), and We drop the superscript E in the time parameter throughout this section (as we only consider this parameterization).Moreover, we recall that in the Airy case we assume We observe first that it is sufficient to focus on the case τ 1 ≤ τ 2 , the convergence of the rescaled heat kernel part in (1.12) follows from a simple computation.Thus, we have to deal with the double complex contour integral appearing in (3.1), which we write as abbreviating the phase function of the rescaled kernel as with the log-transform g µn being defined in (1.13), and is the rescaled gauge factor from (1.18).
3.1.
Choice of contours and preparations.In order to prepare the expression (3.2) for the asymptotic analysis, in this subsection we are mainly concerned with an appropriate choice of the contours of integration.Due to the multiple n-dependence of the phase function φ n,τ this requires some care.Our analysis will not use the saddle points of φ n,τ explicitly, however it is instructive for the choice of contours to look at two facts that can be found in [17, Section 3.1]: the saddle points of the integrand (the critical points of the phase function φ n,τ ) in both of the variables z and ω lie on the graph of the function y tn(τ ),µn introduced in (2.1).Moreover, a further analysis of an extension of the homeomorphism in (2.2) to the complex plane reveals that these saddles are located in n-dependent vicinities of the point x * n .However, it is cumbersome to consider suitable descent or ascent paths in both variables for the phase function passing through the saddle points exactly.Therefore we will explicitly construct paths that pass by the relevant points close enough for the asymptotic analysis and have in particular an appropriate crossing behavior.
By analyticity, we change the contour in the z-variable in (3.2) to a contour Σ which is a straight vertical line with a local modification close to the point x * n .To this end, for some 0 < ε < 1 20 we define z n := x * n + n −1/4+ε e 7iπ 16 and where for points z 1 , z 2 ∈ C we denote by [z 1 , z 2 ] the straight segment from z 1 to z 2 and we use the notation (z 1 , z 2 ] and [z 1 , z 2 ) for the half-lines between z 1 and z 2 if z 1 or z 2 are points at infinity, respectively.We fix the orientation from bottom to top, see Figure 4 for a visualization, and we remark that the specific value 7π 16 is chosen so that we have n → ∞, as n → ∞.We will call the situation n 1/4 G (2) n n γ for some γ > 0 the fast Airy case, and the situation 1 n γ for all γ > 0 the slow Airy case.It is known (see [17,Lemma 3.3 (1)]) that the graph of x → y tn(τ 1 ),µn (x), embedded in the closed upper half plane of C, is for every u, τ 1 an ascent path for w → φ n,τ 1 (w, u) emanating from the saddle point located on this graph.As already indicated, avoiding convergence issues with the double contour integral in (3.2), for the global part of Γ we use the graph of y tn(τ 1 ),µn (x), but we will introduce local modifications in the vicinity of x * n .To motivate and prepare these modifications, we first apply Lemma 2.1 to the phase function φ n,τ 1 (w, u), giving via term-by-term integration n ⊂ C is the closed disk centered in x * n with radius n −1/4+ε and ε > 0 chosen sufficiently small as above.We see from (3.5) that φ n,τ 1 is, up to a small error, a quartic polynomial in w ∈ D 1/4 n , and we will see later that for w − x * n of the order r n larger than (nG (2) n ) −1/3 but smaller than n −1/4 , the third power term in (3.5) dominates.With regard to the envisaged limiting kernel in (1.1), this indicates that the asymptotic main contribution to the integral is coming from neighborhoods of x * n of size r n .
The contour Γ in the fast Airy case.We will start with the w-contour in the fast Airy case, in which we have n 1/4 G (2) n n γ for some γ > 0, meaning we can separate the two scales nG and n −1/4 by a suitable sequence r n in terms of a power of n.To this end, we choose such that we have (nG , and thus r n is bounded well away from both scales.Now we look at the closed disk centered at x * n of radius r n , denoted by D 1/3 n .From Proposition 2.2 we know that the graph of y tn(τ 1 ),µn enters D 1/3 n coming from the right at some point w 1,n about which we know that for any small δ > 0 for n sufficiently large we have 0 ≤ arg(w 1,n − x * n ) < δ.If there are several such points, we choose as w 1,n the left-most of them, and we choose from now on a fixed 0 < δ < π/6.
The contour Γ is now defined as follows: We start at a real point that lies to the right of the right-most edge of the support of y tn(τ 1 ),µn and right of D n .If we meet the support of y tn(τ 1 ),µn first, we follow this graph to the left to the point w 1,n .From w 1,n we go vertically down to the real line to the point w 1,n and then follow the line to x * n .In both cases, we go from x * n straight to w 2,n := x * n + r n e 2πi/3 ∈ ∂D 1/3 n .From w 2,n we then go vertically up to w 3,n := w 2,n + iy tn(τ 1 ),µn ( w 2,n ).
Note here that by Proposition 2.2 we have w 3,n ≥ w 2,n as the graph of y tn(τ 1 ),µn lies above this part of the disk D 1/3 n .The point w 3,n lies on the graph of y tn(τ 1 ),µn and we follow it to the left-most edge of the support of y tn(τ 1 ),µn .We close the contour by following the complex conjugate contour back to the starting point.The contour is depicted in Figure 5 in the more complicated case that y tn(τ 1 ),µn is non-zero also to the right of D 1/3 n in which case we may choose the starting point close or even at the right-most point of the support of y tn(τ 1 ),µn .
The contour Γ in the slow Airy case.In the slow Airy case, where 1 n γ for all γ > 0, we will work on both relevant scales.Let r n , ε > 0 be such that (nG The bound 1 20 is chosen in order to ensure that errors coming from (3.5) stay small later in the analysis of the remainder terms.Let D between these points.Moreover, they satisfy for every fixed 0 < δ < π/6 and n large enough Now, to define the w-contour Γ, we take the right-most point of the support of y tn(τ 1 ),µn and follow the graph of y tn(τ 1 ),µn to the left until we hit w 1,n .Note that such a point exists by Proposition 2.2 (2).From w 1,n we go down vertically to the point w 2,n determined by The relevance of w 2,n and in particular the angle π/7 is that w 2,n lies in a sector, bounded away from its boundaries, where the real part of w → (w − x * n ) 3 is positive and the real part of w → (w − x * n ) 4 is negative.From w 2,n we go straight to w 3,n := x * n + r n e iπ/7 ∈ ∂D 1/3 n .
From w 3,n we go down vertically to w 3,n and then follow the real line to x * n .From x * n we go straight to and then go straight to w 5,n .From w 5,n on, we follow the graph of y tn(τ 1 ),µn to its left-most support point and from there we close the contour by following its complex conjugate back to x n .We remark that it would also be sufficient to directly go straight from w 1,n to w 3,n but the detour via w 2,n makes the subsequent analysis more transparent.The contour Γ is shown in Figure 6.
Splitting the kernel.Let us introduce some notation in further preparation of the proof of Theorem 1.2 in the Airy case, so that we can treat both fast and slow Airy cases together as closely as possible.Using the above constructed paths, we now have where φ n,τ has been introduced in (3.3) and the rescaled gauge functions f * n (τ, u) have been defined in (3.4).We remark that the two contours Σ and Γ now intersect at x * n and thus violate the non-intersection condition imposed for (1.12).It is however easily seen that thanks to the explicit crossing behavior of the contours, the singularity at x * n is integrable and the representation is readily shown to be valid by taking a suitable limit.The leading contribution in the limit n → ∞ will be provided by the double integral restricted to the parts of the contours that lie inside of D 1/3 n .To define this, we set We will see below that the main contribution of (3.8) comes from We split the remaining part of the kernel into Γout dw e φn,τ 2 (z,v)−φn,τ 1 (w,u) z − w , (3.10) hence, we have
Local analysis of the main part.
In this subsection we will prove, simultaneously for the fast and slow Airy cases, that the kernel part K (1) n,τ 1 ,τ 2 gives the Airy kernel in the large n limit.Proposition 3.1.We have for every σ ≥ 0 u+v) ), n → ∞, where the o-term is uniform for u, v ∈ [−M, M rn ] with rn := r n (nG (2) n ) 1/3 , (3.12) for fixed M > 0 and uniform for τ 1 , τ 2 in compacts, and r n is given in (3.6). If n γ for some γ > 0, then the o-term can be replaced by O(e −σ(u+v) n −ε 0 ), for some ε 0 > 0, and the convergence is uniform for u, v ∈ [−M, M n ε 0 ] for every fixed M > 0 .
Proof.As the parts K (1) n,τ 1 ,τ 2 (u, v) are very similar in both Airy cases, we will give the details of the proof for the statement in the slow Airy case.The statement about the error terms in the fast Airy case, in which we have r n = (nG , is then straightforward to validate.In the treatment of the slow case, we first recall that r n is an arbitrary sequence satisfying nG with rn defined in (3.12).We note that all error terms will be uniform for τ 1 , τ 2 in compacts, and wherever not stated explicitly, the O, o-terms are to be understood with respect to n → ∞.
We begin by making the change of variables inside the disk D , ω := . Now, using this together with expansion (3.5), we obtain after some algebra where the o-term is uniform for ζ, ω ∈ D1/3 n and does not depend on u, v and the O-term does not depend on ζ, ω.Furthermore, it is readily seen that
Hence, defining
with the O, o-terms from above, we have after the change of variables in (3.9) ζ − ω . (3.15) In the last expression we denote ΣAi := (∞e − 7iπ 16 , 0] ∪ [0, ∞e 7iπ 16 ), (3.16) the contour Γ Ai has been defined following (1.1), and we set with j = 3 as we are in the slow Airy case (and it would be j = 1 in the fast Airy case).We remark that the integral over the rescaled version of [ w j,n , x * n ] in (3.15) is zero as it appears twice with opposite orientations and thus cancels out.
We show next that the contour Γ right can be removed from (3.15) at the expense of a small error.To see this, we observe that Γ right lies entirely in a sector where ω 3 > 0 and is bounded well away from its boundaries.Moreover, we have dist(Γ right , 0) → ∞, and these two properties imply that we have for some c > 0 and n large enough the bound .
In words, we connect the two rays of ΣAi or Γ Ai not at 0 but with a vertical segment with real parts σ and −σ, respectively, thereby achieving where we keep the bottom-to-top orientation of the contours.Using analyticity we can replace ΣAi and Γ Ai by their modifications.This gives uniformly with respect to u, v ∈ where we used h n = o(1), and the boundedness of the integral |ζ − ω| in n as long as u, v are bounded below.Summarizing, we have u+v) ).
In the last step we replace the n-dependent contours ΣAi ∩ D1/3 n and Γ Ai ∩ D1/3 n by their limiting contours ΣAi and Γ Ai as n → ∞ at the expense of a small error.To see this, we estimate the modulus of the integral ζ − ω by modifying the contour Γ Ai to Γ Ai σ first, and then we use the following estimates (for some c > 0): and bounded τ 2 , and for ω ∈ Γ Ai σ , u ≥ −M and |τ 1 | ≤ M .Then we have for some constants c , c > 0 The remaining double integral over ΣAi ∩ D1/3 n can be estimated similarly.Finally, by analyticity we may deform ΣAi into Σ Ai as defined following (1.1).This finishes the proof of the proposition.
3.3.
Analysis of remaining parts.The aim of this section is to see that the remaining kernel parts K n,τ 1 ,τ 2 , j = 2, 3 from (3.10) and (3.11) are asymptotically negligible.Proposition 3.2.We have for every σ ≥ 0 rn ] for fixed M > 0 and rn from (3.12), and uniform for τ 1 , τ 2 in compacts. If n γ for some γ > 0, then the o-term can be replaced by O(e −n d −σ(u+v) ), for some d > 0, and the convergence is uniform for u, v ∈ [−M, M n ε 0 ] for fixed M > 0 and some ε 0 > 0.
Proof.We will give a detailed proof for j = 2 in the more complicated slow Airy case, in which we have 1 n γ for all γ > 0, and we will afterwards indicate which modifications are needed for the fast Airy case.We also recall that we assume τ 2 ≥ τ 1 and 0 < ε < 1 20 .We first split the contours of integration in (3.10) further into n-dependent sub-contours We will now show that the corresponding double integrals are negligible.
Negligibility of the integral over
The contour Γ out,1 consists of the three segments [w 1,n , w 2,n ], [w 2,n , w 3,n ], [w 4,n , w 5,n ] and their complex conjugates, where these points have been defined following (3.7).As the conjugated parts can be treated similarly, we will focus on the integrals over Σ ∩ D with ΣAi from (3.16) and we know about d n (based on Proposition 2.2) that d n / dn is bounded away from 0 and infinity.
Similarly to (3.14), a computation using expansion (3.5) yields for z, w ∈ D with the o-term being uniform in ζ, ω ∈ D1/4 n and independent of u, v.Here we make use of the assumption ε < 1 20 in order to control the error terms, in particular it ensures that the remainder term we have ζ 4 > 0 and ζ 3 < 0 and for w ∈ [w 2,n , w 3,n ] we have ω 4 < 0 and ω 3 > 0.Moreover, for w ∈ [w 2,n , w 3,n ] we have ω → +∞ as n → ∞.Therefore, there is some small c > 0 such that for any u ∈ [−M, M rn ] we have This enables us (recalling that G n < 0) to deduce that there is some Now, after adjusting the constant C, the estimate (3.20), valid for ζ ∈ ΣAi ∩{|ζ| ≤ dn } and ω ∈ Γ, remains valid if we replace ΣAi ∩ {|ζ| ≤ dn } by its σ-modification ΣAi σ ∩ {|ζ| ≤ dn } from (3.17).This follows from the observation that the differing parts are bounded and the coefficient of ζ 4 in expansion (3.19) converges to zero, as n → ∞.We infer from this where we changed the contour ΣAi ∩ {|ζ| ≤ dn } by analyticity to its σ-modification before taking the absolute values inside the integrals.We can now argue that is finite and bounded for v ≥ −M , where Γ∞ := [0, ∞e ) and we use that on Σσ we have (ζ − σ) ≥ 0.Moreover, for |ζ − ω| we have lim n→∞ J n = J, from which we conclude that holding uniformly with respect to v.This in turn implies that (3.21) converges to 0 for n → ∞.
By similar reasoning, we can show that the double integral over Σ ∩ D is negligible: Since for w ∈ [w 4,n , w 5,n ] we also have ω 4 < 0 and ω 3 > 0, we can include the transformed variant of [w 4,n , w ) as a vanishing tail piece of a contour Γ∞ such that the integral I from (3.22) (with this new Γ∞ ) is finite.Then the same argument as above yields the desired estimate.
For the segment [w 1,n , w 2,n ], a different argument is needed as ω 3 changes signs on it.Briefly speaking, it will rely on the fact that [w 1,n , w 2,n ] lies in a region where the quartic term dominates the ω-terms in the expansion (3.19) and we have ω 4 < 0. To make this precise, we note first that for some c > 0 we have dist([w 1,n , w 2,n ], for some c > 0, u ∈ [−M, M rn ] and n large enough.As we are in the slow Airy case, in which n is bounded by n γ for every γ > 0, we thus obtain for some c > 0 we have for some C > 0, all (z, w) This exponential decay is clearly enough to show (using |z − w| ≥ C n −1/4+ε for some C > 0 and the lengths of the contours being O(n −1/4+ε )) that the double integral over Negligibility of the integral over Σ\D for some c > 0. We will now show that (3.24) can be extended to all z ∈ Σ\D along Σ.To see this, let α, β ∈ R and compute ∩ {| z| < n}.We thus have sufficiently fast nential decay of the integrand on the contours, and together with the polynomial lengths of the contours and |z − w| ≥ c n −1/4+ε for some c > 0, this yields with the standard estimate the negligibility of the integral over Σ\D Negligibility of the integral over (Σ ∩ {| z| ≥ n}) × Γ out,1 : In this case the z-contour is unbounded and we will use sub-Gaussian behavior of φ n,τ 2 on Σ to obtain the decay.Indeed, from the definition of (3.3) it is readily seen that the quadratic term eventually dominates the logarithmic one.To quantify this, we first consider the logarithmic term We can interpret g µn (x * n ) as a "normalization" to compensate for the potentially large parts of g µn (z) that occur if some initial points X j (0) go to ±∞ sufficiently fast.For z ∈ Σ we have for some C > 0, independent of large n.For the quadratic part of φ n,τ 2 we have for n large enough for some c > 0. Integrating this along Σ ∩ {| z| ≥ n} gives an integral of order e −Cn 3 for some C > 0. Taking into account that on Γ out,1 we have , uniformly with respect to bounded τ 1 , τ 2 and u, v ∈ [−M, M rn ], the negligibility of the double integral over (Σ ∩ {| z| ≥ n})×Γ out,1 now follows immediately by taking absolute values inside the integral and a decoupling of the integrals.
Biane has shown in [8,Corollary 3] that H is a homeomorphism on R, in particular H(α) ∈ R, and it is not hard to see that lim α→±∞ H(α) = ±∞.Defining ᾱ as the unique (saddle) point for which H(α) = x * n (t n (τ 1 )) + u c 2 n 2/3 holds, this implies that the derivative in (3.28) is negative for α < ᾱ, 0 for α = ᾱ and positive for α > ᾱ.Recall that by Proposition 2.2 the part of the graph of y tn(τ 1 ),µn between w 1,n and w 5,n lies entirely in D 1/4 n .Going back to (3.5), we see that the exponential decay gets slower as w moves from w 1,n or w 5,n closer to x * n , meaning that the point ᾱ + iy tn(τ 1 ),µn (ᾱ) lies in the interior of D 1/4 n .This implies that − φ n,τ 1 (w, u) decays even further as w moves from w 1,n to the right or from w 5,n to the left and hence inequality (3.27) extends to Γ out,2 .
The second ingredient we need is a bound on the relevant length of the contour Γ out,2 .To this end, we make the following Claim: The arc length of the graph of x → y tn(τ 1 ),µn (x), restricted to the support of y tn(τ 1 ),µn , is O(n 4 ), uniformly for τ 1 in compacts.
We note that the restriction to the intervals where y tn(τ 1 ),µn (x) > 0 is crucial.If some X j (0) tends to ±∞ as n → ∞, then the length of Γ out,2 goes to infinity with n, the speed of this divergence depending on the speed of the divergence of the X j (0).However, as Γ out,2 ∩ R is passed through in both directions, these integrals cancel and do not need to be considered.
To prove the claim, we note first that by definition of y tn(τ 1 ),µn in (2.1), we have y tn(τ 1 ),µn (x) = 0 if dist(x, supp µ n ) ≥ t n (τ 1 ).Thus the support of y tn(τ 1 ),µn consists of at most n intervals with total length 2n t n (τ 1 ).Each such interval can now be split into intervals of monotonicity, i.e. intervals on which y tn(τ 1 ),µn increases or decreases.It has been shown in the proof of [16,Lemma 2.1] that there are at most 4n 2 (2n − 1) + 1 such intervals of monotonicity.On each interval of monotonicity I we can bound the arc length of the graph of y tn(τ 1 ),µn by L(I) + t n (τ 1 ), where L(I) denotes the length of I and we used the fact that by definition y tn(τ 1 ),µn (x) ≤ t n (τ 1 ).With the crude bound L(I) ≤ 2n t n (τ 1 ) the claim follows.
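Combining the counts from the preceding paragraph into one display (a worked version of the claimed bound, with t_n(τ_1) bounded for τ_1 in compacts):

arc length ≤ (4n²(2n − 1) + 1) · (L(I) + t_n(τ_1)) ≤ (4n²(2n − 1) + 1) · (2n + 1) · t_n(τ_1) = O(n⁴).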
With this we now have all the ingredients to complete the proof of the negligibility of the integral over Σ × Γ_{out,2}, which is the last step in the proof of Proposition 3.2 for j = 2 in the slow Airy case: on (Σ ∩ {|z| < n}) × Γ_{out,2}, we have (relevant) contour lengths that are polynomial in n. Hence, by decoupling (via bounding |z − w| below) and using (3.27), this proves that the double integral is O(e^{-c n^{4ε}}) for some c > 0. On (Σ ∩ {|z| ≥ n}) × Γ_{out,2}, however, we can use the monotonicity argument following (3.28) to extend the analysis following (3.26), and the sub-Gaussian decay can be employed again. This finishes the proof for j = 2 in the slow Airy case.
Let us comment on the remaining cases.The case j = 3 can be proved using analogous arguments to the case j = 2.The difference between the analysis performed in detail here and the fast Airy case is that in the contour Γ, the local modification around x * n essentially connects with the graph of y tn(τ 1 ),µn already at ∂D n .This makes the proofs conceptionally easier, as the contour part Γ out,1 is not present, and secondly, there is faster decay of the phase function already at ∂D 1/3 n , meaning that the same arguments as used here for discussing Γ out,2 can then be employed, leading to the specified error terms.
Proof of Theorem 1.2 for the transition and Pearcey cases
In this section we turn to the proofs of parts two and three of Theorem 1.2.In order to avoid repetition, we will essentially prove them together, while also relying on the knowledge gained in the proofs in Section 3. To this end, we will investigate the differently rescaled correlation kernel where we recall that Kn,s,t is the gauged kernel Kn,s,t (x, y) = K n,s,t (x, y) exp(f n (t, y) − f n (s, x)), the gauge factors f n are defined in (1.18) and the kernel K n,s,t is given in (1.12).The constants are where we drop the superscript M in the time parameter throughout this section (as we only consider this parameterization here).Moreover, we assume that n 2 is a bounded sequence.
As in the treatment of the Airy case we observe that it is sufficient to focus on the case τ 1 ≤ τ 2 , as the convergence of the rescaled heat kernel part in (1.12) follows from a simple computation.Thus, we have to deal with the double complex contour integral appearing in (4.1), which we write as abbreviating the phase function of the rescaled kernel again as are the adjusted gauge factors from (1.18).
Choice of contours and preparations.For the contour in the z-variable we choose the straight line Σ := x * n + iR, with orientation from bottom to top, whereas for the wcontour Γ special emphasis needs to be given to a n −1/4 -neighborhood of x * n .As before, let D n in between.With this, we now define the contour Γ: we start at a real point that lies to the right of the support of y tn(τ 1 ),µn and to the right of D n and follow the graph of y tn(τ 1 ),µn to the left until we arrive at the point w 1,n .From w 1,n we move on a straight line segment to x * n and from x * n we move on a straight line segment to the point w 2,n .From here we continue to follow the graph of y tn(τ 1 ),µn again to the left until we stop at a real point to the left of the support of y tn(τ 1 ),µn .Now we close the contour by joining it with the complex conjugate path so that the resulting contour Γ has a counter-clockwise orientation (see Figure 7).Now denote by Γ in the part of Γ located inside the disk D n and by Γ out the remaining part, i.e.
Analogously, we set Σ in := [x * n − in −1/4+ε , x * n + in −1/4+ε ] and Σ out := Σ \ Σ in .We use these contours in (4.2),where we remark that the singularity at x * n is again integrable, and we split up the rescaled kernel into the following parts: Local analysis of the kernel.In this subsection, we analyze the main part of the kernel K n,τ 1 ,τ 2 and show that it is asymptotically close to the transition kernel K an τ 1 ,τ 2 , where we recall that for n ≥ 1 Proposition 4.1.For 0 < ε < 1 24 we have uniformly for u, v, τ 1 , τ 2 in compacts as n → ∞.
Proof.We note that all error terms will be uniform for u, v, τ 1 , τ 2 in compacts, and wherever not stated explicitly, the O-terms are to be understood with respect to n → ∞.
Under the change of variables the disk D for an arbitrary constant K > 0 where we used ε < 1 24 to control the error term.Now, an elementary calculation using the definition of a n and c 3 leads to where the O-term is uniform for |ζ|, |ω| ≤ Kn ε for K > 0 (and u, v, τ 1 , τ 2 in compacts), and we also used Furthermore, we readily see
Now we define
with uniformity of the O-term as above, and performing the change of variables in both variables (4.6) gives us for the part (4.3) where Σ and Γ are the contours Σ in and Γ in under the change of variables (4.6), meaning that Σ = − in ε c 3 tcr , in ε c 3 tcr and Γ In order to deal with the function h n , we again use the inequality and obtain where we used h n = O(n −ε ) uniformly in all relevant variables, and the boundedness of the integral |ζ − ω| in n, which relies on the boundedness of a n and that the contours Σ and Γ lie in the regions where ζ 4 and ω 4 are positive and negative, respectively.Summarizing, we have uniformly Now, applying standard arguments we may replace the n-dependent contours Σ and Γ by their unbounded extensions at the expense of an exponentially small error, and finally, using analyticity, the ω-contour can be deformed to Γ P .4.1.Analysis of remaining kernel parts.In this subsection, we will show that the kernel parts K (j) n,τ 1 ,τ 2 for j = 2, 3 from (4.4) and (4.5) are asymptotically of exponentially small order.Proposition 4.2.We have n → ∞, for some D > 0 uniformly for τ 1 , τ 2 , u, v in compacts.
Proof.As the statements in (4.10) can be derived in the vein of the proof of Proposition 3.2, to keep it concise we will give the main steps, indicate the differences and how they can be dealt with.
Estimation of the kernel for j = 2: We first decouple the integral into two single ones via the simple bound (for some constant for some C > 0, uniformly for large n with respect to bounded u, v, τ 1 , τ 2 , where we recall that w i,n , i = 1, 2 are the final entrance and first exit points of x → y tn(τ 1 ),µn (x) into (out of) the disk D n .This estimate can be extended to Σ × Γ out along the lines extending (3.27), using the properties of Σ and Γ out being descent and ascent paths for the phase function.
Next, we split the contour Σ into a finite part Σ ∩ {|z| ≤ n} and an infinite part Σ ∩ {|z| > n}.The part of K (2) n,τ 1 ,τ 2 over Σ ∩ {|z| ≤ n} × Γ out is now readily seen to be of asymptotic order O(e −n D ) for some D > 0, if we use (4.11), (4.12), together with the bound on the lengths of the relevant contours.For the integral over Σ ∩ {|z| > n} × Γ out we first observe sub-Gaussian decay for z ∈ Σ ∩ {|z| > n} for some c > 0, for large n, uniformly in bounded τ 2 , v. Furthermore, by (4.7) we know for w ∈ {w 1,n , w 2,n } uniformly in bounded τ 2 , v, which extends to w ∈ Γ out using the same argument as in the extension of (4.12).Now, by decoupling via (4.11), using (4.13), (4.14) and the bound on the length of Γ out (which is the same as in the Airy case) shows that the integral Σ ∩ {|z| > n} × Γ out is of order O(e −n D ) for some D > 0, which proves the statement for j = 2.
Estimation of the kernel for j = 3: In this last case we have to deal with the double integral in (4.5) over Σ out ×Γ in .To this end, we split the contour Σ out again into a finite part Σ out ∩ {|z| ≤ n} and an infinite part Σ out ∩ {|z| > n}, and we observe that we can decouple the double integral the same way as in (4.11).Then, for the integral over Σ out ∩ {|z| ≤ n} × Γ in , we use the bound (4.12), this time extended to the range Σ out ∩ {|z| ≤ n} × Γ in .Taking into account that the lengths of the contours grow polynomially, this shows that this part is of order O(e −n D ) for some D > 0. For the integral over Σ out ∩ {|z| > n} × Γ in we use the sub-Gaussian estimate (4.13) for z ∈ Σ out ∩ {|z| > n}, together with the fact that for w ∈ Γ in we have the estimate (4.14), this shows that the integral is order O(e −n D ) for some D > 0. as n → ∞, uniform for u, v, τ 1 , τ 2 in arbitrary compact subsets of R, the latter following from (4.9).Taking the limit a → ∞ readily transforms the integrand into the integrand of the extended Airy kernel in (1.1).However, before taking the limit we have to take care of the contours of integration, which we will do using Cauchy's Theorem.To this end, first we bend the vertical line iR at the origin so that the ζ variable is integrated along the new contour σ consisting of the two rays from ∞e −iπ 7 16 to 0 and from 0 to ∞e +iπ 7 16 .Next, for the integration with respect to the ω variable, we recall that Γ P consists of four rays, two from the origin to ±∞e −iπ/4 and two from ±∞e iπ/4 to the origin.We leave the two rays on the left-hand side of the complex plane untouched, but we deform the ray in the first quadrant into the ray from ∞e iπ/7 to 0 and the ray in the fourth quadrant into the ray from 0 to ∞e −iπ/7 .The resulting set of rays we denote by γ.All these deformations ensure that we stay in sectors of exponential decay of the integrand for all a ≥ 0 as well as in the limit.In this form we can take a → ∞, which gives the limit Next we observe that the two rays of the contour γ lying on the right-hand side of the complex plane do not give any contribution to the integral, as we can fold them onto the positive real axis so that their contributions will cancel out.Finally, we can again use Cauchy's Theorem to deform the remaining rays into the shape of Σ Ai × Γ Ai , which shows the desired limit.For part (3), note that I µ (x * ) = 0 implies (1.26).Choose a sequence of starting configurations (µ n ) n such that we have d(µ n , µ) = O(n −1 ) and lim inf n→∞ dist(x * , supp(µ n )) > 0. Let x * n be the point (1.20) leading to the merging point w.r.t.µ n σ t .From d(µ n , µ) = O(n −1 ) and lim inf n→∞ dist(x * , supp(µ n )) > 0 we conclude that G µn (x * n ) − G µ (x * ) = O(n −1 ), t cr (x * n ) = t cr,µ (x * ) + O(n −1 ) and thus the two merging points (x * µ (t cr,µ ), t cr,µ ) w.r.t.µ σ t and (x * n (t cr ), t cr ) w.r.t.µ n σ t have a spatial and temporal distance of order O(n −1 ).Now we modify µ n by increasing the gap between the initial eigenvalues in µ n around x * symmetrically, which increases the critical time of the merging point w.r.t.µ n σ t .On the other hand, shifting all initial eigenvalues by some positive constant δ n , i.e. replacing X j (0) by X j (0) + δ n , shifts the support of µ n σ t as well.Using both principles, i.e. 
increasing the gap and shifting the initial spectrum, we can arrange that the µ σ t -merging point (x * µ (t cr,µ (x * )), t cr,µ (x * )) is a boundary point of the support of µ n σ t .Note that increasing the gap and shifting the initial spectrum lead to a deterioration of the rate d(µ n , µ).In fact, we can prescribe any rate r n = o(1), not faster than O (n −1 ), and find a µ n such that d(µ n , µ) ∼ r n , n → ∞, and the µ-merging point lies on the boundary of the support of µ n σ t .Now, in order to construct a desired sequence (µ n ) n , we can choose d(µ n , µ) to be of order n −1/4 and let x * n be the initial point leading to the point (x * µ (t cr,µ (x * )), t cr,µ (x * )) in the µ n σ t -evolution.Then we have I n (x * n ) ∼ a for some a > 0, and in fact we can arrange this for every prescribed a > 0. Now we have limiting transition correlations with parameter a := (−G µ (x * )) −3/4 a by Theorem 1.2.The spatial shift by bτ 1 and bτ 2 , respectively, can be seen as follows.We must have (5.1) also in this case.Expliciting this and using that
might not be finite, which are in turn needed for defining c_2, c_3 and I_n.
Figure 2. Sample paths of n = 300 NIBM (l.h.s.) started from (modified) quantiles of a probability measure with density given on the r.h.s. For x*_n = 0 the evolution (1.5) at critical time t_cr(0) ≈ 1.61 leads to correlations described by the transition kernel. Starting at x*_n = ±3 leads to Airy correlations, whereas starting at suitable points ≈ ±1.46 leads to Pearcey correlations.
For any fixed 0 < ε < 1/20, C > 0 and n large enough, by Assumption 2 we have analyticity of the Stieltjes transform G_µn in an open (complex) disk centered at x*_n with a radius Cn^{-1/5} for some C > 0. Hence, for all z with |z − x*_n| ≤ Cn^{-1/4+ε}
centered at x*_n from the right around x*
consisting of all points w with arg(w − x*_n)
(Slow Airy, Pearcey and transition cases) Assume that for every γ > 0 we have n^{1/4}G ⊂ C be the closed disk centered at x*_n with radius n^{-1/4+ε}. For δ > 0 let S^{1/4}
) valid uniformly in w ∈ D^{1/4}_n
position of the starting point does not matter). We then follow the real line to the left until we either meet D^{1/3}_n or the support of the graph of y_{t_n(τ_1),µn}, that is, the point at which the graph becomes positive. If we meet D^{1/3}_n first, we follow the real line further to x*
Figure 5. Choice of contour Γ in the fast Airy case.
Figure 6. Choice of contour Γ in the slow Airy case.
"Mathematics"
] |
Identification of the high-yield monacolin K strain from Monascus spp. and its submerged fermentation using different medicinal plants
Background Medical plants confer various benefits to human health and their bioconversion through microbial fermentation can increase efficacy, reduce toxicity, conserve resources and produce new chemical components. In this study, the cholesterol-lowering monacolin K genes and content produced by Monascus species were identified. The high-yield monacolin K strain further fermented with various medicinal plants. The antioxidant and anti-inflammatory activities, red pigment and monacolin K content, total phenolic content, and metabolites in the fermented products were analyzed. Results Monacolin K was detected in Monascus pilosus (BCRC 38072), and Monascus ruber (BCRC 31533, 31523, 31534, 31535, and 33323). It responded to the highly homologous mokA and mokE genes encoding polyketide synthase and dehydrogenase. The high-yield monacolin K strain, M. ruber BCRC 31535, was used for fermentation with various medicinal plants. A positive relationship between the antioxidant capacity and total phenol content of the fermented products was observed after 60 days of fermentation, and both declined after 120 days of fermentation. By contrast, red pigment and monacolin K accumulated over time during fermentation, and the highest monacolin K content was observed in the fermentation of Glycyrrhiza uralensis, as confirmed by RT-qPCR. Moreover, Monascus-fermented medicinal plants including Paeonia lactiflora, Alpinia oxyphylla, G. uralensis, and rice were not cytotoxic. Only the product of Monascus-fermented G. uralensis significantly exhibited the anti-inflammatory capacity in a dose-dependent manner in lipopolysaccharide-induced Raw264.7 cells. The metabolites of G. uralensis with and without fermentation (60 days) were compared by LC/MS. 2,3-Dihydroxybenzoic acid, 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate were considered to enhance the antioxidant and anti-inflammatory ability. Conclusions Given that highly homologous monacolin K and citrinin genes can be observed in Monascus spp., monacolin K produced by Monascus species without citrinin genes can be detected through the complementary methods of PCR and HPLC. In addition, the optimal fermentation time was important to the acquisition of antioxidants, red pigment and monacolin K. These bioactive substances were significantly affected by medicinal plants over fermentation time. Consequently, Monascus-fermented G. uralensis had a broad spectrum of biological activities. Supplementary Information The online version contains supplementary material available at 10.1186/s40529-022-00351-y.
Information, 19 typical Monascus species have been identified. Monascus purpureus, Monasucs pilosus, and Monascus ruber are frequently used in research. Many pigments, such as rubropunctatin, monascorubrin, rubropunctamine, monascorubramine, monascin, and ankaflavin, are detected in Monascus species . They are important food colorant additives. Moreover, the pharmacological efficacy of Monascus species has attracted considerable interest for the improvement of health in lipid metabolism. Monacolin K, also known as lovastatin, has a cholesterol-lowering effect and can be found in Monascus species (Yanli and Xiang 2020). In addition to lowering lipid level, monacolin K prevents many diseases, such as colon, gastric, breast, lung, and thyroid cancer; acute myeloid leukemia; Parkinson's disease; schizophrenia; depression; and type I neurofibromatosis (Chen et al. 2015;Hong et al. 2008;Lin et al. 2015;Xiong et al. 2019;Zhang et al. 2019b). Therefore, monacolin K production through the addition of linoleic acid, non-ionic surfactant, and glutamic acid has been extensively explored Yang et al. 2021;Zhang et al. 2019a). A monacolin K biosynthetic gene cluster containing nine genes (mokA-mokI) is homologous to the lovastatin gene cluster from Aspergillus terreus (Chen et al. 2008b). Monacolin K production by Monascus pilosus can be enhanced by mokH overexpression and up-regulation of monacolin K biosynthetic genes (Chen et al. 2010b).
Probiotic-fermented food offers health benefits through the bioconversion of dairy products (Lee et al. 2021). Rice or different grains are generally used as substrates for Monascus fermentation to improve secondary metabolites. Extracts from Monascus-fermented soybean have antioxidant capacities and inhibitory activities against enzymes related to skin aging (Jin and Pyo 2017). The antioxidant activities of scavenging free radicals have been observed in Monascus-fermented coix seed (Zeng et al. 2021). Fish bone as an antioxidant-active peptide source can be fermented by M. purpureus, which increases 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2, 2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical scavenging ability . Additionally, Monascus-submerged fermentation using agro-industrial residues, such as rice flour and molasses, is a promising method for red pigment production (Da Silva et al. 2021). Different cereal substrates through the solid-state fermentation of M. purpureus have been developed for pigment production (Srianta et al. 2016). Besides, millet is a good candidate for monacolin K production using Monascus species (Maric et al. 2019;Zhang et al. 2018a).
Medicinal plants offer various benefits to human health. Medicinal plant bioconversion through fermentation can enhance medicinal effects by increasing efficacy, reducing toxicity, conserving resources, and producing new chemical components (Li et al. 2020a). Microbes used in the fermentation of medicinal plants include Bacillus, lactic acid bacteria, yeast, and filamentous fungi. By contrast, natural fermentation without additional microbes is a common method but less effective and specific than other known methods. In this study, a high-yield monacolin K strain from 18 Monascus spp. was screened and identified. Subsequently, medicinal plants for treating diarrhea, dementia, pain, cold, inflammation, and immune disorders were used as substrates for Monascus fermentation. The fermentation products were used in analyzing antioxidant and anti-inflammatory activities, red pigment, monacolin K, and total phenolic content. Metabolites from the fermentation product with the best efficacy were further analyzed through LC/MS.
Monascus spp., medicinal plants, and growth conditions
Monascus spp. listed in Table 1 were used in this study. To identify a high-yield monacolin K strain, a medium containing 7% glycerol, 3% glucose, 3% monosodium glutamate, 1.2% polypeptone, 0.2% NaNO3, and 0.1% MgSO4·7H2O was used for incubation of Monascus spp. Mycelia were harvested after 14 days of cultivation at 25 ℃ for DNA manipulation. Medicinal plants including Angelica pubescens, Pogostemon cablin, Paeonia lactiflora, Alpinia oxyphylla, Melaleuca leucadendron, Lavandula angustifolia, Osmanthus fragrans, Glycyrrhiza uralensis, Phellodendron chinense, and rice as the control were used as fermentation materials (Beijing Tongrentang Traditional Chinese Medicine Co., Ltd, Chengdu, China). Two grams of each medicinal plant, ground into a powder and used as the sole substrate, were combined with 50 mL of reverse osmosis water and sterilized at 121 ℃ for 30 min. Spores of Monascus spp. harvested from potato dextrose agar plates were added to the medicinal plant liquids for submerged batch fermentation. The inoculated cultures were left to stand at 25 ℃ for 60 and 120 days. After 60 and 120 days of submerged fermentation, the culture was centrifuged and filtered, and the suspension was freeze-dried. The freeze-dried powders were used for the bioactive assays.
DNA manipulation
Approximately 0.5 g of Monascus mycelia was ground with a mortar and pestle in liquid nitrogen. DNA was extracted with phenol and chloroform and precipitated with isopropanol, and the DNA was finally dissolved in TE buffer. PCR was performed according to the conditions of Chen et al. (2008a). The primer sets for the mokA and mokE genes involved in monacolin K biosynthesis were mokA-F, ATC ATT CTT TCC NCGC TCC A; mokA-R, CGG GCT ATT GTC GGC CAT AG; mokE-F, GTG GTG GAC TCG ACG TTG GT; and mokE-R, TTC TCG CAG TAC ACG GTC AC. PCR was carried out on an ABI 2700 thermocycler (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA) with the following program: 1 cycle of 96 ℃ for 5 min; 30 cycles of 96 ℃ for 1 min, 50 ℃ for 1 min, and 72 ℃ for 1 or 2 min; and a final extension at 72 ℃ for 10 min. The PCR products were recovered from agarose gels for DNA sequencing.
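For reference, the primer pairs and the thermocycling program described above can be collected as plain data; this is only a transcription of the stated protocol, not software for any particular instrument.

```python
# Primer pairs for the monacolin K biosynthetic genes (5'->3'), as listed above.
primers = {
    "mokA": {"F": "ATCATTCTTTCCNCGCTCCA", "R": "CGGGCTATTGTCGGCCATAG"},
    "mokE": {"F": "GTGGTGGACTCGACGTTGGT", "R": "TTCTCGCAGTACACGGTCAC"},
}

# Thermocycling program: (temperature in deg C, duration in seconds);
# the extension step was 1 or 2 min depending on the amplicon (see text).
program = {
    "initial_denaturation": (96, 300),
    "cycles": 30,
    "per_cycle": [("denature", 96, 60), ("anneal", 50, 60), ("extend", 72, 60)],
    "final_extension": (72, 600),
}
```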
Monacolin K detection by HPLC
The Monascus spp. fermentation culture was filtered through a 0.2 μm filter. Monacolin K was determined by high-performance liquid chromatography (HPLC) (Shimadzu, Kyoto, Japan) fitted with a reverse-phase C18 column (InertSustain, 5 μm, 4.6 × 150 mm) (GL Sciences Inc., Tokyo, Japan). The mobile phase consisted of 35% 0.1% phosphoric acid in water and 65% methanol at a flow rate of 1 mL/min. Monacolin K was detected by UV spectroscopy from 210 to 400 nm.
Antioxidant analysis of fermentation product
The radical scavenging assay of DPPH and ABTS was used to estimate the antioxidant capacity of the fermentation product. The fermentation product of Monascus spp., 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate were respectively mixed with 0.2 mM DPPH solution for 30 min and the mixture was measured by an enzyme-linked immunosorbent assay (ELISA) reader (Molecular Devices, Sunnyvale, CA) at OD 517 . The ABTS radical scavenging capacity was evaluated by the T-AOC Assay Kit (Beyotime Biotechnology, Shanghai, China). The reaction was detected by the absorbance of OD 405 . The radical scavenging activity of DPPH and ABTS (%) was calculated as follows, the radical scavenging activity (%) = ([OD 517 or 405 of control-OD 517 or 405 of fermentation product]/OD 517 or 405 of control ) × 100. The sterile H 2 O was utilized as the control.
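The scavenging percentages defined above reduce to a one-line calculation; the sketch below applies the stated formula to hypothetical absorbance readings.

```python
def scavenging_pct(od_control, od_sample):
    # DPPH (OD517) or ABTS (OD405) radical scavenging activity (%),
    # as defined in the text: ((OD_control - OD_sample) / OD_control) * 100.
    return (od_control - od_sample) / od_control * 100.0

print(scavenging_pct(od_control=0.82, od_sample=0.31))  # hypothetical readings -> ~62.2 %
```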
Cell viability of fermentation product
The Raw264.7 macrophages were carried out to implement the 3-(4,5-cimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay. The cell in a 96-well plate was incubated at 37 °C for 24 h with 5% CO 2 incubation. The fermentation products, 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate were respectively added into the cells while the sterile H 2 O was utilized as the control. After 24 h incubation, the supernatant medium was removed and the fresh medium containing 10 µL MTT (5 mg/mL) was added into the 96-well plate of the cells for 4 h incubation. Then, the supernatant medium was discarded, and dimethyl sulfoxide was added to the 96-well plate. The mixture was detected by the absorbance of OD 570 .
Anti-inflammatory analysis of fermentation product
The Raw264.7 macrophages were incubated at 37 °C for 24 h and 5% CO 2 incubation in a 96-well plate. The fermentation products, 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate were respectively added into the 96-well plate of the cells for 1 h. To stimulate the inflammation response, 1 µg/mL lipopolysaccharide was added into the cells. After 24 h incubation, the nitrite of the medium was determined by the nitrite detection kit (Beyotime Biotechnology) using the Griess reagent. The mixture was detected by the absorbance of OD 540 . The NO decreasing rate was evaluated as follows, NO decreasing rate (%) = ([OD 540 of control-OD 540 of fermentation products, 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate]/OD 540 absorbance of control) × 100. The sterile H 2 O was utilized as the control.
Red pigment analysis of fermentation product
The freeze-dried products of Monascus spp. fermentation were dissolved in 1 mL of 70% ethanol and centrifuged at 15,000×g (Hettich, Mikro 220R, Germany) for 10 min. The supernatant was collected and measured at OD505. The red pigment content was calculated as follows: red pigment (U/g) = OD505 of fermentation product × (1/gram) × (1/volume) × dilution factor.
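A minimal sketch of the stated red-pigment calculation, assuming the volume in the formula refers to the 1 mL of 70% ethanol used for extraction; all numbers are hypothetical.

```python
def red_pigment_units_per_g(od505, grams, volume_ml, dilution_factor):
    # Red pigment (U/g) = OD505 x (1/gram) x (1/volume) x dilution factor (as above).
    return od505 * (1.0 / grams) * (1.0 / volume_ml) * dilution_factor

print(red_pigment_units_per_g(od505=0.45, grams=0.1, volume_ml=1.0, dilution_factor=10))  # 45.0 U/g
```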
Monacolin K biosynthetic gene expression by RT-qPCR
Approximately 0.5 g of Monascus mycelia was ground and processed with TRIzol reagent (Thermo Fisher Scientific). Total RNA was extracted with chloroform and treated with RNase-free DNase. Finally, total RNA was preserved in DEPC-treated H2O. The HiFi-MMLV cDNA kit (Beijing ComWin Biotech Co., Ltd., Beijing, China) with oligo(dT) and random hexamers was used to synthesize first-strand cDNA. qPCR in a final volume of 25 µL was performed using the UltraSYBR Mixture kit (Beijing ComWin Biotech) on the Roche LightCycler® 480 System (Roche Group, Switzerland). The primer sets for the monacolin K biosynthetic genes were as follows, qmokA-
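The text does not state the relative-quantification model used for the RT-qPCR data; a common choice is the 2^(−ΔΔCt) method, sketched below with a hypothetical reference gene and hypothetical Ct values (rice as the calibrator sample).

```python
def fold_change_ddct(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # 2^(-ddCt) relative expression; 'cal' denotes the calibrator sample.
    d_ct = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** (-(d_ct - d_ct_cal))

# Hypothetical Ct values for a mok gene: fermented G. uralensis vs. fermented rice.
print(fold_change_ddct(ct_target=22.1, ct_ref=18.0, ct_target_cal=25.3, ct_ref_cal=18.2))  # 8.0-fold
```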
Analysis of total phenol and metabolites from the fermentation product
The total phenol content of the fermentation product was measured by the Folin-Ciocalteu method. The fermentation product was added to Folin-Ciocalteu reagent and allowed to react for 5 min, and then 10% sodium carbonate solution was added for color development in darkness. Gallic acid at different concentrations was used as the positive control for calculation of the standard curve. The mixture was centrifuged at 15,000×g for 10 min and the supernatant was measured at OD760. A Thermo Vanquish system equipped with an ACQUITY UPLC® HSS T3 column (150 × 2.1 mm, 1.8 μm, Waters Corporation, Milford, MA) was used for the analysis of the fermentation product. The UPLC separation was carried out with 0.1% formic acid in acetonitrile (A) and 5 mM ammonium formate in acetonitrile (B) at a flow rate of 0.25 mL/min. The gradient of solvent A/B (v/v) was set as follows: 2% A/B from 0 to 1 min; 2-50% A/B from 1 to 9 min; 50-98% A/B from 9 to 12 min; 98% A/B from 12 to 13.5 min; 98-2% A/B from 13.5 to 14 min; 2% A from 14 to 20 min for the positive mode (2% B from 14 to 17 min for the negative mode). ESI-MS was performed on a Q Exactive Plus mass spectrometer (Thermo Fisher Scientific) with spray voltages of 3.5 kV and −2.5 kV in positive and negative modes, respectively. Data-dependent acquisition MS/MS was carried out with the HCD scan and a normalized collision energy of 30 eV. Unnecessary information in the MS/MS spectra was removed by dynamic exclusion. The metabolites were screened by accurate molecular weight (molecular weight error < 30 ppm) and identified by Metlin (
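A sketch of the gallic acid standard-curve calculation implied above (Folin-Ciocalteu, OD760); the linear fit and all concentrations and readings are hypothetical, and results are expressed as gallic acid equivalents (GAE).

```python
import numpy as np

def total_phenolics_gae(od760_samples, standard_conc_mg_ml, standard_od760):
    # Fit OD760 = slope * concentration + intercept on the gallic acid standards,
    # then invert the fit to express sample readings as mg GAE per mL.
    slope, intercept = np.polyfit(standard_conc_mg_ml, standard_od760, 1)
    return (np.asarray(od760_samples) - intercept) / slope

standards_conc = np.array([0.02, 0.05, 0.10, 0.20])   # mg/mL gallic acid (hypothetical)
standards_od   = np.array([0.11, 0.24, 0.47, 0.93])   # OD760 of the standards (hypothetical)
print(total_phenolics_gae([0.35, 0.60], standards_conc, standards_od))
```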
Statistical analysis
Duncan's multiple range test, Pearson correlation, and repeated-measures analyses of variance (ANOVAs) were performed with the IBM SPSS Statistics v20 software package (SPSS Inc., Chicago, USA) at a confidence level of 95%. The mean and standard deviation of three replicates are shown.
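A minimal sketch of the comparisons named above using scipy; Duncan's multiple range test is not available in scipy and would be run separately (e.g., in SPSS as stated). All measurements are hypothetical triplicates.

```python
from scipy import stats

# Hypothetical DPPH scavenging (%) for three treatments, three replicates each.
treat_a, treat_b, treat_c = [62.1, 60.8, 63.0], [48.5, 50.1, 47.9], [30.2, 28.8, 31.5]
print(stats.f_oneway(treat_a, treat_b, treat_c))      # one-way ANOVA across treatments

# Pearson correlation, e.g. total phenolics vs. DPPH scavenging (hypothetical values).
phenolics = [0.07, 0.12, 0.25, 0.31]
dpph      = [35.0, 48.0, 70.0, 78.0]
print(stats.pearsonr(phenolics, dpph))
```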
Identification of monacolin K biosynthetic genes and content from Monascus spp.
The mokA and mokE genes, encoding the polyketide synthase and dehydrogenase involved in monacolin K biosynthesis, were amplified by PCR from 18 Monascus spp. and one Aspergillus terreus strain. The expected bands of approximately 1 and 0.5 kb were detected (Fig. 1) (Fig. 2). However, little antioxidant activity was detected in rice. On the other hand, the effects of these products at 0.5 mg/mL on ABTS free-radical scavenging were not compared because all of them had ABTS scavenging rates of over 90%. Nevertheless, the ABTS scavenging capacity of the medicinal plants showed an antioxidant profile similar to the DPPH results at 0.0625 mg/mL. After the 60-day fermentation with M. ruber BCRC 31535, the DPPH scavenging ability of the fermentation product was elevated, except for M. leucadendron and L. angustifolia. In addition, the order of the DPPH scavenging capacity by fold change was rice > G. uralensis > P. lactiflora, whereas that of ABTS was A. pubescens > P. lactiflora > A. oxyphylla in the 60-day fermentation. However, the ABTS and DPPH scavenging capacities significantly declined after 120 days of submerged fermentation.
Anti-inflammatory analysis of M. ruber BCRC 31535-fermented product
To analyze the anti-inflammatory capacity of M. ruber BCRC 31535-fermented product, the cell viability of the fermentation product was explored (Fig. 3). Most medicinal plants without fermentation were harmless to Raw264.7 cells, and cell viability below 80% was observed only in P. chinense. However, the cell viability of the fermentation products significantly decreased, and M. ruber BCRC 31535-fermented P. lactiflora, A. oxyphylla, G. uralensis, and rice had survival rates of over 90% after 60 days of fermentation. Thus, their anti-inflammatory effects were further investigated using their fermentation products (Fig. 4). A. oxyphylla without fermentation showed the best anti-inflammatory effect. After the fermentation of M. ruber BCRC 31535, the fermented G. uralensis significantly improved anti-inflammatory capacity. The trend of anti-inflammatory effect of the fermentation products was similar to that of their antioxidant ability. After 120 days of submerged fermentation, anti-inflammation obviously decreased.
Red pigment and monacolin K analysis of M. ruber BCRC 31535-fermented product
Red pigment and monacolin K, both derived through polyketide synthesis, can be produced by M. ruber BCRC 31535, and their levels in the fermentation products were further determined. The red pigment of the fermentation product increased with fermentation time, except in A. pubescens (Fig. 5). The red pigment of the 120-day fermentation product from rice was 8.9-fold that of the 60-day fermentation product.
Monacolin K was obviously detected in the fermentation of A. pubescens, P. cablin, P. lactiflora, M. leucadendron, G. uralensis, P. chinense, and rice (Fig. 6). It was not detected during the fermentation of L. angustifolia and O. fragrans. Monacolin K content in these fermentation products increased, similar to that of the red pigment during fermentation, except in P. chinense. The content of monacolin K derived through the 60 and 120 days fermentation of G. uralensis was the highest, which was 41.9-fold and 9.1-fold that in rice, respectively. Thus, the gene expression of monacolin K containing mokA-mokI genes was further analyzed by comparing M. ruber BCRC 31535-fermented G. uralensis and rice. The results indicated that the gene expression levels of monacolin K obtained through the 60-day fermentation of G. uralensis were higher than those in rice. However, monacolin K gene expression level after 120 days G. uralensis fermentation was obviously lower than that in rice. This result was consistent with monacolin K content. The content after 120 days G. uralensis fermentation was only 1.2 times higher than that after 60 days of fermentation. Monacolin K content in rice after 120 days of fermentation was 5.7-fold that after 60 days of fermentation.
Total phenolic content and metabolites analysis of M. ruber BCRC 31535-fermented product
The antioxidant effect of the 60-day fermentation products significantly increased relative to that of the medicinal plants without fermentation. Accordingly, the contribution of total phenolic content to antioxidant capacity was evaluated. The results showed that the total phenolic content was elevated after M. ruber BCRC 31535 fermentation, except for P. cablin and M. leucadendron (Fig. 7). The highest increase in total phenolic content was found in rice after fermentation, followed by G. uralensis. This result was consistent with the DPPH scavenging capacity.
The anti-inflammatory capacities of 3,4-dihydroxyphenylglycol and 3-amino-4-hydroxybenzoate were explored on the basis of cell viability (Fig. 9). The survival rates of Raw264.7 cells treated with 3,4-dihydroxyphenylglycol and 3-amino-4-hydroxybenzoate below 0.1 mg/mL were over 90%, and no cytotoxicity was observed. Therefore, the anti-inflammatory capacities of the two compounds were further measured. 3,4-Dihydroxyphenylglycol showed anti-inflammatory activity in a dose-dependent manner, with an over 56% decrease in NO detected at 0.05 mg/mL. In contrast, 3-amino-4-hydroxybenzoate did not show a significant dose-dependent anti-inflammatory effect.
Discussion
Monascus species offer various health benefits to human beings. However, the hepato-nephrotoxic polyketide metabolite citrinin is also present in Monascus species (de Oliveira Filho et al. 2017). Hence, many studies have focused on improving the ratio of monacolin K to citrinin through optimal cultivation or fermentation. Given that citrinin and monacolin K biosynthetic gene clusters are present in Monascus species (Chen et al. 2008b; Shimizu et al. 2005), considerable effort has been devoted to identifying these genes in different Monascus strains. According to our previous study, the pksCT, ctnA, and orf3 genes, which encode citrinin polyketide synthase, a major activator, and an oxygenase, were distributed in M. purpureus and Monascus kaoliang, but not in M. pilosus, M. ruber, M. barkeri, Monascus floridanus, Monascus lunisporas, and Monascus pallens (Chen et al. 2008a). Interestingly, in this study, the mokA and mokE genes were detected only in M. pilosus, M. ruber, and M. barkeri, and no monacolin K was detected in M. barkeri. Further investigation into the citrinin and monacolin K genes of Monascus has shown that the distribution of citrinin and monacolin K is not restricted to specific Monascus species. Some Monascus species carry both gene clusters simultaneously, whereas others have only one (Chen et al. 2008a; Li et al. 2020b; Yang et al. 2015). Information about the evolutionary process of polyketide gene clusters in fungi is currently limited, and thus gene duplication or horizontal gene transfer is worthy of further study (Wisecaver et al. 2014). Monacolin K without citrinin can be safely applied in the food industry. Raw materials, such as rice and grains, provide carbohydrates, protein, and inorganic elements for traditional fermentation. Thus, considerable attention has been devoted to the fermentation of Monascus species with rice and grains (Jin and Pyo 2017; Maric et al. 2019; Srianta et al. 2016; Zhang et al. 2018a). Given that medicinal plants themselves have certain efficacy, fermentation through the biotransformation of microorganisms can improve the original properties of the plants and result in the production of novel chemical components (Li et al. 2020a). Scant attention has been devoted to the relationship between the fermentation of medicinal plants and Monascus species. Hence, several medicinal plants and rice were used in the fermentation of M. ruber BCRC 31535 for 60 and 120 days. Most medicinal plants and their extracts have antioxidant capacities (Chen et al. 2017; Kim et al. 2017, 2019; Soheili and Salami 2019; Surh and Yun 2012; Wu et al. 2021, 2022; Yuan et al. 2020; Zhang et al. 2018b). In this study, after the 60-day fermentation of medicinal plants with M. ruber BCRC 31535, the DPPH scavenging rate significantly improved, except with M. leucadendron and L. angustifolia as raw materials. M. leucadendron and L. angustifolia have antifungal activity and probably affected M. ruber BCRC 31535 fermentation (Abd Rashed et al. 2021; Valdes et al. 2008). Additionally, long-term fermentation of the medicinal plants (120 days) decreased antioxidant activity in the DPPH and ABTS assays, possibly because long-term fermentation promoted the formation of acids from antioxidant compounds such as phenols (Lee et al. 2019).
Phenols are responsible for antioxidant activity, and the relationship between total phenolic content and antioxidant capacity was examined (Additional file 4: Table S1, Additional file 5: Table S2, Additional file 6: Table S3). According to the Pearson correlation between total phenolic content and antioxidant activity, total phenolic content had significant effects on the DPPH and ABTS scavenging abilities. Thus, the optimal fermentation time was important for obtaining antioxidants and preventing their further degradation by the microbe.
Before the anti-inflammatory capacities of the fermentation products were determined in lipopolysaccharide-induced Raw264.7 cells, cell viability was analyzed with the MTT method. Most medicinal plants were not cytotoxic. The effects of the medicinal plants and fermentation products on cell viability over time were evaluated by repeated-measures ANOVAs (Additional file 7: Table S4). The result indicated that cell viability was significantly affected by the medicinal plants over time. This result was contrary to our expectation that toxicity would be reduced after fermentation (Li et al. 2020a). However, a survival rate of over 90% was still observed for the fermentation of P. lactiflora, A. oxyphylla, G. uralensis, and rice, which originally had potential anti-inflammatory activities (Okonogi et al. 2018; Xin et al. 2019; Yin et al. 2018; Zhang et al. 2018b). Thus, these medicinal plants and their fermentation products were further used in the analysis of anti-inflammatory capacity. Only the 60-day G. uralensis fermentation showed a significant effect on the NO reduction rate. This benefit was consistent with the fermentation of ginger and tea by M. pilosus and M. purpureus, respectively (Chen et al. 2010a; Deng et al. 2021).
Red pigment and monacolin K derived from polyketides are major characteristics of Monascus species. The amounts of red pigment and monacolin K increased with fermentation time in most fermentation products, and their maximum amounts were obtained after the fermentation of rice and G. uralensis, respectively. Repeated-measures ANOVAs demonstrated that red pigment and monacolin K were significantly affected by the medicinal plants over fermentation time (Additional file 8: Table S5). This trend differed from the trends of the antioxidant and anti-inflammatory capacities. The stable production of red pigment and monacolin K was consistent with studies using agro-industrial residues, cereal, and millet as fermentation substrates (Da Silva et al. 2021; Maric et al. 2019; Srianta et al. 2016; Zhang et al. 2018a). In addition, the transcription levels of the monacolin K biosynthetic genes in the fermentation of G. uralensis and rice were further verified with RT-qPCR. As expected, the expression levels of the monacolin K biosynthetic genes in the 60-day G. uralensis fermentation were higher than those in rice. Furthermore, no up-regulation was observed in the 120-day G. uralensis fermentation, and the rate of increase in monacolin K content was lower than that in the 120-day rice fermentation. Moreover, mokG, encoding HMG-CoA reductase, may confer resistance to monacolin K (Abe et al. 2002; Chen et al. 2008b); this may explain why its relative expression was lower than that of the other genes.
LC/MS was used to analyze differences between the metabolites of G. uralensis with and without fermentation by M. ruber BCRC 31535. The relative abundance of glutamic acid in G. uralensis without fermentation was 3.1-fold that in the 60-day fermentation. This result suggested that the consumption of glutamic acid promoted monacolin K production (Zhang et al. 2019a). In addition, G. uralensis, commonly known as licorice, is usually used in the food industry as a sweetener and contains various bioactive constituents, such as liquiritin, isoliquiritin, liquiritigenin, and isoliquiritigenin (Kao et al. 2014). Liquiritigenin and isoliquiritigenin are the aglycone forms of liquiritin and isoliquiritin, respectively. The relative abundances of liquiritin and liquiritigenin were high in unfermented G. uralensis, being 632.9- and 3.8-fold those in the 60-day fermentation, respectively. This result suggested that these bioactive substances were metabolized and consumed by M. ruber BCRC 31535, in contrast to the report of Kim et al. that liquiritigenin and isoliquiritigenin content improved after fermentation with Monascus albidulus (Kim et al. 2020). Furthermore, 2,3-dihydroxybenzoic acid had the highest content, which increased 143.9-fold after the 60-day G. uralensis fermentation. According to previous studies, 2,3-dihydroxybenzoic acid displays good antioxidant and anti-inflammatory capacities (Adjimani and Asare 2015; Alvarez Cilleros et al. 2020). Moreover, 3,4-dihydroxyphenylglycol and 3-amino-4-hydroxybenzoate increased 291.4- and 89.1-fold, respectively, through the biotransformation of M. ruber BCRC 31535. The two compounds had high antioxidant capacities according to the DPPH and ABTS assays, and 3,4-dihydroxyphenylglycol decreased NO production in a dose-dependent manner. These results imply that 2,3-dihydroxybenzoic acid, 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate contributed to the improved antioxidative and anti-inflammatory effects of the 60-day fermentation of G. uralensis.
Conclusions
Traditionally, microbial fermentation can enhance the effective substances of medicinal plants, generate new compounds, conserve resources, and reduce toxicity. In this study, the antioxidant capacity and total phenolic content of medicinal plants increased after 60 days of fermentation by M. ruber BCRC 31535. However, cytotoxicity was also elevated, although not for P. lactiflora, A. oxyphylla, G. uralensis, and rice. The positive correlation between total phenolic content and the DPPH and ABTS scavenging activities was verified. Long-term fermentation (120 days) decreased antioxidant efficacy and cytotoxicity. Red pigment and monacolin K derived from M. ruber BCRC 31535 biosynthesis accumulated over time during fermentation. Among these medicinal plants, M. ruber BCRC 31535-fermented G. uralensis had a broad spectrum of efficacy, with antioxidative and anti-inflammatory capacities and monacolin K production, attributable to the increase in 2,3-dihydroxybenzoic acid, 3,4-dihydroxyphenylglycol, and 3-amino-4-hydroxybenzoate levels and to the consumption of glutamic acid. To the best of our knowledge, this study is the first to report the antioxidative capacities of 3-amino-4-hydroxybenzoate in DPPH and ABTS scavenging assays.
"Agricultural And Food Sciences",
"Biology"
] |
Flexible Framework for Real-Time Embedded Systems Based on Mobile Cloud Computing Paradigm
Introduction
Many of the advances that contemporary societies are experiencing are based on systems that sense their environment (Smart Cities, Ambient Intelligence, eHome, Smart Drive, etc.). These systems usually consist of embedded devices that provide insight and add intelligence to the interactions that occur with the environment. One of their common functions is processing the signals from the sensors they incorporate. This processing takes place alongside the execution of the other tasks of the application in which they are embedded.
There is a wide variety of embedded systems whose functioning fits this pattern: surveillance cameras with motion detection, RFID packet-tracking systems, driver-assistance systems, smart thermostats, and so forth. In most of the above applications, embedded systems are connected to a communications network to coordinate their behaviour with other systems and provide better service. One of the most common types of embedded systems is the mobile terminal, whose expansion in society has been spectacular in recent years. This context has promoted the proliferation of business strategies for embedded systems in general, and especially for mobile devices, that aim to leverage their high penetration in society to reach a wider audience and open new markets. Examples include applications for mobile payment, tracking and tracing, resource monitoring, and so forth.
However, in this situation of technology adoption and deployment of new applications, the fundamental challenge remains to provide sufficient performance for the execution of processes on terminals without penalizing user satisfaction. The performance required for the execution of processes can overflow the resources of most devices, delaying response times and heavily penalizing the expansion of such technologies. Moreover, the limitations on processing capabilities may also come from other aspects of the device: environmental conditions, power consumption requirements, or configuration issues may also impact the services it is able to offer. Due to the above, difficulties may arise in meeting the productivity and response-time requirements that some applications demand.
Moreover, one of the most innovative paradigms regarding the adoption of Information and Communications Technology (ICT) by society is Cloud Computing. The advantages of this ICT management model include improved efficiency and reduced costs, while providing resources and services accessible to the whole of society. Any progress in this area has a multiplier effect that affects many companies and users of these technologies. Therefore, the design of computational models that combine the development of embedded and mobile systems with Cloud Computing paradigms may provide new ways of processing that avoid the difficulties related to the real-time execution of applications on these systems.
The main objective of this research is to study how real-time processing can be performed within Cloud Computing paradigms; more precisely, how embedded and mobile systems can take advantage of remote computing resources to meet real-time constraints.
To this end, this work proposes a computational model for the integrated management of processing in embedded or mobile systems that answers the following questions: Is it possible to offload some running processes to the cloud and thus meet the desired response times, productivity, and quality of service? Can we predict the behaviour of remote computing resources in order to develop dynamic management strategies according to the Cloud Computing paradigm?
Our working hypothesis is that the conception and development of flexible processing models based on Cloud Computing schemes can overcome some of these drawbacks. Such models can supply processing capacity to applications running on embedded devices with limited performance, and the on-demand auxiliary use of cloud computing infrastructure provides the flexibility to execute the necessary tasks and the mechanisms to maintain service quality, even with low-capacity devices.
This paper is organized as follows: Section 2 reviews the related work on this issue; Section 3 highlights some important issues concerning real time in embedded and mobile systems and presents the contributions of this work; Section 4 introduces the formal framework of the computational model; Section 5 explains how to predict the network delay; Section 6 describes an application example in which the model is simulated. The paper concludes in Section 7, where some directions for future work are also pointed out.
Related Work
The research areas related to the topics covered in this paper are experiencing intense research activity, as evidenced by the number of recent works found. The following briefly describes the current state of knowledge on the different aspects that this research encompasses, along with the conclusions of this review.
The increasing development of embedded and mobile computing systems in recent times has allowed their extension into new business areas. Advanced e-commerce applications, positioning, monitoring and surveillance, health, wellness, and leisure, among others [1-3], represent opportunities to exploit the high degree of penetration of these devices among the population and their new features. However, to properly continue the development in these areas, it is necessary to make a qualitative leap in design, taking into account the performance and response-time requirements of these applications.
The quality of service (QoS) is essential to ensure the proper operation of many applications and, for embedded systems, it becomes a critical aspect due to the inherent processing limitations normally shown by devices.
In these applications, embedded systems must provide predictability both in response time and in the quality of the results. This feature raises them to the status of real-time systems [4]. In such systems, the validity of the results is given not only by their correctness but also by their timeliness; that is, there are restrictions that limit the time of their operation. Therefore, the layout and design of these systems should propose architectures that address correctness, adaptability, predictability, security, and fault tolerance.
Many works provide solutions to these issues. The technological evolution of devices currently provides sufficient performance to implement complex scheduling strategies on them. These strategies delegate execution planning and task management to a real-time operating system embedded in the devices in order to meet the constraints imposed by the applications [5-7]. In environments involving multiple devices, it is possible to establish scheduling methods that take into account multiprocessing scenarios in one [8,9] or more embedded elements with heterogeneous characteristics [10,11]. A step further in this strategy is embedded distributed systems interacting through a communications network; for these cases, proposals have also been made to ensure the quality of service of the results [12,13].
Although such solutions provide significant levels of constraint satisfaction, some applications may be temporarily overwhelmed by the characteristics of their execution and may require extra performance that exceeds their capacity. In these cases, such systems should decline to execute the tasks whose response times would be excessive in order to ensure compliance with real-time scheduling. However, such decisions may cause service interruptions that are unaffordable in some critical applications. For example, e-health systems that monitor and control biometric variables of several individuals simultaneously may experience increased computing needs as the number of supervised individuals grows; likewise, a traffic management system in a Smart City, in which each vehicle collects and transmits status information to other vehicles, can be saturated in dense scenarios with multiple vehicles.
One extreme in the configuration of distributed systems is systems composed essentially of sensors/actuators that lack the processing power to make decisions on their own. These elements, which basically operate as transceivers, transmit the information so that it can be processed remotely by a host with sufficient capacity [14,15]. However, this approach may underutilize the possibilities of the devices themselves, slow down responses, and require additional infrastructure to maintain permanent communication for proper processing. In sensor network scenarios where sensors have no capacity at all to execute these tasks [16,17], they incorporate only the minimum functions to protect sent or received data using simple techniques, and periodic audit and control strategies are generally deployed to check whether any device has been compromised [18-20].
A computational model that addresses cases in which the computing needs go beyond the capabilities of the device is Mobile Cloud Computing (MCC) [21-23]. In this paradigm, the workload is divided between distributed devices and a central element located in the cloud. Thus, devices can move processing needs to the cloud (computation offloading), where they will run as services on Cloud Computing servers [24,25]. The most common uses of this paradigm are primarily targeted at extending the battery life of mobile elements [26-28], without considering the versatility that the remote computer can provide to facilitate the provision of adequate QoS. Proposals follow two different approaches [29,30]: on the one hand, systems that try to adapt existing applications by identifying portions of offloadable code [31-33], and on the other hand, new applications that take this idea into account from conception and prepare the process code accordingly [34,35]. In all these proposals, the influence of environmental conditions on process scheduling is also twofold: first, works that consider a static scenario in which it is possible to plan the optimal execution strategy [36,37] and, second, dynamic environments where communication conditions can vary [38,39]. Although these methods offer valid solutions for some contexts and applications, maintaining the quality of service of the results in realistic application scenarios remains an open problem.
In Mobile Cloud Computing strategies, communications management and its role in maintaining response times are especially important. By extension, maintaining quality of service in the field of communications is one of the areas of greatest research intensity. In this field, there have been contributions related to intelligent adaptive analysis of service times [40], and architectures oriented to meeting QoS requirements have been proposed [41,42]. These works not only take energy-efficiency parameters into consideration but also suggest strategies for compliance with real-time specifications [43] through routing and classification of network traffic.
The joint use of distributed computing infrastructures to ensure QoS is an option that has also been widely discussed recently [44,45]. Services that combine the resources of distributed infrastructures of different types (clusters, grids, clouds, etc.) are a mechanism that reinforces QoS commitments for these systems, as they can use other computing elements from their immediate environment. Other approaches provide greater communication capacity (bandwidth) to connected devices when required, in order to reduce response times in cloud access [46]. Therefore, these strategies facilitate the specification of real-time restrictions on remote computing elements.
Concern for maintaining the response times of cloud computing models is present in many works, where QoS solutions for cloud computing systems have been analyzed and proposed [47-49]. Although their focus is specifically targeted at multimedia applications (online games, home-theater video streaming), their findings can be transferred to other sectors (business, telemedicine, automotive, etc.) for the provision of remote services [50]. However, the results of these works depend mostly on the execution context and communication conditions.
Regarding strategies that provide flexibility in computing processes, the application of imprecise computation techniques [51,52] to the execution of application tasks can offer satisfactory solutions. With this technique, processes are broken down into two types of tasks, mandatory and optional, in order to parameterize restrictions and establish checkpoints that explicitly manage response times. However, sacrificing processing time for a task comes at the expense of making a bounded mistake and therefore providing an inaccurate response. Most systems using this model assume that the tasks to schedule are monotone and that the error is a function of the amount of work discarded. These algorithms seek a balance between output quality and runtime, based on minimizing objective functions such as average error, total error, maximum error, number of optional tasks eliminated, and average response time. The original imprecise computation model assumes that the input values are precise for each task and that the mandatory and optional times can be known a priori [53]. Other contributions deal with imprecise computation in cooperative tasks in which the results of operations depend on each other: when the result of a producer task is partially wrong, the consumer task must somehow compensate for this error, which increases the processing time of subsequent tasks and changes the preset times of each individual task [54]. This technique is essentially oriented to process scheduling in real-time systems and compliance with timing constraints [55,56], but also to maintaining the quality of the results [57-59].
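As a concrete illustration of the mandatory/optional split described above, the following minimal Python sketch shows how discarding optional work trades accuracy for time; the ImpreciseTask and plan names and the linear error model are illustrative assumptions, not the exact formulation of [51-53].

from dataclasses import dataclass

@dataclass
class ImpreciseTask:
    name: str
    mandatory: float   # processing time that must always be executed
    optional: float    # processing time that may be partly discarded

def plan(task: ImpreciseTask, time_budget: float):
    """Return (time actually used, bounded error) for a given time budget."""
    if time_budget < task.mandatory:
        raise ValueError("budget cannot cover the mandatory part")
    optional_done = min(task.optional, time_budget - task.mandatory)
    # Assumed monotone, linear error model: the error grows with discarded work.
    error = (task.optional - optional_done) / task.optional if task.optional else 0.0
    return task.mandatory + optional_done, error

# Example: with a budget of 4 time units, one third of the optional work is dropped.
used, err = plan(ImpreciseTask("fft", mandatory=2.0, optional=3.0), time_budget=4.0)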
Real-Time Issues in Embedded Systems
From the study carried out in the previous section, we highlight some of the major problems in the development of real-time embedded systems, as well as the main contributions of this research towards their resolution.
Problem Statements. The problems emphasized are as follows:
(a) Embedded and mobile systems with real-time operation need to respond properly to their design considerations in most cases. Improvements in computing technology, such as multicore and multiprocessor systems, contribute to this effort when they are handled with appropriate scheduling methods. However, these new capabilities do not provide mechanisms to occasionally increase the processing load beyond a specified level, and this limits their application to the established operating situations.
New cyber-physical applications operating in the real world lack the flexibility to deal with peaks in processing demand when an excess of interactions with the environment is required. Deploying more powerful systems for these cases can be unfeasible in many environments because of the higher power requirements it would impose.
(b) The use of remote cloud computing resources from mobile devices according to the Mobile Cloud Computing scheme can be a strategy to relieve the devices' processing load according to the needs of each moment. This approach is not sufficiently developed for all cases, and its most widespread use is oriented towards saving energy in running applications rather than towards flexible scheduling of the workload. The lack of mechanisms to adjust processing needs produces rigid scheduling strategies and can lead to poor decisions about which parts of the application should run locally and which remotely.
(c) Regarding the use of Cloud Computing systems by themselves, only a few real-time applications rely on their performance, because of the difficulty of fully predicting their response times. Mobile systems, which experience a wide variety of contexts and different situations of bandwidth and coverage, are the most affected, since the delays caused by the network can vary depending on many factors. In these cases, it is difficult for scheduling strategies to take into account cloud resources and remote processing hosts while meeting application requirements satisfactorily.
Contributions and Significance of This Work.
Power consumption and processing delay are very important aspects to take into account in embedded/mobile system operation (particularly when these systems are powered by batteries). The two aspects are related, because energy-cost settings affect the completion time of the tasks. However, we consider the processing delay a more general aspect than the energy cost for embedded system operation, because the power mode is a global characteristic of the embedded system and it is not feasible to establish different consumption limits for each individual task; moreover, some variables of the MCC paradigm cannot be considered within the local model because they occur outside the embedded system (e.g., data transmission over the network and task execution on the cloud server). Therefore, this research focuses on the response time as the key aspect providing flexibility to the application processing.
Therefore, the contributions of this paper to solve the problems described in the previous subsection are as follows: (a) Configuring systems with sufficient computing elements to adequately address the most common situations is often the most popular way to balance installed capacity and consumption needs. In this paper, we aim to keep the same configuration and support it with remote computing elements hosted in the cloud in order to increase the computing capabilities. Although this idea is not new, the novel approach is geared especially towards applications with real-time requirements. The contribution towards this objective focuses on developing a computational model that provides the formal framework and the expressive capacity needed to address specific problems with real-time requirements using the Mobile Cloud Computing paradigm.
Several previous works propose MCC operation schemes, but they lack the flexibility needed to take both priority and response time into account, focusing instead on other goals such as power efficiency, faster device response, or routing and classification of traffic at the network level.
The details of the specification are presented in the following section. (b) Given the lack of flexibility in running applications, this paper proposes the use of imprecise computation techniques to decide which tasks are performed on the embedded device. Applying imprecise computation to the Cloud Computing paradigm is a novel approach to addressing compliance with time constraints. Imprecise computation provides mechanisms for scheduling processes with response-time maintenance criteria. Combining these methods with Mobile Cloud Computing processing schemes can provide strategies to achieve adequate QoS when the operating conditions so require. In addition, we propose implementation strategies based on stored logic to give greater predictability to the running operations. This method is based partly on our previous results and work in the design of arithmetic operators and specialized processors [60,61].
(c) As discussed in the related work, techniques for maintaining QoS over open networks are producing advances that may allow their application to Cloud Computing strategies in specific operating scenarios. However, they cannot be extended to systems with real-time constraints. The contribution of this paper to this issue is a hybrid method of monitoring and predicting communication performance, which periodically determines the delays introduced by the network when accessing remote processing resources. The novelty of this method lies in combining online delay measurements with offline historical data that depend on aspects such as the working environment and the running application. Integrating this procedure into the above computational model makes it possible to take the associated costs into account at any time and to apply better criteria when making scheduling decisions.
Flexible Computational Model
In this section, the formal framework of this issue is described in order to state the problem formulation and to define the flexible computing proposals.
General Framework. The computational model proposed in this section specifies the aspects involved in scheduling the tasks of an application in a multiplatform execution environment with heterogeneous characteristics. For such scenarios, the set of available computing platforms is defined by

P = {p_1, p_2, ..., p_m}. (1)

The set of tasks is defined by the application workload; the variety and type of tasks for each workload depend on the scope of each system. Let Γ be the workload of an application environment:

Γ = {t_1, t_2, ..., t_n}, (2)

where each t_i is a task of the application environment. In a real-time system, each task is associated with a deadline after which the result is invalid (hard) or loses value (soft). Therefore, for each task t_i, a function constraint(t_i) gives the maximum allowed duration of its execution (expression (3)). In this kind of system, to undertake the scheduling, the delay cost associated with the execution of the workload tasks on each platform must be known. The delays are defined by the following functions: start, delay, data, and net. Their definitions are detailed in Table 1.
The start function indicates the native platform of a task, that is, the platform on which the task is created for execution. Generally, this platform will be the interface system with the user's environment or the embedded system itself.
The delay function obtains the execution time, in time units, when a task runs on a given platform without taking its current workload into account, that is, as if all the processing resources of the platform were dedicated to running that task. In this way, delay_p(t_i) gives the delay of executing task t_i on platform p. In an embedded system, the results of this function may vary due to different operation scenarios or system configurations. For example, in low-power-consumption settings the processing resources may be reduced (e.g., by lowering the clock frequency or disabling cores), producing higher delays, whereas maximum-performance scenarios could improve the standard delays.
In addition to the computational requirements, in distributed processing scenarios it is necessary to know the amount of data required to run each task. Thus, the data function obtains the size of the data required for processing a task and the size of the results it produces; it is independent of the platform on which the task runs.
Finally, the net function obtains the temporal costs associated with data communication between platforms through the communication network. That is, the function net_{p,q}(data(t_i)) returns the delay caused by transmitting the data required to run t_i from platform p to platform q together with the delay of returning the results from q to p. It is assumed that there is no delay in moving information within the same platform; namely, net_{p,p}(data(t_i)) = 0.
As defined by the above functions, the execution cost of a task t_i on a platform p is given by the following expression:

TimeCost_p(t_i) = net_{start(t_i),p}(data(t_i)) + delay_p(t_i). (4)

Nevertheless, considering that each platform is already running a set of tasks, the response time of a task must also include the delay cost of the tasks pending on the platform when the new task t_i arrives. In this way, let W_p(s) be the aggregate delay cost, at time s, of the list of tasks assigned to platform p with a deadline less than DeadLine(t_i). This cost depends on the platform and on its internal configuration of processing elements; for example, a platform may be composed of many processing elements capable of parallel computing.

According to this aggregate cost, expression (4) for the delay cost of executing a task t_i on a platform p at time s becomes

TimeCost_p(t_i, s) = net_{start(t_i),p}(data(t_i)) + delay_p(t_i) + W_p(s). (5)

The execution of the tasks in this model does not consider access to shared resources other than the processor, so no unnecessary delays occur due to resource blocking. This assumption is consistent with many applications composed of numerous autonomous, noncollaborative tasks.
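To make expressions (4)-(5) concrete, the following minimal Python sketch computes the execution cost of a task on a platform; the Task and Platform structures, the bandwidth-based network model, and all field names are illustrative assumptions rather than definitions taken from the paper.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    start_platform: str    # start(t_i): platform where the task is created
    data_size: float       # data(t_i): amount of data to transfer (input plus results)
    deadline: float        # absolute time by which the results must be ready

@dataclass
class Platform:
    name: str
    exec_delay: dict                              # delay_p(t_i): execution time per task name
    queue: list = field(default_factory=list)     # tasks already assigned to this platform

def net_delay(src: str, dst: str, data_size: float, bandwidth: float) -> float:
    """net_{src,dst}(data): transfer delay; zero within the same platform."""
    return 0.0 if src == dst else data_size / bandwidth

def time_cost(task: Task, p: Platform, bandwidth: float) -> float:
    """Expression (5): network cost + execution delay + aggregate pending load W_p."""
    pending = sum(p.exec_delay[t.name] for t in p.queue if t.deadline < task.deadline)
    return net_delay(task.start_platform, p.name, task.data_size, bandwidth) \
        + p.exec_delay[task.name] + pending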
From the above processing cost expression (5), a scheduling method based on Shortest Job First (SJF), or Shortest Remaining Processing Time First (SRPT) in the preemptive case, can be implemented to minimize the average delay time of the application workload. These scheduling algorithms (SJF and SRPT) are proven to be the optimal online methods for minimizing the average delay time on a single processing platform [62,63]. In addition, recent studies on this issue demonstrate that they can also be very competitive on multiprocessing platforms [63].
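The SJF/SRPT ordering mentioned above can be sketched as follows; the PendingTask structure and its remaining_cost field are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PendingTask:
    name: str
    remaining_cost: float   # estimated remaining processing time on this platform

def next_task_sjf(queue):
    # SJF (non-preemptive) and SRPT (preemptive) both pick the task with the
    # smallest remaining cost; under SRPT this choice is re-evaluated on arrivals.
    return min(queue, key=lambda t: t.remaining_cost)

queue = [PendingTask("t1", 4.0), PendingTask("t2", 1.5), PendingTask("t3", 2.2)]
assert next_task_sjf(queue).name == "t2"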
However, although simple, these methods do not take into account compliance with the time constraints present in the tasks of real-time systems.
Real Time Embedded-Cloud Scheduling.
The real-time constraints in the execution of tasks imply a temporal restriction or deadline for each task, related to the time at which the results must be ready. After that time, the results have a lower or null value.

The function DeadLine(t_i) obtains the remaining time in which the results of each task must be ready (expression (6)). As time passes, the value of DeadLine(t_i) approaches zero for each task.
It is not the purpose of this work to propose a scheduling method for scenarios as complex as those described above. This problem has been studied in other research [64,65] and, in its most general version, belongs to the NP category; such scheduling can only be resolved by heuristic or search algorithms, which consume a considerable part of the processing resources [66].
Instead, in this section we focus on a subset of this general problem in which there are only two heterogeneous platforms. This scenario is quite common in many contexts, for example, a workstation with a central processing unit (CPU) and a graphics accelerator (GPU) installed on it [67]. Under this principle, we consider that applications are launched on an application platform (which corresponds to the embedded system) and that there is an additional computing platform (the cloud infrastructure) to which part of the processing work can be moved. The processing platforms may have different performance and characteristics. Therefore, in this study, expression (1) is defined by the following set of available computing platforms:

P = {ES, Cloud}, (7)

where ES corresponds to the computing platform of the embedded system and Cloud to the processing platform available in the cloud. Furthermore, we assume that tasks are created in the embedded system as part of the running application; therefore, start(t_i) = ES. The Cloud platform may be used nonexclusively by the application and can serve many devices corresponding to the same or several different applications, so this approach can be extended to scenarios in which various embedded or mobile systems share the cloud infrastructure to complement their performance. This case corresponds to the Infrastructure as a Service model, where the Cloud platform can execute many tasks when required. Figure 1 illustrates this scenario.

The scheduling method proposed for this case aims to maximize compliance with the time constraints while reducing the time spent on scheduler management. We do not intend in this work to provide the optimal solution for executing the workload tasks, but rather a feasible solution for the management of real-time embedded systems that is valid for many current applications. According to this goal, the heuristic used is to schedule tasks on platforms that meet their timing constraints. On each platform, tasks are placed in a dispatcher queue ordered by shortest DeadLine (EDF, Earliest Deadline First). This method has been proven effective even in multiplatform systems under certain conditions [68,69].
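The following Python sketch illustrates the two-platform admission and EDF queueing just described: a task is accepted on the embedded system (ES) if it can meet its deadline there, otherwise on the cloud, and otherwise it is rejected. The structures, the backlog approximation, and the cost fields are illustrative assumptions.

import bisect
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                              # sort key: earliest deadline first
    name: str = field(compare=False)
    cost_es: float = field(compare=False)        # execution time on the embedded system
    cost_cloud: float = field(compare=False)     # execution plus network time via the cloud

@dataclass
class Platform:
    name: str
    queue: list = field(default_factory=list)    # kept sorted by deadline (EDF)

    def backlog(self) -> float:
        # Pending work ahead of a new task (simplified: the whole queue).
        return sum(t.cost_es if self.name == "ES" else t.cost_cloud for t in self.queue)

def dispatch(task: Task, es: Platform, cloud: Platform, now: float) -> str:
    if now + es.backlog() + task.cost_es <= task.deadline:
        bisect.insort(es.queue, task)            # insert keeping EDF order
        return "ES"
    if now + cloud.backlog() + task.cost_cloud <= task.deadline:
        bisect.insort(cloud.queue, task)
        return "Cloud"
    return "rejected"                            # no platform can deliver on time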
Imprecise Computation Scheduling.
The main idea of this method, which adjusts the number of tasks executed depending on the available time and the computational power of the system, is to consider the relative priority of each task when deciding the execution order of the workload. The priority of each task is given by its importance to the application or to the safety of the system. This priority may be related to the DeadLine of the task or have a different value reflecting how critical the task is in the overall application.

According to the imprecise computation technique, lower-priority tasks may be discarded when the time constraints require a partial execution of the application. In such cases, only the critical tasks are executed in the available time, obtaining a partial functionality. For this purpose, the system has a function, Priority(t_i), that determines the priority of each task (expression (8)). The scheduling method is described in Figure 2. As in the previous subsection, when a task is created in the embedded system it is first determined which platform can run it within the available time: (a) if both can, the choice of platform can be established according to the system configuration, deriving all possible processing to the cloud if the restrictions are met, or selecting local processing by default; (b) if no platform can deliver on time, the task can be rejected with an out-of-time warning. Once the platform on which the task will execute has been decided, the management of tasks in the scheduling queue is driven by the Priority value of the tasks. When a new task arrives at the platform, it is inserted at the position corresponding to its priority. In this case, the system must ensure that all tasks with lower priority than the new task can still meet their time constraints; otherwise, the task is scheduled on the cloud platform if possible and rejected if not. While a task waits in the platform scheduler queue, the conditions for remote execution can change, and therefore some tasks may be moved to the cloud. This method is not geared towards meeting the deadlines of all tasks but towards executing them according to their importance; therefore, it is only applicable to applications for which this type of operation is acceptable. A partial execution of the application takes place when the system is not able to perform all tasks on time and an overload exists; since the most important tasks are processed first, the imprecise result is acceptable for the user. This behaviour is consistent with the assumptions made in the introduction, where the possibility of addressing workloads that exceed the theoretical computing capabilities of the devices was stated.
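A minimal sketch of the priority-driven step of Figure 2 is given below: the new task is inserted by priority, and if any lower-priority task would then miss its deadline, the task is tried on the cloud queue and rejected otherwise. For brevity a single cost field is used per queue; in the full model the cloud cost would also include the network delay. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int      # larger value = more important
    cost: float        # processing time on the corresponding platform
    deadline: float    # absolute deadline

def feasible(queue, now):
    """Check that every task in the priority-ordered queue meets its deadline."""
    t = now
    for task in queue:
        t += task.cost
        if t > task.deadline:
            return False
    return True

def schedule(task, local_queue, cloud_queue, now):
    trial = sorted(local_queue + [task], key=lambda t: -t.priority)
    if feasible(trial, now):
        local_queue[:] = trial
        return "local"
    trial = sorted(cloud_queue + [task], key=lambda t: -t.priority)
    if feasible(trial, now):
        cloud_queue[:] = trial
        return "cloud"
    return "rejected"   # out of time: the less important work is dropped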
An application example that implements this method is described in Section 6 of this paper.
Net Performance Prediction Methodology
The proposed framework requires measuring and predicting the network performance between platforms. First of all, the concept of network performance must be properly specified according to the application requirements. For example, in some real-time applications a maximum delay must be guaranteed, whereas in other applications a stable minimum bandwidth is enough to produce valuable results.
Several tools have been developed and are available to flexibly measure different network performance parameters [70-73]. However, periodically probing the performance between each pair of platforms is a resource-intensive and poorly scalable task [74]. A number of solutions have been proposed for this problem, most of them in the context of heterogeneous computing [75]. Again, the nature of the application to be implemented will determine the best strategy to take. In addition, most of the networks linking mobile platforms use a medium shared among a number of users, which often makes network performance very difficult to predict.
In this section, a method for measuring and predicting network performance is evaluated experimentally. For this purpose, a standard wireless network is used as a test environment and two processes are run. These processes implement task 1 and task 2 shown in Figure 3. For the purpose of this test, the first process captures input from a camera device, performs frame selection, and sends the relevant frames to the second process; the second process performs some common operations typically involved in signal processing tasks, such as the Fast Fourier Transform (FFT). The number of relevant frames exchanged between the processes depends on the data captured (variable workload).
In this scenario, two factors must be taken into account when predicting network performance. First, the transfer rate received by the processes when running on different platforms (Figure 3(a)); this depends on the application workload, which in turn changes over time (in our experiment, random frame selection at different rates has been implemented). Second, the available network bandwidth when the processes are run on the same platform (Figure 3(b)); this also changes over time depending on the network usage by other users or applications. In other words, if the two processes run on the same network node (platform), the network link performance between that node and other candidate platforms must be estimated for a possible offload; otherwise, the network suitability can be deduced from the transfer rate observed by the communicating processes.
In order to evaluate the network performance, a number of multiplatform tools are available. One simple solution is the Iperf (http://iperf.fr) tool. It allows setting target nodes (servers, in Iperf terminology) by running an Iperf process in server mode on each platform. With a period of 20 seconds, an Iperf process is launched in client mode in order to measure the bandwidth by sending random data for one second over a TCP connection. The best period and test duration are highly dependent on the application and network characteristics and therefore require further tuning. In addition, other network performance parameters could be considered if required by the application; for example, the Iperf tool can also measure average delay (although this could also be achieved with a standard ping) and packet loss.
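A hedged sketch of the periodic probe described above is shown below, assuming the iperf3 command-line client with JSON output (-J) is installed and an iperf3 server is running on the target platform; the server address and the JSON field path are assumptions to be adapted to the actual deployment.

import json
import subprocess
import time

def probe_bandwidth(server: str, duration_s: int = 1) -> float:
    """Run a short iperf3 TCP test and return the received rate in bits per second."""
    out = subprocess.run(["iperf3", "-c", server, "-t", str(duration_s), "-J"],
                         capture_output=True, text=True, check=True).stdout
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"]

history = []                                   # (timestamp, bits per second) log
while True:
    history.append((time.time(), probe_bandwidth("cloud.example.org")))
    time.sleep(20)                             # probing period used in the experiment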
For testing purposes, a framework script has been developed. It is in charge of running the network performance tests. In addition, it acts as a proxy between the processes and the network devices, so it can monitor the effective transfer rate between offloaded processes over time. Moreover, it logs the measurements taken for later prediction. In summary, the proposed method consists of three parts: (1) periodic measurement of the relevant performance parameters provided by the network, (2) monitoring of the communicating tasks for the currently observed performance, and (3) analysis of past performance data for reasonable prediction.

For part (1), the aforementioned tools for network performance measurement have been used. With these tools, each platform builds a history database that can be used later to help find stability periods from the network performance point of view.

In part (2), the provided framework is in charge of checking that the processes are running under acceptable network conditions. Otherwise, tasks must be rescheduled for execution on the same platform (if possible, according to the current system workload, priorities, etc.), avoiding network communication.

The result of the analysis performed in part (3) can be used in conjunction with the currently measured performance values in order to support the decision about the platform on which a task should be run. The past performance data can be dynamically configured depending on aspects such as the working environment and the running application. To adequately compose this function, it is necessary to define the usual operating details of the embedded system beforehand.
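As an illustration of part (3), the sketch below combines the currently measured rate with the logged history to produce a conservative bandwidth estimate; the weighting and the pessimistic quantile are illustrative assumptions, not the paper's exact procedure.

def predict_bandwidth(current_bps, history_bps, weight_current=0.5):
    """Blend the online measurement with a pessimistic view of past measurements."""
    if not history_bps:
        return current_bps
    # Take a low decile of the history so the estimate errs on the safe side
    # for real-time decisions.
    pessimistic_past = sorted(history_bps)[len(history_bps) // 10]
    return weight_current * current_bps + (1 - weight_current) * pessimistic_past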
Figure 4 shows different aspects of the transfer rate evolution in the experiment. The grey line shows the transfer rate required between the running processes in order to meet the real-time requirements. This changes over time because it depends on the application workload; in the conducted experiments, the processes exchange selected frames, and therefore the required transfer rate varies depending on the captured data.

When the processes run on the same platform (sections of the chart labelled local in Figure 4), the framework measures the network conditions. In the experiment, this is the maximum transfer rate that could be achieved if the process were offloaded. It also changes over time because of factors such as mobility or other applications sharing the network. The result of these measurements is drawn with a dashed line. As shown in the chart, a sequence of three consecutive periods of increasing available bandwidth is considered a stability condition, so the system decides to switch to offloaded mode. In other words, the processes are run on different platforms when the network conditions history is good enough to reasonably guarantee the required transfer rate. In a real scenario, this decision would depend on a number of factors, such as the nature of the application and the scheduling policy.

When the processes run on different platforms (sections of the chart labelled offloaded in Figure 4), the framework checks whether the transfer rate is enough to meet the process requirements. The result of these measurements is drawn with a solid line. When a risk condition is detected (point marked with a red circle in the chart), the system switches back to local mode. Again, the particular definition of the risk condition would depend on the nature of the application in a real scenario.
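The mode-switching rule used in the experiment can be sketched as follows: switch to offloaded mode after three consecutive probes of increasing available bandwidth that cover the required rate, and switch back to local mode when the observed rate falls below the requirement (the risk condition). The function and threshold choices are illustrative.

def update_mode(mode, required_bps, measures):
    """measures: chronologically ordered list of observed rates (bits per second)."""
    if mode == "local":
        last = measures[-3:]
        increasing = len(last) == 3 and last[0] < last[1] < last[2]
        if increasing and last[-1] >= required_bps:
            return "offloaded"
    else:  # offloaded
        if measures and measures[-1] < required_bps:   # risk condition detected
            return "local"
    return mode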
Application Example
In this section, an application example is presented in order to test the operation of the system in a present-day context with a realistic workload. The proposed scenario is a SmartDrive application for autonomous decision-making aimed at providing increased security and convenience to the user. This application is part of the Smart City concept and of the construction of vehicular networks in which several vehicles intercommunicate with each other and with the city infrastructure to exchange information [76].
In this application, the vehicle is equipped with a variety of sensors that capture the state of the environment to inform the driver and adapt driving to traffic and context circumstances (see Figure 6). These features entail the execution of signal processing tasks on the data collected by these sensors. Typically, these tasks have real-time constraints, since the processing results have to be available on time to make decisions on the fly. Figure 5 illustrates a real scenario in which several vehicles interact.
Several configurations exist to provide the processing capabilities that this sensing requires: (a) each sensor subsystem can have its own embedded system; (b) the system has a central built-in-vehicle processor running all tasks; (c) the processing device may be provided by the user and integrated with the vehicle sensors to collect data. This last option provides greater flexibility to the user and allows the installation of applications (apps) on mobile devices carried by the user (phones, tablets, etc.). This scenario is increasingly common in the many applications that can be downloaded to mobile user devices to interact with the sensors and actuators of our homes, workplaces, or transport vehicles. In this case, the capabilities that the application can offer depend on the performance of the device used.
There are many signal processing elements with real-time constraints in the SmartDrive application that can be addressed with the proposed model; however, to illustrate how it works with a clear example, only three kinds of tasks are considered here, which analyze different types of signals collected by the sensors. Task 1. Responsible for analyzing the signal from the radar sensors to obtain the following information about other vehicles: distance, relative speed, and risk of collision.
Task 2. Analyzes the signal from a set of sensors that measure the physical environment of the vehicle to obtain the following information: type of pavement, asphalt status, static friction, moisture, rain, and so forth.
Task 3. Analyzes the traffic signs and posters from digital images captured by the car's cameras to identify the type of each signal and interpret the texts they contain.
For example, Figure 6 shows a schematic of the vehicle with its various sensors and the tasks that process the signals they collect [77].
The results obtained by task 1 enable vehicles to know where the vehicles in their vicinity are and what they are doing, and to perform actions such as forward collision warning, automatic braking if there is a risk of collision, intersection movement assist, no-passing warnings, and so forth. The results from task 2 allow calibrating the operation of the dampers, the distance to the ground, the brake pedal feel, traction control, ABS, and so forth. The results obtained by task 3 inform the driver of road signs, speed limit warnings, road departure warnings, and so forth.
Tasks are started in the vehicle as a result of events produced by the sensors when capturing information (start(t_i) = car). Thus, when the sensors locate another vehicle, a task 1 is started; when a traffic signal or a sign is identified, a task 3 starts. Task 2 runs periodically to check the physical condition of the environment.
According to the proposed computational model, each task type t_i requires a processing time known through the delay_ES(t_i) function, needs a data size measured by data(t_i), and its results must be ready before DeadLine(t_i) to be useful for the SmartDrive application.
Even in this simple example, system performance can present problems in certain situations: when the processor cannot provide sufficient computational power and/or when the arrival frequency of the tasks overflows its processing capacity. These situations could occur if the user does not have a sufficiently powerful device to perform all tasks or if the vehicle is in intensive scenarios with many circulating elements and traffic signs (e.g., city centers).
In a realistic implementation of this idea, the requirements on the computational capabilities of such devices must be set by the manufacturer according to criteria that guarantee the safety of the driver, for example, devices able to run real-time type-1 tasks (to avoid collisions) in dense traffic contexts. Specifically, this means that, at all times,

delay_ES(task_1) < DeadLine(task_1). (9)

From this minimum restriction, all other features of the SmartDrive system can be provided by the manufacturer as value-added services, depending on the power of the mobile device and/or on the possibilities for communication with the cloud.
In situations in which the user's system cannot meet the time constraints for the execution of all tasks, integration with the Mobile Cloud Computing paradigm can provide the necessary performance by offloading some of the processing work to the cloud. Furthermore, this solution offers the application signal processing capacity for an extensive fleet of vehicles, sharing the computing infrastructure in the cloud. With this configuration, a multitude of embedded systems (vehicles) can share the same cloud platform to collaborate on the necessary processing work, as shown in the scheme of Figure 7.
The flexible framework proposed in this paper offers an approach to the problem of task scheduling and of deciding when to offload execution to the cloud in order to achieve the best quality of service. To do this, the application must be composed of a set of tasks, and a priority associated with each type of task must be provided to be used as a scheduling criterion. In our example, the order of priority of the tasks is as follows (high to low): 1, 2, and 3. That is to say, identifying the closest vehicles to avoid collisions has priority over environmental analysis and traffic sign interpretation. Figure 8 depicts a scheduling example simulating a workload for this application with two operating contexts and different task arrivals. Table 2 shows the workload of this example.
The MCC paradigm is implemented between the automotive embedded system (provided by the mobile device of the user) and the cloud. Two scheduling methods are compared: the A-method, driven only by deadlines, and the B-method, based on the priority levels described previously.

The task operation conditions are the following: task 1: deadline = startTime + 1; task 2: deadline = startTime + 1; task 3: deadline = startTime + 2; delay_Cloud = (1/2) delay_Car for all tasks. When the embedded device cannot meet the time constraints, the tasks are sent to the cloud infrastructure if they can be processed on time according to their TimeCost function.
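The simulated operating conditions stated above can be written out explicitly as follows; startTime is the arrival time of each task, and the only assumption beyond the text is the Python representation itself.

# deadline = startTime + value (in simulation time units)
RELATIVE_DEADLINES = {"task1": 1, "task2": 1, "task3": 2}

# Priority order used by the B-method (high to low): task 1, task 2, task 3.
PRIORITY = {"task1": 3, "task2": 2, "task3": 1}

def delay_cloud(delay_car: float) -> float:
    # Cloud execution is assumed to take half the time of the in-car device.
    return 0.5 * delay_car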
The simulation results show that neither method can schedule all tasks of the application on time when the cost exceeds the capabilities of the embedded device. Nevertheless, the imprecise-computation approach (B-method) ensures that only the less important tasks remain unexecuted in that case. The example shows that with the A-method three tasks of type 1 have been lost, while the B-method only lost two tasks of type 2 and none of type 1. Obviously, the results depend on the simulations performed and on the cadence of the arriving work; however, the same behavior has been observed in all cases.
Table 3 shows the scheduling results of the tasks by type. The statistical data in Table 3 clearly show that the B-scheduling method does not cause the highest-priority tasks to miss their completion times and improves their turnaround and waiting times.
With this priority criterion, the system ensures that, if there is not enough processing power, the lower-priority tasks will run last. By analyzing the internet coverage and the bandwidth available to the device, their execution can be moved to the cloud, where they are processed in parallel with the work running on the device while still offering their additional service. In addition, some kinds of tasks (e.g., task 3, image analysis) may execute faster in the cloud, where they can take advantage of powerful computing resources and are not subject to restrictions of power consumption or silicon size.
The simulation results for throughput and utilization of computing resources are shown in Table 4. Although the utilization of the embedded system in the car is the same for both scheduling algorithms, under the same environmental conditions the B-method makes greater use of cloud resources.
Conclusions and Future Work
The future is coming. The number and variety of applications for embedded systems and mobile devices are growing, and it is becoming increasingly necessary to provide methods and higher performance to execute applications with real-time constraints on these devices. Many of these applications are characterized by intensive signal processing needs, resulting from sensing the environment in which they run in order to provide services to the user. Mobile Cloud Computing is the key paradigm that provides the necessary processing power. It refers to an infrastructure in which both data storage and data processing happen outside the embedded system or mobile device. Therefore, cloud-based mobile apps can scale beyond the capabilities of any embedded or smartphone system.
One of the biggest challenges of this paradigm is the integration between the two infrastructures, that is to say, scheduling processes for execution while considering the many aspects involved, especially when tasks have real-time constraints and cloud resources are accessible through the public communication infrastructure.
In this paper, we have presented a solution to this challenge that uses computational techniques to determine the most appropriate scheduling between the local device and the cloud. To do this, a computational model based on the imprecise computation method is proposed that provides flexibility for running applications on embedded systems. A method to estimate the delay induced by the network has been designed; it is used to predict the bandwidth and the delay costs associated with communication and remote processing in the cloud. Thus, this type of app has the power of a server-based computing infrastructure accessible through an embedded or mobile device, in which the extra delays associated with remote access are taken into account in the scheduling process. The model allows applications to be specified and processed as a series of tasks with timing constraints, and their execution can be prioritized based on priority parameters, so that if the computational requirements of all tasks cannot be met, the most important ones are satisfied and user satisfaction is maximized.
A simple application example has been developed to show a real scenario that meets the points raised in this research. A simulation has also been carried out which shows the simplicity of the proposed model and the ease with which task scheduling can be designed. As a result, it is found that the model gives preference to meeting the time constraints of critical tasks first.
Based on the current outcomes, our future work will unfold along two directions. One is to extend the proposed model to consider more complex application scenarios, for example, applications composed of a collection of tasks with timing constraints that maintain precedence relationships and share the use of other system resources. The other direction is to go deeper into the network delay prediction issue, because this is the key aspect of scheduling decisions that take remote resources into account in an MCC context.
Table 3 :
Scheduling results of tasks.
Table 4 :
Throughput and utilization of computing resources.This vehicle does not use the Cloud Computing resources in this simulation context.However, Cloud server can be used at this time by other vehicles in the system. | 10,789.8 | 2015-06-16T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Generating individual intrinsic reward for cooperative multiagent reinforcement learning
Multiagent reinforcement learning holds considerable promise to deal with cooperative multiagent tasks. Unfortunately, the only global reward shared by all agents in the cooperative tasks may lead to the lazy agent problem. To cope with such a problem, we propose a generating individual intrinsic reward algorithm, which introduces an intrinsic reward encoder to generate an individual intrinsic reward for each agent and utilizes the hypernetworks as the decoder to help to estimate the individual action values of the decomposition methods based on the generated individual intrinsic reward. Experimental results in the StarCraft II micromanagement benchmark prove that the proposed algorithm can increase learning efficiency and improve policy performance.
Introduction
Many real-world tasks are cooperative multiagent problems, in which all agents work together to achieve a common goal, such as distributed logistics, 1 crewless aerial vehicles, 2 autonomous driving, 3 and network packet routing. 4 Multiagent reinforcement learning (MARL) holds considerable promise to deal with such tasks.
However, the sparse reward is a long-standing problem in the reinforcement learning (RL) field. Worse still, this problem is even more severe in cooperative MARL tasks, in which all agents usually share a global reward. Specifically, the sparse global reward not only reduces learning efficiency but may also lead to the lazy agent problem in the multiagent field, 5,6 which means that it is difficult for each agent to confirm its contribution to the team's success (i.e. the shared global reward). As a consequence, if an agent learns its decentralized policy based on the global reward directly, it will take the global reward originating from its teammates' behavior as its own contribution, thus encouraging its current meaningless action.
To address the lazy agent problem, decomposition methods 5,7,8 first learn a joint action value (JAV) for all agents based on the global reward and the joint experiences of all agents (i.e. joint observations and joint actions). Then, each agent learns its individual action value (IAV) implicitly from the JAV decomposition rather than from the global reward directly. Finally, the per-agent decentralized policy can be determined based on its IAVs. To this end, value-decomposition networks (VDN) 5 additively decompose the JAV into IAVs across agents, QMIX 7 replaces the additivity with monotonicity, and QTRAN 8 is free from the additivity/monotonicity structural constraints, which makes the decomposition method more general. Thereby, the IAVs are trained end-to-end with the optimization of the JAV.
Another approach to address the lazy agent problem is assigning each agent an individual reward. Reward shaping 9 aims to manually design individual reward functions for each agent. However, it needs heavy and careful manual work, and it is hard to ensure that the optimal policy will not be changed, even in the single-agent field. 10,11 Thus, a group of recent single-agent RL methods aims to learn parametrized intrinsic reward functions to replace the manually designed reward functions. [12][13][14] Inspired by this concept, learning individual intrinsic reward (LIIR) 15 learns an intrinsic reward function for each agent and then combines the intrinsic reward and the global reward to optimize the policy via the actor-critic 16,17 algorithm.
In this article, we propose a generating individual intrinsic reward (GIIR) algorithm for MARL, which constructs a connection between the learned individual intrinsic reward (IIR) and the decomposition methods. We assume that there exists an individual reward function for each agent and thus introduce an intrinsic reward encoder to generate the IIR r^i_t based on the per-agent local experience (i.e. local partial observation and local action). Then, the generated IIR is used to participate in learning the IAVs of the corresponding agent i. The key insight of our work is that, by ensuring that the goal of the IIR is to increase the expected global reward, we can ensure that the decentralized policy determined according to the IIR-based IAVs obtains greater teamwork success, that is, gains more global reward. To this end, the parameters of the intrinsic reward encoder are optimized by maximizing the expected sum of the global reward. Besides, similar to LIIR, 15 the generated IIR does not have any physical meaning and is only used to learn the IIR-based IAVs. Thus, we take the generated IIR as the input of hypernetworks 18 to output the parameters of the agent network and use the agent network to output the IAVs based on local experience. We test the performance of the proposed GIIR in the benchmark environment StarCraft multiagent challenge (SMAC), 19 and the experimental results prove that the proposed GIIR can outperform both the decomposition methods and LIIR.
Related work
The naive MARL method to address cooperative multiagent tasks is joint action learning (JAL), 20 which treats all agents as a single agent and learns a JAV based on the joint experience of all agents. However, the number of joint actions increases exponentially with the number of agents, which makes it intractable. Thus, individual learning views other agents as parts of the environment and learns an IAV based on per-agent local experience. Then, each agent performs its decentralized policy without considering other agents. However, because it ignores the strategy changes of other agents, it fails to coordinate with them efficiently. Besides, all agents often share a global reward in cooperative multiagent tasks. Under such circumstances, it is hard for each agent to confirm its contribution to the team's success. Thus, each agent may view the rewards that originated from its teammates' behavior as its own contribution, which may lead to the lazy agent problem. 5,6 One approach to cope with the lazy agent problem is the decomposition method, 5,7,8 which aims to learn the IAV implicitly from the JAV decomposition rather than from the global reward directly. Another approach is providing each agent an individual reward. Reward shaping 9 manually designs the individual reward for each agent, but it requires heavy manual work to accurately assign rewards to each agent and is difficult to handle in practice. LIIR 15 adopts the actor-critic algorithm 16,17 to address MARL tasks and learns an IIR for each agent; the actor then uses the IIR to learn each agent's policy. The emergence of individuality (EOI) 21 also gives each agent an intrinsic reward, but it aims to learn individuality based on the intrinsic reward to drive agents to behave differently. Thus, the intrinsic reward in EOI is the output of a classifier and its role is to distinguish different agents. Our work is closely related to the decomposition methods and LIIR. We learn a parameterized intrinsic reward and introduce it into the decomposition methods to help improve learning efficiency.
Our work is also related to the single-agent works about the intrinsic reward. Most works take curiosity as the intrinsic reward either to encourage the agent to explore novel states 22,23 or to encourage the agent to reduce the uncertainty in predicting the consequence of its actions. 24,25 Besides, Zheng et al. 12,14 and Bahdanau et al. 13 learn parameterized intrinsic reward to help achieve learning goals.
Background
In this work, we focus on the setting of the Decentralized Partially Observable Markov Decision Process (Dec-POMDP). 26 It models the fully cooperative MARL task as a tuple G = ⟨n, S, U, P, r, O, O, γ⟩, where n is the number of agents, s_t ∈ S is the true global state of the environment, u^i_t ∈ U is the individual action chosen by each agent, u_t ∈ U^n is the joint action of all agents, P(s_{t+1} | s_t, u_t): S × U × S → [0, 1] is the transition function that determines the next global state s_{t+1} after all agents perform the joint action, r(s_t, u_t): S × U → R is the global reward shared by all agents, and γ is the discount factor. Besides, the Dec-POMDP considers a partially observable setting, in which each agent can only access the local observation o^i ∈ O according to the observation function O(s, i).
To handle the partial observability, a common technique 27 in MARL is to apply a recurrent neural network 28 to estimate the IAV Q_i(τ_i, u_i) and the JAV Q_jt(τ, u) based on the local observation history τ_i and the local action history u_i of each agent i, where τ is the joint local observation history and u is the joint local action history of all agents.
Individual Q-learning
Individual Q-learning (IQL) views other agents as part of the environment; each agent uses the global reward to learn an IAV Q_i(o^i_t, u^i_t) based on its local experience (i.e. local observation and local action). Recent works 6,29 extend it to the deep reinforcement learning field, applying a neural network with parameters θ_i to estimate the IAV; the parameters are optimized by the TD loss of Equation (1), where θ^-_i represents the parameters of the target network, which are not optimized in Equation (1) but are updated by periodically copying the parameters θ_i of the online network.
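As a minimal sketch of the TD update just described (assuming the standard Q-learning target with frozen target parameters; this is not the authors' code), the following snippet computes the per-agent loss for a single transition.

```python
# Minimal, illustrative IQL-style TD loss for one transition: the online Q-network is
# regressed toward a target built from the shared global reward and a frozen target network.
import numpy as np

def iql_td_loss(q_online, q_target, obs, act, reward, next_obs, gamma=0.99):
    """q_online / q_target: callables mapping an observation to a vector of action values."""
    td_target = reward + gamma * np.max(q_target(next_obs))   # uses periodically copied params
    td_error = td_target - q_online(obs)[act]
    return 0.5 * td_error ** 2

# Toy usage with linear Q-functions over a 3-dimensional observation and 2 actions.
W_online = np.random.randn(2, 3)
W_target = W_online.copy()                                     # periodic copy of online params
loss = iql_td_loss(lambda o: W_online @ o, lambda o: W_target @ o,
                   obs=np.ones(3), act=0, reward=1.0, next_obs=np.ones(3))
print(loss)
```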
Note that the global reward may originate from its teammates' behavior, yet each agent uses the global reward r_t to update its IAV in Equation (1), which may lead to the lazy agent problem. Besides, because the dynamics of the environment change as the teammates change their behavior strategies, the learning process of IQL may be nonstationary.
Decomposition method
Decomposition methods 5,7,8 apply the global reward to learn a JAV based on the joint experience (i.e. joint observations and joint actions), which is similar to JAL. 20 Then, they decompose the JAV into per-agent IAVs Q_i(o^i_t, u^i_t), so that each agent can perform a decentralized policy according to its IAV based on local experience. Most importantly, the IAV of each agent can be learned implicitly through end-to-end training from the JAV decomposition rather than from the global reward directly.
The key condition for decomposition methods is individual-global-max (IGM), 8 which ensures that the optimal joint action based on the JAV is equivalent to the collection of individual optimal actions of each agent based on the IAVs, by requiring that the global argmax operation on the JAV equal the collection of simple individual argmax operations on each IAV:

argmax_u Q_jt(τ, u) = ( argmax_{u_1} Q_1(τ_1, u_1), ..., argmax_{u_n} Q_n(τ_n, u_n) ).

Thus, we can determine decentralized policies based on the optimal IAVs of each agent while the goal of training is to optimize the JAV.
To satisfy IGM, VDN 5 decomposes the JAV based on additivity,

Q_jt(τ, u) = Σ_{i=1}^{n} Q_i(τ_i, u_i),   (3)

where Q_jt(τ, u) is mixed as the sum of the individual Q_i(τ_i, u_i). QMIX 7 decomposes the JAV based on monotonicity,

∂Q_jt(τ, u) / ∂Q_i(τ_i, u_i) ≥ 0, ∀i,

where Q_i(τ_i, u_i) is mixed into Q_jt(τ, u) by a mixing network rather than by direct summation as in Equation (3), and the parameters of the mixing network are generated by separate hypernetworks 18 to ensure that they are positive. Note that VDN can be regarded as a special case of QMIX in which g_i = 1.
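To make the contrast concrete, the following illustrative snippet (a sketch under our own simplifications, not the published implementations) mixes per-agent IAVs first by the VDN sum and then by a QMIX-style state-conditioned mixer whose weights are forced to be positive so that the monotonicity constraint holds.

```python
# Illustrative contrast between the two mixing rules described above.
import numpy as np

def vdn_mix(iavs):
    return np.sum(iavs)                        # Q_jt = sum_i Q_i

def qmix_style_mix(iavs, state, w_hyper, b_hyper):
    # A single-layer stand-in for the QMIX mixer: hypernetworks produce the mixer
    # parameters from the global state, and abs() enforces non-negative weights,
    # so dQ_jt/dQ_i >= 0 for every agent.
    weights = np.abs(w_hyper @ state)          # one positive weight per agent
    bias = float(b_hyper @ state)
    return float(weights @ iavs + bias)

iavs = np.array([1.2, -0.3, 0.7])              # chosen-action values of three agents
state = np.array([0.5, 1.0])                   # toy global state
w_hyper = np.random.randn(3, 2)                # hypernetwork weights (hypothetical shapes)
b_hyper = np.random.randn(2)
print(vdn_mix(iavs), qmix_style_mix(iavs, state, w_hyper, b_hyper))
```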
QTRAN 8 transforms the JAV into the sum of the IAVs and a state value, which frees it from the additivity/monotonicity structural constraints; here V_jt(τ) denotes the parameterized state value function.
Thereby, the IAVs Q_i(o^i_t, u^i_t) can be optimized end-to-end by optimizing the JAV Q_jt(o_t, u_t) in the above decomposition methods.
Learning individual intrinsic reward
LIIR 15 learns an IIR r^in_{i,t} for each agent and combines the IIR and the global reward r_t into a proxy reward r_t + λ r^in_{i,t}, from which a proxy value function V_proxy is learned for each agent, where u_{i,t} is the local action of agent i at timestep t, o_{i,t} is the local observation of agent i at timestep t, and λ is a hyperparameter that balances the extrinsic global reward and the IIR. Next, the proxy value function is used to optimize the policy of each actor (i.e. each agent), where π_i(u_{i,t} | o_{i,t}) is the policy of agent i, parameterized by the actor network, and r_t + λ r^in_{i,t} + V_proxy(o_{i,t+1}) − V_proxy(o_{i,t}) is the advantage function. 17 Thereby, LIIR builds a connection between the learned IIR and the actor-critic algorithm to address the lazy agent problem.
Method
In this section, we propose a GIIR algorithm, which constructs a connection between the IIR and the decomposition methods in the cooperative MARL field.
The main idea is that, by being optimized to maximize the expected global reward, the learned IIR can guide each agent toward actions that obtain a greater global reward through its participation in the estimation of the IAVs.
We assume that there exists an IIR function R^i_t(s_t, u^i_t) for each agent. Hence, we introduce an intrinsic reward encoder E(r^i | τ_i; θ_r) with parameters θ_r to generate the IIR. Note that the IIR function R^i_t(s_t, u^i_t) gives the reward r^i_t for performing action u^i_t in state s_t, so it should be based on the state and the action. Because each agent can only access its partial local observation in the Dec-POMDP setting, we apply a gated recurrent unit (GRU) 30 to process the local observation history τ_i in the intrinsic reward encoder, which is a standard technique for handling partial observability. 27 Hence, the s_t of the IIR function is replaced by the local observation history τ_i. Besides, to perform decentralized policies during execution time, we only take the local observation history as the input of the intrinsic reward encoder but generate M IIR distributions for the M actions (i.e. an M-dimensional mean v = {v_1, v_2, ..., v_M} and variance σ² = {σ²_1, σ²_2, ..., σ²_M}), where M is the size of the individual action space. In other words, the intrinsic reward encoder generates a single intrinsic reward distribution for each action. Then, we apply the reparameterization trick 31 to sample an M-dimensional IIR from the M IIR distributions. Specifically, as shown in Figure 1(b), we sample a noise variable ε from the normal distribution N(0, 1) and then compute the generated IIR for each action as r^{i,j}_t = v_j + σ_j · ε, where j ∈ {1, 2, ..., M} indexes the actions in the action space and i ∈ {1, 2, ..., N} is the agent ID.
Finally, all generated one-dimensional r^{i,j}_t are stacked into an M-dimensional r^i_t. By means of the reparameterization trick, the parameters θ_r can be trained end-to-end, as shown in Figure 1(a).
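A minimal sketch of this sampling step follows, with hypothetical values and assuming a Gaussian per-action distribution exactly as described above; the reparameterized form mean + std · noise keeps the sample differentiable with respect to the encoder outputs, which is what allows end-to-end training of θ_r.

```python
# Illustrative reparameterized sampling of per-action intrinsic rewards (not the paper's code).
import numpy as np

def sample_iir(v, var, rng):
    sigma = np.sqrt(var)                   # per-action standard deviations
    eps = rng.standard_normal(v.shape)     # noise drawn from N(0, 1)
    return v + sigma * eps                 # one intrinsic reward per action

rng = np.random.default_rng(0)
M = 5                                      # size of the individual action space
v = rng.normal(size=M)                     # encoder mean output (hypothetical values)
var = rng.uniform(0.1, 1.0, size=M)        # encoder variance output (hypothetical values)
r_i_t = sample_iir(v, var, rng)            # M-dimensional IIR r^i_t for agent i
print(r_i_t)
```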
Similar to LIIR, 15 the generated IIR r^i_t does not have any physical meaning. To apply r^i_t to guide the estimation of the IAV, we introduce hypernetworks 18 with parameters θ_h as the intrinsic reward decoder, which takes r^i_t as input and outputs the parameters θ_a of the agent network. Then, the agent network can estimate the IAV Q_i(τ_i, u_i; θ_i) based on the local observation history, where θ_i = (θ_a, θ_h, θ_r).
Next, the IAVs of all agents are mixed into a JAV Q_jt(τ, u) by a mixing network with parameters θ_m,

Q_jt(τ, u; θ_m, θ_i) = MIX(Q_1(τ_1, u_1; θ_1), ..., Q_n(τ_n, u_n; θ_n)),

which can be abbreviated as MIX(Q_i(τ_i, u_i; θ_i)). Note that, to speed up training by sharing parameters, all agents use the same agent network but with different inputs. 6 Besides, we use the same mixing network as in QMIX 7 in this work. Specifically, the parameters of the mixing network are output by hypernetworks based on the global state s_t, and the parameters are restricted to be positive to ensure monotonicity.
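The decoder idea can be illustrated with the following toy snippet; the shapes and names are our own assumptions, not the paper's architecture. A linear hypernetwork maps the M-dimensional IIR to the weights of a small agent network, which then maps an embedding of the local observation history to one IAV per action.

```python
# Hypothetical sketch of the intrinsic-reward decoder described above: a hypernetwork takes
# the generated IIR and outputs the parameters of a small linear "agent network" that produces
# the per-action IAVs from an observation-history embedding. Shapes are illustrative.
import numpy as np

def hypernet_decode(r_iir, W_h, b_h, obs_embed, n_actions):
    flat = W_h @ r_iir + b_h                           # hypernetwork output (theta_a, flattened)
    W_agent = flat.reshape(n_actions, obs_embed.size)  # parameters of the agent network
    return W_agent @ obs_embed                         # IAVs Q_i(tau_i, .) for all actions

n_actions, embed_dim = 5, 8
rng = np.random.default_rng(1)
W_h = rng.normal(size=(n_actions * embed_dim, n_actions))  # hypernetwork weights (theta_h)
b_h = rng.normal(size=n_actions * embed_dim)
iavs = hypernet_decode(r_iir=rng.normal(size=n_actions), W_h=W_h, b_h=b_h,
                       obs_embed=rng.normal(size=embed_dim), n_actions=n_actions)
print(iavs)
```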
Then, we can use the global reward r_t to optimize the JAV with a TD loss, where θ^-_m and θ^-_i are the parameters of the target networks, analogous to Equation (1).
Since the IAVs of the decomposition methods can reflect each agent's contribution to the global reward to some extent, 5 we can apply the IAVs to train each agent's IIR r^i_t, where r^i_t is parameterized by θ_r because it is generated by the intrinsic reward encoder, and N is the number of agents. Note that only the parameters θ_r of the generated IIR r^i_t are optimized in Equation (11). In other words, Q_i(τ_i, u_i; θ^-_i) and Q_i(τ_i, u_i; θ_i) are treated as constants, and their parameters are not optimized with L_r(θ_r). Finally, the gradient of the overall loss function is as follows:

∇_{θ_m, θ_i} L(θ_m, θ_i) = ∇_{θ_m, θ_i} L_tot(θ_m, θ_i) + ∇_{θ_r} L_r(θ_r).   (12)
Experiment
In this section, we conduct the experiments on a benchmark named SMAC 19 to show the performance improvement of the proposed GIIR.
Environmental setup
SMAC provides a set of fully cooperative multiagent scenarios, which focus on the decentralized micromanagement of the real-time strategy game StarCraft II. Namely, all units (ally units) are controlled by MARL agents to defeat another group of units (enemy units). The enemy units are controlled by the built-in game AI with difficulty from very easy to cheat insane, and we set the difficulty as very difficult in this work. Besides, SMAC considers the partial observability setting by introducing the sight range, in which each unit can only access the local observation with the field of the view.
To evaluate the policy performance of the proposed GIIR, we conduct comparison experiments in three homogeneous scenarios (5m_vs_6m, 8m_vs_9m, and 10m_vs_11m) and three heterogeneous scenarios (1c3s5z, MMM, and MMM2). The list of scenarios considered in our experiments is presented in Table 1, and screenshots of the six scenarios are shown in the Online Appendix. We evaluate the proposed method and the comparison methods across 10 independent runs with different seeds and run 20 independent test episodes every 20,000 training timesteps to calculate the percentage of winning episodes as the win rate, where winning episodes are those in which all enemy units are defeated within a time limit. All methods are evaluated after 10 million training timesteps, with the exception of MMM, which is an easy scenario and is only evaluated after 6 million training timesteps. More experimental details are presented in the Online Appendix.
Comparison results
We compare the proposed GIIR with decomposition methods (i.e. VDN, QMIX, and QTRAN) and LIIR. Besides, we also compare it with the IQL, which learns the IAV based on the global reward directly.
The learning curves are shown in Figure 2 and the statistical results of the experiments are shown in Table 2. Because it ignores the coordination among agents and employs the global reward directly as the individual reward to learn per-agent IAVs, the performance of IQL is almost the worst in all scenarios. Decomposition methods learn the IAVs implicitly through end-to-end training from the JAV decomposition, and their performance improves considerably; the performance of QMIX is slightly better than that of VDN, while QTRAN indeed performs poorly in most SMAC scenarios, consistent with Wen et al. 32 The experiments of Du et al. 15 have shown that LIIR can outperform decomposition methods in many easy scenarios, which is similar to Figure 2(d) and (e). This proves that learning an IIR for each agent can help to improve performance. However, when the scenarios become more difficult, the performance of LIIR is poor and even worse than that of the decomposition methods, as shown in Figure 2(a) to (c). Compared with that, GIIR combines the advantages of the decomposition methods and LIIR by introducing the IIR into the decomposition methods. The experimental results show that it obtains better performance than all comparison methods in almost all cases; its performance (89±8%) is only slightly worse than that of QMIX (88±9%) in the scenario 1c3s5z. Most importantly, GIIR can
Conclusion
In this article, we propose the GIIR algorithm, which constructs a connection between the decomposition methods and the learned IIR to address the lazy agent problem. The proposed GIIR generates an IIR for each agent to help confirm each agent's contribution to the global success. Besides, the generated IIR is optimized by maximizing the expected global reward, so it can help to obtain greater teamwork success. Our experimental results in SMAC prove that GIIR improves the final performance over both the decomposition methods and LIIR in cooperative tasks.
In future work, we will focus on studying other methods to better utilize the generated intrinsic reward to estimate the IAVs. Furthermore, we will conduct additional experiments on other SMAC scenarios with a larger number and greater diversity of units. Moreover, we will apply the proposed method to actual multiagent system scenarios, such as the logistics robot task, and conflict resolution in the air traffic control task.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Supplemental material
Supplemental material for this article is available online. | 5,182.8 | 2021-09-01T00:00:00.000 | [
"Computer Science"
] |
Multidimensionality of Spirituality: A Qualitative Study among Secular Individuals
: This study examines the multidimensionality of spirituality by comparing the applicability of two models—the five-dimensional model of religiosity by Huber that we have extended with a sixth dimension of ethics and the three-dimensional spirituality model by Bucher. This qualitative study applied a semi-structured interview guideline of spirituality to a stratified sample of N = 48 secular individuals in Switzerland. To test these two models, frequency, valence, and contingency analysis of Mayring’s qualitative content analysis were used. It could be shown that Bucher’s three-dimensional model covers only about half of the spirituality codes in the interviews; it is especially applicable for implicit and salient spiritual aspects in general, as well as for spiritual experience in specific. In contrast, the extended six-dimensional model by Huber could be applied to almost all of the spirituality-relevant codes. Therefore, in principle, the scope of this six-dimensional model can be expanded to spirituality. The results are discussed in the context of future development of a multidimensional spirituality scale that is based on Huber’s Centrality of Religiosity by extending the religiosity concept to spirituality without mutually excluding these concepts from each other.
Introduction
The term 'spirituality' and individuals' self-description as 'spiritual' have found growing popularity in public discourse during the last few years, inspiring the social-scientific research of religion and religiosity accordingly. Especially in Western democratic countries, individuals increasingly detach themselves from institutional forms of religion, while continuing to call themselves spiritual (Hood et al. 2009;Houtman and Aupers 2007). In social-scientific research, this often resulted in the tendency to contrast religiosity as institutional, rigid, and negatively connoted with spirituality as individual, flexible, and positively connoted or even to a replacement of religiosity with spirituality (see Zinnbauer et al. 1997;Zinnbauer and Pargament 2005). Later approaches conceptualize religiosity and spirituality as overlapping constructs without pitting the constructs against each other (Yamane 1998;Zinnbauer et al. 1997). This is especially due to the scientific result that most individuals identify themselves as "religious and spiritual" (Streib and Hood 2016). Some social scientists consider the overlap to be so large that they reject spirituality as a separate concept and continue working with the term religiosity (Streib and Hood 2011). A critical objection against this latter approach is that a growing proportion of individuals consider themselves to be 'spiritual, but not religious' or to be 'more spiritual than religious' (Carey 2018;Stausberg 2015;Tong and Yang 2018). Such a self-description can express a protest against or a rejection of a religious tradition and can, therefore, take on a rebellious nature (Chandler 2008;Hackbarth-Johnson and Rötting 2019;Hood 2003;Vincett and Woodhead 2016).
Spirituality has been a much-studied phenomenon in the field of psychology and sociology of religion over the past two decades, however it lacks conceptual, structural, and functional clarity and, for these reasons, is a rather uncomfortable subject for social scientists (Johnstone et al. 2012;Zinnbauer and Pargament 2005). Thus, the conceptualization of spirituality is an open and frequently discussed question. This does not only refer to the relationship between religiosity and spirituality, but also to the inner structure of spirituality and its psychological functions. While some researchers refer to spirituality as a personality dimension (Emmons 1999;Piedmont 1999), others consider spirituality to be an intelligence dimension (King and DeCicco 2009) or a more general cognitive orientation (MacDonald 2000), and yet others conceptualize spirituality as an attitude towards purpose and meaning of life (see Koenig 2008). In recent years, emic approaches have been attracting scientific attention in the way that spirituality is defined by study participants (Altmeyer et al. 2015;Berghuijs et al. 2013;Eisenmann et al. 2016;Keller et al. 2013;Steensland et al. 2018) with inconsistent results depending on the religious-cultural background of the respondents that urge for meta-analytic studies.
From our psychological perspective, everyday lived spirituality, especially of those individuals who define themselves as non-religious or atheist, is more significant than querying subjective semantics of spirituality. Therefore, our study follows a different approach as it wants to shed light on the structure of spirituality by applying two already existing multidimensional theories of religiosity (Huber and Huber 2012) and spirituality (Bucher 2014) to in-depth interviews conducted among a sample of non-religious or atheist adults. According to the best of our knowledge, neither model has been applied to qualitative interview data yet. As our sample of non-religious individuals goes far beyond the definition of spirituality as a part of religiosity (e.g., Pargament 1999;Wulff 1997), we agree with the majority of current research that spirituality is the broader concept that can, but does not necessarily have to include religious elements (e.g., James 1902;Stifoss-Hansen 1999; see Zinnbauer and Pargament 2005).
An additional challenge in research on spirituality and its relationship to religiosity is that both can take on explicit or implicit forms (Luckmann 1967;Schnell 2009). Vincett and Woodhead (2016) outline that spirituality takes on social forms that make a spiritual lifestyle more implicit since the 2010s than ever before. They state that spirituality "becomes increasingly mainstream, [ . . . ] fused with many manifestations of popular culture, and established in a range of social spheres, including healthcare and education" (p. 342). Therefore, individuals are becoming increasingly unaware of spirituality in their own lives, such as mind/body/spirit practices and underlying beliefs (e.g., alternative healing; see Chandler 2008). Consequentially, we combine an emic (explicit) and an etic (implicit) approach when defining spirituality and religiosity: If an individual refers to a religious tradition and is at the same time aware of the religious-traditional connotation, we define the utterance as explicitly religious. If the person is not aware of his/her reference to a religious tradition or denies such a reference (e.g., 'This has nothing to do with religion'), we define the utterance as implicitly religious. Explicit spirituality is again emically defined, whereas implicit spirituality is etically defined by using Tillich's (1957) concept of ultimate concern, which is reflected in Yinger's (1970) functional definition of religion and in Bailey's (2001) definition of implicit religion. In differentiation to proximate concerns that focus on our close demands and goals in life (e.g., having children, making a career), ultimate concerns include a spiritual component of self-transcendence: either towards the current status of the own self (e.g., expansion of self), towards the material world (e.g., emphasizing connectedness to humans, animals, or nature), or towards an immaterial sphere (e.g., emphasizing connectedness to a transcendent being). Working with such a broad definition of spirituality allows us to classify individuals who are religiously institutionalized as well as individuals who clearly distance themselves from religious institutions into multiple dimensions of spirituality rather than pitting religiosity and spirituality against each other (see Zinnbauer and Pargament 2005). At the same time, we react to the criticism against the concept of implicit religiosity/spirituality (e.g., Pollack and Pickel 2007) by introducing the criterion of ultimate concerns that resists the tendency of implicit spirituality to bogart all life aspects as spiritual.
First Model: Huber's Centrality of Religiosity-Applicable to Spirituality?
Our approach to spirituality as encompassing religiosity should be reflected in the application and a possible extension of a significant multidimensional model of religiosity to spirituality. Religiosity has been conceptualized and operationalized as a multidimensional construct in the social sciences since the 1960s (Grom 2009; Hill and Hood 1999). Following this line of research, Huber (2003, 2009; Huber and Huber 2012) developed the model of religiosity from which the internationally accepted and worldwide used 'Centrality of Religiosity Scale (CRS)' was derived 1 . This interdisciplinary model combines two highly significant theories of religiosity. First, the sociological theory of Stark and Glock (1968), which distinguishes five relatively autonomous dimensions of religiosity: public practice (e.g., church service), private practice (e.g., prayer), intellect (e.g., religious knowledge), experience (e.g., feeling God's presence), and ideology (e.g., belief in God). As stressed by Huber (2003), these dimensions simultaneously encompass the whole spectrum of basic psychological modes (cognition, beliefs, perceptions, emotions, and behaviors), which might explain why the model of Stark and Glock proved its intercultural and interreligious validity. Secondly, Huber uses the psychological theory of intrinsic religiosity (to be religious for its own sake) and extrinsic religiosity (to be religious for social or personal comfort) by Allport and Ross (1967). In Huber's model, the five dimensions of Stark and Glock are conceptualized according to their intrinsic importance and salience in the individual's life (low, medium, high); the sum of this intrinsic importance across all five dimensions is called the centrality of religiosity in the individual's personality (not religious, religious, highly religious). Figure 1 summarizes Huber's model.
In order to apply Huber's approach to spirituality, we want to add the ethical/consequential dimension to this model. This dimension was part of the multidimensional model by Glock (1962). However, it was later excluded by Stark and Glock (1968) and Huber (2003). Which values and ethical beliefs people who define themselves as 'spiritual, but not religious' or 'rather spiritual than religious' consider important in their lives is controversially discussed (Chandler 2008;Comte-Sponville 2009). Some approaches regard an ethical dimension to be the center of the spirituality concept (e.g., Carey 2018;Eisenmann et al. 2016), while others locate it at the margins of it (e.g., Berghuijs et al. 2013;Steensland et al. 2018). We want to raise the question if non-religious or atheist individuals report an orientation toward values in their everyday lives that can be explicitly reported or etically defined as spiritual.
Second Model: Bucher's Three-Dimensional Model of Spirituality
Inspired by conceptual considerations (Hill et al. 2000) and analyses of the importance of spirituality in the general population (Skrzypińska 2014), multidimensional concepts of spirituality are increasingly introduced into the discussion (e.g., Büssing 2017; Johnstone et al. 2012; Zwingmann and Klein 2012). Bucher (2014) proposed one of the most relevant multidimensional concepts of spirituality in recent years. Bucher does not only use a comprehensive range of relevant qualitative empirical results on spiritual experience to induce his model but also claims that his model is particularly adaptable to qualitative research on spirituality, wherefore it plays a central role in the present study. The core of spirituality is connectedness, which is the feeling of unity with oneself, with the immanent environment, or with a transcendent sphere. Therefore, Bucher defines three dimensions of spirituality:
• connectedness to transcendence (vertical dimension; subdimensions: to a higher spiritual being/God),
• connectedness to immanence (horizontal dimension; subdimensions: to the universe, nature, social environment, greater whole), and
• connectedness to self (depth dimension; subdimensions: connectedness to self, body).
The benefit of this broad definition of spirituality is the incorporation of more secular forms of spirituality (see Schnell 2009), i.e., the horizontal and the depth dimensions of connectedness. Therefore, Bucher does not only use the prominent two dimensions of vertical and horizontal spirituality (e.g., Piedmont 2004; Steensland et al. 2018; Streib and Hood 2016) but also adds a third depth dimension. He even defines self-transcendence as the ability of humans to go beyond themselves, which is a precondition to feel connected on a horizontal and vertical dimension, and as a result to expand and actualize the own self. Figure 2 summarizes Bucher's model.
Study Aims
The present study aims at further clarification of the multidimensional construct of spirituality by applying two established models of religiosity and spirituality to qualitative interview data: the model of religiosity by Huber (2003); (Huber and Huber 2012) and the model of spirituality by Bucher (2014). Moreover, we ask if Huber's and Bucher's multidimensional models should be extended by adding additional (sub-)dimensions by induction from the empirical data.
Specifically, the research questions for the extended model by Huber are the following:
1. Do the six dimensions of religiosity (public and private practice, intellect, experience, ethics, and ideology) cover the utterances and the personal significance of spirituality in individuals' lives?
2. Can these six dimensions of religiosity and/or their subdimensions be extended by inductions from the empirical material?
Analogously, we raise the following research questions regarding Bucher's model:
3. Do the three dimensions of spirituality by Bucher (vertical, horizontal, and depth dimension) cover the utterances and the personal significance of spirituality in individuals' lives?
4. Can these three dimensions of spirituality and/or their subdimensions be extended by inductions from the empirical material?
Beyond the criteria to test the applicability of both models to spirituality in its life-related aspects, the criterion of parsimony is of importance, in which Bucher's model has the upper hand, as it is the less complex model with only three instead of six dimensions.
Procedure and Sample
With specific attention to the finding that spirituality is becoming increasingly implicit (Luckmann 1967; Schnell 2009; Vincett and Woodhead 2016), we focus on a sample of seculars, i.e., individuals who consider themselves non-religious or atheists, and apply an interview to them that asks about spirituality directly or indirectly. The data presented here are part of a mixed-method research project on seculars in Switzerland 2 . It uses the quantitative data of the Religion Monitor from 2013, which were collected among a representative sample (N = 1003) covering all three language areas of Switzerland (German, French, Italian). In the telephone interviews conducted in November and December 2012 using a standardized questionnaire on religiosity (Pickel 2013), n = 341 participants defined themselves as non-religious or atheist and were categorized as 'seculars'; nevertheless, they can be religiously affiliated or unaffiliated. Among them, n = 113 agreed to an additional face-to-face interview, and from those a stratified sample of n = 83 participants, stratified according to age, gender, language, and religious affiliation, was randomly drawn.
The current study is based on the German-speaking subsample (n = 48). Thereby, we simplified the analyses by keeping the variables of language and culture constant. The following socio-demographic and socio-religious data of our subsample is derived from the earlier questionnaire study. The age of the sample ranges from 18 to 83 years old (M = 51.46, SD = 14.32), a majority of 72.9% are male. Regarding marital status, half of the sample reports to be married or to live with a partner, 20.8% do not, while the rest left the question unanswered. Our sample is highly educated: 64.6% have a university or college degree, 12.5% have an A-level, and only 22.9% hold a secondary or upper secondary education as their highest qualification. Moreover, the self-assessed economic situation is above the scale average (scale ranges from 1-4, higher scores indicate higher economic status) with M = 3.19 (SD = 0.53). Regarding socio-religious variables, a majority of 52.1% report that they are religiously non-affiliated. 43.8% were religiously socialized during their childhood, 31.3% report that they were at least partly religiously socialized, and the rest was non-religiously socialized. In summary: our sample reflects a typical European 'secular' sample as it is predominantly male, highly educated, and of high economic status (Keller et al. 2013). Additionally, on average the sample scores higher on spirituality (M = 2.13, SD = 1.20) than on religiosity (M = 1.63, SD = 0.96; both answer scales range from 1-5) and can consequentially be labeled as 'more spiritual than religious'.
A semi-structured interview guideline was applied to this sample by trained interviewers that asked about the following aspects related to three main topics of the interviews: These in-depth interviews lasted between 28 and 233 min (average: 77 min).
Data Analysis
According to Barton and Lazarsfeld (1979), the main aim of qualitative research is a classification or dimensioning of empirical material related to a certain phenomenon. As our goal is to examine multidimensional approaches of spirituality, we apply Mayring's (2014) qualitative content analysis to the transcribed interviews with the support of MAXQDA software. Content analysis is defined as a systematic (i.e., according to explicit rules), theory-based analysis of fixed communication material that is able to reveal not only manifest but also latent semantics. A system of categories is central for qualitative content analysis that assigns categories to text passages in an interpretative way. Therefore, categories must be precisely defined, transformed into coding rules, explained by anchor examples, and differentiated from similar categories. These categories are deduced from one or several theoretical models, which guide the research questions and the analysis, but the whole procedure allows explicitly inductive category formations. This is of significance since a qualitative-inductive approach is appreciated as an outstanding opportunity for research on the conceptualization of spirituality (Hood et al. 2009; Miller and Worthington 2013; Streib and Hood 2011). Mayring defines three sub-techniques of qualitative content analysis:
• frequency analysis ("to count certain elements in the material and compare them in their frequency with the occurrence of other elements", (Mayring 2014, p. 22));
• valence analysis ("procedures which accord a value to certain textual components on an assessment scale of two or more gradations", ibid., p. 26); and
• contingency analysis (to examine whether particular categories occur with particular frequency in the same context with the target to "extract from the material a structure of text elements associated with one another", ibid., p. 27).
In our study, the model of religiosity by Huber (Huber and Huber 2012) and the model of spirituality by Bucher (2014) are the theoretical foundation from which the analytical categories are deduced. Moreover, our research aims ask for the extension of the models, which allows for the induction of additional dimensions that might not be covered by the models yet.
Every meaningful proposition in the interviews that meets our previously introduced definition of explicit and implicit spirituality and religiosity is coded in a four-step process:
1. A code of one of the six dimensions by Huber is applied; if not applicable, a new dimension should be formed; the numbers of the given codes are counted and compared with each other (frequency analysis);
2. Every code derived from Huber is given a code of the individual importance/salience (not important - ambivalent - important: valence analysis; related to the six dimensions: contingency analysis);
3. A code of one of the three dimensions by Bucher is applied; if not applicable, a new dimension should be formed; the numbers of the given codes are counted and compared with each other (frequency analysis);
4. In order to compare both tested models, we examine whether the dimensions of Huber and the dimensions of Bucher are contingent, i.e., whether they are connected with each other (contingency analysis).
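For illustration only (the study itself used MAXQDA rather than custom code), the following hypothetical snippet shows how the frequency, valence, and contingency analyses of this four-step process reduce to simple cross-tabulations once each coded proposition carries its Huber dimension, its importance rating, and, where applicable, its Bucher dimension. All field names and example codes are our own.

```python
# Illustrative cross-tabulation of coded propositions (not the study's software).
from collections import Counter

coded_units = [  # hypothetical coded propositions
    {"huber": "experience", "importance": "high",   "bucher": "vertical"},
    {"huber": "ethics",     "importance": "high",   "bucher": None},
    {"huber": "ideology",   "importance": "no/low", "bucher": "horizontal"},
]

frequency = Counter(u["huber"] for u in coded_units)                    # frequency analysis
valence = Counter((u["huber"], u["importance"]) for u in coded_units)   # valence/contingency analysis
huber_x_bucher = Counter((u["huber"], u["bucher"]) for u in coded_units
                         if u["bucher"] is not None)                    # comparison of both models
print(frequency, valence, huber_x_bucher)
```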
The definitions of our deduced categories of analysis, including coding rules and anchor examples, are displayed in the comprehensive coding guideline in Appendix A.
Every step of the coding procedure (for an overview of the step-by-step model see (Mayring 2014, p. 61)) was discussed in an interdisciplinary research team consisting of psychologists and sociologists of religion, religious scholars, and theologians. These team discussions of text parts and whole cases concluded after each round with (preliminary) category definitions, rules of differentiation to similar categories, and anchor examples. This intersubjective approach did not only solve discrepancies of categorization but also incorporated definitions of categories and related coding rules that were induced from the empirical material into the coding guideline. After a first trial whereby 10 cases were encoded, the coding guidelines were revised for the first time, applied again to the previously coded cases and new cases, and whenever necessary, revised by the researcher team again. During the whole coding procedure, we went through this back-and-forth process four times. The induced categories, called generic codes, can be also found in Appendix A.
Testing Huber's Model: Frequency Analysis
All K = 1625 given codes of implicit or explicit spirituality and religiosity were completely absorbed by the six dimensions by Huber (2003). Except for religious experience, subdimensions were formed within every dimension, which are displayed with their frequencies in Table 1. The dimension of 'public practices' (covering 11.69% of the total codes) includes rites of passage, religious service, holiday celebrations, and public meditation. The rest category 'others' contains all public practices that were reported less than 5% within the public practice dimension (those were occult/esoteric practices, art, donating, pilgrimage, prayer, exorcism, confession, alternative healing, blessing rituals).
Similarly, 'private practices' (covering 9.60% of the total codes) can most often be observed in prayer, meditation, and alternative healing practices. Private practices under the 5% cut-off value within this dimension were grouped into 'others' (consisting of wishing, mourning rituals, spiritual care, visiting energy places, occult/esoteric practices, art, mandala painting, make the cross sign, lighting a candle, actions in nature, mind-spirit training, everyday rituals).
The dimension 'intellect' reflects almost one-fifth of the given codes and its subdimensions primarily cover specific information channels of spirituality (e.g., discussions, books/TV, formal education), but also unspecific general interest in religion(s). Additionally, three subdimensions could be uncovered that represent central cognitive challenges with a spiritual connotation, i.e., questioning the meaning of life (without a definite answer), theodicy (the question of why there is evil in the world), and reflexivity (reasoning about own spiritual convictions with regard to their plausibility and logical consistency, see (Huber 2009)). The subdimension 'others' reflects the codes that were reported less than 5% within the intellectual dimension, which were public debates and questioning of alternative healing practices (regardless of their practice).
The dimension of spiritual 'experience' covers about 8% of the total codes and was not divided into subdimensions as such experiences are independent of their post-hoc interpretations since they share an identical structure (Hood 2003).
In the 'ethical dimension', which consists of the smallest share of total codes with 4.92%, the four most frequently reported subdimensions were 'love your neighbor', tolerance/sympathy, sustainability towards nature, and truthfulness/honesty. Other ethics cover less than 5% of the codes within this dimension and were grouped into the subdimension 'others' (consisting of thankfulness, altruism, justice, responsibility, loyalty/trust).
The most comprehensive dimension is 'ideology', covering almost half of the given codes. It consists of 19 subdimensions, which reflect attitudes and beliefs related to
• all other five dimensions (belief in private practice such as meditation; belief in public practice such as religious service; attitude towards intellectual approaches such as formal education; attitude towards ethics; the desire for experience, see Appendix A);
• all big world religions (Christianity, Buddhism, Hinduism, Islam, Judaism, religions in general); and
• central topoi (meaning of life, life after death, destiny, higher spiritual being, human being/teleology, society, natural environment, religioid ideologies such as esotericism).
In Table 1, the most commonly reported ideology subdimensions are displayed, while the subdimensions that were reported less than 5% in this dimension are summarized in the subdimension 'others'.
Testing Huber's Model: Valence and Contingency Analysis
After this frequency analysis, the text propositions on spirituality were coded a second time in a valence analysis according to their individual importance/salience following Huber and Huber (2012). Only four text propositions (0.25%) could not be coded as no/low, medium, or high importance according to the coding guideline (Appendix A), which was due to missing information in the interview data regarding the participant's individual motivation, importance, or evaluation (e.g., "I am baptized"; "he told us something about their religious celebrations"). Figure 3 represents the results of this analysis, which tests whether the dimensions of spirituality from the first frequency analysis are contingent with the individual importance of the single dimensions. While participants report relatively high importance when they address 'ethics' (86.25%) and 'experience' (74.80%), the dimensions of 'private practice' and 'intellect' are still evaluated as important but with a higher share of medium importance (28.21% and 41.59%, respectively). While the weight shifts to medium importance in the 'public practice' dimension (46.32%), the evaluations of no/low importance are highest in the 'ideology' dimension (43.86%). Although the ideology dimension is reported with the highest quantity in the earlier frequency analysis, it is of relatively low importance for the participants. On the contrary, experience and especially ethics, which covered the lowest share of codes in the frequency analysis, show the highest importance rates.
Figure 3. Results of the contingency analysis, displaying relative frequencies of the three levels of importance to the dimensions. Note. Public practice k = 2 n.a., intellect k = 1 n.a., ideology k = 1 n.a. Ethics are reported with a no/low importance in only 1.25% of the codes.
The extension of Huber's model of religiosity to spirituality by adding the sixth dimension of spiritual ethics is displayed in Figure 4.
Testing Bucher's Model: Frequency and Contingency Analysis
Table 2 represents the results of the frequency analysis of the dimensions and subdimensions of spirituality according to Bucher's (2014) model. Although we added three subdimensions by inductive category building (Mayring 2014; see Appendix A), which are connectedness to an immaterial sphere (preliminarily categorized to vertical connectedness), to all living entities, and to decedents (both categorized to horizontal connectedness), the full model could only cover 48% of the codes (K = 1625). 'Vertical connectedness' makes up the greatest share with more than half of the connectedness codes, including connectedness to a higher spiritual being/God as the most often mentioned subdimension of connectedness. 'Horizontal connectedness' makes up a share of one-third of the connectedness codes, while the 'depth dimension of connectedness' shows the smallest frequency with only about 14% of the connectedness codes. The extension of Bucher's model of spirituality by adding three subdimensions is displayed in Figure 5.
As Bucher's model can only capture half of the spirituality codes that were given to our previously introduced definition of explicit and implicit spirituality (including religiosity), we ask in a last step whether Bucher's dimensions are contingent to some of Huber's dimensions and contingent to spiritual/religious codes as well as implicit/explicit codes. Table 3 shows the results of this contingency analysis. The central column of this symmetric table displays the number of all given codes. While the left part summarizes the absolute number of the given codes related to Huber's model as well as the implicit/explicit religious/spiritual codes, the right side of Table 3 displays the number of given codes to Bucher's model related to the codes on the left side. The right side of the table is the focus of this contingency analysis, and we start to explain it from rear to front. The last column shows that 'public practice' (22.63%), 'intellect' (23.49%), and 'ideology' (48.22%) are reflected in less than 49% of Bucher's three dimensions. When we look at the second column from the front, public practice, intellect, and ideology are at the same time the three dimensions that are strongly affiliated with religious traditions by the participants.
The other three dimensions, private practice, experience, and ethics, are not as strongly linked to religious traditions by the participants and are reflected by Bucher's model in more than 60% of the codes (last column). Interestingly, the experiential dimension is almost completely covered by Bucher's connectedness dimensions. This tendency of the connectedness model to reflect spirituality codes better than religiosity codes is supported by the second-last column, which displays that the codes covered by Bucher's model encompass more than 84% of the spirituality codes. The same tendency intensifies when it comes to implicit spirituality (see the third-last column): all these codes are covered by Bucher's three dimensions. There is, however, one minor exception (private practice-religious-explicit, 81.01%). The fourth-last column consistently shows that Bucher's model is more applicable the higher the individual importance is: it does not capture rejecting answers by the participants as well as Huber's model does.
Discussion
The current study was conducted with the aim of examining the multidimensionality of spirituality by comparing the applicability of two different models of religiosity and spirituality, the model by Huber (2003; Huber and Huber 2012) and the model by Bucher (2014), to comprehensive qualitative interview data. According to leading researchers in the field, qualitative methods are the method of choice when it comes to examining multidimensionality (Barton and Lazarsfeld 1979), especially of spirituality (e.g., Hood et al. 2009; Miller and Worthington 2013). In our study, the semi-structured interview guideline used strongly focused on spirituality and was applied to a stratified sample of N = 48 secular individuals in Switzerland. In order to test the two models, frequency, valence, and contingency analyses following Mayring's (2014) qualitative content analysis were conducted. Besides the criterion of applicability, the criterion of parsimony was also taken into account, in which the three-dimensional model by Bucher has an advantage compared to the six-dimensional approach of Huber.
Despite the criterion of parsimony, Bucher's three-dimensional model of spirituality covers only around half of the given codes in our study, whereas Huber's model extended with the ethical dimension covers almost all given codes: the six dimensions reflect the given codes completely, and the individual importance is a 99.75% match with the given codes. The four text propositions that do not show any personal valence at all reflect a methodological problem of data collection (the interviewer failed to ask the participants about the evaluation of spiritual aspects) and not a drawback of Huber's model itself.
Interestingly, within the multidimensionality of spirituality, Huber's dimensions ideology and intellect are in the foreground, which can be interpreted as a strong cognitive weight of the concept of spirituality (Koenig 2008; MacDonald 2000). On the contrary, this could also reflect a methodological shortcoming, as interviews trigger mainly cognitively and less behaviorally or emotionally laden utterances by the participants. Whether spirituality is mainly located on the cognitive level should be tested in future questionnaire or experimental studies. Relatedly, ideology was evaluated as the least important among all dimensions by our secular sample. On the other hand, ethics, which was the least frequently reported dimension, was simultaneously the dimension of the highest individual importance. This result of a low frequency and high importance at the same time could clarify prevailing contradicting views on spiritual ethics (Carey 2018; Chandler 2008; Eisenmann et al. 2016 versus Berghuijs et al. 2013; Steensland et al. 2018). Consequently, the concurrent consideration of frequency and importance, which lies at the heart of Huber's Centrality concept and scale, is of significant interest to spirituality research. Bucher's model had to be extended during inductive coding: the subdimensions of immaterial sphere (preliminarily vertical), all living entities, and decedents (both horizontal) were not well captured by the initial subdimensions that Bucher suggested (see Figure 5). Regarding the connection to an immanent sphere, which includes "the imagination of a world 'behind'" (Streib and Hood 2016, p. 12), e.g., paranormal or esoteric aspects, and alternative healing, its preliminary location on the vertical level remains questionable. Besides the more traditional utterances about the afterlife or spiritual helpers such as saints that point to a vertical connection, many participants reported a parallel dimension (e.g., residence for the dead, ghosts, angels), energies that are located in this world (e.g., healing), or experiences of synchronicity, which could be located on a horizontal level, too. Therefore, more research is necessary related to this newly formed subdimension of connectedness to a transempirical sphere, especially nowadays as mind-body practices, which are a significant part of this form of connectedness, are becoming more and more mainstream (Chandler 2008; Vincett and Woodhead 2016).
A remarkable result here is that, within Bucher's model, the greatest emphasis is on vertical connectedness, especially to a higher spiritual being/God, and on horizontal connectedness, especially to others. Hence, our data rebuts the claim that spirituality is highly focused on the self or is even an egocentric form of religiosity (e.g., Hackbarth-Johnson and Rötting 2019; see Carey 2018; Chandler 2008) and rather reflects a feeling of being connected to a divine being and fellow humans. This is supported by results from earlier studies that found that a vertical connection to a higher spiritual being remains important within the spirituality concept (Piedmont 2004; Steensland et al. 2018).
The results of the last contingency analysis that tries to relate both models with each other, revealed that Bucher's approach is especially applicable to spirituality codes that do not refer to a religious tradition. This reflects that Bucher's model latently follows the hypothesis that religion and spirituality exclude each other, which should be avoided according to later inclusive approaches (see Yamane 1998;Zinnbauer et al. 1997;Zinnbauer and Pargament 2005). Moreover, Bucher's model covers the experience dimension almost completely, which is at the same time the dimension on which Bucher heavily relied when conceptualizing his model. Surprisingly, implicit spirituality is completely reflected by Bucher's model. This finding strongly supports our initial definition of implicit spirituality by using Tillich's (1957) concept of ultimate concern (see Yinger 1970;Bailey 2001). Additionally, it shows that this three-dimensional approach of connectedness could become important in future studies, which take up highly topical research desiderata when focusing on implicit forms of spirituality (Vincett and Woodhead 2016). Finally, Bucher's approach is more applicable the higher the individual importance/salience of a dimension is, i.e., the model of connectedness is not very applicable to negative replies by participants, in contrast to Huber's model.
In conclusion, the results support that Huber's (2003; Huber and Huber 2012) multidimensional model of religiosity can be used to examine spirituality as well, which supports the approach of spirituality including religiosity (Stifoss-Hansen 1999; see Zinnbauer and Pargament 2005). The results of the present study can give a conceptual direction for spirituality (and can therefore shed light on the ongoing conceptual considerations, e.g., Hill et al. 2000; Johnstone et al. 2012; Zinnbauer and Pargament 2005), which can also lay the foundation for the construction of a multidimensional spirituality scale. Building upon our empirical results, we suggest the development of a 'Centrality of Religiosity and Spirituality Scale (CRSS)' for future studies. This could be possible by enriching the CRS with some new items.
In a further step, the here-presented variable-centered approach should be supplemented by a person-centered approach, which examines the stability of the dimensions of spirituality among a secular sample in a longitudinal design. Currently, the remaining participants of our sample are being re-interviewed with a similar interview guideline. A comprehensive analysis of these data is going to take place in 2020. The question of long-term effects of different dimensions of spirituality among secular individuals who report spirituality as relevant in their lives is highly disputed (Pollack and Rosta 2017), and it would be interesting to test the hypothesis that this is only a "step on the path between religion and non-religion" (Marshall and Olson 2018, p. 503).
Funding:
We thank the Swiss National Science Foundation (SNF) for funding this project.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Public practice "every day we had these spiritual sessions with lying down and visualizing in another world, focusing on kidney functions, listening to heart sounds"; "in common prayers: It makes me very quickly very uncomfortable"
Private practice a religious/spiritual practice in private space (not observable from outside). Is not coded if the practice takes place with or without others in public (then: public practice); is not coded when person reports attitude towards a private practice (then: ideology). If participant also reports a religious/spiritual experience, then experience is coded, too.
"well, daily prayer or something like that is unnatural to me now, what I would not do it now on my own"; "That I do that, say, with mindfulness. That is a certain kind of meditation" Intellect processes or results of intellectual activities that are related to religion/spirituality. It is not coded when the belief in these results is emphasized (then: ideology). Is not coded if participant reports an attitude towards an intellectual approach (then: ideology).
"Then the Jehovah's Witnesses come by regularly. I always discuss with them a bit"; "I read esoteric books"; "Churches were for me the most beautiful monuments in a city. That's why I like to go in there. You visit churches, that's part of the cultural program" Experience a religious/spiritual experience. Is not coded when participants desires such an experience (then: ideology).
"That was probably the moment when I stayed on the brakes. Ideology an attitude towards, (un)belief in, or pattern of plausibility related to a religious/spiritual issue.
"I think there's something higher that people have grasped"; "And that's why I believe that religion is a big problem. And I am hostile to religion"; "as soon as I scratch something, something happens. Action reaction. And that is not for nothing, that does not come out of nothing. That somehow needs a reason" "Because it's just very attached to Buddhism and then that didn't fit to me again"; "We had the death of my mother. I do not think she is in heaven now"; "No, it was never prayed. Nothing like that" medium that the previously reported dimension is of medium or ambivalent important/salience, and/or is extrinsically motivated; and/or does not trigger behavior outside the religious/spiritual frame, and/or was important in the past but is currently not important anymore.
"It's just these things where you go when someone invites you. But I do not do it on my own."; "I have been to India twice and maybe you meet a special person who somehow sets you in motion. And you think, yes, somehow. It's probably not black and white"; "We got married in the church. Simply because it is so customary" high that the previously reported dimension is highly important/salient to the individual, and/or is intrinsically motivated; and/or triggers behavior outside the religious/spiritual frame.
"We're actually talking about religion a lot"; "After all, meditation would be that you are actually careful in everyday life"; "That's also what I'm interested in and that's where my projects are" Dimensions of spirituality ** Vertical connectedness to higher spiritual being/God that the previously reported dimension is connected to God, the divine, or another higher spiritual being.
"I have handed it over, given to the Lord God, prayed about it, and now I can still enjoy every day"; "I say there must be a higher force somewhere, but for me that is not a cloud with the old man with a white beard" to immaterial sphere (generic) that the previously reported dimension is connected to an immanent sphere, such as the afterlife, saints, transempirical relations (paranormal, esoteric, alternative healing); is not coded when a higher spiritual being is emphasized (then: connectedness to higher spiritual being) "So when it comes to pendulum dowsing it's just that way, you have to have a question. [ . . . ] And the pendulum then gives me the answer"; "I also made a lot of laying on of hands with me"; "I look at the calendar, then it was exactly two years after this event.
And there I was shocked. That's when I started shaking" to nature that the previously reported dimension is connected to nature: Is not coded, when universe is emphasized (then: connectedness to universe); is not coded, when living entities in general are emphasized (then: connectedness to all living beings).
"nature most likely. I am very fond of nature. That's a bit of my spirituality"; "Then you can have that feeling too. In great nature.
In that sense you can say that it is a spiritual experience"; "a morning walk, in the morning at six in a snowy forest, then suddenly the sun comes through the branches, then that's another moment, which I call spiritual" to other humans that the previously reported dimension is connected to fellow human beings that is beyond utilitarianism (e.g., trust, love your neighbor, responsibility, against social conventions, define own identity via other humans, give meaning of life). Is not coded, when nature in general is emphasized (then: connectedness to nature); is not coded when it is connected to decedents (then: connectedness to decedent) or to humanity as a whole (then: connectedness to living entities).
"On our last trip to India, suddenly there were all these many people. I was overwhelmed, or. And we all belong together"; "just my husband, that you can feel so connected to a human that you can have someone who can be so close to you and where you can go together like this" to a greater whole that the previously reported dimension is connected to a general greater whole/an all-embracing entity; is not coded, when universe, nature, or other humans are explicitly emphasized (then: connectedness to universe, nature, or other humans); is not coded when higher spiritual being is emphasized (then: connectedness to higher spiritual being/God) "Simply connectedness with the greater whole of which we are part"; "Tears came to my eyes: For a brief moment, I realized that we are part of somehow a whole" to all living entities (generic) that the previously reported dimensions is connected to all living entities, such as humanity, fauna, flora, and/or the planet; is not coded when only the universe, the nature, other humans, or an unspecific greater whole is emphasized (then: connectedness to universe/nature/other humans/greater whole).
"It's very important, for example, to realize that every dachshund on the street is just as happy as I am. And also has the right to be happy"; "Whether tree or horse, ultimately it is the life that is in all of us. It connects us and at the same time it demands a responsible and sustainable coexistence" . ] centered on one's own person, when it comes to personal enlightenment. So, on the one hand, it is the responsibility of the individual to work on oneself and find one's own way" to body that the previously reported dimension is connected to the own body; is not coded when a more non-physical connection to the own self is emphasized (then: connectedness to self). Note. * All definitions follow Huber and Huber (2012). ** Definitions follow Bucher (2014), except of generic codes, which were induced from the empirical data. | 10,946.6 | 2019-11-06T00:00:00.000 | [
"Philosophy"
] |
Occurrence of some stink bug species (Hemiptera: Pentatomidae) associated with rice fields in Argentina
Stink bugs (Hemiptera: Pentatomidae) are a group of about 5,000 species distributed worldwide, many of them phytophagous with economic implications as crop pests. Rice (Oryza sativa L.) is one of the leading agricultural products for human consumption. In neotropical rice fields, hemipterans are the primary pests, with stink bugs being those that most affect crop productivity, the genus Tibraca Stål standing out in terms of economic damage. In addition, rice crops may represent important feeding and mating sites for other stink bug species taxonomically related to Tibraca, which could play the role of potential pests, making it necessary to study the pentatomids associated with this crop in the Neotropics. This work aimed to report the presence of Glyphepomis adroguensis Berg, Hypatropis inermis (Stål) and Paratibraca spinosa (Campos & Grazia) associated with rice in the main rice-growing areas of Argentina. Material collected during 2017-2018 from commercial fields in north-eastern Argentina, the central rice-producing region, was identified. The rice variety on which the specimens were collected, crop status (growing season - post-harvest) and crop phenology were considered. As a result, the association of the mentioned species with rice in the provinces of Chaco and Corrientes, Argentina, is reported. Moreover, the genus Paratibraca Campos & Grazia and the species P. spinosa are reported for the first time in the country.
INTRODUCTION
Stink bugs (Hemiptera: Pentatomidae) are a worldwide distributed group that includes around 5000 species (Schuh & Weirauch, 2020), of which 279 are represented in Argentina (Dellapé, 2021). Except for asopines (predators), most stink bugs are phytophagous, feeding on non-cultivated and economically important cultivated plants. These insects can feed on leaves, stems, and roots; however, they are most often associated with developing seeds, fruits or growing shoots (McPherson, 2018; Panizzi et al., 2021). Therefore, they may have economic implications as agricultural pests, and many species of agricultural interest are mainly associated with rice and other grasses.
Rice (Oryza sativa L.) is one of the most important agricultural commodities produced for human consumption, providing 20% of the world's total vegetable calorie intake and being the primary nutritional source for more than half of the global population (Seck et al., 2012; Zeigler & Barclay, 2008). This cereal also provides a large number of calories per hectare cultivated, being a vital food resource within the plans developed to contribute to global food security (FAO, 2013; Gnanamanickam, 2009). In neotropical rice fields, hemipterans are the primary pests (Schaefer & Panizzi, 2000), with the Pentatomidae family containing the most economically important ones, such as the stink bugs Tibraca limbativentris Stål, Oebalus poecilus (Dallas) and O. ypsilongriseus (DeGeer), which are widely distributed in rice fields in the region and represent a serious challenge for pest management (Didonet et al., 2001; Kruger & Burdyn, 2015; Pantoja et al., 1997). These three species are significant because they generate large losses in irrigated rice cultivation, reducing yields and lowering the quality of commercial rice (Pantoja et al., 1997, 2000; Santana et al., 2018).
The rice crops may represent important feeding and mating sites of other stink bug species taxonomically related to Tibraca Stål (Barros et al., 2020a); as is the case for Hypatropis inermis (Stål), and several species of Paratibraca Campos & Grazia and Glyphepomis Berg in Brazil (Campos & Grazia, 1998;Pantoja et al., 2005;Farias et al., 2012;Klein et al., 2013;Krinski et al., 2015). According to Farias et al. (2012) and Krinski et al. (2015), further studies are needed to determine the presence of these species in rice and to assess whether they could be pests of this crop in the future.
This work aimed to report the presence of Hypatropis inermis, Glyphepomis adroguensis Berg and Paratibraca spinosa (Campos & Grazia) associated with rice in the main rice-growing areas of Argentina. As mentioned above, the occurrence of these species in rice fields is relevant due to their potential role as crop pests. On the other hand, the genus Paratibraca and the species P. spinosa were reported for the first time in Argentina.
MATERIAL AND METHODS
The study was conducted in twelve commercial rice fields in northeastern Argentina (Chaco and Corrientes provinces: 26°44'S to 27°50'S, 58°50'W to 57°20'W), the main rice-producing region (BCSF et al., 2021). The irrigation system in the selected rice fields uses water extracted from the Paraná River, one of the largest river systems in the Neotropics, whose floodplain supports a vast drainage area that includes natural wetlands and rice paddies (Benzaquén et al., 2017; Neiff, 1996). The samplings were carried out during 2017-2018, throughout the whole rice growing season: tillering, stem elongation (vegetative phenology), flowering and ripening (reproductive phenology) (Degiovanni et al., 2004; Kruger & Burdyn, 2015). Also, qualitative post-harvest sampling of rice stubble was carried out in the same plots. The specimens were collected manually at each site in 250 cm³ containers, and using an entomological net. The rice cultivars planted in each studied area, Fortuna INTA (Doble Carolina rice variety, tall plants) and IRGA 424 (long thin rice variety, shorter plants), were also recorded.
All collected specimens were preserved in 96% ethanol, and hemipterans were separated from the other orders. Pentatomidae specimens were identified using appropriate keys and literature (Grazia & Schwertner, 2008; Rolston et al., 1980; Rolston & McDonald, 1981). All the specimens studied were deposited in the entomological collection of the Museo de La Plata, Buenos Aires, Argentina. Digital photographs were taken using a Leica EZ4 stereomicroscope, and images were processed with CorelDraw© X7 graphic suite software. The map was created with the Google Maps web mapping platform (https://www.google.com/maps) and edited with CorelDraw© X7.
RESULTS
The species of economic importance and main pests of rice fields are T. limbativentris, O. poecilus and O. ypsilongriseus (Dellapé et al., 2022; Kruger & Burdyn, 2015). However, in this work, we report the occurrence of three other phytophagous stink bug species in Argentine rice fields: Glyphepomis adroguensis, Hypatropis inermis, and Paratibraca spinosa, which are relevant given their role as potential rice pests in other countries such as Brazil (Fig. 1). The following key includes these six species of economic importance for the crop.
Key to the stink bug species, both pests and potential pests of rice, from Argentina
The species of Pentatomidae reported for the first time in Argentinean rice crops are presented below. The authors undertake to notify the authorities of the Servicio Nacional de Sanidad y Calidad Agroalimentaria (SENASA), through the "SINAVIMO" network of the Dirección Nacional de Protección Vegetal - SENASA (DNPV).
Glyphepomis adroguensis Berg
(Fig. 1A) This species is distributed in Brazil, Uruguay and Argentina (Dellapé, 2021; Dellapé et al., 2022). Along with other species of the genus, such as G. setigera Kormilev & Pirán and G. pelotensis Campos & Grazia, it has been reported on rice crops in Brazil (Campos & Grazia, 1998; Farias et al., 2012; Bianchi et al., 2016). In Argentina, G. adroguensis was collected hibernating on Paspalum quadrifarium Lamb. (Poaceae) (Kormilev & Pirán, 1952), and here it is reported on rice fields in the country for the first time.
Paratibraca spinosa (Campos & Grazia)
(Fig. 1C) The genus Paratibraca is distributed in Central and South America (Grazia et al., 2022) and is reported for the first time in Argentina through this work.
Given the economic relevance of rice crops and the potential role of these three species of stink bugs as crop pests in neighboring countries such as Brazil, we recommend more exhaustive monitoring and field studies to determine these species' abundance in Argentinean rice fields and to assess whether they could be pests of this crop in the future. | 1,762.6 | 2023-01-01T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
Application of the Queuing Theory in Characterizing and Optimizing the Passenger Flow at the Airport Security
This paper presents mathematical models that describe and optimize the passenger flow at airport security checkpoints by applying queuing theory. Firstly, a Poisson process is used to estimate the flow of passengers waiting to go through security. Then, the Poisson distribution is combined with a multiple M/M/s model. Following that, an arrival model (passengers arriving at the checkpoints to prepare for security examination and departure) with Gumbel extreme value estimation is described that predicts the busiest time in the busiest airport. Real case data collected from several major airports worldwide are used to create a hybrid Poisson model that generates the simulation of passenger volume. Finally, Markov chain theory is applied to the analysis to randomly simulate the flow of enplaned passengers again, and the results of these two simulations are compared and discussed, revealing that the hybrid Poisson model is the more accurate one. After successfully characterizing the passenger flow mathematically, two methods for optimizing the passenger flow are provided in two different respects: one bypasses certain passengers by creating an express pass, while the other promotes Pre-Check service application.
Introduction
The unnecessary waiting time before the security checkpoints at airports is a well-known issue. According to the Bureau of Transportation Statistics [1], more than 36,285,000 passengers enplaned from Chicago O'Hare Airport in 2015. As reported by the Daily Mail [2], more than 400 passengers missed their flights in merely one night because of the extremely long queues. Rigorous and thorough security screening plays a significantly important role in guaranteeing safety, especially in reducing hijacking and explosion risks. However, the necessary safety screening causes unnecessary delays for passengers, wasting their time and increasing the risk of missing flights. In order to identify the bottlenecks of the current situation, the flow of passengers through a security checkpoint needs to be mathematically characterized. Therefore, in this paper, we use a multiple M/M/s model to simulate the queuing problem of passengers within large international airports worldwide, and try to explore optimization methods to reduce the length of the queues.
Overview of the Queuing Model and Arrival Model
The full simulation consists of two parts: the queuing part and the passenger arrivals part. The queuing part explains how security zones handle incoming passengers. The arrival part, more specifically people's arrivals at those checkpoints, is a stochastic simulation of passengers' behavior regarding how they choose to appear in front of security check queues.
The following assumptions are used throughout the analysis in this paper: • We assume that there are several security checkpoints in different terminals and that all these checkpoints are connected. This means that passengers may choose any terminal or checkpoint and can always get to their boarding gate. Even if their gates are in terminal 3, they can pass through the checkpoints in terminal 1, 2, or 4 to get there.
• Based on data from London Heathrow Airport, half of the passengers depart from Terminal 5, so we hereby assume that half of the passengers will enplane through one terminal [3].
• Based on data in major airports in the world, we assume that the opening hours of an airport are 16 hours per day.
• All the lanes are operated asynchronously without any occurrence of emergency. Staff are fully prepared so that they can get to work immediately when a new lane has been opened.
• All the screeners, guides and officers provide homogeneous service quality, and this quality is more than acceptable. Staff are well trained and professional, so no human errors will appear and have an adverse impact on the processing time.
• This research will only focus on passengers taking economy class.Queues for first class and business class are out of our consideration.
• Further assumptions will be made to clarify each model and will be discussed later in this paper.
Introduction to the Multiple Asynchronous M/M/s Queuing Model
Our queuing model is based on an asynchronous multiple M/M/s queue model which is composed of many single asynchronous M/M/s queues. We first explain how to tailor the M/M/s queue model to our needs and then move to the multiple version.
Single Asynchronous M/M/s Queuing Model
A single asynchronous M/M/s queuing model is formed by a series of asynchronous servers denoted by Si, where i∈N (i.e., 1, 2, 3, 4, 5, …), and a lane of passengers waiting to be checked. Asynchronous servers handle passengers at different times (i.e., every server in this queue model is independent of the other servers). When a passenger finishes his screening by the server, another passenger from the lane takes over that position to keep the server operating without pausing.
Figure 1 shows a typical single M/M/s queuing system fully loaded with customers. For each server, ti denotes the time interval between two travelers moving through the gate. As soon as a passenger finishes this step, he moves to the following examination step and the next customer will come to the server and repeat this process, while ti varies for distinct passengers. To model ti, we need to analyze its components first.
Figure 2 shows a typical passenger dividing his personal belongings into 3 baskets for x-ray examination. Each basket takes tbj seconds to prepare. To generalize ti for each passenger with n baskets, we have ti = tb1 + tb2 + … + tbn. According to most airline regulations, the luggage allowance for economy-class passengers is small. Therefore, we assume that all economy-class passengers can separate their items into fewer than 6 baskets.
We model the number of baskets per passenger by a Poisson process. In addition, the Poisson distribution we employed is slightly modified to fit the case.
Since the Poisson distribution with λ = 1 starts from 0, we add 1 to every Poisson random number to account for the fact that there is no one with no basket for examination. Furthermore, we cut all Poisson random numbers above 4, since we add 1 to every Poisson random number and assume no passenger will have more than 5 baskets.
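A minimal sketch of this shifted and truncated basket-count sampler, assuming Python with NumPy; the rate λ = 1 and the cap of 5 baskets follow the text, while the sample size is arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def sample_basket_counts(n_passengers, lam=1.0, max_baskets=5):
    """Baskets per passenger: Poisson(lam), truncated, then shifted by 1
    so every passenger carries between 1 and max_baskets baskets."""
    k = rng.poisson(lam, size=n_passengers)
    k = np.minimum(k, max_baskets - 1)   # cut Poisson values above 4 ...
    return k + 1                          # ... then add 1 -> range 1..5

counts = sample_basket_counts(10_000)
print(np.bincount(counts)[1:] / counts.size)  # empirical distribution over 1..5 baskets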
Multiple Asynchronous M/M/s Queue Model and Multinomial Decision
The multiple asynchronous M/M/s queue model is composed of n single M/M/s queue models. The only difference is that, in the multiple asynchronous M/M/s queue model, arriving passengers must additionally decide which of the n single queues to join, which we treat as a multinomial decision; a brief sketch of this assignment is given below.
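One possible reading of the multinomial decision is sketched here: each arriving passenger independently joins one of the n single queues with fixed probabilities. The equal lane probabilities are an assumption; the text does not specify them.

import numpy as np

rng = np.random.default_rng(1)

def assign_to_lanes(n_arrivals, n_lanes, probs=None):
    """Multinomial assignment of arriving passengers to lanes (equal odds assumed by default)."""
    if probs is None:
        probs = np.full(n_lanes, 1.0 / n_lanes)
    lane_of = rng.choice(n_lanes, size=n_arrivals, p=probs)
    return np.bincount(lane_of, minlength=n_lanes)

print(assign_to_lanes(1000, 4))  # e.g. 1000 arrivals split across 4 parallel single M/M/s queues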
Introduction of the Arrival Model and Additional Assumption Based on Gumbel Maximum Estimation
We evaluate the busiest case of the world's leading airports with Gumbel maximum estimation [5].
Based on our earlier assumption, half of departing passengers enplane from the largest terminal. We evaluate the daily average to be around 38,400 [6] economy-class departing passengers per day in that terminal, with a standard deviation of 3,900 passengers [7] [8]. We find that 44,993 passengers depart from the largest terminal within a day at the 99.5% level, and we set that as the extreme value for further tests and simulations on a daily basis.
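One way to carry out such a Gumbel maximum estimation is sketched below: fit a Gumbel distribution to the stated daily mean (38,400) and standard deviation (3,900) by the method of moments and read off the 99.5% quantile. The paper does not spell out its exact fitting procedure, so the value produced here need not reproduce the reported 44,993.

import numpy as np

mean_daily, sd_daily, level = 38_400.0, 3_900.0, 0.995

# Method-of-moments Gumbel fit: sd = beta*pi/sqrt(6), mean = mu + gamma*beta
euler_gamma = 0.5772156649
beta = sd_daily * np.sqrt(6.0) / np.pi
mu = mean_daily - euler_gamma * beta

# Gumbel (maximum) quantile at the chosen level
extreme = mu - beta * np.log(-np.log(level))
print(round(extreme))  # busiest-day passenger count under these fitting assumptions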
Hybrid Poisson Arrivals with Multinomial Distribution
In our earlier investigation, most security checkpoints open from 4 am to 10 pm (16 hours). The arrivals are modeled in two stages [9]. The first stage is generated by a Poisson distribution which simulates with a sampling interval down to 15 minutes. There are clusters of people arriving within each quarter. We calibrate the size of these clusters to 400 people per cluster to fit real-world data. Every simulation of this hybrid model delivers an average arrival rate of around 38,000 passengers per day. In the second stage we use a multinomial distribution to reduce the sampling interval to 1 second, with data carried over from stage 1, and we uniformly distribute these clusters of people per 15 minutes to several people per second. We make this intensive sampling in order to plug passenger flows into our queue system for test purposes. Figure 6 illustrates how people arrive under different measurements. Figure 6(a) aggregates passenger flows from each quarter onto an hourly basis.
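A condensed sketch of the two-stage hybrid arrival model as described: stage 1 draws clusters of 400 passengers per 15-minute slot from a Poisson distribution, and stage 2 spreads each slot's passengers over its 900 seconds with a multinomial draw. The per-slot cluster rate is an assumption chosen so the simulated daily total lands near 38,000.

import numpy as np

rng = np.random.default_rng(2)

SLOTS, SLOT_SECONDS, CLUSTER = 16 * 4, 15 * 60, 400    # 4 am - 10 pm split into 15-minute slots
lam_clusters = 38_000 / (SLOTS * CLUSTER)               # assumed rate: ~1.5 clusters per slot

# Stage 1: clusters of passengers per 15-minute slot
per_slot = rng.poisson(lam_clusters, size=SLOTS) * CLUSTER

# Stage 2: spread each slot's passengers uniformly over its 900 seconds
per_second = np.concatenate([
    rng.multinomial(n, np.full(SLOT_SECONDS, 1.0 / SLOT_SECONDS)) for n in per_slot
])

print(per_slot.sum(), per_second.sum())  # both equal the simulated daily arrival total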
Markov Chain Arrivals with First-Hitting-Time Model
The second model is implemented as a Markov chain. We can plot the state diagram as in Figure 7 and define the transition matrix M.
The first passage hitting time for each state is calculated empirically from real-world data.
A typical day starts from state 1, and the state of each subsequent hour depends on a stochastic transition.
The probability of reaching later states decreases over time. However, state 4 is more like an idle state, since we design this state with a first passage hitting time of 15, so we somewhat expect it to be the state at the end of the day.
In our simulation (Figure 8), the Markov chain model gives 42,750 passengers departing in a day. Although state 4 tends to be the last hour of the day in this simulation, this is not necessarily the case if we run it a number of times. It can also illustrate the case when there are only a few passengers arriving at the airport in a day.
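A sketch of this arrival simulation follows. The paper does not reproduce the transition matrix M or the hourly passenger volumes attached to each state, so the numerical values below are placeholder assumptions, with states 1 to 3 representing increasingly busy hours and state 4 an idle state.

import numpy as np

rng = np.random.default_rng(3)

# Assumed 4-state transition matrix (rows sum to 1); not the paper's matrix M.
M = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.3, 0.5, 0.1],
              [0.3, 0.3, 0.2, 0.2]])
arrivals_per_state = {0: 500, 1: 2500, 2: 5000, 3: 100}  # assumed hourly volumes per state

state, total = 0, 0                     # a day starts in state 1 (index 0)
for hour in range(16):                  # 16 operating hours
    total += arrivals_per_state[state]
    state = rng.choice(4, p=M[state])   # stochastic transition to the next hour's state

print(total)  # simulated passengers arriving in one day (illustrative only)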
Arrival Model Comparison
Both models illustrate large variance within a day. The Markov chain model runs faster since it has fewer states, while the hybrid Poisson model has many more states than the Markov chain model and is closer to real situations.
Swift Queuing System
Set up an express bypass, Swift, for those people with one piece of light luggage. These people, who have only a few items to be checked, can quickly pass through the Swift lane. This not only saves their time but also reduces the length of the regular lane. We set the threshold as no more than 5 kg of luggage, which is the typical weight of basic traveling items for a business trip: a laptop (2 kg), a wallet, a backpack (1 kg), etc. A scale can be set at the entrance of this Swift lane so that it will not influence people using the regular lane. Figure 9 illustrates how the Swift queuing system works.
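The Swift bypass reduces to a simple weight-threshold routing rule; a toy sketch with an assumed luggage-weight distribution:

import numpy as np

rng = np.random.default_rng(4)

weights_kg = rng.gamma(shape=2.0, scale=4.0, size=1000)  # assumed luggage-weight distribution
swift = weights_kg <= 5.0                                 # one light bag -> express "Swift" lane

print(f"Swift lane: {swift.sum()} passengers, regular lanes: {(~swift).sum()}")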
Promote Pre-Check and Adjust the Passenger to Lanes Ratio for This Group
Another way to reduce the queue length is to encourage more passengers to apply for Pre-Check. According to the TSA, approximately 45% of passengers enroll in a program called "Pre-Check for trusted travelers". These passengers pay an additional $85 to receive a background check and enjoy a separate, slightly modified screening process that saves time, for five years [10]. A survey shows that 97% of TSA passengers wait only 5 minutes or less. We can promote this service by offering more price options, such as $20 per year vs. $85 per five years, which will attract more potential passengers such as international students who will not stay for five years in the U.S.
Conclusions
Numerous and unceasing complaints about long waiting times at airport security checks have been raised by people around the world. In spite of creative and cumulative adjustments which attempt to relieve the terrible up-
Figure 3 illustrates the histogram of our modified Poisson distribution, which suggests that 1 or 2 baskets per passenger are the most likely cases and that the 4-basket case is comparatively rarer than the other cases. The time tbj taken by basket j is modeled by a χ2 distribution with 2 degrees of freedom. The number of degrees of freedom is empirical, based on an experiment. To be specific, in the real world, a fully loaded single M/M/s queue with 4 servers can handle a maximum of 1000 passengers in 1 hour (3600 seconds) [4]. We run the model of a single 4-server M/M/s queue which handles 1000 people 100 times. The mean time to process these people is 3590 seconds, while the standard deviation is 25.8017 seconds. Figure 4 also suggests the results are quite steady around 3600 seconds.
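A sketch of this validation experiment: a single asynchronous queue with 4 servers, basket counts from the shifted Poisson above, and per-basket preparation times drawn from a χ² distribution with 2 degrees of freedom. The conversion of the χ² draw into seconds is not fully specified in the text, so a scale factor is included as an explicit assumption; with scale = 1 the makespan will not reproduce the reported ≈3590 s.

import numpy as np

rng = np.random.default_rng(5)

def makespan(n_passengers=1000, n_servers=4, lam=1.0, scale=1.0):
    """Time for n_servers asynchronous servers to clear a fully loaded lane.
    `scale` is an assumed conversion of each chi^2(2) draw into seconds."""
    baskets = np.minimum(rng.poisson(lam, n_passengers), 4) + 1
    service = np.array([scale * rng.chisquare(2, k).sum() for k in baskets])
    free_at = np.zeros(n_servers)                 # when each server next becomes free
    for s in service:                             # the next waiting passenger takes the earliest server
        i = free_at.argmin()
        free_at[i] += s
    return free_at.max()

print(np.mean([makespan() for _ in range(10)]))  # illustrative only; absolute value depends on `scale`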
Figure 4. Simulations of a single M/M/s queue.
Figure 8. Simulation of the Markov chain model. (a) Time series of states; (b) time series of the number of passenger arrivals.
Therefore we use each state to represent a randomly selected hour in a day. State 1 stands for least busy, i.e., a small number of passenger arrivals in this state. State 2 stands for less busy, i.e., a normal number of passenger arrivals within this state. State 3 is the busiest state, i.e., a massive number of passengers arriving in the state.
Table 2. First passage hitting time of each state.
"Computer Science"
] |
Human prion disease with a G114V mutation and epidemiological studies in a Chinese family: a case series
Introduction Transmissible spongiform encephalopathies are a group of neurodegenerative diseases of humans and animals. Genetic Creutzfeldt-Jakob diseases, in which mutations in the PRNP gene predispose to disease by causing the expression of abnormal PrP protein, include familial Creutzfeldt-Jakob disease, Gerstmann-Straussler-Scheinker syndrome and fatal familial insomnia. Case presentation A 47-year-old Han-Chinese woman was hospitalized with a 2-year history of progressive dementia, tiredness, lethargy and mild difficulty in falling asleep. On neurological examination, there was severe apathy, spontaneous myoclonus of the lower limbs, generalized hyperreflexia and bilateral Babinski signs. A missense mutation (T to G) was identified at the position of nt 341 in one PRNP allele, leading to a change from glycine (Gly) to valine (Val) at codon 114. PK-resistant PrPSc was detected in brain tissues by Western blotting and immunohistochemical assays. Information on pedigree was collected notably by interviews with family members. A further four suspected patients in five consecutive generations of the family have been identified. One of them was hospitalized for progressive memory impairment at the age of 32. On examination, he had impairment of memory, calculation and comprehension, mild ataxia of the limbs, tremor and a left Babinski sign. He is still alive. Conclusion This family with G114V inherited prion disease is the first to be described in China and represents the second family worldwide in which this mutation has been identified. Three other suspected cases have been retrospectively identified in this family, and a further case with suggestive clinical manifestations has been shown by gene sequencing to have the causal mutation.
To date, about 55 mutations associated with or directly linked to human TSEs have been identified [3]. Here we report a Chinese family with a mutation at codon 114 (G114V) of the PRNP gene. The index case had clinical features, electroencephalogram (EEG) and magnetic resonance imaging (MRI) findings similar to sporadic CJD. We also present data on four suspected cases of fCJD in five consecutive generations of the family.
Clinical features
A 47-year-old Han-Chinese woman was hospitalized with a 2-year history of progressive dementia, tiredness, lethargy and mild difficulty in falling asleep. The initial complaint was tiredness and loss of sleep. Several months after the onset, she developed difficulty in communication and was unable to work. This was followed by a gradually progressive dementia and emotional lability. The family described increased appetite, and complex visual hallucinations. About 17 months after onset, the patient was first hospitalized and CSF 14-3-3 was negative at that time. On this admission, the patient was bedridden. On neurological examination there was severe apathy, spontaneous myoclonus of the lower limbs, generalized hyperreflexia and bilateral Babinski signs. An EEG displayed slow waves at 5 to 6 Hz, which were marked bilaterally in the frontal lobes and precentral regions. MRI of the brain showed bilateral atrophy of the cerebral cortex, brainstem and cerebellum (Figure 1A and 1B). On diffusion weighted imaging (DWI), there were high signals in the caudate nucleus, the putamen and the periventricular regions (Figure 1C). Biochemistry of the cerebrospinal fluid (CSF) was normal; however, CSF 14-3-3 protein testing was not performed. One week later, the patient was discharged from hospital and she died at home about 75 days later at the age of 47.
Epidemiologic data
Information on pedigree was obtained by interviews with family members. A total of 49 family members (including spouses) were retrospectively or directly investigated (Figure 2A). Thirty-three of the family members belonged to the proband's mother's lineage and 14 belonged to her uncle's (her mother's brother) lineage. The proband's maternal grandmother was said to have died with similar clinical symptoms. The proband's elder brother developed neurological symptoms at the age of 45 years. Another family member had limited intellectual ability from childhood and discontinued his education at grade 4 of elementary school. He was hospitalized for investigation of progressive memory impairment 2 years ago at the age of 32 years. On examination, he had impairment of memory, calculation and comprehension, mild ataxia of the limbs, tremor and a left Babinski sign. He is still alive.
PRNP analysis
Brain autopsy of the proband was performed shortly after death with informed consent. Genomic DNA was extracted from the brain using Qiagen's DNA purification kit according to the manufacturer's instructions. The PRNP open reading frame was amplified by polymerase chain reaction (PCR) using a protocol and primers described elsewhere [4]. The genotype at codon 129 of PRNP was determined by digestion with the restriction endonuclease Nsp I. Analysis of PRNP sequences was performed by direct sequencing in a MacBAC sequencer (Pharmacia, USA). A missense mutation (T to G) was identified at the position of nt 341 in one PRNP allele, leading to a change from glycine (Gly) to valine (Val) at codon 114 (Figure 2B). No other nucleotide exchange was found in the rest of the PRNP sequence. Nsp I digestion and direct sequencing of the amplified product revealed a methionine homozygous genotype at codon 129 of PRNP. To identify the distribution of this point mutation in the family, blood samples of five other family members, including the son of her first cousin (IV 2), were collected and the PRNP genes were sequenced. As suspected, the same G114V mutation was observed in the PRNP gene of IV 2. In addition, two other healthy family members, the proband's daughter (IV 10, age of 22) and the mother of the second case (III 1, age of 61), were found to have the same missense mutation. The son of the proband (IV 9, age of 24) and the son of IV 2 (V 3, age of 9) were confirmed to have a wild-type PRNP sequence without such a mutation. All tested family members were homozygous for methionine (M/M) at codon 129 of PRNP, as in the profile of Han Chinese [5].
Proteinase K (PK)-resistant PrP assays
Western blotting was performed to identify the presence of PrP Sc in the brain tissue of the patient. The brain tissue sample was homogenized in 9 volumes of lysis buffer (100 mM NaCl, 10 mM EDTA, 0.5% Nonidet P-40, 0.5% sodium deoxycholate, 10 mM Tris, pH 7.5) according to the protocol described elsewhere [6]. An aliquot of the homogenate from cerebrum was incubated with PK (at a final concentration at 50 μg/ml) at 37°C for 1 hour. Three PK-resistant PrP Sc bands ranging from Mr 21 to 27 kDa were detected with predominance of monoglycosylated PrP Sc indicating type 1 PrP Sc ( Figure 3A). To examine the distribution of PrP Sc in different brain regions, aliquots of 10% tissue homogenates were prepared from various brain regions and analyzed by Western blot. PK-resistant PrP was detected in the midbrain, thalamus, cerebellum, frontal lobe, temporal lobe, parietal lobe and occipital lobe, but not in the medulla oblongata, pons or corpus callosum. These findings seemed to be closely related to the level of the total PrP signal before PK-digestion in each homogenate ( Figure 3A). The electrophoretic pattern of PrP Sc was the same in all preparations, with predominance of monoglycosylated PrP Sc .
Histological and immunohistochemical (IHC) assays
Paraffin sections of occipital lobe (5 μm in thickness) were subjected to conventional staining with hematoxylin and eosin (HE) and severe and extensive vacuolation was identified in the tested tissues ( Figure 3B). To identify PrP Sc in brain tissues, slices of occipital lobe were immunostained using a protocol described elsewhere [7]. Briefly, the slices were treated with 4 M guanidine hydrochloride (GdnHCl) at 4°C for 90 minutes, followed by microwave irradiation in distilled water for 25 minutes. The slices were exposed to the PrP-specific monoclonal antibody 3F4 at a dilution of 1:500 overnight at 4°C. For visualization of immunostaining, the slices were developed with a commercial ready-to-use system (Beijing Zhongshan Golden Bridge Biotechnology, China). The slices were counterstained with hematoxylin, dehydrated, and mounted in glycerolvinyl alcohol. Positive PrP Sc immunoblots were found in many of the tested tissues, especially in the region of the gray matter. The deposits of PrP Sc were restricted mostly to the neuronal cytoplasm.
No obvious PrP Sc deposits were observed in extracellular areas ( Figure 3B).
Conclusion
This family with G114V inherited prion disease is the first to be described in China and represents the second family worldwide in which this mutation has been identified. The patient presented with clinical features similar to sporadic CJD, including a progressive neuropsychiatric disturbance, dementia, myoclonus and pyramidal signs. Cerebellar signs were observed relatively later, but became marked. MRI revealed findings consistent with those often seen in sporadic CJD, but the EEG did not show the typical periodic complexes of sporadic CJD. The CSF 14-3-3 was negative 1 year after onset. Typical spongiform degeneration and PrP Sc deposits were observed in the brain and Type-I PrP Sc was detected in various brain regions. Three other suspected cases have been retrospectively identified in this family, and a further case with suggestive clinical manifestations has been shown to have the causal mutation by gene sequencing. The age at clinical onset in this pedigree ranges from 32 to 45 years, which is somewhat later than cases in a Uruguayan family [3], which was the first to be described with a G114V mutation. However, the duration of illness and other clinical manifestations are
Figure 3. (A) Western blotting analysis of brain tissue from the proband.
"Medicine",
"Biology"
] |
Analysis of Heat Transfers inside Counterflow Plate Heat Exchanger Augmented by an Auxiliary Fluid Flow
Enhancement of heat transfers in counterflow plate heat exchanger due to presence of an intermediate auxiliary fluid flow is investigated. The intermediate auxiliary channel is supported by transverse conducting pins. The momentum and energy equations for the primary fluids are solved numerically and validated against a derived approximate analytical solution. A parametric study including the effect of the various plate heat exchanger, and auxiliary channel dimensionless parameters is conducted. Different enhancement performance indicators are computed. The various trends of parameters that can better enhance heat transfer rates above those for the conventional plate heat exchanger are identified. Large enhancement factors are obtained under fully developed flow conditions. The maximum enhancement factors can be increased by above 8.0- and 5.0-fold for the step and exponential distributions of the pins, respectively. Finally, counterflow plate heat exchangers with auxiliary fluid flows are recommended over the typical ones if these flows can be provided with the least cost.
Introduction
Counterflow plate heat exchangers are widely used in various engineering applications, especially preheating, chemical, pharmaceutical, and food processing applications [1]. This is because both hot and cold fluids within the plate heat exchanger are exposed to a much larger surface area per unit volume than that in the conventional (double pipe) heat exchanger [2]. Also, plate heat exchangers can have hydraulic diameters smaller than 2 mm. This can lead to larger heat transfer coefficients. Thus, plate heat exchangers have larger effectiveness compared to conventional counterflow heat exchangers. Additionally, many of the passive heat transfer enhancement tools like fins and rough surfaces [3][4][5] can more easily be installed in the plate heat exchanger than in the conventional heat exchanger. This is why finned plate heat exchangers [6] and gasketed plate heat exchangers [7] are widespread in many industrial applications.
The most recent literature reviews on passive heat transfer enhancements in heat exchangers [8,9] show that the major analyzed enhancement methods are the following: (1) twisted tape, (2) wire coil, (3) swirl flow, (4) conical ring, and (5) ribs. All of these devices augment heat transfer because they tend to disturb the fluid flows [3,10]. Therefore, it can be concluded that enhancing heat transfer in plate heat exchangers under laminar flow conditions has not received much attention from researchers. Perhaps the most recent proposal for heat transfer enhancement in heat exchangers under laminar flow conditions is the use of nanofluids [11][12][13]. However, not all nanofluids are adequate for processing special products like pharmaceutical and food products. This is because the commonly used nanoparticles can be harmful to the human body [14,15]. Consequently, the present work aims to propose and analyze a new method for enhancing heat transfer in the plate heat exchanger without altering either the velocity profiles or compositions of both hot and cold fluids.
The proposed plate heat exchanger is composed of hot and cold fluid channels separated by an auxiliary fluid channel. This auxiliary channel may contain as many passive enhancement tools as possible. Accordingly, both the velocity profile and the composition of the hot and cold fluids are preserved. The heat transfer enhancement in the proposed system is due to the following combined effects: (1) convection of the auxiliary fluid and (2) passive enhancement mechanisms in the auxiliary channel. In the present work, transverse pins connecting the facing boundaries of both hot and cold fluid channels are considered as one of the passive enhancement mechanisms [16,17]. Moreover, the auxiliary fluid is considered to flow in the direction crosswise to both hot and cold fluid flow directions. Accordingly, the auxiliary channel length (hot/cold channels width) can be selected to be small enough to have boundary layer flows [18,19]. Hence, convection thermal resistances between the auxiliary fluid and both hot and cold fluids are minimized. The heat transfer rates within the present system are expected to be higher than those in the conventional system for specific auxiliary flow conditions. Accordingly, the present work additionally aims to identify some trends of parameters that cause enhancement ratios to be above unity. Modeling laminar flow and heat transfer inside two-dimensional channels including auxiliary channels is well established in the literature [18][19][20][21][22].
In the present work, heat transfer inside a plate heat exchanger with an auxiliary fluid channel separating the hot and cold fluid channels is modeled and analyzed. Both hot and cold fluid flows are considered laminar and hydrodynamically fully developed. The energy equations of the hot and cold fluids are coupled with the energy equations of the auxiliary fluid boundary layers. The solution of the momentum and energy equations within the boundary layers is well established [18][19][20]. Accordingly, the coupled hot and cold fluid energy equations are solved numerically using finite difference methods. Approximate analytical solutions for the heat transfer rates under fully developed flow and very long pin conditions are derived. A number of heat transfer performance ratios, including the heat exchanger effectiveness ratios, are computed. A parametric study of heat transfer enhancement is made to identify the conditions of the controlling parameters that produce favorable enhancement factors.
Modeling of Flow and Heat Transfer inside the Hot and Cold Fluid Channels.
Consider two parallel channels of length and width . The first channel confines the hot fluid flow while the second contains the cold fluid flow. The solid boundaries of these channels facing each other are perfectly connected via cylindrical pins of diameter and length as shown in Figure 1. These pins are surrounded by an auxiliary fluid stream of free stream temperature ∞ and convection heat transfer coefficient ℎ . The convection heat transfer coefficient between the auxiliary fluid stream and the channel boundaries facing that stream is ℎ . Accordingly, heat transfer between the channels and the auxiliary fluid occurs by convection over the unfinned surfaces and by conduction through the connecting pins. In contrast, the outermost solid boundaries of the heat exchanger are considered adiabatic so that the heat transfer rates in the heat exchanger are maximized.
The dimensionless momentum and energy equations of the hot and cold fluids are [21] where ℎ and are the dimensionless axial velocity fields for the hot and cold fluids, respectively. ℎ and are the hot and cold fluid dimensionless temperatures, respectively. ℎ and are the dimensionless axial positions of the hot and cold fluids, respectively. ℎ and are the dimensionless transverse positions of the hot and cold fluids, respectively. The channel aspect ratios ℎ, as well as the hot and cold flow Péclet numbers Pe ℎ and Pe are given by where ℎ and are the heights of the hot and cold fluid channels, respectively. ( ℎ , ), ( ℎ , ), and ( ℎ , ) are the density, specific heat, and thermal conductivity pairs of the hot and cold fluids, respectively. ℎ and are the mean axial velocities of the hot and cold fluids, respectively.
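The governing equations themselves did not survive reproduction here. As a rough guide only, a dimensionless form consistent with the verbal definitions above (and with standard parallel-plate channel models) would read as follows; the symbols U, θ, X, Y, ε, and Pe are chosen to match the quantities described in words, and the exact grouping used in the original paper may differ:
\[
U_{i}(Y_{i}) = 6\,Y_{i}\,(1-Y_{i}), \qquad
\varepsilon_{i}\,\mathrm{Pe}_{i}\,U_{i}\,\frac{\partial\theta_{i}}{\partial X_{i}}
= \frac{\partial^{2}\theta_{i}}{\partial Y_{i}^{2}}, \qquad i = h,\,c,
\]
with
\[
\varepsilon_{i} = \frac{H_{i}}{L}, \qquad
\mathrm{Pe}_{i} = \frac{\rho_{i}\,c_{p,i}\,u_{m,i}\,H_{i}}{k_{i}},
\]
where the first relation is the hydrodynamically fully developed (plane Poiseuille) profile normalized by the mean velocity.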
where ℎ and ℎ are the axial and transverse positions of the hot fluid, respectively. and are the corresponding positions of the cold fluid, respectively. ℎ and start from zero at the fluid inlets while ℎ and start from zero at the adiabatic boundaries. ℎ and are the axial velocity fields for the hot and cold fluids, respectively. ℎ and are the hot and cold fluid temperatures, respectively. ℎ1 and 1 are the inlet temperatures of the hot and cold fluids, respectively.
The boundary conditions of (2) are given by where ℎ and are the conduction heat fluxes through the pin bases at the hot and cold surfaces, respectively. However, ℎ and are the convection heat fluxes at the unfinned portions of the hot and cold surfaces, respectively. The local pins base area concentration denoted by can be calculated from the following expression: where / is the local axial gradient of the number of pins ( ).
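Under the plausible reading that the local pins base area concentration is the fraction of the boundary area locally covered by pin bases, its expression would take a form such as the following, where the symbols λ, d, W, and N are introduced here only for illustration:
\[
\lambda(x) \;=\; \frac{\pi d^{2}}{4}\,\frac{1}{W}\,\frac{dN}{dx},
\]
with dN/dx the local axial gradient of the number of pins and W the channel width.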
Modeling of the Pins Conduction and Auxiliary Fluid Convection Heat Fluxes.
The one-dimensional fin equation [18,19] can be used to model the conduction heat transfer through the pins. This fin equation has the following dimensionless form: where the dimensionless pin distance, , the dimensionless pin local temperature, , and the dimensionless pin thermal length, * , are given by where Bi = ℎ / is the pin Biot number and = / is the pin aspect ratio. The boundary conditions of (9) are given by where ℎ, = ℎ ( ℎ , ℎ = 1) and , = ( = 1 − ℎ , = 1) are the dimensionless temperatures of the hot and cold boundaries, respectively. is the dimensionless cold excess temperature. It is equal to where 0 ≤ ≤ 1 as auxiliary fluids are usually hotter than the cold reservoir. The solution of (9) is Therefore, the conduction heat flux at the pin bases is equal to Note that ℎ = cond | =0 and = − cond | = . Recall , where ℎ and are the temperatures at the hot and cold boundaries facing the auxiliary fluid, respectively. As such, (7c) and (7d) change to where Bi ℎ = ℎ ℎ / ℎ is the hot fluid Biot number.
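For orientation, the classical one-dimensional fin model that matches this description can be sketched as follows; the symbol choices (θ_p for the pin excess temperature over the auxiliary free stream, ξ for the dimensionless pin coordinate, μ* for the pin thermal length) are assumptions, but the structure of the result is standard:
\[
\frac{d^{2}\theta_{p}}{d\xi^{2}} - \mu_{*}^{2}\,\theta_{p} = 0, \qquad
\mu_{*} = L\sqrt{\frac{4\,\bar h_{p}}{k_{p}\,d}} = 2\gamma\sqrt{\mathrm{Bi}}, \qquad
\mathrm{Bi} = \frac{\bar h_{p}\,d}{k_{p}}, \quad \gamma = \frac{L}{d}.
\]
With prescribed base temperatures θ_p(0) (hot side) and θ_p(1) (cold side), the solution and the conduction flux entering the pin at the hot base are
\[
\theta_{p}(\xi) = \frac{\theta_{p}(0)\,\sinh\!\big(\mu_{*}(1-\xi)\big) + \theta_{p}(1)\,\sinh\!\big(\mu_{*}\,\xi\big)}{\sinh\mu_{*}},
\qquad
q''_{\mathrm{cond}}(0) \;\propto\; \mu_{*}\,\frac{\theta_{p}(0)\cosh\mu_{*} - \theta_{p}(1)}{\sinh\mu_{*}}.
\]
For very long pins (μ* → ∞) the two bases decouple and each base flux reduces to the single-fin result proportional to μ* times its own base excess temperature.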
The Heat Transfer Rates through the Heat Exchanger.
The heat transfer rate per unit width from the hot fluid ( ℎ ) and that to the cold fluid ( ) can be calculated from where ℎ2 and 2 are the mean bulk temperatures at the hot and cold fluid exit ports, respectively. ℎ2 and 2 are the dimensionless values of ℎ2 and 2 , respectively. In terms of the dimensionless parameters, ℎ and are given by The integral form of (2) can be expressed as

2.4. Modeling of the Pin Base Area Distribution over the Channel Boundary. Different distributions of pins are analyzed in this work. These distributions allow the pins to be concentrated either far from the middle section of the channels or about this section. Two families of distributions are considered: the step function distribution and the exponential distribution. The step function distribution, which has concentrated pins far from the channels midsection (see Figure 2), has the following mathematical form: where 0 ≤ ( / ) ≤ ( / ) = 1 − ( / ). is the maximum value of that produces 99% of the upper limit of the pins base area concentration ( ).
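The step distribution itself is not legible in this copy. A piecewise-constant profile with the stated behavior (pins concentrated away from the channel midsection) would look like the sketch below, where λ_o denotes the upper limit of the pins base area concentration, λ_1 a much smaller midsection value, and X_s the extent of the densely pinned end regions; the published form may instead be a smoothed step, so this is indicative only:
\[
\lambda(X) \;=\;
\begin{cases}
\lambda_{o}, & 0 \le X \le X_{s} \ \text{ or } \ 1 - X_{s} \le X \le 1,\\[3pt]
\lambda_{1} \ll \lambda_{o}, & X_{s} < X < 1 - X_{s}.
\end{cases}
\]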
The exponential distribution of the pins has the following functional form: where | | < . is the upper limit value that makes = . It can be accurately correlated to through the following correlation: The pins are more concentrated near the channels inlet/exit sections when > 0. However, they are more concentrated around the channels midsection when < 0. These trends are seen in Figure 2.
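Similarly, the exponential distribution can be pictured, again only as an indicative form with the symbols B and λ_m introduced here, as
\[
\lambda(X) \;=\; \lambda_{m}\,\exp\!\big(B\,\lvert X - \tfrac{1}{2}\rvert\big), \qquad \lvert B\rvert < B_{o},
\]
where λ_m is the midsection value, B > 0 concentrates the pins near the inlet/exit sections, B < 0 concentrates them around the midsection, and B_o is the largest exponent for which λ does not exceed the upper limit λ_o; the correlation mentioned in the text presumably relates B_o to that upper limit, and the profile may additionally be normalized to preserve the average concentration.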
Hot and Cold Fluid Flow Nusselt Numbers.
The convection heat transfer coefficients for the hot and cold fluid flows, ℎ ℎ and ℎ respectively, are defined as Thus, the local Nusselt numbers Nu ℎ and Nu are equal to
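These definitions were lost in reproduction. A form consistent with the wording, with the wall heat flux, wall temperature, and mean bulk temperature denoted q''_W, T_W, and T_m, and with the channel gap taken as the length scale (all assumptions), is
\[
h_{h} = \frac{q''_{hW}}{T_{hW} - T_{hm}}, \qquad
h_{c} = \frac{q''_{cW}}{T_{cW} - T_{cm}}, \qquad
\mathrm{Nu}_{h} = \frac{h_{h}\,H_{h}}{k_{h}}, \qquad
\mathrm{Nu}_{c} = \frac{h_{c}\,H_{c}}{k_{c}},
\]
where the original paper may instead use the hydraulic diameter 2H as the characteristic length.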
Heat Exchanger Effectiveness Ratios.
The maximum heat transfer rate per unit width from the hot fluid ℎ Max and that to the cold fluid Max are obtainable when ℎ2 = 1 and 2 = ℎ1 . Using (16), ℎ Max and Max are equal to Define the heat exchanger effectiveness factor ℎ as the ratio of the heat transfer rate from the hot fluid to ℎ Max . Also, is defined as the ratio of the heat transfer rate to the cold fluid to Max . As such, ℎ and are mathematically equal to
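In terms of the dimensionless inlet temperatures (θ_h1 = 1, θ_c1 = 0), the quantities just described reduce to the usual effectiveness definitions; the per-unit-width capacity rates written below are assumptions consistent with the earlier verbal definitions, and η is used here as a stand-in for the paper's effectiveness symbols:
\[
q_{h}^{\max} = \rho_{h}\,c_{p,h}\,u_{h}\,H_{h}\,(T_{h1}-T_{c1}), \qquad
q_{c}^{\max} = \rho_{c}\,c_{p,c}\,u_{c}\,H_{c}\,(T_{h1}-T_{c1}),
\]
\[
\eta_{h} = \frac{q_{h}}{q_{h}^{\max}} = 1 - \theta_{h2}, \qquad
\eta_{c} = \frac{q_{c}}{q_{c}^{\max}} = \theta_{c2}.
\]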
Heat Exchanger Second Set of Performance Ratios.
Let the reference case for the second set of performance ratios be the counterflow heat exchanger with perfect indirect contact between the hot and cold fluids. For this case, the boundary conditions given by (7c) and (7d) change to The heat transfer rate between the two fluids for this case, , is equal to where [ ℎ2 ] and [ 2 ] are the dimensionless exit mean bulk temperatures of the hot and cold fluids, respectively, for the reference case. Define the heat exchanger performance indicators ℎ and as the ratios of the heat transfer rate from the hot fluid and that to the cold fluid, respectively, to the reference heat transfer rate. Mathematically, they are given by (29).
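Writing q_o for the heat transfer rate of the perfect-indirect-contact reference exchanger, these two indicators are simply the following ratios (the symbols used here are assumptions):
\[
\gamma_{h} = \frac{q_{h}}{q_{o}}, \qquad \gamma_{c} = \frac{q_{c}}{q_{o}},
\qquad
q_{o} = \rho_{h}c_{p,h}u_{h}H_{h}\big(T_{h1} - [T_{h2}]_{o}\big) = \rho_{c}c_{p,c}u_{c}H_{c}\big([T_{c2}]_{o} - T_{c1}\big),
\]
where [T_h2]_o and [T_c2]_o are the exit mean bulk temperatures of the reference case. Values above unity indicate that the auxiliary-channel exchanger outperforms the conventional counterflow plate heat exchanger.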
Heat Exchanger Set of Performance Ratios due to Stratified Pin Distribution.
The last set of performance indicators for the present heat exchanger, denoted by ( ℎ , ), are defined as the ratios of the heat transfer rate from the hot fluid and that to the cold fluid to the corresponding quantities when = , respectively. Mathematically, they are equal to

2.6. Analytical Model. Utilizing (15a), (15b), and (24), it can be shown that where ℎ ℎ, ≪ ℎ . This condition is necessary since the aim of introducing the auxiliary fluid flow is to enhance heat transfer inside the hot and cold fluid channels. The coefficients 1 and 2 are given by Using (31), the heat transfer rates at the hot and cold differential boundary elements are given by where is equal to (34) Both sides of (33) can be rearranged by separation of variables into the following differential equation: Integrating (35) over the heat exchanger length results in the following solution: Using (26), ℎ and can be shown to be equal to where (Bi ℎ ) eff and (Bi ) eff are equal to The solutions of (2) for this case under the fully developed flow condition can be obtained numerically with high accuracy by following the methodology described in Section 3 of this work. For this case, it can be shown that the Nusselt numbers obey the following correlations: where ℎ = ( / ℎ )( ℎ / ) and = ( / )( / ).
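One way to read the derivation summarized above, under the stated fully developed flow and very long pin assumptions and treating the auxiliary stream as a reservoir at θ_∞, is that each primary stream exchanges heat with the auxiliary stream through an effective Biot number combining the unfinned convection and pin conduction contributions. A stream-wise energy balance on the hot fluid then gives, approximately,
\[
\varepsilon_{h}\,\mathrm{Pe}_{h}\,\frac{d\theta_{hm}}{dX_{h}} \approx -\,(\mathrm{Bi}_{h})_{\mathrm{eff}}\,\big(\theta_{hm}-\theta_{\infty}\big)
\;\;\Longrightarrow\;\;
\theta_{hm}(X_{h}) = \theta_{\infty} + \big(1-\theta_{\infty}\big)\,
\exp\!\left(-\frac{(\mathrm{Bi}_{h})_{\mathrm{eff}}}{\varepsilon_{h}\mathrm{Pe}_{h}}\,X_{h}\right),
\]
so that
\[
\eta_{h} = 1-\theta_{h2} \approx \big(1-\theta_{\infty}\big)\left[1 - \exp\!\left(-\frac{(\mathrm{Bi}_{h})_{\mathrm{eff}}}{\varepsilon_{h}\mathrm{Pe}_{h}}\right)\right],
\]
with an analogous expression for the cold side in terms of θ_∞ and (Bi_c)_eff. The exact grouping of the unfinned and pin contributions into (Bi)_eff in the original equations may differ from this sketch.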
Upper Limits of Heat Fluxes in the Absence of Pins at the Fully Developed Condition.
The dimensionless maximum convection heat fluxes in the absence of pins can be obtained when ( ℎ , ) = ( ℎ1 , 1 ) and → 0. They are equal to where Bi = Bi ℎ ( ℎ / )( / ℎ ).
Modeling of Auxiliary Fluid Flow Convection Coefficients and Minimum .
Let the auxiliary stream be a laminar flow along the direction of the channel width axis. Therefore, the convection heat transfer coefficients for the fin and unfinned surfaces can be computed using the following correlations [19]: where , Pr , and ν are the auxiliary fluid thermal conductivity, Prandtl number, and kinematic viscosity, respectively.
∞ is the auxiliary free stream velocity. Re = ∞ /ν and Re = ∞ /ν are the Reynolds numbers for the stream across the pins and along the unfinned surface, respectively. Thus, Bi ℎ , * , and the relationships between the latter Reynolds numbers are equal to The minimum requirement on the average pin base area concentration is obtained when the heat transfer rates between the (hot, cold) fluids and the auxiliary stream are equal to those between the hot and cold fluids under perfect indirect contact between the fluids. These quantities can be obtained analytically for the ideal cases: (1) ℎ Pe ℎ and are very large, while Pe is very small, and (2) Pe and are very large, while ℎ Pe ℎ is very small. For these conditions, the minimum average pin base area concentrations, denoted by ℎ Min and Min respectively, can be shown to be equal to
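The specific correlations taken from [19] are standard laminar results; commonly used forms that fit the description (an average flat-plate correlation for the unfinned surface of streamwise length W, and a cylinder-in-crossflow correlation for the pins) would be, for example,
\[
\overline{\mathrm{Nu}}_{W} = \frac{\bar h\,W}{k_{a}} = 0.664\,\mathrm{Re}_{W}^{1/2}\,\mathrm{Pr}_{a}^{1/3},
\qquad
\overline{\mathrm{Nu}}_{d} = \frac{\bar h_{p}\,d}{k_{a}} = C\,\mathrm{Re}_{d}^{m}\,\mathrm{Pr}_{a}^{1/3},
\]
with Re_W = u_∞ W/ν_a, Re_d = u_∞ d/ν_a = Re_W (d/W), and (C, m) the Hilpert constants for the relevant Re_d range. These particular forms are given only as plausible examples; the paper's exact correlation choice is an assumption here. The relation Re_d = Re_W (d/W) is the Reynolds-number relationship referred to in the text.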
Numerical Methodology and Results
Equation (2) is coupled via the boundary conditions given by (15a) and (15b). These equations can be solved iteratively using the implicit finite difference method discussed by Khaled and Vafai [23]. Equation (2) was discretized by employing three-point central differencing quotients for the first and second derivatives with respect to the ℎ, directions. Furthermore, two-point backward differencing quotients were used in the discretization of the first derivatives with respect to the ℎ, directions. For (15a) and (15b), three-point central differencing quotients were used to discretize the first derivatives with respect to the ℎ, directions. The finite difference equations of (2) are given by The pairs ( , ) and ( = − + 1, ) represent the locations of the discretized points in the numerical grids of the hot and cold fluid domains, respectively.
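With the assumed energy-equation form sketched earlier, the discretization described in words (three-point central in the transverse direction, two-point backward in the axial direction) leads, at each axial station i, to a tridiagonal system of the kind
\[
U_{i,j}\,\frac{\theta_{i,j}-\theta_{i-1,j}}{\Delta X}
= \frac{1}{\varepsilon\,\mathrm{Pe}}\,\frac{\theta_{i,j+1}-2\theta_{i,j}+\theta_{i,j-1}}{\Delta Y^{2}}
\;\;\Longrightarrow\;\;
-\,\alpha\,\theta_{i,j-1} + \Big(\tfrac{U_{i,j}}{\Delta X}+2\alpha\Big)\theta_{i,j} - \alpha\,\theta_{i,j+1}
= \tfrac{U_{i,j}}{\Delta X}\,\theta_{i-1,j},
\qquad \alpha = \frac{1}{\varepsilon\,\mathrm{Pe}\,\Delta Y^{2}},
\]
which is what makes the Thomas algorithm applicable at every station. The coefficients above follow from the assumed equation form rather than from equation (46) verbatim.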
is the total number of either or sections. ℎ and are the total numbers of discretized points per and section, respectively. , ℎ , and were taken to be equal to = 1001 and ℎ = = 201. The application of (46) to all discretized points at given and sections results in ℎ and tridiagonal systems of algebraic equations. These equations can easily be solved using the Thomas algorithm [24] if the internal boundary temperatures are known. The numerical solution procedure is summarized in the following steps.
(4) The corrected cold boundary temperatures ( ) corrected were found using the finite difference equation of (15b).
(5) Steps (2)-(4) were repeated, replacing ( ) assumed with the corrected ( ) corrected , until the following condition is satisfied: Using doubled mesh sizes results in less than 0.3% error in the calculated parameters for moderate ℎ Pe ℎ and Pe values. This confirms that the results are grid-size independent. The numerical results shown in Figures 3-12 were generated for the hot, cold, and auxiliary stream fluids listed in Table 1. These selections correspond to an important application, namely oil cooling by a cold water stream augmented by an air stream. The numerical results were compared with the analytical solution given by (35), (36), and (40), as shown in Table 2.
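The marching-and-iteration procedure described above can be sketched in Python as follows. This is a minimal illustration, not the authors' code: only the hot channel is shown, coupled to the auxiliary stream through an assumed effective Biot number and an assumed energy-equation form; the grid sizes, tolerance, and parameter values are placeholders. Only the Thomas (TDMA) solver and the backward-marching structure follow the description in the text.

import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system with sub-diagonal a, diagonal b,
    # super-diagonal c, and right-hand side d (Thomas algorithm).
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for j in range(1, n):
        m = b[j] - a[j] * cp[j - 1]
        cp[j] = c[j] / m
        dp[j] = (d[j] - a[j] * dp[j - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for j in range(n - 2, -1, -1):
        x[j] = dp[j] - cp[j] * x[j + 1]
    return x

def march_channel(theta_in, U, dX, dY, eps_pe, theta_wall):
    # March the assumed parabolic energy equation downstream:
    # U dtheta/dX = (1/(eps*Pe)) d2theta/dY2, with an adiabatic wall at Y = 0
    # and a prescribed wall temperature theta_wall at Y = 1 (both assumptions).
    nX, nY = len(theta_wall), len(theta_in)
    theta = np.tile(theta_in, (nX, 1))
    alpha = 1.0 / (eps_pe * dY ** 2)
    for i in range(1, nX):
        a = np.full(nY, -alpha)
        c = np.full(nY, -alpha)
        b = U / dX + 2.0 * alpha
        d = U / dX * theta[i - 1]
        a[0], c[0] = 0.0, -2.0 * alpha          # adiabatic wall (mirror node)
        a[-1], c[-1], b[-1], d[-1] = 0.0, 0.0, 1.0, theta_wall[i]  # wall temperature
        theta[i] = thomas(a, b, c, d)
    return theta

# Outer iteration on the unknown hot-wall temperature (illustrative values only).
nX, nY = 201, 41
dX, dY = 1.0 / (nX - 1), 1.0 / (nY - 1)
Y = np.linspace(0.0, 1.0, nY)
U = 6.0 * Y * (1.0 - Y)                 # assumed fully developed velocity profile
theta_inf, Bi_eff = 0.3, 1.0            # assumed auxiliary temperature and effective Biot number
theta_hw = np.full(nX, 0.5)             # initial guess for the hot-wall temperature
for _ in range(200):
    th = march_channel(np.ones(nY), U, dX, dY, eps_pe=10.0, theta_wall=theta_hw)
    # Flux balance at the hot wall: conduction in the fluid = loss to the auxiliary stream.
    theta_new = (th[:, -2] / dY + Bi_eff * theta_inf) / (1.0 / dY + Bi_eff)
    if np.max(np.abs(theta_new - theta_hw)) < 1e-6:   # assumed convergence tolerance
        theta_hw = theta_new
        break
    theta_hw = theta_new
print("hot fluid outlet mean bulk temperature:", float(np.sum(U * th[-1]) * dY))

The actual scheme in the paper iterates simultaneously on both the hot and cold boundary temperatures and couples them through the pin conduction and auxiliary convection fluxes of (15a) and (15b); the single-channel loop above only illustrates the structure of that iteration.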
The numerical results show good agreement with the analytical solution since ℎ, Pe ℎ, ≪ 1. The latter constraint is the major assumption used to derive (40).
The Role of Internal Flow Reynolds Numbers in the Performance Ratios.
As Re ℎ increases, both hot fluid mean bulk and heated boundary temperatures decrease due to the increase in advection and the widening effect of the thermal entry region. As a result, the convection heat transfer rate to the auxiliary stream and conduction through the pins increase due to the increase in the heated boundary and pins excess temperatures ( ℎ − ∞ , ℎ − ), respectively. These excess temperatures increase as decreases. Accordingly, the heat transfer rate from the hot fluid increases which causes ℎ to increase as both Re ℎ and (1− ) increase as seen in Figure 3. This figure shows that ℎ decreases as Re increases. This indicates that the increase in the reference case heat transfer rate ( ) is larger than that for the present system ( ℎ ). Also, it is noticed from Figure 3 that ℎ can be larger than one at both smaller and Re values and larger Re ℎ values. Similarly, can be larger than one for smaller (1 − ) and Re ℎ values and larger Re values.
The increase in Re ℎ decreases the hot fluid effectiveness ℎ as shown in Figure 4, since it mainly results in a reduction of the hot fluid mean bulk temperature. However, a decrease in is noticed to cause an increase in ℎ due to the associated increase in ℎ − ∞ . Figures 7 and 9 show that an increase in Re causes an increase in ℎ . This is because both ∞ − and ℎ − increase as Re increases; thus, the conduction through the pins is enhanced. The latter enhancement cannot be clearly identified from Figure 4 because the pin aspect ratio is very small for this figure ( = 0.05). As indicated earlier, when → 0 the heat transfer rates in the two channels become uncoupled. In a similar manner, it can be concluded that decreases as Re increases and as (1 − ) decreases, and it increases as Re ℎ increases, as noticed in Figure 4.
The Role of Pins Aspect Ratio in the Performance Ratios.
As increases ( decreases), the pin surface area increases, causing the fully developed maximum fluxes Θ ℎ and Θ to increase until they reach their asymptotic values, as shown in Figure 5. This causes ℎ to increase as decreases when = 0.25, as shown in Figure 6. When = 0.5, Θ ≫ Θ ℎ ; thus, increases sharply, causing a sharp reduction in ℎ − . Therefore, ℎ decreases as decreases when = 0.5. The cases considered in Figures 6 and 7 have ℎ Pe ℎ ≫ Pe > 1. Thus, the hot fluid flow is dominated by the thermal entry region. For this condition, ℎ is less sensitive to , while increases appreciably as increases. As a result, ∞ − and ℎ − decrease as increases. Since the pins conduction depends linearly on ∞ − and ℎ − while it is less sensitive to , decreases as decreases, as seen in Figure 6. Due to the previous analysis, ℎ and increase as increases, except when = 0.25, where ℎ is noticed in Figure 7 to decrease as increases. Also, it is shown from this figure that ℎ > 1 when = 0.25 and
The Role of Pins Base Area Concentration in the Performance Indicators.
Two limiting cases can be encountered in the present heat exchanger: (1) pure convection between the channels and the auxiliary stream when → 0 and (2) pure conduction between the channels when → . As seen in Figure 5, (Θ ℎ , Θ ) > (Θ ℎ , Θ ) when = 0.2, which is the condition for Figures 8 and 9. Accordingly, the ( ℎ , ) pair is expected to decrease as increases. This is shown in Figure 8 except for the ℎ plot when = 0.25. For this case, ℎ − ∞ and ℎ − are very close to their upper limit ( ℎ1 − 1 ), while this limit is reduced to ( ℎ1 − ∞ ) for the pure convection condition as → 0. Since the heat flux is linearly proportional to ℎ − ∞ and ℎ − , as can be seen in (15a) and (15b), ℎ increases when is increased for = 0.25. Because of the previous facts, the ( ℎ , ) pair decreases as increases except when = 0.25, where ℎ is noticed in Figure 9 to increase as increases. Furthermore, it is shown from this figure that ℎ > 1 when = 0.25 and Re = 10, while > 1 when = 0.5. This demonstrates the superiority of the present heat exchanger over the conventional counterflow plate heat exchanger.
The Role of Pins Distribution in the Performance Indicators.
Since = 0.25 and ℎ Pe ℎ ≫ Pe > 1 in Figures 10, 11, and 12, ℎ − ∞ and ℎ − are very close to the maximum value ( ℎ1 − 1 ). These excess temperatures become closer to that limit near the hot fluid inlet. Far from this region, they tend to decrease appreciably as decreases, since Θ ℎ > Θ ℎ . Consequently, the uniform distribution of pins yields the maximum ℎ , for which ℎ = 1, as seen from these figures. On the other hand, ∞ − turns out to be much smaller than ( ∞ − 1 ), particularly near the cold fluid exit. Increases in ℎ Pe ℎ and Pe reduce ℎ and , respectively. As a result, ℎ − ∞ and ∞ − increase, causing {( ℎ ) max , ( ) max } to increase as ℎ Pe ℎ and Pe increase, respectively. These trends are clearly seen in Figures 13 and 14. These maximum ratios are obtained based on the {Nu * ℎ , Nu * } approximations. Some critical parameters that produce the maximum quantities {( ℎ ) max , ( ) max } are listed in Table 3. Finally, {( ℎ ) max , ( ) max }, as seen from Figures 13 and 14, can be much larger than one, indicating that properly distributing the pins is an efficient mechanism for heat transfer enhancement under fully developed laminar flow conditions.
Conclusions
Heat transfer inside a counterflow plate heat exchanger subject to internal convection with an auxiliary fluid was investigated in this work. The auxiliary fluid passage is surrounded by the hot and cold fluid channels and is supported by highly conductive pins connected to both channels. Good agreement was noticed between the numerical solution and an approximate analytical solution based on the fully developed flow and very long pin conditions. The results of the current study can be summarized by the following concluding remarks.
(1) The heat transfer rate from/to the hot/cold fluid of the present system can be higher than that for the conventional counterflow plate heat exchanger under the following conditions: (a) large hot/cold flow Reynolds number, (b) small cold/hot flow Reynolds number, and (c) large hot/cold fluid excess temperature. (2) Increasing either the number of pins or the pin length may increase the hot/cold heat transfer rate above that for the conventional counterflow plate heat exchanger under the following conditions: (a) the effective hot/cold fluid Biot number due to pin conduction alone is larger than that due to convection with the unfinned surface alone, and (b) large hot/cold fluid excess temperature, as when the hot/cold flow Reynolds number is large.
"Engineering",
"Physics"
] |