Channel selection in motor imagery-based brain-computer interfaces: a particle swarm optimization algorithm

The number of electrode channels in a brain-computer interface affects not only its classification performance, but also its convenience in practical applications. However, an effective method for determining the number of channels has not yet been established for motor imagery-based brain-computer interfaces. This paper proposes a novel evolutionary search algorithm, binary quantum-behaved particle swarm optimization, for channel selection, which is implemented in a wrapping manner, coupling common spatial pattern for feature extraction and support vector machine for classification. The fitness function of binary quantum-behaved particle swarm optimization is defined as the weighted sum of the classification error rate and the relative number of channels. The classification performance of the binary quantum-behaved particle swarm optimization-based common spatial pattern was evaluated on an electroencephalography data set and an electrocorticography data set. It was subsequently compared with that of three other common spatial pattern methods: using the channels selected by binary particle swarm optimization, all channels in the raw data sets, and channels selected manually. Experimental results showed that the proposed binary quantum-behaved particle swarm optimization-based common spatial pattern method outperformed the three other common spatial pattern methods, significantly decreasing the classification error rate and the number of channels, as compared to the common spatial pattern method using all channels in the raw data sets. The proposed method can significantly improve the practicability and convenience of a motor imagery-based brain-computer interface system.
Keywords: Brain-computer interface; motor imagery; common spatial pattern; channel selection; binary quantum-behaved particle swarm optimization

Introduction

A brain-computer interface (BCI) is a type of communication system that establishes a non-muscular pathway between the brain and the outside world (Wolpaw et al., 2002). As such, BCIs can help people with motor disabilities communicate with external environments or control an external device. BCI systems can be divided into non-invasive and invasive types (Leuthardt et al., 2004). Non-invasive human BCIs currently use electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS) as brain imaging techniques. Among them, EEG and fNIRS have been attractive due to their low cost and portability (Arvaneh et al., 2011; Aydemir et al., 2018; Blankertz et al., 2008; Ehrsson et al., 2003; Hong et al., 2017, 2015; Khan and Hong, 2017; Naseer and Hong, 2015; Shin et al., 2012). In contrast, invasive human BCIs are primarily based on electrocorticography (ECoG) signals recorded from the cortical surface (Lal et al., 2005; Leuthardt et al., 2004). Various paradigms are used for building a BCI system. One of them utilizes motor imagery (MI) to generate distinguishable brain signals (Ehrsson et al., 2003; Pfurtscheller and Neuper, 2001). For example, imagination of a limb movement results in two neurophysiological phenomena, event-related desynchronization and event-related synchronization (ERD/ERS) (Pfurtscheller and da Silva, 1999; Toro et al., 1994), i.e. a decrease/increase in the power of EEG signals in the frequency bands of the µ rhythm (8-14 Hz) and β rhythm (16-28 Hz) over the motor and sensorimotor areas. Common spatial pattern (CSP) is an effective algorithm for extracting ERD/ERS features from EEG data (McFarland et al., 1997). As a powerful spatial filtering algorithm, CSP can detect the oscillatory characteristics of EEG signals in specific brain areas, thus facilitating its use for discriminating between two classes of EEG patterns (Muller-Gerking et al., 1999). However, these specific brain areas may vary between people based on differences in physiology and anatomy. One approach, then, is to apply as many channels as possible to record data in BCI systems. However, this introduces significant noise in the data and can cause an overfitting problem for the CSP algorithm (Blankertz et al., 2008). Furthermore, using a large number of electrodes impedes the convenience of practical applications. To balance the need for both performance and convenience in a BCI, it is crucial to remove task-irrelevant channels using a channel selection method (Arvaneh et al., 2011).

Figure 1. (a) The placement of the 118 electrodes in the EEG data set according to the extended international 10/20 system; (b) The placement of the 8 × 8 electrode grid used for recording ECoG data of the second patient in the ECoG data set. It was placed on the right hemisphere.

So far, the methods for channel selection can be divided mainly into three categories, namely filtering, wrapping, and embedded (Alotaiby et al., 2015). Filtering methods are independent of the subsequent learning algorithm, and rely on certain criteria to evaluate candidate channel subsets. For example, Arvaneh et al. (2011) formulated a sparse common spatial pattern (SCSP) algorithm as an optimization approach to reduce the number of channels.
Filtering methods can reduce the number of channels at high speed, but usually at the cost of classification accuracy. Conversely, wrapping methods employ a different strategy, whereby channel selection is combined with a classification algorithm. Candidates are assessed by classification accuracy, and such methods can therefore yield more robust results, but they are also more computationally expensive than filtering methods. Aydemir et al. (2018) recently presented such a sequential forward search method (SFSM) for channel selection. Finally, in an extension of wrapping techniques, embedded methods select channels based on criteria generated during the learning process of a specific classifier. For example, Lal et al. (2004) embedded feature selection algorithms, recursive feature elimination (RFE), and zero-norm optimization into support vector machines (SVM) to recursively eliminate the channels that yield the worst classification results. Together, all these methods can reduce the number of channels to a considerable degree. Despite the volume of research on channel selection, accurately determining the number and position of channels is still a big challenge for MI-based BCIs. This study employs the idea of a wrapping method to construct a channel selection process, and attempts to answer the following two research questions: 1) What is the degree of improvement in the performance of a BCI system using only selected channels for classification, as compared to using all recording channels? And 2) What is the minimum number of channels required to achieve satisfactory classification performance (classification accuracy of approximately 90%)? We selected the wrapping method for its higher classification performance compared with the filtering method and its lower computational complexity compared with the embedded method. To address our research questions, we propose a novel evolutionary search algorithm, binary quantum-behaved particle swarm optimization (BQPSO) (Xi et al., 2010), for channel selection in MI-based BCIs. The BQPSO evaluates all candidate channel subsets under the guidance of the fitness value, and continuously updates the candidate subsets until the maximum number of iterations is reached. Based on the chosen channels, the CSP algorithm is used for feature extraction, and a support vector machine (SVM) for classification. The fitness function of the BQPSO is defined as the weighted sum of the classification error rate and the relative number of channels, so that the number of channels can be reduced as much as possible, on the premise that the classification performance meets the needs of BCI applications. Another evolutionary search algorithm, binary particle swarm optimization (BPSO) (Kennedy and Eberhart, 1997), has been used for channel selection in MI-based BCIs by Kim et al. (2015). To demonstrate the advantages of BQPSO for channel selection, we evaluated the performance of BQPSO-based CSP on an EEG data set and an ECoG data set, in comparison with the performance of BPSO-based CSP, CSP with all recording channels, and CSP with manually chosen channels selected according to prior knowledge of neurophysiology. We report that BQPSO-based CSP achieved superior performance compared to the other three CSP methods.

Experimental data

In this study, two data sets were used for evaluating the performance of the proposed CSP method. The first is a publicly available EEG data set, data set IVa of BCI Competition III (Blankertz et al., 2006). The other is an ECoG data set provided by the authors of Lal et al.
(2005), and used in their study. These data sets were employed owing to their use of many recording electrodes. The two data sets differ primarily in their signal-to-noise ratios (SNR) and sizes (i.e. numbers of total trials).

Figure 2. (a) The timing scheme of each trial for the EEG data set. In each trial, the duration of motor imagery was 3.5 s and the next 1.75-2.25 s was the time for a subject to relax; (b) The timing scheme of each trial for the ECoG data set. In each trial, the duration of motor imagery was 4 s and the next 2 s was the time for a subject to relax.

The EEG data set

The EEG data set was originally provided for classifying EEG data with small training sets and is widely employed in BCI studies to compare different classification algorithms. It consists of five data subsets derived from five healthy subjects (aa, al, av, aw and ay). Each subject participated in an MI-based BCI experiment, in which they were required to conduct mental tasks of imagining left hand, right hand or right foot movements, following a given visual cue denoted by a letter (L, R, or F). Starting from the visual cue, the subjects carried out the corresponding MI task for 3.5 s. These visual cues were presented intermittently, separated by pauses of random length ranging from 1.75 s to 2.25 s, during which the subject could relax. EEG signals were collected using a BrainAmp amplifier and a 128-channel Ag/AgCl electrode cap. 118 electrodes were used for recording experimental data, according to the extended international 10/20 system. The EEG data were digitized at 1000 Hz by the amplifier and re-sampled to 100 Hz by the competition organizers for offline analysis. A total of 280 trials per class were performed by each subject. Only the data from the MI tasks of right hand (R) and right foot (F) were provided for the competition. The electrode placement for recording EEG data and the timing scheme of each trial are illustrated in Fig. 1 (a) and Fig. 2 (a), respectively.

The ECoG data set

The ECoG data set was recorded from three epileptic patients (AM, JS, SS) with intracranial electrodes. All patients suffered from focal epilepsy and had to undergo a surgical operation to have their foci resected. Prior to the surgery, the localization of the epileptic foci required placing electrodes onto the surface of the cortex and into deeper regions of the brain. After several days of recovery and follow-up examinations due to the implantation surgery, the BCI experiments were carried out in the hospital. For the experiment, each subject was asked to repeatedly imagine two different limb movements according to the visual cue. Each trial started with a fixation cross displayed in the center of the screen and lasted for 7 s. At second 1, a visual cue appeared on the screen indicating the MI task to be performed. The cue for patient SS was an arrow pointing to the left or right hand, whereas that for patients AM and JS was a picture showing either a tongue or a little finger. The imagination phase lasted 4 s. In the final 2 s of each trial, the patient could relax. All three patients had grid electrodes implanted, but patients JS and SS had additional strip electrodes. The electrode grids were placed on the cortex under the dura mater, covering the primary motor and premotor areas as well as the frontotemporal region of either the right or left hemisphere. The electrodes were connected to an EEG amplifier by cables. The ECoG signals were recorded at a sampling rate of 1000 Hz and re-sampled to 100 Hz for offline analysis.
The number and positions of the implanted electrodes, the tasks performed, and the number of trials recorded from each patient are listed in Table 1. The electrode placement for recording ECoG data of the second patient and the timing scheme of each trial are illustrated in Fig. 1 (b) and Fig. 2 (b), respectively.

Methods

The channel selection algorithm based on the wrapper is illustrated in Fig. 3. As shown in Fig. 3 (a), raw EEG data are first temporally filtered in 8-15 Hz, then subjected to channel selection via BPSO/BQPSO, and finally classified by SVM. To accurately assess classification performance, 10-fold cross-validation is applied, i.e. the whole data set is divided into 10 equal parts, with each part being used as the testing set once and the other parts as the training set. Measurements of the average error rate and the number of channels are employed to calculate the fitness value at each iteration.

Figure 3. (a) Flowchart of BPSO/BQPSO-based channel selection. To realize the 10-fold cross-validation, the training data were divided into 10 equal-size parts; each part was used as the testing set once and the other nine parts were used as the training set. (b) Feature extraction and classification of EEG/ECoG signals based on CSP and SVM using the selected channels in one fold of the 10-fold cross-validation process.

As shown in Fig. 3 (b), in each fold the CSP spatial filters learned from the training set are used to filter both training and testing data. Feature signals are extracted from the spatially filtered data. Feature signals from the training set are employed for training an SVM classifier model, which then classifies the feature signals from the testing set.

Channel selection

The particle swarm optimization (PSO) developed by Kennedy and Eberhart (1995) is a population-based evolutionary search method. The main idea of the algorithm comes from the social behavior of animals, such as bird flocking, fish schooling, and animal herding. The original PSO, designed for continuous search spaces, was modified to be applicable to discrete binary search spaces, and thus termed binary PSO (BPSO) (Kennedy and Eberhart, 1997). From the perspective of quantum mechanics, Shin et al. (2004) adapted the PSO algorithm to develop a novel quantum-behaved particle swarm optimization (QPSO), using the quantum uncertainty principle to describe the motion state of particles. Subsequently, they further generalized the QPSO algorithm to discrete binary search spaces, developing the binary QPSO (BQPSO) (Xi et al., 2010).

Binary particle swarm optimization

In PSO, a particle swarm consists of M particles that denote potential solutions, X = {X_1, X_2, ..., X_M}. A potential solution to a problem is expressed as a particle flying in a D-dimensional space, having the position vector X_i = {X_i1, X_i2, ..., X_iD} and the velocity vector V_i = {V_i1, V_i2, ..., V_iD}. Each particle maintains a record of the position of its previous best performance (i.e. the position with the best fitness value) in a vector pbest_i = {pbest_i1, pbest_i2, ..., pbest_iD}. At each iteration, each particle competes with the others in the population for the best position, denoted as gbest = {gbest_1, gbest_2, ..., gbest_D}. Thereby, particles move in the search space according to the following:

V_id(t+1) = w·V_id(t) + φ_1·(pbest_id − X_id(t)) + φ_2·(gbest_d − X_id(t)),  (1)
X_id(t+1) = X_id(t) + V_id(t+1),  (2)

where i = 1, 2, ..., M and d = 1, 2, ..., D. Here w is the inertia weight introduced for accelerating the convergence speed of PSO, and φ_1 and φ_2 are two random positive numbers generated for each i and d.
At each iteration, the value of V_id is confined to [−V_max, V_max]. In a discrete binary space, the velocity of a particle can be described by the number of bits changed per iteration, i.e. the Hamming distance of a particle between time t and t+1. A particle with zero bits flipped does not move, while it moves the "farthest" with all bits flipped. Accordingly, the velocity of a particle can be defined in terms of the probabilities that a bit will be in one state or the other. That is to say, a particle moves in a state space with each dimension confined to 0 and 1, where each V_id denotes the probability of bit X_id taking the value 1. In summary, the particle swarm update Eqn. (1) remains unchanged in BPSO, but now pbest_id, gbest_d and X_id are taken as integers in {0, 1}. Since V_id is used as a probability, it must be confined to [0.0, 1.0]. This can be implemented by a sigmoid limiting transformation function S(v) = 1/(1+exp(−v)). Thus, the main difference between PSO and BPSO is that formula (2) is replaced by the following Eqn. (3):

X_id = 1 if rand() < S(V_id), and X_id = 0 otherwise,  (3)

where rand() is a uniform random number in [0, 1]. The parameter V_max limits the ultimate probability that bit X_id will take on a value of one or zero.

Binary quantum-behaved particle swarm optimization

Quantum-behaved particle swarm optimization (QPSO) is a novel variant of PSO and outperforms PSO in search ability. In QPSO there are no concepts of velocity and trajectory, but only those of position and distance. A particle moves in the continuous search space according to the following equations:

mbest = (1/M) Σ_{i=1}^{M} pbest_i,  (4)
p_id = φ·pbest_id + (1−φ)·gbest_d,  (5)
X_id(t+1) = p_id ± α·|mbest_d − X_id(t)|·ln(1/µ),  (6)

where mbest is the mean best position among the particles, p_id is a stochastic point between pbest_id and gbest_d, i.e. the dth coordinate of the local attractor of the ith particle p_i, φ and µ are two random numbers distributed in [0, 1], and α is a parameter of QPSO called the contraction-expansion coefficient. Since the iteration equations of QPSO are far different from those of PSO, the methodology of BPSO does not apply to QPSO. Because the position of a particle in a discrete space is expressed as a binary string, the key problem in designing BQPSO is to define the distance between two positions and the corresponding transformation. In BQPSO, the distance is defined as the Hamming distance between two binary strings X and Y, i.e. |X − Y| = d_H(X, Y), where d_H(·) is the function computing the Hamming distance, i.e. the number of differing bits between the two strings. In BQPSO, the variable X_id stands for the dth substring (i.e. the dth decision variable) of the ith particle, rather than the dth bit of a binary string. Let the length of X_id be l_d; then the length of X_i is Σ_{d=1}^{D} l_d. The remaining problem in BQPSO design is adapting the continuous evolution Eqns. (4)-(6) to discrete binary spaces. In QPSO, the mean best position of all particles (mbest) is derived from Eqn. (4), whereas in BQPSO the jth bit of mbest is determined by the states of the jth bits of all particles' pbests: it is set to 1 if the mean of these bits is greater than 0.5, to 0 if it is less than 0.5, and randomly to 1 or 0 if it equals 0.5. In BQPSO, P_i can be generated through a crossover operation on pbest_i and gbest, which can be either a one-point or a multi-point operation. The update Eqn. (6) of QPSO can be rewritten as |X_id − p_id| = α |mbest_d − X_id| ln(1/µ), µ = Rand(). It can be further adapted for use in BQPSO as d_H(X_id, P_id) = ⌈b⌉, where b = α d_H(mbest_d, X_id) ln(1/µ) and ⌈·⌉ denotes rounding to the nearest integer. According to the above equations, a new substring X_id can be calculated with time complexity O(b·l_d). To reduce the computation cost, X_id is instead generated by mutating each bit of P_id with a mutation probability derived from b.
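To make the binary update concrete, the following is a minimal Python sketch of one plain-BPSO iteration in the spirit of Eqns. (1) and (3). The swarm size, inertia weight, V_max value and random seed are arbitrary illustrative choices, and the BQPSO crossover/mutation machinery described above is deliberately not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(X, V, pbest, gbest, w=0.8, v_max=4.0):
    """One BPSO iteration: inertia-weight velocity update (Eqn. (1)),
    clipping to [-v_max, v_max], sigmoid squashing, and bit resampling (Eqn. (3)).
    X, V, pbest: (M, D) arrays; gbest: (D,) array with entries in {0, 1}."""
    M, D = X.shape
    phi1 = rng.random((M, D))                     # random positive numbers for each i, d
    phi2 = rng.random((M, D))
    V = w * V + phi1 * (pbest - X) + phi2 * (gbest - X)
    V = np.clip(V, -v_max, v_max)                 # confine velocities to [-V_max, V_max]
    prob = 1.0 / (1.0 + np.exp(-V))               # S(v) = 1 / (1 + exp(-v))
    X = (rng.random((M, D)) < prob).astype(int)   # bit X_id is 1 with probability S(V_id)
    return X, V
```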
Fitness function

When BPSO/BQPSO is used for channel selection, individuals in a population are represented as n-bit binary strings, corresponding to the n channels used for data recording. The BPSO/BQPSO operates on a population of binary strings and chooses channels by optimizing a fitness (or objective) function. There are two goals in channel selection: improving classification accuracy, and reducing the number of channels. Accordingly, the fitness function f(z) can be defined as the weighted sum of two decision variables, the error rate of 10-fold cross-validation, f_1(z), and the relative number of channels, f_2(z), for a minimization problem (Hasan et al., 2010; Reyes-Sierra and Coello, 2006):

f(z) = w_1 f_1(z) + w_2 f_2(z),  (10)

where the weights w_i are normalized, i.e. w_1 + w_2 = 1. f_1(z) is obtained from the given channel subset denoted by an individual z, and f_2(z) is derived by dividing the number of channels chosen in the individual z by the total number of channels in the raw data set, i.e. the length of the binary string, n. Since the numerical values of f_1(z) and f_2(z) range from 0 to 1, so does that of f(z); a toy implementation of this fitness function is sketched after the steps below.

Table 3. Classification error rates (%) and the number of channels yielded respectively by BQPSO-CSP and BPSO-CSP at weight coefficients w_1 = w_2 = 0.5, the CSP methods using all channels and the 18 channels around the electrodes C3 and C4, and the best channel selection method (SCSP-1) (Arvaneh et al., 2011).

Several important steps for BPSO/BQPSO-based channel selection are explained below: 1) Coding. Each particle in a population is coded as a binary string whose length is equal to the total number of channels in a raw data set. When any bit of the binary string is 1, the corresponding channel is retained; otherwise the corresponding channel is removed. Thus, each particle denotes a different subset of channels, which is a candidate solution to the problem of channel selection. 2) Initialization. An initial population of particles (20 in this study) is randomly generated, and each bit of the binary string for every particle is randomly set to 1 or 0. 3) Selection. Channel selection is equivalent to finding a global minimum of the fitness function. At each generation, the particle with the smallest fitness value is found. After each iteration, the positions of all particles are updated, and the current best position of each particle is compared with that of the best particle of the previous generation to find the global optimal position, i.e. the best particle at the current generation. The best particle at the final generation includes the channels selected by BPSO/BQPSO.
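The sketch below illustrates how such a fitness function could be evaluated for one particle. It is a simplified stand-in rather than the authors' code: scikit-learn's linear SVM replaces MATLAB's fitcsvm, and a per-channel log-variance feature is used as a placeholder for the CSP features described in the next subsection, just to keep the example self-contained.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, X_trials, y, w1=0.5, w2=0.5):
    """Weighted sum of 10-fold CV error rate and relative channel count, Eqn. (10).
    mask: binary vector over channels (one particle); X_trials: (trials, channels, samples)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 1.0                        # worst possible fitness for an empty channel subset
    # Placeholder features: log-variance of each selected channel (CSP is used in the paper).
    feats = np.log(X_trials[:, idx, :].var(axis=2))
    acc = cross_val_score(SVC(kernel="linear", C=1.0), feats, y, cv=10).mean()
    f1 = 1.0 - acc                        # error rate of 10-fold cross-validation
    f2 = idx.size / mask.size             # relative number of channels
    return w1 * f1 + w2 * f2
```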
Feature extraction

Data preprocessing. Prior to channel selection, both the EEG and ECoG data sets were preprocessed with respect to time windowing, temporal filtering, and electrode referencing. In the EEG data set, the raw data in a time window of 1-2 s after the visual cue were segmented from each channel for classification (Shin et al., 2012). The windowed EEG data were band-pass filtered between 8 and 15 Hz to extract the µ rhythm signals associated with MI (Shin et al., 2012). In the ECoG data set, the raw data in a time window of 0.5-2.5 s following the visual cue were segmented from each channel for classification. The data segments used for classifying the two data sets were not optimized, but were determined experimentally and heuristically. Common average reference (CAR) was used to re-reference the windowed data to reduce sensitivity to artifacts (Ludwig et al., 2009). The re-referenced ECoG data were band-pass filtered between 8 and 30 Hz to extract both µ and β rhythm signals associated with MI (Wei et al., 2007).

Common spatial pattern. Common spatial pattern (CSP) is a powerful algorithm for spatial filtering, which has been successfully employed in MI-based BCIs for discriminating between two classes of EEG data. By spatially filtering multi-channel EEG signals, CSP maximizes the variance of one class while minimizing the variance of the other class, making subsequent classification more effective (Blankertz et al., 2008; Lotte and Guan, 2011; Muller-Gerking et al., 1999). The purpose of CSP is to extract task-related signal components and suppress task-unrelated components and noise. Assume that there are two-class EEG signals evoked by two mental tasks, e.g. MI of the left hand and the right hand. Let X_1 and X_2 respectively denote a single-trial EEG signal of classes 1 and 2, with dimension N (channels) × T (sampling points). Two normalized spatial covariance matrices, R_1 and R_2, are calculated from X_1 and X_2, respectively, as R_i = X_i X_i^T / trace(X_i X_i^T), where the superscript T denotes the transpose operation and trace(A) stands for the trace operation, i.e. the sum of the diagonal elements of matrix A. The averaged spatial covariance matrix R̄_i across all training trials can be obtained for each class. Subsequently, the composite spatial covariance matrix is calculated as R_c = R̄_1 + R̄_2. Since R_c is a real and symmetric matrix, it can be factored as R_c = U_c Σ_c U_c^T, where U_c is the eigenvector matrix and Σ_c is the diagonal eigenvalue matrix. U_c and Σ_c can be used for calculating the whitening transform matrix P = Σ_c^{−1/2} U_c^T, which transforms R̄_i as S_i = P R̄_i P^T. Consequently, S_1 and S_2 will share the same eigenvectors: if S_1 is factored as S_1 = B Σ_1 B^T, then S_2 will be factored as S_2 = B Σ_2 B^T, with Σ_1 + Σ_2 = I, where I is the identity matrix. Given that the sum of the two eigenvalues corresponding to the two-class EEG signals is always equal to one, eigenvectors with the largest eigenvalues for S_1 will correspond to those with the smallest eigenvalues for S_2, and vice versa. This property is extremely important for the classification of EEG signals, because it means that when the signal variance for one class is maximized, that for the other class will be minimized. The CSP algorithm leads to a spatial filter matrix

W = B^T P,  (14)

where W ∈ R^{N×N}. In general, the first and the last m rows are used as two spatial filters W_1 and W_2 for the two mental tasks, respectively. The two spatial filters are optimal in the sense that they extract task-related components and eliminate common components.

Feature definition. The last step of feature extraction is to define feature signals for classification. Suppose that task 1 causes a relatively increased EEG variance over a specific area of the brain, so that the variance of the EEG component filtered by W_1 is greatly enhanced compared with that filtered by W_2, and vice versa. Given a single-trial spatiotemporal signal matrix X with unknown label, two runs of spatial filtering by W_1 and W_2 are applied. Then, features f_1 and f_2 are defined as f_i = log( var(W_i X) / (var(W_1 X) + var(W_2 X)) ), i = 1, 2, where var(W_i X) denotes the variance of the signal filtered by W_i, and f_i takes a value between 0 and 1 before the logarithmic operation. In theory, f_1 takes the value 0 for trials from task 2 and 1 for trials from task 1; contrary results are yielded for f_2. The logarithmic operation is adopted to make the distribution of the elements of f_i more normal. Ultimately, the feature vector used for classification is F = [f_1, f_2]^T.
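A compact sketch of this two-class CSP computation and the log-variance features is given below. It uses the generalized eigenvalue formulation, which is algebraically equivalent to the whitening-plus-diagonalization route described above; the function names, array shapes and the use of SciPy are illustrative assumptions, and the input trials are assumed to be already band-pass filtered and restricted to the selected channels.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_1, trials_2, m=3):
    """Two-class CSP. trials_k: array (n_trials, N, T) of preprocessed trials.
    Returns the m most discriminative spatial filters per class."""
    def avg_cov(trials):
        covs = [X @ X.T / np.trace(X @ X.T) for X in trials]   # normalized spatial covariance
        return np.mean(covs, axis=0)
    R1, R2 = avg_cov(trials_1), avg_cov(trials_2)
    # Solving R1 v = lambda (R1 + R2) v is equivalent to whitening R_c = R1 + R2
    # and diagonalizing S1; the paired eigenvalues satisfy lambda_1 + lambda_2 = 1.
    vals, vecs = eigh(R1, R1 + R2)
    W = vecs[:, np.argsort(vals)[::-1]].T       # rows ordered by decreasing class-1 variance
    return W[:m], W[-m:]                         # W_1 and W_2, cf. Eqn. (14)

def csp_features(W1, W2, X):
    """Features f_1 and f_2 for a single trial X (N x T): log of each filter
    bank's share of the total variance of the spatially filtered signal."""
    v1 = (W1 @ X).var(axis=1).sum()
    v2 = (W2 @ X).var(axis=1).sum()
    return np.log(np.array([v1, v2]) / (v1 + v2))
```

The rows of W_1 and W_2 returned here correspond, up to ordering and scale, to the first and last m rows of the matrix in Eqn. (14).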
Classification

A linear support vector machine (SVM) was used as the classifier in this study. Proposed by Cortes and Vapnik (1995), SVM is a superior classification algorithm in the field of pattern recognition and machine learning. In the field of BCI studies, SVM has exhibited robust classification performance (Blankertz et al., 2003; Kaper et al., 2004; Schlögl et al., 2005). The purpose of SVM is to design a hyperplane g(V) = w^T V + w_0 = 0 which maximizes the margin between two classes of training data, where w is a weight vector and w_0 is an offset. Due to this characteristic, the generalization performance of the classifier is guaranteed. A linear SVM can be summarized as the following optimization problem:

minimize (1/2)||w||^2 + C Σ_i ζ_i, subject to y_i (w^T V_i + w_0) ≥ 1 − ζ_i and ζ_i ≥ 0,

where i is the index of training trials, y_i ∈ {+1, −1} is the label of trial V_i, ζ_i is a slack variable and C is a regularization parameter. The role of ζ_i is to relax the requirement of linear separability, whereas that of C is to make a compromise between the bias and variance of the classification results. A linear SVM classifier (Müller et al., 2003) is trained with the function fitcsvm in the Statistics and Machine Learning Toolbox.

Table 4. Classification error rates (%) and the number of channels yielded respectively by BQPSO-CSP and BPSO-CSP at weight coefficients w_1 = w_2 = 0.5, the CSP method using all channels, and the channel selection method (RCE cross-val.) (Lal et al., 2005).

Usually, a model selection procedure is required for determining the regularization parameter C, in order to improve classification accuracy. Since the purpose of this research is to evaluate the search algorithm for channel selection, we adopted the default parameter in fitcsvm, i.e. C = 1.

Results

The efficacy of the CSP algorithm depends heavily upon the number and positions of the channels used for classification. Hence, before feature extraction was conducted by CSP, different channel sets were applied, including i) the channel subsets chosen by BQPSO and BPSO, ii) all channels contained in the raw data sets, and iii) the 18 benchmark channels around electrodes C3 and C4 for the EEG data set. These four CSP methods are hereafter labelled BQPSO-CSP, BPSO-CSP, Basic-CSP-1, and Basic-CSP-2, respectively. The performance of BQPSO-CSP was tested and compared with the other three CSP methods on the two data sets, EEG and ECoG. For the EEG data set, the three most important pairs of spatial filters (i.e. m = 3 in Eqn. (14)) were used, according to their contribution to classification. For the ECoG data set, only the most important pair of spatial filters (i.e. m = 1 in Eqn. (14)) was used. The parameters used for channel selection in the BQPSO and BPSO algorithms are listed in Table 2.

BQPSO/BPSO for channel selection

In this study, both the error rate and the number of chosen channels yielded by BQPSO-CSP and BPSO-CSP were the results of 10-fold cross-validation averaged across 5 independent executions (Xi et al., 2016). The weight coefficients w_1 and w_2 in the fitness function (10) were varied jointly, one changing with the other. We tested 9 combinations of w_1 and w_2 in which w_1 increased from 0.1 to 0.9 in increments of 0.1 while, simultaneously, w_2 decreased from 0.9 to 0.1 in steps of 0.1. Thus, for each subject or patient, both BQPSO-CSP and BPSO-CSP had 9 sets of classification results.
Fig. 4 and Fig. 5 depict the classification error rate and the number of channels yielded by BQPSO-CSP and BPSO-CSP at the nine pairs of weight coefficients on the two data sets. Each mark (circle or asterisk) in each subplot represents the error rate and the number of channels achieved at one pair of weight coefficients. When the weight of the error rate (w_1) was assigned the maximum value (0.9), the two methods for channel selection produced the least (or near-least) classification error rates, by excluding the most redundant channels. On the contrary, when the weight of the channel number (w_2) was assigned the maximum value (0.9), the two methods retained the minimal (or near-minimal) number of channels, without increasing the error rates as compared to the CSP method using all channels. From each subplot of Fig. 4 and Fig. 5, it can be observed that the curve of error rate versus the number of channels yielded by BQPSO-CSP is always located to the left of that yielded by BPSO-CSP. This means that to obtain a roughly equal error rate, the latter needs to select many more channels than the former. Examining the data from subject al as an example, a 2.17% error rate required an average of 9.6 channels for the former, whereas a 2.42% error rate required an average of 27.6 channels for the latter. Therefore, the proposed BQPSO-CSP method outperformed the BPSO-CSP method, particularly when the number of channels is small. Fig. 6 and Fig. 7 display the topological analyses of the spatial patterns yielded by the three methods for channel selection in representative subjects in the two data sets. The spatial patterns are derived from the CSP filters, i.e., the inverse of the CSP filter matrix (Eqn. 14). The first and the last columns of the inverse matrix constitute the most important spatial patterns. In the two figures, the first row plots the topological maps obtained from all channels, whereas the second and third rows show the topological maps obtained from the channels selected by BPSO-CSP and BQPSO-CSP, respectively. The dots in each topological map represent the positions of the total channels or the chosen channels.

Spatial patterns

It can be observed from Fig. 6 that the spatial patterns obtained from all channels (1st row) have large weights scattered in several locations irrelevant to the MI tasks. Especially for subject aa, the spatial patterns yielded from MI of foot movement appear messy, displaying no clear focus. After BPSO- or BQPSO-based channel selection, the focus of the spatial patterns (2nd and 3rd rows) is clearer than that using all channels. Moreover, the foci of these patterns are moved to (or near) locations related to the MI tasks. With respect to these two methods for channel selection, while BPSO could reduce the number of channels employed, the positions of the chosen channels were relatively scattered. In addition, some channels outside the focus area were also selected, raising the potential to introduce noise into the data used for classification. By contrast, BQPSO selected fewer channels, and these channels were concentrated mainly on the focus area. The positions and number of the chosen channels explain the decrease in error rate compared to that of BPSO and the all-channel method. Moreover, the channels selected by BQPSO were almost identical to those selected in the focus area by BPSO, especially for subject AM in Fig. 7.

Error rate and the number of channels

As shown in Fig. 4 and Fig. 5, the nine combinations of weight coefficients resulted in nine pairs of error rate and number of channels.
Thus, channels in a BCI system can be configured according to these results and the requirement of the error rate for a specific application. As an example, the error rate and the number of channels yielded by BQPSO-CSP and BPSO-CSP at the weight coefficients of w_1 = w_2 = 0.5 on the EEG and ECoG data sets are listed in Table 3 and Table 4, respectively. As a comparison, the error rate and number of channels yielded by Basic-CSP-1 and Basic-CSP-2 are listed in Table 3, and those yielded by Basic-CSP-1 only are listed in Table 4. (As Basic-CSP-2 contains results from the 18 benchmark channels around electrodes C3 and C4 for the EEG data set, there are no Basic-CSP-2 values for the ECoG data set in Table 4.) To compare the proposed method with previously presented methods for channel selection, the error rate and number of channels yielded by sparse CSP for channel selection (SCSP-1) (Arvaneh et al., 2011) and recursive channel elimination (RCE cross-val.) (Lal et al., 2005) are also listed in Table 3 and Table 4, respectively. It is observed from Table 3 that BQPSO-CSP yielded the lowest error rate for each subject among the four CSP methods. In particular, subject av demonstrated a substantial drop in error rate from 32.79% (yielded by the full complement of 118 channels) to 18% (by an average of 14.5 channels selected by BQPSO). On average, BQPSO-CSP achieved a reduction of 2.12%, 5.74%, and 7.71% in error rate compared to BPSO-CSP, Basic-CSP-1, and Basic-CSP-2, respectively. These decreases are remarkable in terms of MI-based BCIs. Paired Wilcoxon signed rank tests at the 95% confidence level establish a significant difference in error rate between BQPSO-CSP and the other three CSP methods, and between BPSO-CSP and the two Basic-CSP methods, with p values all equaling 0.043. In addition, the average number of channels used by BQPSO-CSP was considerably decreased, to an average of 14.9, as compared to 30.8 (in BPSO) and 118 (in Basic-CSP-1). Paired Wilcoxon signed rank tests at the 95% confidence level reveal a significant difference in the number of channels between any two of the former three CSP methods, with p values all equaling 0.043. Finally, compared with SCSP-1, BQPSO-CSP remarkably reduced the average error rate by 9.66% and the average number of channels by 7.7. In the ECoG data set, Table 4 reveals that BQPSO-CSP yielded an average reduction of 8.63% in the error rate of Basic-CSP-1, by decreasing the average number of channels from 74 to 7.9. The decrease in error rate is especially large for subject AM (14.98%). Likewise, BPSO-CSP reduced the average error rate by 3.63% with a remarkable drop in the average number of channels from 74 to 16.9. Hence, both BQPSO-CSP and BPSO-CSP are capable of reducing the error rate by removing a large number of redundant channels. However, BQPSO-CSP was considerably more effective than BPSO-CSP, evidenced by its considerably lower error rate with fewer channels selected for each of the three patients. Compared with RCE cross-val., BQPSO-CSP reduced the average error rate by 8.75% and the average number of channels by 2.9.

Discussion

Feature extraction is a crucial component in a BCI system, as the classification performance depends primarily upon the quality of the feature vectors used for classification rather than the classifier itself. CSP is a powerful spatial filtering algorithm that is widely used for feature extraction in MI-based BCIs.
However, the use of excessive electrodes for data recording renders the CSP algorithm prone to over-fitting, especially when the size of the training set is small. Furthermore, installing a large number of electrodes adds inconvenience to the practical application of BCIs. Thereby, it is an extremely important step to determine the minimum optimal number and positions of electrodes for building a high-performance BCI. This can be accomplished by channel selection. While channel selection has been studied extensively, it is still a huge challenge to accurately determine the number and positions of channels for a specific subject.

Figure 6. Visualization of the most important spatial patterns of the two MI tasks derived from three CSP methods for subjects aa and al in the EEG data set (118 raw electrodes in total). The dots in each topological map represent the whole channels in raw data or the channels selected by BPSO/BQPSO. The color at each electrode denotes the magnitude of the spatial patterns.

In this context, we propose a novel evolutionary search algorithm, BQPSO, to optimize channel selection in MI-based BCIs and thereby acquire better data, obtaining high classification accuracy using as few channels as possible. BQPSO combines the strength of the genetic algorithm (GA) with the features of PSO and is thus able to determine the global optimum of an optimization problem more efficiently than BPSO. This is verified by our results in Fig. 3 and Fig. 4, where BQPSO-CSP consistently achieved a significantly lower error rate than BPSO-CSP using a nearly identical number of channels, or a nearly identical error rate with significantly fewer channels. What is the degree of performance improvement following channel selection? This question might be answered by the results from Table 3 and Table 4. These results indicate that both BQPSO-CSP and BPSO-CSP significantly decrease the average error rate as compared to Basic-CSP-1, which uses all available channels. Thus, these channel selection processes are more effective. Interestingly, Basic-CSP-2, which used 18 channels selected manually from prior knowledge of neurophysiology, increased the average error rate rather than reducing it. BQPSO-based channel selection decreased the average error rate from 13.8% to 8.06% for the EEG data set (an improvement of 41.59%) and from 24.98% to 16.05% for the ECoG data set (an improvement of 35.75%). It is important to note that these results were achieved at only one combination of weight coefficients, i.e. w_1 = w_2 = 0.5. Considering that there were nine additional pairs of weight coefficients, further improvements in classification performance are entirely possible. It must also be noted that, since the electrodes for the ECoG data set were arranged for removing epileptic foci, they did not cover the whole motor area important for MI-based BCI study. This may explain why the average error rate of the ECoG data set was larger than that of the EEG data set, although the former had a higher SNR. Despite this, the average improvement in error rate remained as high as 35.75%. This should be attributed to BQPSO-based channel selection, which displayed success in selecting informative channels while removing redundant ones. For MI-based BCI paradigms, different types of brain signals can be used as input for a BCI system.

Figure 7. Visualization of the most important spatial patterns of the two MI tasks derived from three CSP methods for the patient AM in the ECoG data set (64 raw electrodes in total). The dots in each topological map represent the whole channels in raw data or the channels selected by BPSO/BQPSO.

In the study conducted by Naseer and Hong (2015), fNIRS signals arising from two mental tasks of right and left wrist MI were exploited for building a BCI. The mean and slope of changes in oxygenated hemoglobin (HbO) concentration were extracted as the feature signal for classification. The results, based on the slope of changes in HbO concentration, suggest an average classification accuracy of 87.28% across ten subjects using the data segment of 2-7 s. This degree of accuracy is on par with that obtained in our study (91.94% for the EEG data set, and 83.95% for the ECoG data set), demonstrating the promising potential of fNIRS-based BCIs. How many channels are necessary to achieve satisfactory classification performance (∼90%) for an MI-based BCI? The answer depends upon several factors, including the effect of subjects, experimental conditions, and the signal processing algorithms used for classification. In the case that the latter two factors are fixed, the number of channels required for a high accuracy rate becomes subject-specific, i.e. it is heavily determined by the subjects themselves. It can be observed from Fig. 3 that an error rate of 10% was achieved by subjects al, aw, and ay using 10 or fewer channels, and by subject aa using 20 channels. Subject av could not achieve this error rate regardless of the number of channels used for classification. It can be observed from the third row of Fig. 5 and Fig. 6 that the optimal position of the electrodes might vary for different subjects, but was nevertheless primarily located in the motor areas related to the corresponding limbs. Note that the results in Table 4 cannot be used to address the question of the number of channels, as the ECoG recording channels were confined to localized brain regions for the purpose of surgery. In summary, for most well-trained subjects, about 20 carefully selected channels can ensure satisfactory classification performance if the experimental conditions and classification algorithms are well-designed. This study focused on channel selection methods in MI-based BCI applications. There are two requirements for channel selection: first, to reduce the number of channels, and second, to reduce the error rate compared to that yielded by using all available channels in the raw data. To this end, the BQPSO-based wrapping approach is proposed for channel selection. Although it is computationally demanding, the subset of selected channels can achieve better classification results. The proposed BQPSO-CSP method for channel selection outperforms Basic-CSP-1 in terms of both classification accuracy and the number of selected channels. That is to say, the BQPSO-CSP method can achieve higher classification accuracy with fewer channels compared to the CSP method using all available channels. As such, the convenience (fewer channels) and practicability (lower error rate) of a BCI system can be improved simultaneously.

Conclusion

To increase the classification ability of CSP, an evolutionary search algorithm, BQPSO, is proposed for channel selection, which is implemented in a wrapping manner. The fitness function of BQPSO is defined as the weighted sum of the error rate and the relative number of channels. The classification performance of the BQPSO-based CSP method was tested on two data sets and compared with that of BPSO-based CSP and of Basic-CSP, which employs either all channels or manually selected channels. Experimental results demonstrate that the proposed BQPSO-CSP method outperforms the BPSO-CSP method, reducing both the error rate and the required number of channels for an MI-based BCI as compared to the Basic-CSP methods using all channels.
Proof of the Borwein-Broadhurst conjecture for a dilogarithmic integral arising in quantum field theory

Borwein and Broadhurst, using experimental-mathematics techniques, in 1998 identified numerous hyperbolic 3-manifolds whose volumes are rationally related to values of various Dirichlet L series $\textup{L}_{d}(s)$. In particular, in the simplest case of an ideal tetrahedron in hyperbolic space, they conjectured that a dilogarithmic integral representing the volume equals $\textup{L}_{-7}(2)$. Here we provide a formal proof of this conjecture, which has recently been numerically verified (to at least 19,995 digits, using 45 minutes on 1024 processors) in cutting-edge computing experiments. The proof essentially relies on the results of Zagier on the formula for the value of the Dedekind zeta function $\zeta_{\mathbb{K}}(2)$ for an arbitrary field $\mathbb{K}$.

Introduction

The following integral, together with numerous related ones, arose out of studies in quantum field theory, in the analysis of hyperbolic manifolds whose complementary volumes result from evaluations of Feynman diagrams [1-5]:

I_7 = (24/(7√7)) ∫_{π/3}^{π/2} ln |(tan θ + √7)/(tan θ − √7)| dθ.  (1)

I_7 represents the volume of an ideal tetrahedron in hyperbolic space H^3 and is the simplest of 998 empirically (i.e. by using various experimental-mathematics techniques) determined cases where the volume of a closed hyperbolic 3-manifold is a rational multiple of values of various Dirichlet L series [1]. Here an ideal (or totally asymptotic) tetrahedron in H^3 is a hyperbolic tetrahedron with all four vertices at infinity. In 1998 Borwein and Broadhurst conjectured that

I_7 ?= L_{-7}(2) ≅ 1.151925470544491,  (2)

where L_{-7}(s) is the primitive Dirichlet L-series modulo seven [1, 6-10]. The sign ? here indicates that numerical verification of this "identity" has been performed, but that no formal proof of it is yet known [10]. The verification of (2) has been performed on several occasions, and recently the values of I_7 and L_{-7}(2) were found to agree to at least 19,995 digits, using 45 minutes on 1024 processors, in computations at the very edge of presently available numerical techniques and computing technology (see [11] and Remark 1 below). Note that

φ_7 = arctan √7  (3)

is the only (and particularly "nasty") singularity of the integrand of the integral (1) inside the interval (π/3, π/2). In this paper our aim is to rigorously demonstrate the truth of the Borwein-Broadhurst conjecture (2). In order to do that, we first evaluate the integral I_7 in closed form, in terms of values of the Clausen function Cl_2(θ); this is the content of (4). Second, we show in (5) that L_{-7}(2) can likewise be expressed in terms of values of Cl_2(θ). Third, a formal proof is given for a striking and unexpected relation, (6), between six values of Cl_2(θ) (verified numerically to 1,800-digit accuracy [1]).

The conjecture (2) is a rather interesting illustration of current work and results in experimental mathematics [6-10], namely a type of mathematical investigation that stresses the importance and significance of computational (or "numerical") experiments, and in which advanced computing technology is used to explore mathematical structures, test conjectures and suggest generalizations. Computations can in many cases provide very compelling evidence for mathematical assertions; however, results discovered experimentally will, in general, lack some of the rigor associated with mathematical proof, but they provide general insights into mathematical problems that guide further exploration, either experimental or traditional. As in experimental science, experimental mathematics can be used to make predictions which can be verified or falsified on the basis of additional experiments. Some examples of research tools are computer and symbolic algebra, arbitrary precision arithmetic, Gröbner bases, integer relation algorithms (such as the LLL and PSLQ algorithms), computer visualization, cellular automata and related structures, and various databases. A significant milestone and achievement of experimental mathematics was the discovery in 1995 of the Bailey-Borwein-Plouffe formula for the binary digits of π. For the sake of illustration, we mention several recent papers [12-14] as examples of the power of this technique in the context of physics-related problems.
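In that experimental-mathematics spirit, the short Python/mpmath sketch below repeats the verification of (2) at modest precision. The integrand is the standard Borwein-Broadhurst form assumed in (1), and splitting the quadrature at the singularity φ_7 and the working precision are implementation choices, not details taken from [11].

```python
from mpmath import mp, sqrt, tan, log, fabs, atan, pi, quad

mp.dps = 30  # modest working precision; the cited verification ran to ~20,000 digits

s7 = sqrt(7)
phi7 = atan(s7)  # the singularity (3) of the integrand, inside (pi/3, pi/2)

def integrand(t):
    return log(fabs((tan(t) + s7) / (tan(t) - s7)))

# Split the range at phi7 so tanh-sinh quadrature only meets the (integrable)
# logarithmic singularity at interval endpoints, where it handles it well.
I7 = 24 / (7 * s7) * (quad(integrand, [pi / 3, phi7], method='tanh-sinh')
                      + quad(integrand, [phi7, pi / 2], method='tanh-sinh'))

print(I7)  # ~1.151925470544491..., matching the quoted value of L_{-7}(2)
```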
Preliminaries

Before proceeding to a proof of (2) we provide some preliminaries. Let (d/n) be the Kronecker symbol, defined for a positive integer n and a non-square integer d satisfying the congruences d ≡ 0 or 1 (mod 4) (some of the admissible values are −8, −7, −4, −3, 5, 8, 12, 13, 17, 20, 21). Then, for Re(s) > 1, the Dirichlet L series L_d(s) is defined in the following way (for more details see, for instance, [15, 16]):

L_d(s) = Σ_{n=1}^{∞} (d/n) n^{−s}.

The symbol (d/n), for a given admissible d, only assumes the values 1, −1 and 0, and is a periodic function of n with period |d|; thus L_d(s) can be rewritten as the sum

L_d(s) = |d|^{−s} Σ_{k=1}^{|d|} (d/k) ζ(s, k/|d|),

where ζ(s, a) denotes the Hurwitz zeta function ζ(s, a) = Σ_{n=0}^{∞} (n + a)^{−s}. Now, since (−7/n), n ∈ N, is periodic in n with period 7, and 1, 1, −1, 1, −1, −1 and 0 are its first seven values, we have

L_{−7}(s) = 7^{−s} [ζ(s, 1/7) + ζ(s, 2/7) − ζ(s, 3/7) + ζ(s, 4/7) − ζ(s, 5/7) − ζ(s, 6/7)].

Clearly, L_{−7}(2) can be expressed in terms of Hurwitz zeta function values; however, we have failed to utilize this connection in our attempts to establish (2).

Remark 1. L_{−7}(2) expressed by means of the six ζ(2, k/7) values was used by Bailey and Borwein in their tests of the conjecture (2). I_7 was computed (using highly parallel tanh-sinh quadrature) to 20,000-digit accuracy and the result was compared with a 20,000-digit evaluation of the six-term infinite series for L_{−7}(2). The evaluation of the integral I_7 (by System X at Virginia Tech, an Apple G5-based parallel supercomputer comprised of 1,100 2 GHz dual-processor Power Mac G5 computers) is probably the highest-precision non-trivial numerical integration performed to date.

We shall also use the Clausen function Cl_2(θ) = Σ_{n=1}^{∞} sin(nθ)/n^2. The following lemma is the familiar multiplication formula for the Clausen function, Cl_2(nθ) = n Σ_{k=0}^{n−1} Cl_2(θ + 2πk/n). In particular, we have the duplication formula Cl_2(2θ) = 2 Cl_2(θ) − 2 Cl_2(π − θ) and the triplication formula Cl_2(3θ) = 3 [Cl_2(θ) + Cl_2(θ + 2π/3) + Cl_2(θ + 4π/3)].
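As a quick numerical sanity check of these preliminaries (complementing the integral computed above), the sketch below evaluates L_{-7}(2) from the Hurwitz zeta representation and spot-checks the duplication formula using Cl_2(θ) = Im Li_2(e^{iθ}); the working precision and the test angle are arbitrary choices.

```python
from mpmath import mp, zeta, polylog, exp, im, mpf, mpc, pi

mp.dps = 30

# L_{-7}(2) from the Hurwitz zeta representation with character values 1, 1, -1, 1, -1, -1, 0.
signs = [1, 1, -1, 1, -1, -1]
L7 = mpf(7) ** -2 * sum(s * zeta(2, mpf(k) / 7) for k, s in enumerate(signs, start=1))
print(L7)  # 1.151925470544491...

# Clausen function Cl_2(theta) = Im Li_2(e^{i theta}); check the duplication formula
# Cl_2(2 theta) = 2 Cl_2(theta) - 2 Cl_2(pi - theta).
def cl2(theta):
    return im(polylog(2, exp(mpc(0, 1) * theta)))

theta = mpf(1) / 3
print(cl2(2 * theta) - (2 * cl2(theta) - 2 * cl2(pi - theta)))  # ~0 to working precision
```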
Formal proof of (2)

We are now ready to demonstrate that Equations (4), (5) and (6) hold true and in that way formally prove the conjectured identity (2). In order to obtain (4) we shall use Lemmas 2 and 3. The integral I_7 in (1) can be decomposed at φ_7, the singularity (3), into two integrals; making use of Lemma 3 we evaluate one part and, by Lemma 2, the other. In the end, these expressions and (9) together give the sought result (4). Observe that Coffey [19] recently evaluated this integral, in a different way, in terms of Cl_2(θ). To obtain (5), we use the series expansion of Cl_2(θ) recalled above. It is obvious then that, by making use of this expansion and the above definitions of L_d(s) and Cl_2(θ), (7) and (8), we get the required formula (5). What remains is to provide the most important part of our proof of (2), i.e. to show that (6) holds true. It is interesting that it has not been noticed before that the validity of the relation (6) is an immediate consequence of the results of Zagier [20, p. 287]. More precisely, (6) was essentially proved by Zagier, who deduced the value of L_{−7}(2) in two different and independent ways and, as a result, obtained two expressions for it, (10a) and (10b), the second of which involves a real-valued function A(x). As a matter of fact, in [20] Zagier investigates the values of the Dedekind zeta function (of an algebraic field K) ζ_K(s) at a positive even integral argument s = 2m. He gives a geometric proof of the formula for the value of ζ_K(2) for an arbitrary field K (Theorem 1), and the proof involves the interpretation of ζ_K(2) as the volume of a hyperbolic manifold. In addition, by routine number-theoretical tools, he also finds the formula for the value of ζ_K(2m) valid if K is abelian over Q (Theorem 2). The expressions (10a) and (10b) are, respectively, examples of the application of Theorem 2 and Theorem 1 in the case of the imaginary quadratic field Q(√−7). Observe that in this case the Dedekind zeta function factors as ζ_{Q(√−7)}(s) = ζ(s) L_{−7}(s), where the Riemann zeta function ζ(s) and the Dirichlet L series L_{−7}(s) are the afore-defined functions. Upon recalling that ζ(2) = π²/6, we have

L_{−7}(2) = (6/π²) ζ_{Q(√−7)}(2),  (12)

and it is clear now that the values of L_{−7}(2) given by (10a) and (10b) follow from the results for ζ_{Q(√−7)}(2) found by Zagier and explicitly stated as Equations (5) and (6) in [20, p. 287]. To obtain (6), we only need to rewrite the expressions (10a) and (10b) in terms of the Clausen function Cl_2(θ) and show that they respectively appear on the left-hand and right-hand sides of (6). Finally, it is clear that combining (5) and (12) leads to the relation between six values of Cl_2(θ) given in (6).
Experimental Eavesdropping Attack against Ekert's Protocol based on Wigner's Inequality

We experimentally implemented an eavesdropping attack against the Ekert protocol for quantum key distribution based on the Wigner inequality. We demonstrate a serious lack of security of this protocol when the eavesdropper gains total control of the source. In addition we tested a modified Wigner inequality which should guarantee a secure quantum key distribution.

Quantum key distribution (QKD) provides a method for distributing a secret key for unconditional secret communication based on the "one time pad", because it guarantees that the presence of any eavesdropper compromising the security of the key is revealed. For a review on this topic see [1]. The first protocol for QKD was proposed in 1984 by Bennett and Brassard [2], the world-famous BB84 protocol. In 1991 A. Ekert proposed a new QKD protocol whose security relies on the non-local behavior of quantum mechanics, i.e., on Bell's inequalities [3]. In ref. [12] the Wigner inequality was first proposed to provide an easier and equally reliable eavesdropping test than the CHSH inequality when the Ekert protocol is implemented. The necessary security proof of the Ekert protocol based on the Wigner inequality consists in verifying the violation of W ≥ 0. To obtain the Wigner inequality (W ≥ 0) it is necessary to review the Wigner argument [15]. Two main assumptions are stipulated in the proofs of the Wigner inequality: locality and realism. Locality means that Alice's measurements do not influence Bob's measurements, and vice versa. Realism means that, given any physical property, its value exists independently of its observation or measurement. The counterpart of local-realistic theories is the non-local behavior of quantum mechanics, a signature of quantum entanglement. In particular, Wigner considered a quantum system prepared in the singlet state, and he obtained the violation of the inequality W ≥ 0, i.e., W = −0.125. Furthermore, in the derivation of his inequality, Wigner assumed perfect anticorrelation in the measurement results. This assumption is obviously reasonable in a test of the realism and locality of a physical theory (it reflects the classical counterpart of a quantum system prepared in the singlet state). Nevertheless, in terms of QKD this assumption corresponds to a lack of security. In fact, when the eavesdropper, Eve, measures photons on either one or both of Alice's and Bob's channels, her presence should be revealed by a value of W higher than the local-realistic limit W = 0, as happens for the CHSH inequality [3]. Unfortunately this is not the case. In fact, only when Eve adopts an intercept-resend strategy and detects one photon of the pair does the inequality become W ≥ 0.0625; as we will show, this does not hold when she eavesdrops on both channels, because in that case there is no such limit [16]. In this letter we perform an experiment proving the weakness of the Wigner inequality as a security test for QKD, under the condition of Eve gaining total control of the source of photon pairs.
Under this condition, she prepares each particle of the pair separately in a well defined polarization direction; in other words, she prepares the photon in Alice's channel in the state |φ_A⟩ and the photon in Bob's channel in the state |φ_B⟩, respectively. Thus Eve has perfect knowledge of the polarization of the photons sent and, even though the non-local behavior of the original quantum system (the singlet state) is completely removed, we prove that she can avoid disclosing herself. We remind the reader that ref. [16] presented a modified version W̃ of Wigner's parameter W which maintains the same limits, i.e., W̃ ≥ 0 for local-realistic theories and W̃ = −0.125 for the singlet state, but allows secure QKD, because W̃ contains an additional term accounting for the anticorrelation. In our experiment we also measure W̃, and we observe that the minimum of W̃ is well above the limit for local-realistic theories, in agreement with the theory [16], ensuring a secure QKD. The measured quantities in our experiment are W and W̃ [16], given in Eqs. (1) and (2) in terms of the probabilities p_{αA,αB}(x_A, y_B) of detecting the pair of photons by the couple of detectors x_A, y_B (x_A = +_A, −_A and y_B = +_B, −_B) when, in the detection apparatuses, two half-wave plates (HWPs) project the photons onto the polarization bases of Eqs. (3). In Fig. 1 we show the theoretical behavior of W (a) and W̃ (b) as functions of φ_A and φ_B: the minima of W in Fig. 1 (a) lie along the "Fig. 3 (a)" line, while min(W̃) ≃ 0.0443 lies almost in the middle of the dark grey regions of Fig. 1 (b). The straight lines for φ_B = 0°, 62°, 98° represent sections of the plots where the theoretical predictions are compared with the experimental results of Figs. 3 (a), (b) and (c). In Fig. 2 we depict the experimental scheme for the situation in which Eve has total control of the source. In this scheme, the source under Eve's control replaces the source of entangled photon pairs of a typical QKD scheme [10,11,12,13,14]. Eve's source is obtained from a 1 mm long LiIO3 nonlinear crystal (NLC2) pumped by ultrashort pulses (150 fs) at 415 nm, generated as the second harmonic (obtained from NLC1) of an ultrashort mode-locked Ti-Sapphire laser with a repetition rate of 76 MHz, pumped by a 532 nm green laser. NLC2 realizes non-collinear type I phase matching, and Eve selects two quantum-correlated optical channels along which the twin photons at 830 nm (emitted at 3.4°) are sent towards Alice and Bob's detection apparatuses [17,18]. The down-converted photons of a pair have the same polarization state (ordinary waves), and Eve can deterministically modify the polarization state of each photon by means of a half-wave plate (HWP) in each channel; in other words, Eve sends photon pairs to Alice and Bob with polarization states |φ_A⟩ and |φ_B⟩, respectively. Alice and Bob's detection apparatuses are identical and are composed of an open-air fiber coupler to collect the down-converted light into single-mode optical fibers. The detection of photons in the proper polarization basis is guaranteed by an HWP before the fiber coupler and a fiber-integrated polarizing beam splitter (PBS). Photons at the two output ports of the PBS are sent to fiber-coupled photon counters (Perkin-Elmer SPCM-AQR-14) [19]. Interference filters peaked at 830 nm with 11 nm bandwidth are placed in front of the fiber couplers to reduce straylight. Coincidence counts between any of Alice's detectors (+_A, −_A) and any of Bob's detectors (+_B, −_B) are obtained from an Elsag prototype of a four-channel coincidence circuit [20,21].
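The data reduction that follows turns recorded coincidence counts into the probabilities entering W and W̃. A minimal sketch of that step is shown below, with made-up count values; the per-setting normalization p = N/ΣN and the additive anticorrelation term for W̃ are assumptions consistent with the description in the text, not a reproduction of Eqs. (1), (2) and (4):

```python
# Sketch of the data reduction: coincidence counts N(x_A, y_B) recorded for one pair of
# analyzer settings are normalized to probabilities, and the Wigner parameters are
# assembled from three such settings. All count values are invented for illustration.
def probs(counts: dict) -> dict:
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

# Hypothetical coincidence counts for the three analyzer-setting pairs entering W.
counts_ac = {('+', '+'): 130, ('+', '-'): 420, ('-', '+'): 410, ('-', '-'): 140}
counts_cb = {('+', '+'): 125, ('+', '-'): 430, ('-', '+'): 415, ('-', '-'): 130}
counts_ab = {('+', '+'): 360, ('+', '-'): 180, ('-', '+'): 190, ('-', '-'): 370}
# Anticorrelation check with both analyzers at 0 deg, used by the modified parameter W~.
counts_00 = {('+', '+'): 15, ('+', '-'): 530, ('-', '+'): 520, ('-', '-'): 20}

W = (probs(counts_ac)[('+', '+')] + probs(counts_cb)[('+', '+')]
     - probs(counts_ab)[('+', '+')])
# Extra anticorrelation term: an assumption based on the text's description of W~.
W_tilde = W + probs(counts_00)[('+', '+')]
print(f"W ~= {W:+.3f},  W~ ~= {W_tilde:+.3f}")
```

The estimator actually used in the experiment is given in Eq. (4) below.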
Single counts and coincidences are counted by a National Instruments [19] sixteen-channel counter plug-in PC card. The terms p_{αA,αB}(x_A, y_B) are estimated from the numbers of coincident counts, p_{αA,αB}(x_A, y_B) = N_{αA,αB}(x_A, y_B) / Σ_{x,y} N_{αA,αB}(x, y), (4) where N_{αA,αB}(x_A, y_B) is the number of coincidences measured by the couple of detectors x_A, y_B (x, y = +, −) when Alice and Bob's detection apparatuses project photons onto the polarization bases of Eqs. (3). In Fig. 3 (a) we present our main result: photons sent by Eve in a definite polarization state violate the limit of local-realistic theories. Experimental data for W (circles) and W̃ (squares) are presented versus φ_A, with φ_B fixed approximately at 62°, and show a good agreement with the theoretical predictions (lines). As expected from the theory [16], not only does W violate the limit of local-realistic theories (W = 0), but some data points even pass the quantum limit (W = −0.125), while W̃ is well above the limit of local-realistic theories. The theoretical curves are calculated with φ_B = 62°, and the discrepancy between theory and experiment can be explained by the difficulties in the proper angular positioning of the four HWPs and by the noise introduced by real optical devices, e.g., fibers, PBSs, detector dark counts and straylight. In Fig. 3 (b) we present the experimental data and the theoretical curve obtained with φ_B = 98°, corresponding to a position close to the minima of W̃ as predicted by the theory. Fig. 3 (b) shows a good agreement between the experimental data (circles) and the theoretical predictions of W̃; the minimum of the experimental values, 0.0685, is slightly higher than the theoretical prediction of 0.0466. Furthermore, Fig. 3 (b) also shows the experimental data for W (small squares) together with the associated theoretical curve, and we observe that also in this case a violation of the local-realistic limit occurs. According to Eqs. (1) and (2), W̃ differs from W only by the term p_{0°}, which accounts for the anticorrelation, and the two parameters coincide when this term vanishes. In Fig. 3 (c) we consider the situation when φ_B = 0°, and we observe that the experimental data for W̃ (small circles) are almost superimposed on the W ones (squares), in good agreement with the theoretical prediction W̃ = W (line). Some further analysis of W̃ must be considered for the practical implementation of the Ekert protocol based on Wigner's inequality. According to [12], we highlight that the Ekert protocol based on the modified Wigner inequality still guarantees a simplification with respect to the one based on the CHSH inequality, because Alice and Bob randomly choose between two rather than three bases. Though the necessity of an experimental evaluation of the term p_{0°} forces Alice and Bob to sacrifice part of the key for the sake of security, we note that in any practical implementation of QKD protocols, Alice and Bob distill from the noisy sifted key a nearly noise-free corrected key by means of error-correction procedures, subject to the constraint of knowing the quantum bit error rate (QBER). The QBER is also estimated at the cost of losing part of the key. Thus, we suggest using the same sacrificed part of the key to estimate both W̃ and the QBER. To perform a proper comparison of the performance of the Ekert protocols based on Wigner's inequality versus the CHSH-based one [3], it is necessary to consider situations where the same number of analyzer settings are employed.
In particular, we consider the modified protocol based on the Wigner inequality proposed in [16], in which Alice and Bob measure randomly using three analyzer settings (as in the case of CHSH). This protocol is more efficient than the protocol based on CHSH: for CHSH only 2/9 of the exchanged qubits are devoted to key generation [3], while here the fraction can be improved up to 1/3, depending on the security needs. Furthermore, in this protocol none of the exchanged qubits are discarded, while in the case of CHSH 1/3 of the qubits are discarded [16]. In conclusion, this paper highlights the insecurity of the Ekert protocol based on the Wigner inequality. We performed an experiment simulating the total control of the photons in Alice and Bob's channels by an eavesdropper, proving that the QKD Ekert protocol based on Wigner's inequality presents a serious lack of security. In addition, we proved that a modified version of the Wigner security parameter guarantees secure QKD. We are indebted to P. Varisco, A. Martinoli, P. De Nicolo, S. Bruzzo, I. Ruo Berchera, G. Di Giuseppe. This experiment was carried out in the Quantum Optics Laboratory of Elsag S.p.A., Genova (Italy), within a project entitled "Quantum Cryptographic Key Distribution" cofunded by the Italian Ministry of Education, University and Research (MIUR), grant n. 67679/L. 488. In addition, S. C. acknowledges the partial support of the DARPA QuIST program and M. L. R. acknowledges the partial support of INFM.
Complications Related to Childhood Idiopathic Nephrotic Syndrome, Its Treatment and the Associated Risks in Patients Aim Nephrotic syndrome is the most common childhood glomerular disorder, but data on the associated complications are limited and predisposing risk factors have not been fully defined. The aim of this study was to evaluate disease- and treatment-related acute and chronic complications in patients with childhood idiopathic nephrotic syndrome (INS), and to identify the risk factors involved in the development of complications. Methods This single-center study was performed at the pediatric nephrology department of a tertiary pediatric hospital in Turkey. The study included 411 patients with a diagnosis of childhood INS, 128 of whom had disease-related and treatment-related complications. Patients diagnosed and followed-up between January 2010 and January 2022 were evaluated retrospectively. Results Complications occurred in 31.1% of the 411 patients. Mean age at the time of diagnosis was 7.54 ± 4.37 years, and the male/female ratio was 0.9:1. Among the patients with complications, 96.9% had disease-related and 50.8% had treatment-related complications. Older age, a high proteinuria level, a low estimated glomerular filtration rate (eGFR) at diagnosis, and female gender were significant risk factors for complication development (P = 0.000, P = 0.006, P = 0.04, and P = 0.07, respectively). Chronic kidney disease (CKD) developed in 7% of patients and 2.9% of patients had end-stage renal disease (ESRD). Additionally, three of 12 patients with progressive ESRD underwent transplantation. Also, the incidence of ESRD was significantly higher in the patients with complications than in those without complications (P < 0.05). Conclusion The present findings suggest that careful monitoring of patients with childhood INS at risk for complications and implementation of personalized treatment programs can improve long-term outcomes, especially in patients that progress to ESRD and are followed-up with dialysis or transplantation as targeted therapy. Introduction Nephrotic syndrome (NS) is the most common glomerular disorder of childhood, with an incidence of 2-7 per 100,000 children [1]. Clinically, it is defined by the presence of the four classic symptoms: massive proteinuria, hypoalbuminemia, diffuse edema, and, in most cases, hyperlipidemia. The main causes of childhood 'idiopathic' NS are minimal change disease (MCD) and, less frequently, focal segmental glomerulosclerosis (FSGS). The pathogenesis of NS involves structural changes in the glomerular filtration barrier, increased permeability, and consequent massive leakage of albumin and other negatively charged proteins into urine [2]. Loss of plasma proteins in urine causes complications of NS, either as a direct result of varying protein concentrations in plasma or as a secondary consequence of altered cellular function [3]. Complications of childhood NS fall into two categories: disease-related complications and treatment-related complications.
Data on complications in pediatric NS patients are limited and predisposing risk factors are yet to be fully defined.The aim of the present study was to use our large patient data set to evaluate treatment-and disease-related acute and chronic complications in children with idiopathic nephrotic syndrome (INS), and to identify the risk factors involved in the development of complications.It was hypothesized that careful monitoring of patients at risk for complications and implementation of personalized treatment programs would improve long-term outcomes. Study population This single-center retrospective study was performed at the pediatric nephrology department of a tertiary pediatric hospital in Turkey.The study included 411 patients with a diagnosis of childhood INS, 128 of whom had disease-related and treatment-related complications.Patients diagnosed and followed-up between January 2010 and January 2022 were evaluated retrospectively.Inclusion criteria were age 1-18 years and regular follow-up at the pediatric nephrology department for ≥1 years.Patients diagnosed with infantile and secondary NS (immunoglobulin A (IgA) nephropathy, lupus, Henoch Schönlein Purpura, or malignancy) were excluded from the study. Physical examination and laboratory findings for all patients with NS were reviewed.Demographic data, clinical pattern of NS, renal biopsy results, hospitalizations, presence of infection, history of venous thromboembolism (VTE), and treatment-related complications were recorded.Duration of follow-up, physical examination findings, including edema, blood pressure (BP), and laboratory findings, such as serum creatinine, estimated glomerular filtration rate (eGFR), albumin, and urinalysis, were retrospectively obtained from patient medical records and reviewed.Kidney biopsy was performed in patients aged <1-10 years at disease onset that had accompanying macroscopic hematuria and hypertension (HT), were resistant to steroids, or had a decreased glomerular filtration rate. Definitions Kidney Disease Improving Global Outcomes (KDIGO) 2012 guidelines were used as the reference for clinical diagnosis of NS, definitions of remission/relapse, and acute kidney injury (AKI).The treatment protocol was also based on KDIGO recommendations [4].NS was defined as edema with massive proteinuria (greater than 40 mg/m 2 per hour and/or loss of 3 g or more per day of protein into the urine or, on a single spot urine collection, the presence of 2 g of protein per gram of urine creatinine) and serum albumin <2.5 g/dl. 
eGFR was calculated according to the Schwartz formula [5]. Chronic kidney disease (CKD) was defined as a creatinine clearance <60 mL/min/1.73 m² in two consecutive measurements performed ≥3 months apart [4]. AKI was defined according to the KDIGO guidelines as stage 2 or 3 AKI, i.e., at least a two-fold increase in the serum creatinine level from baseline. We used the baseline creatinine value; if it was unknown, the value at remission was accepted as the reference. HT was diagnosed according to the American Academy of Pediatrics 2017 HT Guidelines and defined as the mean of three consecutive systolic and/or diastolic BP measurements, obtained via the auscultatory method, ≥95th percentile for age, gender, and height, or BP >130/80 mm Hg in participants aged ≥13 years [6]. A Z score <−2.0 for age and gender in the bone mineral density (BMD) test was defined as a decrease in BMD, and osteoporosis was diagnosed in patients with a history of fracture. Psychosis secondary to steroid therapy was defined as any change in mental status detected by the clinician. Cushingoid appearance and obesity were considered complications only in patients in whom they developed entirely after treatment and were therefore regarded as drug-related. At the beginning of treatment, the patients were informed about the complications, nutritional recommendations were made, and they were kept under regular follow-up. The classification of treatment-related complications is shown in Table 2. The study protocol was approved by the Ethics Committee and was performed in accordance with the Declaration of Helsinki (2020-KAEK-141/370). Categorical variables are presented as number and percentage. Student's t test for independent samples was used to compare the means of continuous variables. The χ2 test or Fisher's exact test was used to compare categorical variables. Logistic regression analysis was used to identify variables that were independently associated with complications, with 95% CIs. Variables selected for multivariate analysis were used to build a final model after discarding any violation of proportionality assumptions. The level of statistical significance was set at P < 0.05. Results Complications occurred in 128 (31.1%) of the patients, of whom approximately 36% had frequently relapsing/steroid-dependent NS. Mean age at diagnosis of NS was 7.54 ± 4.37 years, and the male to female ratio was 0.9:1. In total, 61.7% of the patients were diagnosed histopathologically and 30.5% had FSGS. Among the patients with complications, 96.9% had disease-related and 50.8% had treatment-related complications. Only 3.1% of the patients had treatment-related complications without disease-related complications. The most common complication was infection, with a rate of 63.3%. At the time of presentation, the mean eGFR was 110.23 ± 43.27 mL/min/1.73 m², the mean albumin level was 1.88 ± 0.85 mg/dL, and the mean proteinuria level was 222.81 ± 134.16. Mean duration of follow-up was 6.65 ± 4.18 years.
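The quantitative definitions used in the Methods above translate directly into simple decision rules. The sketch below is illustrative only: the bedside Schwartz constant k = 0.413 and the example values are assumptions, since the study cites the Schwartz formula [5] without specifying the variant used.

```python
# Illustrative implementation of the diagnostic definitions described above.
def egfr_schwartz(height_cm: float, serum_creatinine_mg_dl: float, k: float = 0.413) -> float:
    """Estimated GFR in mL/min/1.73 m^2 (bedside Schwartz constant assumed)."""
    return k * height_cm / serum_creatinine_mg_dl

def is_aki_stage2_or_higher(creatinine_now: float, creatinine_baseline: float) -> bool:
    """KDIGO stage >= 2 AKI as used in the study: at least a two-fold rise from baseline."""
    return creatinine_now >= 2.0 * creatinine_baseline

def is_ckd(egfr_two_measurements: tuple[float, float]) -> bool:
    """CKD as defined in the study: eGFR < 60 on two consecutive measurements >= 3 months apart."""
    return all(v < 60.0 for v in egfr_two_measurements)

# Hypothetical patient: 120 cm tall, serum creatinine 0.5 mg/dL.
egfr = egfr_schwartz(120, 0.5)
print(f"eGFR ~= {egfr:.0f} mL/min/1.73 m^2, "
      f"AKI: {is_aki_stage2_or_higher(1.1, 0.5)}, CKD: {is_ckd((55.0, 48.0))}")
```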
There was no significant difference in the albumin level between the patients with and without complications (P = 0.33). The level of proteinuria was significantly higher in the patients with complications (P = 0.02). Patients with complications had a significantly lower eGFR at admission (P = 0.04), were significantly older at the time of diagnosis (P = 0.00), and were significantly more frequently female (P = 0.002) (Table 4). Logistic regression analysis showed that older age at diagnosis, female gender, a high proteinuria level, and a low eGFR were significant risk factors for complication development (P = 0.000, P = 0.006, P = 0.04, and P = 0.07, respectively). The logistic regression analysis results are shown in Table 5. Discussion NS is among the most common glomerular diseases of childhood and is characterized by severe and prolonged proteinuria and by prolonged and multiple treatments that are associated with an increased risk of systemic complications. The present study aimed to evaluate disease- and treatment-related complications, and the associated risk factors, in patients with childhood INS. A notable result of our study is the identification of the risk factors for INS-associated complications: older age at diagnosis, female gender, a high proteinuria level, and a low eGFR. Patients with NS are at risk of infection. Despite treatment advances, infections remain a major problem in developing countries and a cause of death [7]. NS is associated with a low IgG concentration, due to urinary loss and altered production, and with low levels of the alternative pathway components factor B and factor I, which contribute to the risk of infection [7]. In the present study, the most common complication observed during follow-up was URTI (37%). Similarly, other studies report that acute respiratory tract infection is the most common infection [8]. In terms of frequency, in the present study URTIs were followed by pneumonia, chickenpox, peritonitis, gastroenteritis, UTI, cellulitis, bacteremia, and hepatitis B infection. The reported frequency of pneumonia varies between studies, from 3.3% to 12.9% of cases [9,10]. Peritonitis is also a life-threatening and common infectious complication of NS, and it has been reported to account for a high proportion of infectious complications, as in the present study [9,10]. Adedoyin et al. showed that UTI is one of the most common infectious complications of NS, with an incidence of 16%, similar to the present study [11]. Studies have reported a low incidence of skin infections in children with NS, and in the present study the frequency of cellulitis and skin infections was only 6.2% [12]. In the present study, the bacteremia, hepatitis, and chickenpox rates were similar to that of skin infections, although they are rarely reported in the literature [13]. Based on the present findings, we think that the diversity of infectious complications may be related to demographic (geographic and socioeconomic) characteristics.
Little is known about AKI in children with NS. Most patients with AKI have a history of intravascular volume depletion, including diarrhea, vomiting, and dehydration. In addition, infections and drug use are known major predisposing factors underlying AKI [14]. In the present study, 12.1% of the patients developed AKI, and 90% of these cases occurred during hospitalization. Sato et al. reported that 24% of INS patients in their study had AKI [15]. Similarly, Kim et al. showed a high incidence of AKI (32.2%) among children hospitalized with NS in their single-center study [16]. Thrombosis is a well-known complication of NS that is associated with a high risk of mortality. The various mechanisms reported to promote thrombosis in NS patients generally fall into two categories: urinary loss of proteins that prevent thrombosis and increased synthesis of factors that promote thrombosis. In the present study, thromboembolic complications were observed in 3.8% of the patients. In addition, 12 (75%) patients were in relapse at the time they had thrombosis. Among the 16 patients with thrombosis, the most common types were DVT of the femoral, popliteal, and cephalic veins of the extremities (n = 7 (43.7%)) and catheter-related thrombosis in the jugular and subclavian veins (n = 5 (31.25%)). The incidence of thrombosis in the present study is similar to the overall rate of 1.8%-6.6% reported earlier [17]. As reported earlier, in the present study thrombosis occurred during the relapse phase of NS in most patients [17,18]. Lilova et al. showed that the most common types of thrombosis are DVT and central venous thrombosis, as in the present study [18]. It is known that complications such as anemia, hypothyroidism, and hypocalcemia can occur in NS patients due to the loss of binding proteins. In the present study, anemia was observed in 9.2% of patients, hypothyroidism in 2.6%, and hypocalcemia in 3.1%. According to the literature, the prevalence of iron deficiency anemia in nephrotic patients is variable (19.2%-59%) [19,20]. In one study in which thyroid function was evaluated in children with NS, hypothyroidism was noted in 58.6% of patients [21]. We think the differences in the hypothyroidism and anemia rates may be due to differences in the definitions and inclusion criteria used. A study on the relationship between NS and hypocalcemia based on measurement of ionized calcium reported that 3% of patients had hypocalcemia [22]. Studies on the frequency of hypocalcemia in NS patients are limited, but they all report a very low rate when based on ionized calcium. Treatment-related complications vary greatly in type and frequency in NS patients due to the variety of treatments, drug side-effects, and metabolic responses. In the present study, corticosteroid-related complications were the most common (89.2%). Low BMD (n = 30) and posterior subcapsular cataracts (n = 27) were the most common corticosteroid-related complications. Oh et al. observed NS patients treated with steroids and found that HT was the most common complication, affecting 62% of patients [23]. A small-scale Japanese study reported that 17 (40.5%) of 42 patients treated with steroids for primary kidney disease developed corticosteroid-induced diabetes [24]. In studies from centers where routine ophthalmologic evaluation is performed, steroid-associated cataracts have been shown to occur in 10%-27% of children with NS [25]. Most likely, the variation in outcomes can be explained by differences in the duration of steroid therapy and cumulative steroid dose, or by clinicians' management strategies for complications. Large-scale studies presenting complications associated with non-steroidal immunosuppressive agents are very limited. Complications associated with alkylating agents developed in 6.2% of the present study's patients, as infectious complications in three patients and nausea in one patient. Complications with cyclosporine A developed in 22 patients: 11 with nephrotoxicity, 10 with hirsutism, four with HT, and two with neurotoxicity. Complications with tacrolimus developed in four patients, including four cases of nephrotoxicity, one of diabetes mellitus, one of HT, one of tremor, and one of headache. Mycophenolate mofetil-related complications developed in three patients, as follows: gastrointestinal complaints, n = 2; bone marrow suppression, n = 1; infection, n = 1. Only one of the five patients treated with rituximab had bronchospasm during infusion. A study that included 47 NS patients treated with alkylating agents observed the following adverse effects: leukopenia (n = 1); acute chemical cystitis (n = 1); alopecia (n = 1); severe infections (n = 1) [26]. Another study reported that 6% of NS patients treated with cyclosporine A had renal failure and 10% had HT, which is similar to the present findings. A study on the adverse effects of tacrolimus in childhood NS patients observed that the most common adverse event was diarrhea, followed by AKI, hyperglycemia, and HT [27]. Also, a study on NS patients treated with mycophenolate mofetil reported abdominal pain as an adverse reaction in two patients [28]. Rituximab is generally well tolerated in most childhood NS patients, and the most commonly reported adverse reactions are infusion-related, with a frequency of 5%-53% [29]. The treatment-related side effects and their frequencies noted in the present study are similar to those in the literature. During the long-term follow-up (mean: 6.65 ± 4.18 years) of the present study's patients with complications, CKD developed in 22.7% and ESRD developed in 9.4%. As compared to the patients without complications, those with complications had a significantly higher incidence of ESRD. These rates are also higher than in earlier studies of INS, which reported a CKD prevalence of 3%-10% and an ESRD prevalence of 3.6% during a mean follow-up of 7.70 ± 3.81 years [30]. This supports the conclusion that the development of CKD and ESRD is more common in patients with complications.
The study has some limitations, including its single-center retrospective design. Despite these limitations, we think the study makes a significant contribution to the literature, as it presents for the first time data on the complications of childhood NS, on both disease- and treatment-related complications, as well as on the risk factors for the development of complications. Another limitation of our study is the deficiency of genetic data. Genetic results could not be presented because genetic studies in our patients were insufficient to reach a conclusion. However, it is known that the frequency of ESRD is much higher in patients who are investigated for reasons such as steroid resistance, the age at presentation (infantile-congenital NS), or the presence of a family history, and who are shown to have a genetic mutation [31]. For this reason, we think that if genetic results had been included in our study, we could have obtained much more informative results. Therefore, there is a need for larger studies that include clinical and genetic findings. Conclusions In this study, proteinuria levels were found to be significantly higher and eGFR significantly lower in patients who developed complications compared to those who did not. At the same time, patients who developed complications had a significantly higher incidence of ESRD. Table 1: Complications related to massive proteinuria or hypoalbuminemia. Thrombosis occurred during the first year following diagnosis of NS in nine (56.25%) of the patients and >1 year after diagnosis in seven (43.75%). Thrombosis was observed in more than one region in 18.75% of the patients with thrombosis. Among complications related to the loss of binding proteins, 9.2% of patients had anemia, 2.6% had hypothyroidism, and 3.1% had hypocalcemia. Hyperlipidemia, as a cardiovascular complication, occurred in 21.1% of the patients. Treatment-related complications occurred in 15.5% of the INS patients. Complications related to ≥1 drugs were observed in 26 patients. Corticosteroid-related complications occurred most frequently (89.2%). Corticosteroid-related cushingoid features (obesity) were observed after treatment in 12 patients, growth retardation in 15, HT in three, low BMD in 30 (osteoporosis: n = 2), posterior subcapsular cataracts in 27, glaucoma in three, behavioral disorders in one, increased intracranial pressure (ICP) in one, gastrointestinal side-effects (peptic ulcer) in 11, skin disorders in 15, and glucose intolerance in three. Among the infectious complications, 37% were upper respiratory tract infections (URTIs), followed by pneumonia (23.4%), chickenpox (21%), peritonitis (18.5%), gastroenteritis (18.5%), urinary tract infection (UTI) (11.1%), cellulitis (6.2%), bacteremia (3.7%), and hepatitis B infection (3.7%). In all, 49.4% of the patients had a history of >1 infection, and 33.3% presented with infection at the time of NS diagnosis. Additionally, 86.4% of the patients had ≥1 hospitalization due to infection. At the time of infectious complications, 32 (39.5%) patients were not receiving immunosuppressive therapy and 55 (67.9%) were in relapse. Acute renal failure developed in 12.1% of patients, and 90% of these events occurred during hospitalization. AKI occurred a mean of 2.90 ± 2.96 years after NS diagnosis. During AKI, 20 (40%) of the patients were not receiving immunosuppressive therapy, and 29 (58%) patients were diagnosed with AKI during relapse. Thromboembolic complications developed in 3.8% of the patients; nine (56.3%) of the cases were detected during hospitalization and one (6.3%) patient required intensive care. At the time of thrombosis, six (37.5%) patients were not receiving immunosuppressive therapy and 12 (75%) patients were in relapse. Among the 16 patients with thrombosis, seven (43.7%) had deep vein thrombosis (DVT) of the femoral, popliteal, and cephalic veins of the lower extremities; five (31.25%) had catheter-related thrombosis in the jugular or subclavian vein; two (12.5%) had renal vein thrombosis; one (6.25%) had sagittal sinus thrombosis and an associated cerebral infarct; and one (6.3%) had intracardiac (right ventricle and right atrium) thrombosis. Diagnosis of NS and thrombosis occurred at the same time in one (6.3%) patient. Drug-related complications developed in eight (12.3%) patients as follows: nephrotoxicity (n = 4), diabetes mellitus (n = 1), HT (n = 1), tremor (n = 1), and headache (n = 1). Only one of the five patients treated with rituximab had bronchospasm during infusion. The distribution of complications is shown in Table 3. Table 5: Logistic regression analysis of the risk factors for the development of complications of NS (CI: confidence interval; OR: odds ratio; NS: nephrotic syndrome; eGFR: estimated glomerular filtration rate). During the long-term follow-up, CKD developed in 7% of the patients, and the mean time from diagnosis to CKD was 4.19 ± 4.78 years. In total, 2.9% of the patients had end-stage renal disease (ESRD). Moreover, three of the 12 patients with progressive ESRD underwent transplantation. Furthermore, 83.3% of the patients that progressed to ESRD had disease- or treatment-related complications. As compared to the patients without complications, those with complications had a significantly higher incidence of ESRD (P < 0.05).
Predictive model to determine the volume of water in a flooded reservoir with bottom-topography slopes that are not part of mounds or depressions A predictive model for determining the water volume as a function of the flooding level is proposed in this article. The model includes the coastal zone and part of the intermediate zone between the depressions and the mounds of the bottom. The model makes it possible to anticipate the potential danger of a hydrological emergency in terms of water volume, using a small number of parameters. Introduction One of the most important global problems of recent decades has been global climate change. The imbalance of natural systems has led to temperature anomalies, changes in precipitation patterns and an increase in such phenomena as hurricanes, floods, earthquakes and droughts. According to the United Nations, the damage caused by natural disasters, including floods, has only increased over the years. Flooding is one of the most dangerous and frequent destructive disasters in terms of area of distribution, average annual material damage and the number of victims. Currently, there are no uniform rules for the accounting, collection and storage of information on floods that have occurred in different regions, no unified system of damage assessment, and no unified system of complex multi-factor flood risk assessment. In Russia, 40-70 major floods occur annually. The average annual damage from such flooding is more than 40 billion rubles [1]. In Russia there are more than 30 thousand reservoirs that have been operated without reconstruction for more than 50 years and are in disrepair. Numerous studies by European scientists show that, due to climate change, the socio-economic infrastructure of Europe could face a doubled risk of flooding by 2050. Currently, catastrophic flooding in Europe occurs every 16 years. Floods in 2000-2012 cost the European Union more than 50 billion euros, and by 2050 these losses could reach 25 billion euros per year. Thus, the problem of managing the hydrological situation of a territory is quite relevant, because there is a danger of hydrodynamic accidents. Materials and methods To ensure the successful implementation of measures to reduce damage from floods, it is necessary to forecast the development of floods on the basis of mathematical modeling of the flooding process, which provides adequate forecast values of the main characteristics of flooding of the territory, necessary for the development of effective measures to minimize damage. Of special applied interest is the problem of modeling the slopes of the bottom relief that are not part of mounds or depressions. These slopes include the coastal zone, as well as part of the intermediate zone between depressions and mounds [2]. For land topography, such areas include dingles, slope soles, gullies, precipices, parts of valleys, etc. These sites are modeled by defining the equations of planes. Results and discussions The main subject of the study was the modeling of the slopes of the bottom relief that are not part of mounds or depressions, that is, the coastal zone and part of the intermediate zone between depressions and mounds [2]. The sites were modeled by defining the equations of planes bounded by spot-sounding points (figure 1).
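A minimal sketch of this plane construction, fitting z = a·x + b·y + c through three surveyed points, is shown below; the coordinates are invented for illustration and do not correspond to figure 1:

```python
# Fitting a plane z = a*x + b*y + c through three spot soundings, as used to model
# slope sections of the bottom relief. The points are made-up (x, y in m, depth z in m).
import numpy as np

points = np.array([
    [0.0,   0.0, -1.2],
    [50.0,  0.0, -3.8],
    [0.0,  40.0, -2.5],
])

# Solve [x y 1] @ [a b c]^T = z for the plane coefficients.
A = np.column_stack([points[:, 0], points[:, 1], np.ones(3)])
a, b, c = np.linalg.solve(A, points[:, 2])
print(f"plane: z = {a:.4f}*x + {b:.4f}*y + {c:.2f}")
print("depth at (25, 20):", a * 25 + b * 20 + c)
```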
It is desirable to have a model with a definite and small set of parameters that characterize the volume of water [19-23]. The volume of liquid concentrated on each such rectangle of the grid (for an underwater hill) is given by formulas (2)-(4) of the model. In fact, we obtain a predictive model that depends on a small number of parameters. If the water level rises to a height h, the new volume of water is calculated as the sum of two terms: 1) the first term is associated with the relief within the boundaries of the former reservoir; to the volume of water calculated by formulas (2)-(4), a summand proportional to the surface area of the reservoir, S(water), which is easy to determine on a map, is added; 2) the second term is connected with the flooding of the formerly coastal plots of relief [19-25]. The volume of water in this area is calculated according to the same scheme by formulas (8)-(10); in this case, the corresponding quantities D̂_j and their analogues are determined from the land relief (figure 4), and the final formula sums these two contributions. Conclusion Thus, as a result of the study, a predictive model for determining the volume of water in a flooded reservoir is obtained on a fundamentally new basis. The model has essentially one control parameter, the level h; its value also determines the shape of the newly flooded (formerly land) part of the relief of the territory, which is treated in the same way. The dynamics of the process is not considered in the model; however, it is simple to recalculate the flooding configuration whenever the water level changes, even at high frequency. Note that, at the current level, the level line and the shape of the underwater relief can be determined using sonars. Based on the proposed model, calculations of the probability of occurrence of accidents at low water intakes of the Voronezh region have been carried out. The research made it possible to establish the number of small water intakes that are in an emergency condition and to determine the initial level of development of hydrological emergencies needed to create a predictive model of flooding.
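Although the paper's own formulas (2)-(4) and (8)-(10) are not reproduced above, the overall idea, that the volume is accumulated cell by cell over the reservoir bed and, once the level h exceeds the bank, over the formerly dry coastal relief, can be illustrated with a generic grid integration; the elevation grid and cell size below are assumptions, not data from the study:

```python
# Generic grid-based estimate of the water volume as a function of the level h:
# each cell of a bottom/land elevation model contributes a vertical prism of
# height max(0, h - z). This only illustrates the model's idea; the elevation
# grid below is invented.
import numpy as np

cell_area = 25.0 * 25.0            # m^2 per grid cell (assumed resolution)
elevation = np.array([             # bottom/land elevation relative to a common datum, m
    [-4.0, -3.5, -2.0, -0.5,  1.0],
    [-4.5, -3.0, -1.5,  0.0,  1.5],
    [-3.5, -2.5, -1.0,  0.5,  2.0],
])

def flooded_volume(h: float) -> float:
    """Total water volume (m^3) when the water surface stands at level h."""
    depth = np.clip(h - elevation, 0.0, None)
    return float(depth.sum() * cell_area)

for h in (0.0, 0.5, 1.0):
    print(f"h = {h:+.1f} m -> volume ~= {flooded_volume(h):,.0f} m^3")
```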
Soil Diversity (Pedodiversity) and Ecosystem Services Soil ecosystem services depend on the diversity of soils; within the contiguous U.S., the soil orders are distributed as follows: (1) Mollisols (27%), (2) Alfisols (17%), (3) Entisols (14%), (4) Inceptisols and Aridisols (11% each), (5) Spodosols (3%), (6) Vertisols (2%), and (7) Histosols and Andisols (1% each). Taxonomic, genetic, parametric, and functional pedodiversity are an essential context for analyzing, interpreting, and reporting ES/ED within the ES framework. Although each approach can be used separately, three of these approaches (genetic, parametric, and functional) fall within the "umbrella" of taxonomic pedodiversity, which separates soils based on properties important to potential use. Extrinsic factors play a major role in pedodiversity and should be accounted for in ES/ED valuation based on various databases (e.g., National Atmospheric Deposition Program (NADP) databases). Pedodiversity is crucial in identifying soil capacity (pedocapacity) and "hotspots" of ES/ED as part of business decision making to provide more sustainable use of soil resources. Pedodiversity is not a static construct but is highly dynamic, and various human activities (e.g., agriculture, urbanization) can lead to soil degradation and even soil extinction. Table 1. Types of soil diversity (pedodiversity) and examples: Taxonomic (diversity of soil classes): USDA Soil Taxonomy (e.g., soil order, series); Genetic (diversity of genetic horizons): A, B, etc.; Parametric (diversity of soil properties): soil organic matter (SOM), calcium carbonate (CaCO3), etc.; Functional (soil behavior under different use): interpretive models to predict soil behavior. Soil and its diversity (pedodiversity) play significant roles in underlying ecosystem goods and services for humans [16-18], who have developed a human-centered ecosystem services framework [19] as an approach for valuing these goods and services in both economic and non-economic ways [20]. In fact, pedodiversity can be considered an ecosystem good and service in its own right [2]. According to Bartkowski (2017) [21], economic valuations of diversity are rare and often focus primarily on biodiversity [22,23]. Previous research on biodiversity and its significance lists the following benefits [24]: (1) biodiversity supports healthy ecosystems by increasing ecosystem stability, while the loss of biodiversity can reduce their function and efficiency; (2) the relationship between the loss of biodiversity and ecosystem function is not linear, with greater impact as the loss of biodiversity increases; (3) both the variety of species and key individual species are critical for ecosystem functioning, with diversity across trophic levels potentially having a more important function compared to species within trophic levels. From a business point of view, Stephenson (2012) [25] describes the utility of biodiversity to: (1) identify the stock, its physical state, and spatial patterns of biodiversity in relation to the key ecosystem services (e.g., water and carbon sequestration), which are at risk and have a high value (e.g., social, economic); (2) assess biodiversity trends and high-risk biodiversity loss with its key drivers, as well as a reference point against which progress can be measured; (3) develop a long-term coordinated vision assessing trade-offs and potential synergies, including cost-benefit analyses; (4) identify and implement cost-effective policy options; and (5) monitor progress towards objectives, reviewing and revising policies over time based on that progress.
Ecosystem services (ES) are goods and services provided by functioning ecological systems that directly and/or indirectly benefit human populations (e.g., food and climate regulation) [16][17][18]. At the same time, however, functioning ecological systems also can present detrimental effects for humans or so-called ecosystem disservices (ED) (e.g., social cost of carbon dioxide) [12]. Adhikari and Hartemink (2016) [16] examined the link between soil properties and ES without including the concept of pedodiversity and its measures in their literature review. Chandler et al. (2018) [26] proposed integrating soil analyses within frameworks for ES and the organizational hierarchy of soil systems. Mikhailova et al. (2020) [12] pointed out that applications of ES to soils are narrowly defined (e.g., soil-based, pedosphere-based), treating soil as a closed system instead of an open system, which requires a soil systems-based approach to ES. Mikhailova et al. (2020) [12] suggested including the contributions of the Earth's spheres (atmosphere, biosphere, hydrosphere, lithosphere, ecosphere, and anthroposphere) in the economic analysis of soil ES. Because most soils have been modified by humans, Mikhailova et al. (2020) [12] examined the business side of ecosystem services of soil systems and proposed to use the term "soil systems goods and services" (SSGS) instead of "soil ecosystem goods and services." Applications of biodiversity concepts and their measures to pedodiversity can be problematic, because they have not been designed explicitly for pedodiversity and its associated ES/ED valuations. Pedodiversity concepts and measures in their current forms have not been considered in ES/ED valuations and business decision making. Most likely, the types of pedodiversity and measurement approaches listed in Table 1 cannot be used solely on their own in ES/ED but must be applied in combination with each other or even all together concerning specific ES/ED within a particular administrative extent. The objective of this study is to illustrate the application of pedodiversity concepts and measures to value ES/ED, with examples provided primarily from the contiguous U.S., its administrative units, and the USDA Soil Taxonomy system of soil classification. Although the focus of the examples is on the U.S., the applications and measures described should be readily applicable to other geographic areas and market economies. Data Compilation and Analyses Soil survey information (including soil orders, suborders, great groups, subgroups, families and series) was obtained from Soil Survey Geographic (SSURGO) Database (2020) [27]. The information for each state in the contiguous U.S. was extracted using Zonal Statistics (Tables) spatial analyst tool in ArcGIS ® Pro 2.6 (ESRI, Redlands, CA, USA), while the information for the regions and the Land Resource Regions (LRR) was computed by developing a Structured Query Language (SQL) code that was utilized in SSURGO webpage (https://sdmdataaccess.nrcs.usda.gov/, accessed on 10 October 2020). All this information was then used to create a Microsoft Excel file with the soil survey information for each boundary. Examples of soil ES/ED and their monetary valuations were obtained from various literature sources using the Web of Science [28]. These examples encompass the three major groups of ES commonly used in the literature: provisioning, regulation/maintenance, and cultural [29]. 
Table 2 provides a conceptual overview of the accounting framework for market and non-market valuation of benefits/damages from three groups of ES (provisioning, regulation/maintenance, and cultural) based on biophysical and administrative accounts, with examples primarily from the U.S. and its soils, as well as the related market-based information obtained from U.S. sources. Table 2. A conceptual overview of the accounting framework for a systems-based approach in the ecosystem services (ES) valuation of various soil ecosystem goods and services based on soil diversity (pedodiversity) (adapted from Groshans et al., 2018 [30]). Table 3 provides a conceptual overview of the total economic value (TEV) framework with insurance value, adapted from various sources to provide a general explanation of the valuation methods used in the examples, which are primarily from the U.S. but may apply to other market economies. It should be noted that the relevance and applications of economic valuation to soil systems are not always clearly defined and can be subject to interpretation. Table 3. The total economic value (TEV) framework with insurance value (adapted from Nimmo-Bell (2011) [31], NZIER, 2018 [32], Baveye et al., 2016 [18], and Bartkowski et al., 2020 [20]). Pedodiversity is influenced by intrinsic (within the soil) factors, including taxonomic, genetic, parametric, and functional pedodiversity, which provide an important context for analyzing, interpreting, and reporting ES/ED within the ES framework. Although each approach can be used separately, three of these approaches (genetic, parametric, and functional) fall within the "umbrella" of taxonomic pedodiversity, which separates soils based on properties important for potential use. Pedodiversity in the U.S. can be quantified and valued within the framework of the USDA Soil Taxonomy (Soil Survey Staff, 1999 [15]), an international system of soil classification whose purpose is to organize soils into groups with similar properties. The soil individual (a three-dimensional body) is the object of soil classification in Soil Taxonomy, which is a nested, hierarchical system with six taxonomic categories: order, suborder, great group, subgroup, family, and series (e.g., Cecil (fine, kaolinitic, thermic Typic Kanhapludults)) (Table 4). Table 4 demonstrates an example of the criteria and sequence of taxonomic categories used to classify mineral soils (this sequence is different for organic soils and soils with permafrost). Soil Taxonomy is the basic system of soil classification for making and interpreting soil surveys, and it can be used with the ES framework (Figure 2, Table 5). Figure 2. Example of a soil map generated with Web Soil Survey (WSS) [33] showing soil cover and land use. Pedodiversity data in soil surveys and databases (e.g., maps, depth of soil horizons, and soil properties) are useful in business applications because they provide information about the pedodiversity of soil capital and the data necessary to calculate stocks of soil biotic (e.g., organic carbon) and abiotic (e.g., sand, silt, clay, and calcium carbonate) resources within different extents (e.g., science-based, administrative-based, or in combination; by soil depth, by soil horizon, etc.).
This information is essential for various ES/ED applications (e.g., provisioning and regulating) and even cultural ecosystem services. The names of some soil series used in the U.S. reflect cultural and historical heritage. For example, the name of New Mexico State Soil "Penistaja" is derived from the Navajo name meaning "forced to sit" [34]. The most general category of soil orders in Soil Taxonomy provides a useful framework and description of soil, which can be applied to describe the soil stock and its composition, its potential for delivering key ES, and constraints (ED) at several soil system scales, for example, world, continent, region, country, and watershed ( Table 6). General characteristics and constraints of these soil orders provide both qualitative and quantitative measures regarding the ability of these soils to supply ES/ED within a geographic area. Taxonomic pedodiversity in the contiguous U.S. exhibits a wide range of soil diversity, with 11 soil orders, 65 suborders, 317 great groups, 2026 subgroups, and 19,602 series ( Table 7). Table 7 shows the "soil order abundance"-total area of each soil order within the contiguous U.S. based on Soil Survey Geographic (SSURGO) Database (2020) with the following distribution: (1) Mollisols (27%), (2) Alfisols (17%), (3) Entisols (14%), (4) Inceptisols and Aridisols (11% each), (5) Spodosols (3%), (6) Vertisols (2%), and (7) Histosols and Andisols (1% each). In terms of the degree of weathering: slightly-weathered soils are 27%, intermediately-weathered soils are 58%, and strongly-weathered soils are 15% of the total area. Information about taxonomic pedodiversity can be linked to various ES/ED. In terms of provisioning ES, 58% of the contiguous U.S. is occupied by soils with high and moderate fertility status (without taking into account the past and present land use). It also can be used to analyze the patterns of value for regulating ES. [35] provided a valuation of soil organic carbon (SOC) stocks in the contiguous U.S. based on taxonomic pedodiversity and the avoided social cost of carbon (SC-CO 2 ) emissions, which varied by the degree of soil weathering as indicated by soil order information. This study found the following distribution of SC-CO 2 contribution within the contiguous U.S.: slightly-weathered soils (38%), intermediately-weathered soil (51%), and strongly-weathered soils (11%). In another example, according to [36], Mollisols have the highest total soil carbon (TSC, soil organic + soil inorganic carbon) storage midpoint value ($7.78T) based on the social cost of carbon (SC-CO 2 ) and avoided emissions provided by carbon sequestration, which is about 30% of the total midpoint value for the contiguous U.S. These types of analyses are useful in identifying soil "hotspots" with regards to various ES/ED applications at different scales which has the potential to be managed with precision agriculture [2,37]. It can be concluded that taxonomic pedodiversity provides an important context for analyzing, summarizing, and presenting soil data for ES/ED applications. Soil series is also a useful taxonomic category to describe pedodiversity regarding ES/ED at more detailed scales (e.g., farm and field), and this category is closely allied to interpretive uses (e.g., suitabilities and limitations for crop production and construction) ( Table 7). Soil series consist of pedons that are grouped together based on similarity in pedogenesis, soil chemistry, and physical properties [38]. 
The number of soil series within a given extent can describe its diversity (Table 7). According to Table 7, Mollisols have the highest number of soil series (5569), followed by Entisols (3700). Amundson et al. (2003) [2] proposed to apply a commonly used biodiversity parameter ("species density") to soil diversity, which they called the "series density" parameter (number of series divided by 100,000 ha) (Table 7). Taxonomic pedodiversity can also be used within administrative boundaries (e.g., Land Resource Regions, LRRs) (Table 8). Land Resource Regions (LRRs) are defined by the USDA using major land resource areas (MLRAs) and agricultural markets, and are denoted using capital letters (A, B, C, etc.; see Table 8). According to Table 7, the average series density for the contiguous U.S. is 2.7 series/100,000 ha, and slightly-weathered soils have the highest series densities, with Andisols in the lead (9.3 series/100,000 ha). Variation in soil series density can relate to ES/ED, but it depends on the properties of the soil series within an area and on the interpretive uses. Soil ES related to agriculture can be reduced in some areas by high soil variability, which can impact soil productivity at the farm scale. For example, in areas with soils derived from glacial materials, high variability of soil properties can occur at the field scale, limiting agricultural use and productivity. Series density equals the number of series divided by 100,000 ha. Examples of Genetic Pedodiversity and Ecosystem Services Genetic pedodiversity refers to the diversity of genetic horizons (soil layers commonly parallel to the soil surface, whose designations provide a qualitative description of changes within the profile) (Figure 3, Table 12) [15,40]. Diagnostic horizons are quantitatively defined and are not equivalent to the genetic horizons in Soil Taxonomy [15]. Soil horizons are integral components of taxonomic pedodiversity and can be used to compute distinct or combined stocks within the soil (Figure 3, Table 12).
Examples of Parametric Pedodiversity and Ecosystem Services Parametric pedodiversity refers to the diversity of soil properties, which are also often used in the context of taxonomic and genetic pedodiversities. Soil properties vary by soil type and exhibit within-horizon lateral and vertical variation [40]. Although there is no standardized list of soil properties, Adhikari and Hartemink (2016) [16] provided key soil properties related to ES: soil organic carbon; sand, silt, clay, and coarse fragments; pH; depth to bedrock; bulk density; available water capacity; cation exchange capacity; electrical conductivity; soil porosity and air permeability; hydraulic conductivity and infiltration; soil biota; soil structure and aggregation; soil temperature; clay mineralogy, and subsoil pans. This list seems to focus primarily on soil physical properties, but soil chemical properties and qualitative soil descriptions (e.g., official soil series descriptions) are also important in ES/ED valuations. Soil chemical properties (e.g., plant nutrients) are essential for agricultural production, and it is important to monitor the supply of these nutrients to meet the yearly recommended dietary allowances and intakes by population [44]. [47] compared field sampling and soil survey database for spatial heterogeneity in surface granulometry (sand, silt, and clay) for potential use in ES/ED valuations and concluded that field sampling provided more detailed information. The same study revealed that soil texture and coarse fragments are lithospheric-derived resources ( Figure 1) and can be valued based on "soil" or "mineral" stock. Among soil properties, soil organic matter (SOM) and soil organic carbon (SOC) are the most researched soil properties because of their significance in provisioning (e.g., soil fertility) and regulating (e.g., carbon sequestration) ES. Guo et al. (2006) [39] reported spatial variability of soil carbon (SOC, SIC) in each of the soil orders within the contiguous U.S., and potential decline in SOC in the 0-20 cm depth (e.g., rooting depth of most crops) compared to 20-100 cm depth. Taxonomic pedodiversity and human demand for soil nutrients can adversely affect soil health and nutritional security [48]. Examples of Functional Pedodiversity and Ecosystem Services Functional pedodiversity refers to soil behavior under different uses. Taxonomic and genetic pedodiversities influence soil behavior under current and potential uses. According to Amundson et al. (2003) [2], "in less than two centuries, the landscape of the U.S. has been transformed to the degree that would astound our 19th-century predecessors". According to Merrill and Leatherby (2018) [49], "the 48 contiguous states alone are 1.9 billion-acre jigsaw puzzle of cities, farms, forests and pastures that Americans use to feed themselves, power their economy and extract value for business and pleasure" with six major types of land: cropland (391.5M acres, where M = million = 10 6 ), forest (538.6M acres), pasture/range (654M acres), urban (69.4M acres), special use (168.6M acres), and miscellaneous (68.9M acres). Soils under different uses make a significant contribution to various ES/ED in the contiguous U.S., but monetary valuations of these ES/ED are rare at this scale. 
Traditionally, provisioning ES have been seen as the primary value from the soil; however, in the face of potentially severe economic impacts from climate change, regulating ES should be recognized for their potential to mitigate net CO2 release through different management regimes designed to maximize CO2 uptake and minimize CO2 release. Site-specific management of soil carbon "hotspots" through precision agriculture could serve to reduce CO2 emissions and provide regulating ES to humanity [37]. Although no economic system has been developed to coordinate long-term practices to sequester C [50], any contribution to CO2 reduction would help mitigate emissions from fossil fuels. The different soil carbon stocks (SOC, SIC, and TSC) should be accounted for in these mitigation efforts. A few studies attempted to put a monetary value on regulating ES/ED from SOC, SIC, and TSC within the contiguous U.S. using the social cost of carbon (SC-CO2) of $42 per metric ton of CO2, which is applicable for the year 2020 based on 2007 U.S. dollars and an average discount rate of 3% [35,36,46,51]. Extrinsic Factors: Examples of Monetary Valuations Based on the Interaction of Soil Diversity (Pedodiversity) and the Earth's Spheres Pedodiversity is influenced by extrinsic (outside soil) factors from the atmosphere, biosphere, hydrosphere, lithosphere, ecosphere, and anthroposphere (Figure 1), which can increase or decrease the value of ES/ED associated with pedodiversity. Valuations of both intrinsic and extrinsic factors can be made within biophysical accounts (e.g., within soil order boundaries) and then "translated" into administrative accounts for decision making. For example, Figure 4 demonstrates the share between values of total soil carbon (intrinsic) and average annual total (extrinsic) monetary values of non-constrained potential soil inorganic carbon (SIC) sequestration from combined atmospheric Ca2+ and Mg2+ deposition (2000–2015) for different regions in the contiguous U.S., based on an avoided SC-CO2 of $42 per metric ton of CO2. In this case, both intrinsic (TSC) and extrinsic (potential for SIC sequestration from atmospheric Ca2+ and Mg2+ deposition) factors are spatially heterogeneous, without considering physical and economic constraints for achieving the maximum potential for SIC sequestration from atmospheric sources [52]. Both intrinsic and extrinsic factors limit pedodiversity in its ability to supply ES/ED. Climate change is another set of extrinsic factors impacting the value of ES/ED derived from pedodiversity. Global warming threatens the existence of the soil order of Gelisols because of thawing permafrost. Gelisols store large amounts of soil organic matter (SOM), and its decomposition can release large amounts of carbon dioxide and methane [53]. An increase in both ambient and soil temperature can intensify SOM decomposition and lead to self-ignition conditions in carbon-rich soils (e.g., Gelisols, Histosols), leading to wildfires [54]. Climate change is also predicted to increase global water erosion from 30 to 66% [55]. Urbanization, another example of an extrinsic factor, alters soils in various ways (e.g., erosion and pollution) and creates ES/ED specific to urban soil diversity (urban pedodiversity), which requires adjustment of valuations to urban environments [56,57].
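The avoided social-cost valuation used above reduces to a simple calculation: a carbon stock is converted to its CO2 equivalent and priced at the SC-CO2 of $42 per metric ton of CO2. The sketch below illustrates this; the 1 Mt C stock is a hypothetical input, not a value from the cited studies.

```python
# A minimal sketch of the SC-CO2 valuation described above.
SC_CO2_USD_PER_TONNE_CO2 = 42.0   # $ per metric ton of CO2 (2007 USD, 3% discount rate)
C_TO_CO2 = 44.0 / 12.0            # mass ratio of CO2 to C

def social_cost_value(carbon_stock_tonnes: float) -> float:
    """Monetary value (USD) of a soil carbon stock via the avoided social cost of CO2."""
    return carbon_stock_tonnes * C_TO_CO2 * SC_CO2_USD_PER_TONNE_CO2

print(f"${social_cost_value(1_000_000):,.0f}")  # hypothetical 1 Mt C stock -> ~$154 million
```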
Urbanization alters the stocks and flows of ES/ED provided by soils in urban and non-urban environments since urban environments are not self-supporting, which creates a significant demand for ES from urban fringes and beyond [58]. Land Cover Change (LCC) as a Threat to Pedodiversity Pedodiversity in the contiguous U.S. is experiencing significant threats and losses driven by land cover change (e.g., urbanization, deforestation, and agricultural expansion), especially in agriculturally productive and important soils (e.g., Alfisols and Mollisols) and regions (e.g., Midwest, Northern Plains, South Central) [2]. These soils and regions have high provisioning ES value, which can result in unsustainable use accompanied by the loss of regulating and provisioning ES [59]. Although the economic value of ES from agriculturally productive soils is somewhat reflected in the total value of U.S. agricultural production, the social costs associated with this production and the rates of soil diversity (pedodiversity) extinction are rarely reported (Table 13) [2]. The total social costs of agricultural production (present and past) are difficult to quantify because they are impacted by numerous factors, but on-going research on ES/ED provides useful insight into potential valuation methods. This type of analysis should account for ES and ED (actual or realized, and potential) provided by pedodiversity using biophysical (e.g., soil orders) and administrative (e.g., states) accounts. For example, mapping can provide spatial context between population, urbanization, and endangered soil series [30,59] (Table 13, Figures 5 and 6). Results show that there is a link between states and regions with a high number of endangered soil series and the value of provisioning and regulating ES (California; Midwest, Northern Plains, and South-Central regions). States and regions with zero endangered series represent areas with generally low productivity soils such as Aridisols (e.g., Arizona and New Mexico), Ultisols (e.g., Georgia, South Carolina, and North Carolina), and Entisols and Inceptisols (e.g., Maine and Vermont) (Table 13, Figure 5). Some states and regions have low human populations, but some of the highest proportions of endangered rare soil series (Table 13, Figure 5). For example, the Northern Plains region has only 4.19% of the total U.S. population and some of the highest numbers of endangered soil series [60,61]. In the Midwest, Iowa has almost 81% of its rare soil series endangered, but it has only 0.95% of the total U.S. population (Table 13, Figure 5) [60,61]. Similarly, Kansas has nearly 43% of its rare soil series endangered, but it has only 0.88% of the U.S. population. These states have some of the highest values associated with provisioning ES from SOC, SIC, and TSC, as well as regulating ES/ED associated with these carbon stocks. (Table 13, excerpt: share of the total U.S. population by region: East 19.54%, Midwest 18.63%, South Central 12.24%, Southeast 22.74%, Northern Plains 4.19%, West 20.72%; totals 98.06 and 80.7; other entries n/a = not available.) For "series abundance," the following categories are defined [2]: (a) rare soils, less than 1000 ha total area in the U.S., and (b) endangered soils, those rare or rare-unique soil series that have lost more than 50% of their area to various land disturbances. A mismatch between "potential" and "realized" supply/demand of flow-dependent ES/ED [62] is not a new phenomenon for Kansas, which experienced soil
degradation and the Dust Bowl in the past due to a combination of prolonged drought and "suitcase farming," which was sponsored by "non-resident farmers" [63]. From the ES framework perspective, an examination of the potential for dryland farming in the 1920s would have indicated limitations due to intrinsic factors (high soil erodibility because of silty soil texture) and extrinsic factors (susceptibility of this area to regular droughts). According to Lee and Gill [64], the soil and drought conditions were compounded by an economic collapse that reduced crop value. Social costs associated with the Dust Bowl went far beyond the boundaries of the states where it originated, with almost half a million Dust Bowl refugees and massive quantities of topsoil deposited in the Atlantic Ocean, impacting the air quality of Washington, D.C. and other faraway states [64]. The ES framework, in combination with detailed spatial and temporal environmental data, can be used to inform sustainable decision-making to help avoid and mitigate similar disasters. For example, land cover change maps over time can provide insight into geographical patterns of ES stocks, flows, and values. Urbanization trends increase demand for ES, which is not always supplied by local soil resources and requires soil ecosystem goods and services to be "imported" from soil stocks in other geographic areas (Figures 6 and 7). Figure 7 shows large urban area increases in the states of Texas, California, Florida, Arizona, Georgia, and North Carolina. Increases in urban areas in states can be accompanied by a decrease in agricultural areas (e.g., Florida, Arizona, and Georgia) and/or a loss of forest areas (e.g., California, Arizona, Texas, and Georgia) (Figures 8 and 9). In some cases, states increase the flow of ecosystem goods and services by increasing the agricultural area within the state based on available soil resources (e.g., Texas) (Figure 8). In other cases, an increase in the flow of ecosystem goods and services by expanding the agricultural area may be limited because of constraints associated with inherently low-fertility soils and other extrinsic factors (e.g., low precipitation). According to Wentland et al. (2020) [65], overall U.S. land cover has seen declines in agricultural areas, forests, and pasture, and increases in developed areas as well as barren and scrub/shrub land cover classes. These declines are mostly concentrated in the Southeastern U.S. [65]. The same authors [65] reported a 28% decline in land value during the financial crisis of the last decade, without ES valuation. This is clear evidence that land value and ES value are not connected, which can be an essential consideration for future research. There may be limited instances where land value is tied to agricultural productivity (provisioning ES). Soil ecosystem goods and services are no longer used locally but are subject to a vast global distribution network, contributing to the destabilization of biogeochemical cycles.
Climate Change as a Threat to Pedodiversity Climate change poses a range of unique threats (e.g., changes in temperature, precipitation, and extreme conditions) to pedodiversity and its ES, which are discussed following the concept of pedodiversity and its measures outlined in this study. Since pedodiversity (biotic + abiotic) forms from the interaction of various spheres (biosphere, lithosphere, hydrosphere, atmosphere, ecosphere, and anthroposphere), climate change threats will be multifaceted and complex. Both biotic and abiotic aspects of pedodiversity are sensitive to climate change and include the following examples relevant to ED:
• Biotic (e.g., an increase in soil organic matter decomposition rates due to increases in temperature and precipitation [66], leading to an increase in soil CO2 emissions and associated social costs);
• Abiotic (e.g., an increase in soil erosion due to an increase in precipitation and extreme rainfall events [67]).
Pedodiversity is influenced by intrinsic (within the soil) and extrinsic (outside soil) factors, where climate change is an extrinsic factor (e.g., changes in temperature and precipitation) with subsequent effects on intrinsic soil characteristics and properties (e.g., soil temperature and moisture regimes, and moisture content). In terms of taxonomic pedodiversity (diversity of soil classes), climate change poses an existential threat to the soil order of Gelisols. Climate change can lead to changes in soil classification, especially with regard to the use of soil temperature regimes (e.g., pergelic, subgelic, cryic, and frigid) and moisture regimes (e.g., udic and ustic). An example of climate-induced changes in genetic diversity (diversity of soil horizons) is the potential disappearance of permafrost, which is indicated by the lowercase letter "f" (frozen) in the soil profile. Climate change will impact parametric pedodiversity (diversity of soil properties) in various ways; for example, it can reduce soil organic matter content because of increased decomposition at higher temperatures. Soils can become more acidic because of increased precipitation and leaching, and in the case of agricultural soils, more liming material will need to be applied to compensate for the reduction in provisioning services provided by soil. Functional pedodiversity (soil behavior under different uses) will be affected in many parts of the world because of climate change. For example, global sea-level rise will influence soils under rice production, resulting in annual crop losses of up to $10.59 billion USD [68]. Projections of future U.S. climate change predict that the entire U.S. is likely to warm over the next 40 years, with an increase of 1–2 °C over much of the country and a 2–3 °C increase in the interior of the country [69]. Reilly et al. (2003) [70] examined the effects of climate change on provisioning ES from U.S. agriculture and reported a potential shift of crop production northwards and a positive overall increase in agricultural production with regional differences (e.g., possible declines in production in the Southern U.S.). These ES changes are likely to be accompanied by ED in the form of increased social costs associated with carbon dioxide emissions, soil erosion, depletion of soil nutrients, and others, which contribute to issues of soil and human security worldwide [71].
Climate change in combination with population growth may increase demand for soil nutrients, which replacement from soil weathering is relatively slow in comparison with "anthropogenic use rate" [72]. Since pedodiversity is not evenly distributed within most geographic areas, soil nutrient depletion can be more acute in some places leading to prohibitively high replacement costs associated with fertilizer and liming applications. If the nutrients (e.g., base cations) are not replaced through liming and fertilization, it will alter the soil chemical composition, which can change its pedodiversity classification. Climate change will have a direct impact on the classification of soils, with some soil types disappearing and others changing in both extent and properties. Soil carbon changes associated with climate change and increased organic matter decomposition will also change how soils are classified as they are "decarbonized." Discussion Pedodiversity is a source of various ecosystem goods, services, and disservices, and its value is as complex as its concept. The total economic value (TEV) of pedodiversity is only a portion of the total system value (TSV) of pedodiversity because pedodiversity and ES form a multilayered relationship with the general trend of decreasing the tangibility of the value of soil to users (Table 3) from the monetary value (e.g., actual use values: consumptive food production) to pedodiversity value (e.g., intrinsic value) ( Figure 10) [72,73]. Currently, there are four main approaches to analyze pedodiversity: taxonomic (diversity of soil classes), genetic (diversity of genetic horizons), parametric (diversity of soil properties), and functional (soil behavior under different use). The concept of pedodiversity and its classification varies by country; therefore, its applications to ES are country-specific [74]. According to Gerasimova (2010) [74], "the American Soil Taxonomy is the main and single classification in 45 countries, whereas, in 80 countries, it is used along with the national classifications". Despite differences in country-specific classifications, these soil classifications provide science-based soil information that can be integrated with administrative accounts (Table 2). Taxonomic, genetic, parametric, and functional pedodiversity provide an essential context for analyzing, interpreting, and reporting ES/ED within the ES framework for business applications. Although each approach can be used separately, three of these approaches (genetic, parametric, and functional) fall within the "umbrella" of taxonomic pedodiversity, which separates soils based on properties important to potential use. Taxonomic pedodiversity provides a general description of the stock, its type, and spatial (both horizontal and vertical) distribution, which are particularly useful in agricultural business applications (e.g., soil productivity ratings in soil survey). For example, an area abundance of soil orders describes the spatial distribution within defined administrative boundaries (e.g., LRRs defined by the USDA based on MLRAs and agricultural markets) ( Table 9). The phrases "portfolio effect" and "evenness effect" [75] are often applied to describe the theoretical links between biodiversity and ecosystem function. The "portfolio effect" is the analogy between the stock market and species diversity, where having more species allows a system to better respond to external stimuli. 
At the same time, the "evenness effect" finds that having similar numbers of species can help buffer against disturbances [76]. The concepts of the "portfolio effect," "evenness effect," and the newly proposed "distribution effect" can also be applied to pedodiversity with various degrees of interpretation (Figures 11 and 12). Figure 10. The newly expanded scope of the pedodiversity valuation pyramid, with a comparison of total economic value (TEV) and total system value (TSV) of ecosystem services (ES) and disservices (ED) (adapted from Gantioler et al., 2000 [76]); the pyramid contrasts quantitative values (e.g., quantity of carbon stored) with qualitative values (e.g., health, social benefits) and "knowns" with "unknowns," with decreasing tangibility of value to the user. Figure 11 illustrates these concepts using the contiguous United States as an example. In this context, the "portfolio effect" is defined as the number of different stocks (soil orders) within the country (Figure 11). The "distribution effect" shows the distribution of stocks (soil orders), its variation (e.g., slightly-weathered, intermediately-weathered, and strongly-weathered soils), and the associated avoided or realized social costs of SOC, SIC, and TSC within the country. Figure 12 illustrates these concepts using three states (Iowa, Rhode Island, and Georgia). In this context, the "portfolio effect" is defined as the number of different stocks (soil orders) within each state: Iowa (5), Rhode Island (3), and Georgia (7) (Figure 12). The "evenness effect" describes instances when similar soil types are evenly represented (an example is not shown) (e.g., Mollisols and Alfisols are both fertile soils). For each state, a paired graph shows the proportion of the total area occupied by each soil order and the value of soil organic carbon (SOC) based on the avoided social cost of CO2, with Iowa having the largest values, mainly from Alfisols and Mollisols, a low total value in Rhode Island, and an intermediate value in Georgia based on Ultisols and Histosols. Another pedodiversity measure, series density, provides important information about soil diversity, but its interpretation can differ from biological species density. While higher levels of species density are often seen as an advantage when describing biological systems [77], areas with higher soil series density (e.g., typical for soils derived from glacial parent material) may have less agricultural productivity compared with more homogeneous and productive soils (e.g., typical for soils derived from loess parent material). Some regions with homogeneous soils are characterized by low ES and productivity (e.g., Aridisols). In terms of pedodiversity and ES, it is not just the density or number of different soils that matters, but their properties (e.g., chemical and physical) as they relate to the effectiveness (level of performance) and reliability (consistency and predictability) with which they drive production, including agriculture [78]. The soil-to-agricultural-market value chain is heavily dependent on large homogeneous areas of soils with high agricultural productivity associated with "soil carbon hotspots" (e.g., Midwest and Northern Plains in the U.S.) [37]. These areas have the most significant pedodiversity loss (and even extinction) and some of the lowest proportions of the U.S. population (Table 12, Figure 5). There is potential to manage these "hotspots" with precision agriculture technology. It should be noted that not all homogeneous soil areas necessarily have a high ES value.
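The paper does not give a formula for the "evenness effect," but one common way to quantify evenness, offered here only as an illustrative assumption, is Shannon evenness computed over the area shares of soil orders within a state; the shares below are placeholders, not the values behind Figure 12.

```python
import math

def shannon_evenness(area_shares):
    """Shannon evenness (0..1) of soil-order area shares within an area."""
    total = sum(area_shares)
    p = [a / total for a in area_shares]
    h = -sum(x * math.log(x) for x in p if x > 0)   # Shannon diversity
    return h / math.log(len(p)) if len(p) > 1 else 0.0

iowa_like = [0.45, 0.35, 0.10, 0.06, 0.04]   # hypothetical shares for 5 soil orders
rhode_island_like = [0.40, 0.35, 0.25]       # hypothetical shares for 3 soil orders
print(round(shannon_evenness(iowa_like), 2), round(shannon_evenness(rhode_island_like), 2))
```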
Loss of pedodiversity may continue, considering projected world population growth [79,80], given that the Midwest and Northern Plains regions export large quantities of agricultural products to the world. Economic estimates focus on the profit from provisioning ES through the sales of agricultural products without considering regulating ES (e.g., the social cost of pollution, including greenhouse emissions) and the replacement cost associated with the loss of soil nutrients (Figure 6) [80,81]. This focus on direct use value, with little or no regard to passive and intrinsic use values, may lead to unsustainable use of pedodiversity (Table 3 and Figure 10). Replacing the lost value of pedodiversity through its insurance value (Table 3) may not be possible. For example, replacement of some soil nutrients (e.g., phosphorus and potassium) may not be economically feasible since their mineral supply is very limited in the world [81,82]. Estimates of social costs can be performed based on taxonomic pedodiversity (biophysical accounts) (Table 14) and using administrative (boundary-based) accounts (Table 15) [51]. Biocapacity and ecological footprint are commonly used in environmental carrying capacity (ECC) assessments (e.g., of urban areas) [83], and clearly, pedocapacity, or the capacity of the soil to provide various ES, should be a part of these calculations as well. The value of pedocapacity should include both intrinsic (e.g., avoided social costs) and extrinsic (e.g., realized social costs) estimates. Extrinsic realized social costs may be impossible to estimate because their impacts extend beyond the pedosphere boundary, such as in the case of realized social costs of carbon (SC-CO2) (Figure 13). Limited biocapacity (including pedocapacity) in urban areas often results in urban areas exceeding their ECC, sometimes crossing into other countries [83]. Table 15. Integration of biophysical accounts (science-based) and administrative accounts (boundary-based): degree of soil development and area-normalized midpoint values of soil organic carbon (SOC) storage in the upper 2 m depth within the contiguous United States (U.S.), based on midpoint SOC numbers from Guo et al., 2006 [39] and a social cost of carbon (SC-CO2) of $42 (USD) per metric ton of CO2 [51]; columns are arranged from slight to strong degree of weathering and soil development. Figure 13. Relationship between intrinsic (e.g., avoided social costs) and extrinsic (e.g., realized social costs) estimates of social costs associated with the pedosphere in general (a), and using the state of Iowa and soil organic carbon (SOC) as an example (b), based on midpoint SOC numbers from Guo et al., 2006 [41] and a social cost of carbon (SC-CO2) of $42 (USD) per metric ton of CO2 [51]. Most research efforts are focused on documenting biodiversity loss, but pedodiversity loss can be of catastrophic consequence to humanity; therefore, it is important to understand the extinction patterns and their underlying processes [84]. Global warming has various impacts on the soil, especially on soil organic matter (SOM) decomposition, which is an oxidation process accompanied by oxygen consumption and CO2 release [85]:
R-(C, 4H) + 2 O2 → CO2↑ + 2 H2O + energy (478 kJ mol⁻¹ C)
The decomposition of SOM, which is accompanied by the release of CO2 and other gases, accelerates in the presence of increased heat (e.g., global warming) and can be compared to a "fire triangle."
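The oxidation reaction above fixes the stoichiometry and energy yield of SOM decomposition, so the CO2 released and the heat produced can be estimated from the mass of carbon oxidized. The sketch below does this for a hypothetical 100 kg of SOM carbon; the input mass is illustrative, not data from the cited studies.

```python
# A minimal sketch of the stoichiometry of the SOM oxidation reaction above.
MOLAR_MASS_C = 12.0        # g/mol
MOLAR_MASS_CO2 = 44.0      # g/mol
ENERGY_KJ_PER_MOL_C = 478  # from the reaction above

def decomposition_outputs(carbon_kg: float):
    mol_c = carbon_kg * 1000.0 / MOLAR_MASS_C
    co2_kg = mol_c * MOLAR_MASS_CO2 / 1000.0
    energy_mj = mol_c * ENERGY_KJ_PER_MOL_C / 1000.0
    return co2_kg, energy_mj

co2, energy = decomposition_outputs(100.0)  # hypothetical 100 kg of SOM carbon oxidized
print(f"CO2 released: {co2:.0f} kg, energy: {energy:.0f} MJ")
```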
Analogous to the "fire triangle," the "SOM decomposition triangle" represents the three items (soil organic matter, oxygen, and heat) that feed SOM decomposition emissions of CO2 and other invisible gases that fuel global warming (Figure 14). Unlike a regular fire, which is visible, the invisible greenhouse gases are like an invisible "fire" that can only be prevented and minimized by identifying the location of "fuel loading" (soil organic matter) throughout the landscape. Figure 14. The "SOM decomposition triangle" represents the three items (soil organic matter, oxygen, and heat) that feed soil decomposition emissions of CO2 and other gases that fuel global warming. The Earth's regions and soils with high SOM levels (e.g., Histosols, Gelisols, Alfisols, Mollisols, and Vertisols) tend to be more susceptible to greenhouse gas emissions with increasing global temperatures. Histosols and Gelisols are of particular concern because they are threatened by draining (Histosols) and thawing (Gelisols), which can cause soil degradation with global consequences [86,87]. For example, Pastick et al. (2015) [86] reported that 16 to 24% (out of 38%) of near-surface permafrost will disappear by the end of the 21st century. States and regions with a higher proportion of their area occupied by high-risk soils ("hotspots") [37] are experiencing the highest losses in ES (especially in provisioning), which is often caused by the demand for ES (e.g., provisioning) outside their boundaries. According to Hansjurgens et al. (2018) [88], pedodiversity distribution around the world poses an important question about "fairness" not only in the provisioning of ES but also in the associated and past ED costs. Administrative accounts (e.g., states and regions) in combination with pedodiversity concepts can provide information to develop cost-effective policy options to manage benefits (ES) and risks (ED) from pedodiversity. These benefits and risks often extend beyond the boundaries of individual states and regions (e.g., greenhouse emissions), therefore creating a need for a long-term coordinated vision, collaboration, and monitoring. It should be noted that both the ES framework and its valuation measures are human-centric, biased, and focused on short-term, human-scale interests instead of treating and valuing pedodiversity at a long-term geologic time scale [12,20]. According to Table 3 and Figure 10, pedodiversity tangibility values tend to decrease from "actual use" values to "intrinsic" values (benefits to nature). Soil series are often associated with these monetary "actual use" values (e.g., provisioning: food, etc.) because they represent soil properties within property boundaries, in contrast to soil orders, which are often associated with large spatial extents that cross multiple property boundaries, representing "intrinsic" values and social costs (Figure 15). According to Guerry et al. (2015) [89], "perhaps the most difficult challenge in the path of success is removing the fundamental asymmetry at the heart of economic systems, which rewards the production of marketed commodities but not the provision of nonmarketed ecosystem services or the sustainable use of natural capital that supports these services." Market transformations of pedodiversity can result not only in welfare but also in damages [90], which can pose a threat to soil security, national security, food security, infrastructure, and human life [91][92][93].
Pedodiversity can be both valuable and problematic to human well-being, depending on the point of view. The value of pedodiversity is that it is a human construct, which is used to "categorize" the soil continuum in a discrete way [93] and can be applied to ES/ED within administrative boundaries for socio-economic analysis. The problem with the discretization of both soils and related ES/ED is that it can oversimplify the complex nature of pedodiversity, which is a product of the interaction between the Earth's various spheres and their diversities (Figure 1). For example, Bach et al. (2020) [94] discusses the contribution of soil biodiversity to ES, which varies by soil type (taxonomic pedodiversity) and would require integration of pedodiversity with soil biodiversity for sustainable soil management. Human activity (e.g., agriculture and urbanization) can erode soil pedodiversity by converting soils to more uniform human-altered soils (Anthrosols) with a reduction in soil ES [95]. The perception of pedodiversity [96] and its contribution to ES/ED depends on the human "behavioral dimensions" ("human nature"), which are less understood in both perceived ES benefits and ED, especially with regards to regulating ES/ED (e.g., greenhouse gas emission) which tend to be of global significance [97]. Conclusions This study examined the application of soil diversity (pedodiversity) concepts (taxonomic, genetic, parametric, and functional) and its measures to value ES/ED with examples based on the contiguous United States (U.S.), its administrative units, and the systems of soil classification (e.g., U.S. Department of Agriculture (USDA) Soil Taxonomy, Soil Survey Geographic (SSURGO) Database). Pedodiversity provides an important context (e.g., "portfolio effect", "distribution effect", and "evenness effect") for analyzing, interpreting, and reporting ES/ED within the ES framework for business applications. Taxonomic pedodiversity in the contiguous U.S. exhibits high soil diversity, which is not evenly distributed within administrative units. Pedodiversity distribution around the country poses an important question about "fairness" not only in the provisioning of ES but also in the associated and past ED costs. Pedodiversity in the U.S. is under various threats, including land cover change (urbanization, agriculture, deforestation) and climate change (existential threat to the soil order of Gelisols). Pedodiversity losses are especially high in agriculturally productive and important soils (e.g., Alfisols, Mollisols) and regions (e.g., Midwest, Northern Plains, South Central) with some of the lowest proportions of U.S. total population. There is a mismatch between "potential" and "realized" supply/demand of flow-dependent ES/ED. With over 80% of the U.S. population living in urban environments, there is an increase in demand for ES, which is not always supplied by local soil resources and requires soil ecosystem goods and services to be "imported" from other geographic areas. The flow of ecosystem goods and services is often accompanied by the expansion of agricultural areas based on available soil resources. Low-fertility soils and other extrinsic factors (e.g., low precipitation) may limit the flow of ecosystem goods and services. Climate change will have a direct impact on pedodiversity and the classification of soils, with some soil types disappearing and others changing in both extent, and properties. 
Administrative accounts (e.g., states and regions) in combination with pedodiversity concepts can provide information to develop cost-effective policy options to manage benefits (ES) and risks (ED) from pedodiversity. These benefits and risks often extend beyond the boundaries of individual states and regions (e.g., greenhouse emissions), creating a need for a long-term coordinated vision, collaboration, and monitoring. Acknowledgments: We would like to thank the reviewers for their constructive comments and suggestions. Conflicts of Interest: The authors declare no conflict of interest.
2021-05-05T00:09:39.054Z
2021-03-11T00:00:00.000
{ "year": 2021, "sha1": "0425f74cfbaed8fa693efc6d48c64917c9ce8826", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-445X/10/3/288/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8ed2236b6725c15f95139a9589b75952d4da2ff3", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
1670907
pes2o/s2orc
v3-fos-license
Anosov automorphisms on certain classes of nilmanifolds We give a necessary and sufficient condition for $k$-step nilmanifolds associated with graphs $(k \geq 3)$ to admit Anosov automorphisms. We also prove nonexistence of Anosov automorphisms on certain classes of 2-step and 3-step nilmanifolds. Introduction A well-known class of Anosov diffeomorphisms arises as follows. Let N be a simply connected nilpotent Lie group and let Γ be a lattice in N; namely Γ is a discrete subgroup such that Γ\N is compact. If τ is a hyperbolic automorphism (see §2 for the definition) of N such that τ (Γ) = Γ then we get a diffeomorphism τ of Γ\N, defined by τ (Γx) = Γτ (x) for all x ∈ N, which is an Anosov diffeomorphism of the compact nilmanifold Γ\N. Anosov diffeomorphisms arising in this way are called Anosov automorphisms of nilmanifolds. Let K be a finite group of automorphisms of N and let Γ be a torsion free discrete cocompact subgroup of K ⋉N. The Γ-action on N is given by (τ, x).y = xτ (y) where τ ∈ K and x, y ∈ N. Now consider the quotient space Γ\N under the action of Γ on N. We call such a compact manifold Γ\N an infranilmanifold. If f is a hyperbolic automorphism of N such that f normalises the subgroup K in the group of automorphisms of N and f (Γ) = Γ then f induces a diffeomorphism f of the infranilmanifold Γ\N; we call such a f an Anosov automorphism of an infranilmanifold Γ\N. The only known examples of Anosov diffeomorphisms are on nilmanifolds and infranilmanifolds. It is conjectured that any Anosov diffeomorphism is topologically conjugate to an Anosov automorphism of an infranilmanifold. By a result of A. Manning [8] all Anosov diffeomorphisms on nilmanifolds are topologically conjugate to Anosov automorphisms. This highlights the question of classifying all compact nilmanifolds which admit Anosov automorphisms. Indeed it is easy to see that not all of them do. The first example (due to Borel) of a non-toral nilmanifold admitting an Anosov automorphism was described by S. Smale [10]. Later L. Auslander and J. Scheuneman [1] gave a class of nilmanifolds admitting Anosov automorphisms. In this paper we associate a k-step nilmanifold (k ≥ 3) with each graph, and give a necessary and sufficient condition, in terms of the graph, for the nilmanifold to admit Anosov automorphisms. We also prove some results on nonexistence of Anosov automorphisms on certain 2-step and 3-step nilmanifolds. Preliminaries In this section we recall some definitions and preliminaries concerning nilpotent Lie groups and nilmanifolds. We also recall results concerning automorphisms of a 2-step nilmanifold associated with a graph (see [3] for details). Let N be a simply connected nilpotent Lie group and N be the Lie algebra of N, which is a nilpotent Lie algebra. Let Aut(N) denote the group of Lie automorphisms of N. Let Aut(N ) denote the group of Lie algebra automorphisms of N . Aut(N) is isomorphic to the group Aut(N ), the isomorphism is being given by τ → dτ , where dτ is the differential of τ . Let Γ be a discrete subgroup of N such that Γ\N admits a finite N-invariant Borel measure. We call such a subgroup a lattice in N. As N is a nilpotent Lie group, a discrete subgroup Γ is a lattice in N if and only if Γ\N is compact (see Theorem 2.1 in [9]). A nilmanifold is a quotient Γ\N , where N is a simply connected nilpotent Lie group and Γ is lattice in N. An automorphism σ ∈ Aut(N ) is said to be hyperbolic if all of its eigenvalues are of modulus different from 1. 
An automorphism τ ∈ Aut(N) is said to be hyperbolic if all eigenvalues of the differential dτ are of modulus different from 1. Now we recall the construction of the 2-step nilmanifold associated with a given graph and we recall some results about its automorphism group (see [3] for details). Let (S, E) be a finite simple graph, where S is the set of vertices and E is the set of edges. Let V be a real vector space with S as a basis. Let W be the subspace of ∧ 2 V spanned by {α ∧ β : α, β ∈ S, αβ ∈ E}, where ∧ 2 V is the second exterior power of V . Let N = V ⊕ W . We define the Lie bracket operation [ , ] on N as follows. [ , ] : N × N → N is defined to be the unique bilinear map satisfying the following conditions: i) for α, β ∈ S, [α, β] = α ∧ β if αβ ∈ E and 0 otherwise; ii) [α, β ∧ γ] = 0 for all α, β, γ ∈ S; iii) [α ∧ β, γ ∧ δ] = 0 for all α, β, γ, δ ∈ S. We call N (defined as above) the 2-step nilpotent Lie algebra associated to the graph (S, E). Let N be the simply connected Lie group with Lie algebra N . Let Γ be the subgroup of N generated by exp(S), where exp denotes the exponential map. It can be seen that Γ is a lattice in N. A nilmanifold Γ\N is called the 2-step nilmanifold associated with the graph (S, E). For any σ ∈ S we define Let ∼ be an equivalence relation on S defined as follows: for α, β ∈ S, α ∼ β if either α = β or, Ω ′ (α) ⊂ Ω(β) and Ω ′ (β) ⊂ Ω(α) (see [3] for details). Let {S λ } λ∈Λ denote the set of all equivalence classes in S with respect to the equivalence relation ∼, where Λ is an index set. S λ , λ ∈ Λ, are called the coherent components of S. For each λ ∈ Λ, let V λ denote the subspace of V spanned by S λ . We recall some results (see [3]): Theorem 2.1 Let (S, E) be a finite graph and let N = V ⊕W be the 2-step nilpotent Lie algebra associated with (S, E) (notation as above). Let G denote the subgroup of GL(V ) consisting of all restrictions, τ |V , such that τ ∈ Aut(N ) and τ (V ) = V . Then G is a Lie subgroup of GL(V ) and the following conditions are satisfied: i) The connected component of the identity in G, which we denote be G 0 , can be expressed as ( λ∈Λ GL + (V λ )) · M, where for each λ ∈ Λ, GL + (V λ ) denotes the subgroup of GL(V λ ) consisting of all the elements with positive determinant and M is a closed connected nilpotent normal subgroup of G. ii) The elements of Λ can be arranged as λ 1 , . . . , λ k so that for all j = 1, . . . , k, Lemma 2.2 Let λ 1 , . . . , λ k be an enumeration of Λ such that assertion (ii) of Theorem 2.1 holds. For each j = 1, . . . , k let N j = (⊕ i≤j V λ i )⊕W ; also let N 0 = W . Let τ be a Lie automorphism of N contained in the connected component of the identity in Aut(N ). Then each N j is invariant under the action of τ . Let Φ be the (additive) subgroup of N generated by S ∪{ 1 2 (α ∧β : α, β ∈ S, αβ ∈ E}. If τ (Φ) = Φ then for all j = 1, . . . , k the determinant of the action of τ on N j is ±1. 3 k-step nilmanifold associated with the graph In this section we associate a k-step (k ≥ 3) nilmanifold (i.e. covered by a k-step simply connected nilpotent Lie group) with every graph and we give a necessary and sufficient condition for such nilmanifolds to admit an Anosov automorphism. Starting with a graph (S, E) we define a k-step (k ≥ 3) nilpotent Lie algebra as follows. Let (S, E) be a finite graph, where S is the set of vertices and E is the set of edges. Suppose N denotes the 2-step nilpotent Lie algebra associated with (S, E) (see §2) i.e. 
N = V ⊕ W where V is a vector space with S as a basis and W is the subspace of ∧ 2 V spanned by {α ∧ β : α, β ∈ S, αβ ∈ E}. Let N k (V ) be a free k-step nilpotent Lie algebra on V (see [1] for the definition). We denote by H k the k-step nilpotent Lie algebra N k (V )/J , where J denotes an ideal of N k (V ) generated by all elements [α, β] such that αβ is not an edge. Let N K be the simply connected nilpotent Lie group with Lie algebra H k . Suppose Φ k is the (additive) subgroup of H k generated by the elements of the type [α, [β, . . . ]], where α, β, . . . ∈ S. Then there exists a Z-subalgebra Φ 0 [1]). We note that Γ k is a lattice in N k . We call a nilmanifold Γ k \N k a k-step nilmanifold associated with the graph (S, E). We give a necessary and sufficient condition for the nilmanifold Γ k \N k to admit an Anosov automorphism. Let (S, E) be a graph and Γ k \N k be a k-step nilmanifold (k ≥ 2) associated with (S, E). We refer §2 and §3 for the notation. Remark 4.2 We note that any automorphism of N can be extended to an automorphism of H k . The automorphism group Aut(H k ) is the semidirect product of Aut(N ) and a connected group. This can be seen by observing that Aut(H k ) is a semidirect product of Aut(H k /H k−1 k ) and Hom(V, H k−1 k ), and Aut(H k /H 2 k ) is the same as Aut(N ). i) For every λ, |S λ | ≥ 2; and ii) If |S λ | = l, with 2 ≤ l ≤ k, and α, β ∈ S λ then αβ is not an edge. Proof Suppose that for each λ ∈ Λ (i) and (ii) hold. We will prove that there exists a hyperbolic automorphism τ ∈ Aut(H k ) such that τ (Φ k ) = Φ k . We have V = ⊕ λ∈Λ V λ (see §2 for notation). For each λ ∈ Λ let Φ λ be the subgroup of V λ generated by S λ . There exists g λ ∈ GL(V λ ) such that g λ (Φ λ ) = Φ λ if and only if the matrix representing g λ with respect to the basis S λ belongs to The existence of such elements can be proved by using a result of S. G. Dani (see Corollary 4.7 in [4]). Let g λ denote the transformation from GL(V λ ) whose matrix with respect to the basis S λ is A λ . By the above observation g λ (Φ λ ) = Φ λ . We choose natural numbers j λ , λ ∈ Λ, such that | λ∈Ω (c λi 1 c λi 2 · · · c λin λ ) j λ | = 1 for all subsets Ω of Λ such that |Ω| ≥ 2 and 2 ≤ λ∈Ω n λ ≤ k, where c λi j 's are eigenvalues of g λ . Let g ∈ GL(V ) be the element whose restriction to V λ is g j λ λ , for each λ ∈ Λ. There exists τ ∈ Aut(N ) such that g is the restriction of τ to V (see Theorem 2.1). We know that τ constructed as above is a hyperbolic automorphism of N . This can be seen from the proof of the Theorem 1.1 in [3] and the hypothesis of the theorem. Let τ be an automorphism of H k obtained by extending τ . We note that τ (Φ k ) = Φ k by construction. We will prove that τ is hyperbolic as a linear transformation. Suppose if possible τ has an eigenvalue, say c, of absolute value 1. Then c must be an eigenvalue of the restriction of τ to V n , 3 ≤ n ≤ k (see Notation 4.1), since τ is hyperbolic on N . Now using the fact that τ (V λ ) = V λ for all λ ∈ Λ and recalling the construction of g, we see that there exists λ ∈ Λ such that |S λ | = n and V n λ is nonzero (see Notation 4.1). But by the condition in the hypothesis αβ is not an edge, for all α, β ∈ S λ . Hence [α, β] = 0, for all α, β ∈ S λ . This contradiction shows that τ is hyperbolic. Hence Γ k \N k admits an Anosov automorphism. Conversely suppose that Γ k \N k admits an Anosov automorphism. Hence there exists τ ∈ Aut(H k ) such that τ (Φ k ) = Φ k and τ is a hyperbolic linear transformation. 
Let τ ∈ Aut(N ) denote an automorphism of N induced by τ . We can assume that As τ is a hyperbolic linear transformation, |S λ | ≥ 2 for every λ, and if |S λ | = 2 then αβ is not an edge for αβ ∈ S λ (see Theorem 1.1 in [3]). We may assume that τ is contained in the connected component of the identity in Aut(N ) (see Remark 4.2). Let G denote the subgroup of GL(V ) consisting of all restrictions, τ |V , such that τ ∈ Aut(N ) and τ (V ) = V . We write the elements of Λ as λ 1 , λ 2 , . . . , λ m such that for all j = 1, . . . , m, i≤j V λ i is invariant under the action of G 0 , where G 0 is the connected component of identity in G (see We note that each N j is invariant under the action of τ (see Lemma 2.2). As the determinant of the induced action of τ on N j /N j−1 is ±1, the product of the eigenvalues θ 1 , θ 2 , . . . , θ l of the induced action is ±1. Since the action is hyperbolic, at least two eigenvalues, say We note that v 1 and v 2 are linearly independent since θ 1 and θ 2 are distinct. We write ]. By considering the complexification of τ and τ we have Hence we have an eigenvalue ±1 for the induced action of τ on (N l j ) C /W ′ which is a contradiction, since by assumption τ is hyperbolic. This shows that αβ is not an edge for all α, β ∈ S λ , where |S λ | = l, 1 ≤ l ≤ k. This completes the proof of the theorem. ii) Let (S, E) be a cycle on 4 vertices. The corresponding k-step nilmanifold admits an Anosov automorphism for all k ≥ 2. In particular, we get an example of 20-dimensional 3-step nilmanifold admitting an Anosov automorphism. iii) A complete bipartite graph (S, E) is a graph where S is a disjoint union of two subsets S 1 and S 2 , each containing at least two elements, and E = {αβ : α ∈ S 1 , β ∈ S 2 }. In this case S 1 and S 2 are the coherent components. Hence the k-step nilmanifold associated with a complete bipartite graph admits an Anosov automorphism for all k ≥ 2. In particular, if we choose S 1 and S 2 such that |S 1 | = m and |S 2 | = n we get an example of l-dimensional 3-step nilmanifold admitting an Anosov automorphism, where l = m(n−1) 2 − (n−2)(n−1)m 2 +n(m−1) 2 − (m−2)(m−1)n 2 + 2mn. iv) Let (S, E) be a "magnet" graph with core C i.e. C is a subset of S such that its complement in S contains at least two elements and E = {αβ : α ∈ C, β ∈ S, α = β}. The k-step nilmanifold associated with (S, E) admits an Anosov automorphism if and only if k < |C|. Nonexistence of Anosov automorphisms on certain 2-step nilmanifolds In this section we prove some results on nonexistence of Anosov automorphisms on certain nilmanifolds. Let N Q be the 2-step nilpotent Lie algebra over Q, associated to the graph (S, E). Let X = [α, β] + [γ, δ], where α, β, γ, δ are distinct vertices in S such that αβ, γδ, αγ, αδ ∈ E. Let H Q denote the quotient N Q / X where X is the one-dimensional subspace spanned by X. Let H = N / X . It was proved in [5] that if the graph (S, E) is a complete graph (i.e. αβ ∈ E for all α, β ∈ S), then H Q does not admit a hyperbolic automorphism whose characteristic polynomial has integer coefficients and unit constant term (see Theorem 3.2 of [5]). We prove a similar result for an arbitrary graph. Theorem 5.1 The 2-step nilpotent Lie algebra H Q , defined as above, does not admit a hyperbolic automorphism whose characteristic polynomial has integer coefficients and unit constant term. Notation 5.2 We recall that N = V ⊕ W (see §3). We decompose H as H = V ⊕ W ′ , where W ′ = W/ X . 
Let G be the subgroup of GL(V ) consisting of all restrictions, τ |V , such that τ ∈ Aut(H) and τ (V ) = V . Let G be the subgroup of GL(V ) consisting of all restrictions, τ |V , such that τ ∈ Aut(N ) and τ (V ) = V . It can be seen that subgroups G and G of GL(V ) are Lie subgroups. Let G (resp. G) be the Lie algebra of G (resp. G). Let G 0 (resp. G 0 ) be the connected component of the identity in G (resp. in G). Let D (resp. D) be the Lie subalgebra of G (resp. of G) consisting of all endomorphisms in G (resp. in G) that are represented by diagonal matrices with respect to the basis S. Note that D consists of all the endomorphisms in D which are contained in G. For η, ζ ∈ S, let E ηζ be the element of End(V ) such that E ηζ (ζ) = η and E ηζ (ξ) = 0 for all ξ ∈ S, ξ = ζ. Proposition 5.4 The Lie algebra G, defined as above, is spanned by D, W γβ αδ ∩ G, W δα βγ ∩ G, W δβ αγ ∩ G, W βδ γα ∩ G, and the elements of G of the following type: Proof Let Y ∈ G. Then it can be expressed as Y = Y 0 + η,ζ∈S,η =ζ a ηζ E ηζ , where Y 0 ∈ D (see Notation 5.2) and a ηζ ∈ R. By using the fact that E ζζ ∈ G for all ζ / ∈ S ′ , we observe that a ηζ E ηζ is contained in G for all η, ζ / ∈ S ′ (see the proof of Proposition 3.1 in [3]). We note that ∈ S ′ , a ζα E ζα + a ζγ E ζγ and a αζ E αζ + a γζ E γζ are contained in G. Now as E αα + E δδ ∈ G, we have a ζα E ζα and a αζ E αζ are in G. Similarly we can see that a ηζ E ηζ ∈ G, for all η ∈ S ′ and ζ / ∈ S ′ ; and a ηζ E ηζ ∈ G, for all η / ∈ S ′ and ζ ∈ S ′ . We also have [E ββ + E γγ , [E ββ + E δδ , [E αα + E γγ , Y ]]] ∈ G. This shows that Z = a γδ E γδ + a βα E βα − a δγ E δγ − a αβ E αβ ∈ G. Also [E ββ + E γγ , Z] ∈ G. Therefore we get a γδ E γδ +a βα E βα +a δγ E δγ +a αβ E αβ ∈ G. Hence a γδ E γδ +a βα E βα and a δγ E δγ +a αβ E αβ are contained in G. Since [E αα +E γγ , a γδ E γδ +a βα E βα ] ∈ G, we have a γδ E γδ − a βα E βα ∈ G and hence a γδ E γδ ∈ G. Hence we have proved that if a γδ = 0 then E γδ ∈ G. Similarly it can be proved that E βα ∈ G if a βα = 0, E αβ ∈ G if a αβ = 0, As a αβ E αβ and a γδ E γδ are contained in G, we have a αδ E αδ + a γβ E γβ ∈ G. Similarly we can prove that a βγ E βγ + a δα E δα ∈ G. As Y ∈ G, by above observations we have Z ′′ = Y 0 +a αγ E αγ +a γα E γα +a βδ E βδ +a δβ E δβ ∈ G. Considering the element [E αα + E δδ , Z ′′ ] we prove that a αγ E αγ + a δβ E δβ ∈ G and a γα E γα + a βδ E βδ are in G. Hence we have now Y 0 ∈ G. As Y 0 is in D, Y 0 ∈ D. Hence we have proved our claim that G is spanned by D, W γβ αδ ∩ G, W δα βγ ∩ G, W δβ αγ ∩ G, W βδ γα ∩ G, and the elements of G of the type (i)-(iv) as stated. Proposition 5.5 Any automorphism T in G 0 is induced by an automorphism T in G 0 , with T X = X . Proof We will prove the following: If the element from the type (i)-(iv) in the statement of Proposition 5.4, considered as an element of End(V ), is in the Lie algebra G; then that element is in G (see Notation 5.2). We will also prove that W γβ αδ ∩ G, W δα βγ ∩ G, W δβ αγ ∩ G, and W βδ γα ∩ G are contained in G. Let I denote the identity transformation in GL(V ). By similar arguments we can prove our claim for the elements of the type (ii), (iii) and (iv). Similarly we see that our claim holds for the elements of W δα βγ ∩ G, W δβ αγ ∩ G, and W βδ γα ∩ G. By using the above argument, Proposition 5.4 and Theorem 2.10.1 in [11], we see that there exists an open neighbourhood U of I in G 0 such that any automorphism contained in U can be lifted to an automorphism of N which keeps an ideal X invariant. 
Hence any automorphism T in G 0 can be lifted to an automorphism T in G 0 such that T ( X ) = X (use Proposition 3.18 in [12]). Proof of Theorem 5.1: Suppose θ ∈ Aut(H Q ) is a hyperbolic automorphism such that its characteristic polynomial has integer coefficients and unit constant term. Since Aut(H) has finitely many connected components, by replacing θ by its suitable power we may assume that θ is contained in the connected component of the identity in Aut(H). By Proposition 5.5, we see that there exists an automorphism θ contained in the connected component of the identity of Aut(N ) such that its characteristic polynomial has integer coefficients and unit constant, θ(X) = X, and θ has an eigenvalue 1 of multiplicity 1. We can assume that the matrix of θ with respect to the basis S ∪ E is an integer matrix. We have θ(N j ) = N j for each j = 1, . . . , k where N j = (⊕ i≤j V λ i ) ⊕ W and λ 1 , . . . , λ k is an enumeration of Λ such that for all j = 1, . . . , m, i≤j V λ i is invariant under the action of G 0 (see §2). Let π j : N j → V λ j denote the canonical projection for each j = 1, . . . , k. Let θ λ j : V λ j → V λ j be given by θ λ j = π j • θ. All the eigenvalues of θ on W are pairwise products of the eigenvalues on V λ 's. Also a λ a µ occurs as an eigenvalue of θ|W if and only if there exists ζ ∈ S λ and η ∈ S µ (notation are as before) such that ζη is an edge, where a λ is an eigenvalue of θ λ and a µ is an eigenvalue of θ µ As 1 is an eigenvalue of θ, there exist λ and µ in Λ such that ζη is an edge for ζ ∈ S λ and η ∈ S µ , and a λ a µ = 1, a λ and a µ being eigenvalues of θ λ and θ µ respectively. We will prove that λ = µ. Suppose that λ = µ. Let a ′ λ be a conjugate of a λ over Q, a ′ λ = a λ . Then a µ = a −1 λ and a −1 λ ′ are conjugates over Q. The minimal polynomial of a µ over Q divides the characteristic polynomial of θ µ . Hence a −1 λ ′ occurs as an eigenvalue of θ µ . As by our assumption ζη is an edge for all ζ ∈ S λ and η ∈ S µ , a λ ′ a −1 λ ′ occurs as an eigenvalue of θ, where a λ = a ′ λ , and hence we arrive at a contradiction as the multiplicity of the eigenvalue 1 is 1. Therefore λ = µ. Hence there exists λ ∈ Λ such that the restriction of a graph (S, E) on S λ is complete and a λ a ′ λ = 1 for the some eigenvalues a λ and a ′ λ of θ λ and X ∈ [V λ , V λ ], which is not possible (by Theorem 3.2 of [4]). This completes the proof of the theorem. Remark 5.6 Let H be the simply connected nilpotent Lie group corresponding to the Lie algebra H. Let Γ be a lattice in H corresponding to H Q (see [4]). Then Theorem 5.1 shows that the nilmanifold Γ\H does not admit an Anosov automorphism. Thus we have proved T (I) = I and hence T factors through an automorphism of Q. Let θ denote the automorphism of Q induced by T . We claim that θ(X) ∈ X . We note that θ(Y ) = θ(Y ) in M, for all Y ∈ V 3 , where bar is taken to denote the elements in M represented by elements in Q. Now θ(X) = 0 in M, as θ is an automorphism of M. This implies that θ(X) ∈ X in Q. . Let E be the set of all unordered pairs λµ with λ, µ ∈ Λ, such that αβ ∈ E for α ∈ S λ and β ∈ S µ . We recall that V λ denotes the subspace of V spanned by S λ , λ ∈ Λ. Let λ 1 , . . . , λ k be an enumeration of Λ such that assertion (ii) of Theorem 2.1 holds. Let N j = (⊕ i≤j V λ i ) ⊕ W for j = 1, . . . , k. Theorem 6.3 The Lie algebra M does not admit a hyperbolic automorphism whose characteristic polynomial has integer coefficients and unit constant term. 
Proof Let θ be a hyperbolic automorphism of M such that the characteristic polynomial of θ has integer coefficients and unit constant term. Let θ be an automorphism of Q as obtained in the previous proposition. Let τ be an automorphism of N induced by θ. As θ is a hyperbolic automorphism, and θ(X) = X, 1 is an eigenvalue of θ of multiplicity 1. Since the characteristic polynomial of θ has integer coefficients and unit constant term, we may assume that the matrix of θ has all integer entries (by replacing θ by some power of θ if necessary.) As Aut(N ) has finitely many components, we may assume that θ lies in the connected component of the identity of Aut(Q) and τ lies in the connected component of the identity of Aut(N ). Hence θ(N j ) = N j (see Lemma 2.2). Let π λ j : N j → V λ j denote the canonical projection. Let θ λ j be an endomorphism of V λ j defined by θ λ j (v) = π λ j (θ(v)), for all v ∈ V λ j . We note that θ λ j is an automorphism of V λ j . Case(ii): Suppose δ λ δ ′ λ δ ′′ λ = 1 for some λ ∈ Λ such that the restriction of (S, E) on S λ is complete and δ λ , δ ′ λ , δ ′′ λ are the eigenvalues of θ λ . If δ λ = δ ′ λ = δ ′′ λ then δ 3 λ = 1, which is a contradiction since θ is hyperbolic. Hence we may assume that δ ′ λ = δ ′′ λ . Let V C λ denote the complexification of V λ . Suppose that Y, Y ′ , Y ′′ ∈ V C λ are eigenvectors corresponding to the eigenvalues δ λ , δ ′ λ , δ ′′ λ . We consider the complexification of Q and θ also. As δ ′ λ = δ ′′ λ , Y ′ and Y ′′ are linearly independent. Remark 6.4 Theorem 6.3 shows that the nilmanifold Γ\M, where M is the simply connected nilpotent Lie group corresponding to the Lie algebra M ⊗ R and Γ is a lattice in M corresponding to M, does not admit an Anosov automorphism. In particular, a nilmanifold Γ\M, where Γ corresponds to the rational Lie algebra given by a quotient of free 3-step nilpotent Lie algebra by a one-dimensional ideal, does not admit an Anosov automorphism.
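The constructions above repeatedly use integer matrices that preserve a lattice (determinant ±1) and are hyperbolic (no eigenvalue of modulus 1), together with the requirement that products of eigenvalues coming from different blocks also avoid modulus 1. The numerical sketch below, using a standard example matrix rather than one from the paper, checks these conditions in a small case.

```python
import numpy as np

# A standard hyperbolic integer matrix (not taken from the paper): det = 1 and
# eigenvalues (3 +/- sqrt(5))/2, both of modulus different from 1.
A = np.array([[2, 1],
              [1, 1]])

eigvals = np.linalg.eigvals(A)
print("det:", round(float(np.linalg.det(A))))   # 1, so A preserves the lattice Z^2
print("eigenvalue moduli:", np.abs(eigvals))    # ~2.618 and ~0.382
print("hyperbolic:", bool(np.all(np.abs(np.abs(eigvals) - 1.0) > 1e-9)))

# If two different blocks both used A, a product of one eigenvalue from each block
# could have modulus 1 (2.618 * 0.382 = 1), giving a non-hyperbolic eigenvalue on
# the bracket; this is why distinct powers j_lambda are chosen in the proof above.
moduli = np.abs(eigvals)
print("cross-block product of moduli:", moduli.max() * moduli.min())
```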
2014-10-01T00:00:00.000Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "93f88042ac60cf0ef118d40f11d70f1b7ad64701", "oa_license": null, "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/EA9BA56D82B1F5B4AF0EEB04D1C1B59C/S0017089505002958a.pdf/div-class-title-anosov-automorphisms-on-certain-classes-of-nilmanifolds-div.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "e00415d8f32ad98f5bfdc6a92eb5fb95dfd6f310", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
200107531
pes2o/s2orc
v3-fos-license
Developing autoplay media based mathematics teaching materials for elementary school This study aims to develop and create autoplay learning media based teaching materials on mathematics learning for elementary schools which meet the criteria of valid, practical, and effective. This study adopts Research and Development (R&D) through the development model of ADDIE (Analyze, Design, Development, Implementation, and Evaluation). Data collection techniques applied in this study are questionnaires and tests. Besides, the results of the research were analyzed through questionnaire analysis, t test, and n-gain test. The results of this study indicate that the development of instructional materials based on autoplay learning media meets the criteria of valid, practical and effective, where the average validation results by 3 validators are 90.67; students’ response on the use of instructional media-based teaching materials is 85% which is responded positively; learning outcomes of the experimental class was better than the control class; and there was a high increase in the experimental class between pretest and posttest of 0.85. Introduction Mathematics is one of the lessons that must be taught at elementary school level. This is because mathematics can shape students to think logically and systematically [1]. It is expected that by studying mathematics, students are able to connect and understand one of mathematical concept with other mathematical concepts to solve problems in everyday life [2]. Based on the results of observations conducted by researchers at Surya Buana Islamic Elementary School of Malang, it was found that the teaching materials used by the teacher in the learning process were still very simple, namely using only thematic books prepared by the government. Teachers have not yet developed interesting and innovative teaching materials, so that student learning outcomes in mathematics material are still low. This is evidenced by the achievement of students' scores, which are mostly still under the Minimum Score Criterion set by the school which is 70. From the 24 fifth grade students, only 8 students around 33.3% achieved above the minimum score criterion. The remaining students or around 66. 6% have not yet reached the minimum score criterion. In his research Wandani & Nasution (2017) [3], Wijaya & Rakhmawati (2015) [4], Nu'aimah & Kholis (2016) [5] also revealed that most teachers have not [9] in their research also revealed that learning that does not use interactive multimedia has an impact on students' poor understanding of the subject matter. For that reason, one alternative that can be used to improve students' understanding of concepts in mathematics learning is to develop teaching materials based on autoplay learning media. Autoplay learning media is one of the interactive multimedia-based learning. Autoplay Media is a software for creating multimedia by integrating various types of media such as images, sound, video, text and flash into the presentation that is made [10]. Interactive learning media is a two-way or communicative learning media. Interactive multimedia according to Sanaki (2009) [11] is a multimedia that is equipped with a controller that can be operated by the user so that users can choose what they want for the next process. According to research conducted by Hayumuti, Susilo & Manahal (2016) [12], they revealed that interactive multimedia learning is able to provide a concrete picture of the concept of subject matter that must be understood by students. 
The results of research conducted by Rajendra & Sudana (2017) [13] revealed that learning using interactive multimedia can help students to improve concept understanding of the subject matter. Research conducted by Utami & Julianto (2013) [14], Nisa, Wati & Mahardika (2017) [15], Hanafi (2017) [16] also revealed that students' conceptual understanding of the subject matter increased after the teacher used interactive multimedia. The same conclusion was stated by Maharani (2015) [17], Novyarti, Marzal & Rohati (2014) [18] she concluded that the existence of multimedia learning is very influential on understanding of students' concepts of mathematics subject matter. Methodology This study adopts Research and Development (R & D) methods. The models used in developing Autoplay media is ADDIE (Analysis, Design, Development, Implementation and Evaluation) models. This model was chosen because the ADDIE model has excellence, namely the work procedure systematic, that is, each step always refers to the previous step that has been fixed, so that an effective product can be obtained [17]. In addition, the ADDIE model is a general learning model that is suitable for development research. The product quality developed in this study is determined by 3 criteria, namely validity, practicality, and effectiveness [19], [20]. ADDIE model has 5 stages of development, namely Analysis, Design, Development, Implementation, and Evaluation. The flow of the ADDIE model can be described as shown in Figure 1: Figure 1, it is clear that this development model begins with the analysis phase. The analysis phase is a process of defining what students will learn, namely analyzing needs, identifying problems, and carrying out task analysis [21]. The next stage is design. Design is arranged by studying the problem, then finding solutions through identification of the needs analysis stage in the previous process. The third stage is development. At this stage researchers try to compile and design instructional materials 3 based on information that has been obtained from various previous stages. After the production of the Autoplay Mathematics learning media product at the 5th grade of theme 8, a validation test was conducted to the expert judgment. Validation is done to material experts, media experts, and learning experts. The fourth stage is implementation. At this stage, teaching materials are implemented in the learning process. The aim is to collect data that can be used as a basis for determining the level of practicality of the products produced. Submission of learning materials was carried out in small group trials and field trials for fifth grade students of Surya Buana Islamic Elementary School of Malang. The last stage is the evaluation phase. At this stage the researcher evaluates the products that have been developed. This is done to determine the effectiveness of the products developed, namely autoplay media. For testing this learning media is done by comparing the conditions before and after using the new system (before-after) [17]. While the target of the user trial subjects were fifth grade students of Class A as the control group and fifth grade students of class B as the experimental group at Surya Buana Islamic Elementary School of Malang. The data collection techniques in this study were observation, questionnaires and tests. Observation techniques are used to find out the description of student activity with the application of instructional materials based on Autoplay learning media. 
Meanwhile, the questionnaire technique is used to collect data about the accuracy of the components of teaching materials, the accuracy of the material, the accuracy of the system, the accuracy of the design or design, the attractiveness of teaching materials based on Autoplay learning media. Furthermore, the questionnaire was analyzed to determine the feasibility of teaching materials based on Autoplay learning media as well as proposed as a guide in product revisions to produce better products. While the test technique is used to see the effect of learning outcomes on instructional materials based on Autoplay learning media, which are pretest and posttest. Furthermore, the results of the research data were analyzed using questionnaire percentage test, t test test and n gain test. Learning Media Validation Results Based on the results of validation by 3 validators consisting of validators of mathematics material experts, validators of learning media experts, and practitioner validators of elementary school, it obtained the data as in table 1 below: Changing the look of the Menu on the evaluation page to be more interesting Adding a timer and changging the look of evaluation page to be more simple The Results of the Learning Media Practicality Test The practicality of the developed learning media was assessed based on students' responses to the learning process using autoplay media. The data of students' responses to the use of autoplay-based learning media are as it follows: 1) Test small groups This small group test was conducted on 6 students consisting of two children representing good-ability students, two middle / medium-capable children, and two low-ability children. From the results of the small group test, it was obtained data that the total score of the students' responses to the use of autoplay media based teaching materials was 258 and a maximum score of 300. Then the percentage of student responses can be calculated as it follows: Percentage = ∑ ∑ 100 % Percentage = 258 300 100 %= 86% The calculation results above show that the percentage of students' responses to the use of autoplay media based teaching materials is 86%. The score is included in a good category, namely practical autoplay media based teaching materials used in the learning process. 2) Field trials This field trial was carried out to 24 students of Surya Buana Islamic Elementary School of Malang. From the results of the small group test, it was obtained data that the total score of the students' responses to the use of autoplay media based teaching materials was 1030 and a maximum score of 1200. Then the percentage of student responses can be calculated as it follows: Percentage = ∑ ∑ 100 % Percentage = 1030 1200 100 %= 85% The calculation results above show that the percentage of students' responses to the use of autoplay media based teaching materials is 85%. The score is included in a good category, namely practical autoplay media based teaching materials used in the learning process. The Effectiveness of Learning Media Test Results Effectiveness is a criterion that shows that the media developed successfully achieve the desired goals. The effectiveness of the learning media developed is seen from the average difference between the control class learning outcomes and the experimental class and the improvement of experimental class learning outcomes before and after using instructional materials based on autoplay learning media. 
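The response-percentage figures above follow a simple ratio of the obtained total score to the maximum possible score, multiplied by 100. As a minimal sketch (the score totals are the ones reported for the small-group test and the field trial; the function name is ours):

```python
def response_percentage(total_score: float, max_score: float) -> float:
    """Students' response percentage: obtained total score over maximum score, times 100."""
    return total_score / max_score * 100.0

# Small-group test and field trial totals reported above.
print(response_percentage(258, 300))    # 86.0
print(response_percentage(1030, 1200))  # ~85.8, reported as 85% in the text
```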
The effectiveness of this study refers to the results of the calculation of the t test and n gain test. The results of the calculation of the experimental class and control class learning outcomes are as follows in table 3: Table 3 shows that the average score of the experimental class is 87 and the average score of the control class is 84. This means that the average score of the experimental class is better than the control class. To see whether the average difference is significant or not, a t-test is necessarily needed. The results of the calculation of the t test are as it follows: In table 4, it is known that the significance value (Sig.2-tailed) is 0.232. This score states that Ho is rejected and H1 is accepted. So it can be concluded that the learning outcomes (post-test) that use instructional materials based on Autoplay learning media have a significant difference. In this case the instructional material based on Autoplay learning has a better effect than conventional teaching materials on learning outcomes of Mathematics material on the theme of Ecosystems. This is evidenced by an increase in the results of learning Mathematics by using instructional materials based on Autoplay learning of Mathematics material on the Ecosystems theme the average distribution is higher than conventional teaching materials. Furthermore, to find out whether there is an increase in the ability of students to understand the concept of the subject matter before and after using autoplay learning media, we can use the n gain test. The results of the calculation of the n gain test are as it follows: Table 5 shows that the results of the n-gain test on the comparison of the results of the pre-test and post-test obtained an average value of 0.85. This value is high. This means that the learning process using autoplay media-based learning media can improve student learning outcomes. From the description of the above research shows that the developed autoplay learning media has fulfilled valid criteria, because the average expert validation results of 90.67 (valid) are included in the good criteria. In addition, autoplay learning media also fulfills practical criteria, because students' responses to the use of autoplay media-based teaching materials are 85%. The score is included in a good category, namely practical autoplay media-based teaching materials used in the learning process. Autoplay learning media also meet the effective criteria, because the average value of student learning outcomes is 87. This score exceeds the minimum score criterion set by the school which is 70. this was also strengthened by the results of the n-gain test which showed that there was an increase in the ability of students to understand the concept of mathematics material by using autoplay learning media of 0.85. This means that the autoplay learning media effectively enhances students' conceptual understanding of mathematics materials. This is because the use of multimedia in the learning process has several advantages including 1) providing interactive processes; 2) providing easy feedback; and 3) giving learners to determine the topic of learning and multimedia processes to provide systematic control in the learning process [22]. In addition, according to research conducted by Smith [27] revealed that the use of multimedia in the learning process was able to increase learning motivation, learning activities, and students' learning outcomes. 
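The normalized gain (n-gain) referred to above is conventionally computed with Hake's formula, g = (posttest − pretest) / (maximum score − pretest), averaged over students, with values above 0.7 classed as "high". The sketch below assumes that formula, a maximum score of 100, and made-up pretest/posttest arrays purely for illustration:

```python
import numpy as np

def normalized_gain(pre: np.ndarray, post: np.ndarray, max_score: float = 100.0) -> float:
    """Average normalized gain (Hake): (post - pre) / (max - pre), averaged over students."""
    return float(np.mean((post - pre) / (max_score - pre)))

# Hypothetical pretest/posttest scores, for illustration only.
pre = np.array([40, 55, 60, 50, 45], dtype=float)
post = np.array([90, 92, 95, 93, 91], dtype=float)
print(f"n-gain = {normalized_gain(pre, post):.2f}")  # > 0.7 is conventionally 'high'
```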
Conclusion Based on the results described, it can be concluded that the autoplay learning media developed meets the validity criterion, because the average expert validation score of 90.67 (valid) falls in the good category. The media also meets the practicality criterion, because the students' response to the use of autoplay media-based teaching materials is 85%, a score in the good category, meaning the autoplay media-based teaching materials are practical for use in the learning process. The media likewise meets the effectiveness criterion, because the average score of students' learning outcomes is 87, which exceeds the minimum score criterion of 70 set by the school. This is further strengthened by the results of the n-gain test, which showed an increase of 0.85 in students' ability to understand the mathematics concepts through the use of autoplay learning media. This means that the autoplay learning media effectively enhances students' conceptual understanding of the mathematics material.
2019-08-15T16:46:20.772Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "4213a366714a126f47b8ca7434b32c31f56abb8a", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1175/1/012265", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f305434e52a87ab2f422748af20d13a465292fe5", "s2fieldsofstudy": [ "Mathematics", "Education" ], "extfieldsofstudy": [ "Physics" ] }
201265368
pes2o/s2orc
v3-fos-license
Direct calculation of computer-generated holograms in sparse bases. Computer-generated holography is computationally intensive, making it especially challenging for holographic displays where high-resolutions and video rates are needed. To this end, we propose a technique for directly calculating short-time Fourier transform coefficients without the need for a look-up table. Because point spread functions are sparse in this transform domain, only a small fraction of the coefficients need to be updated, enabling significant speed gains. Twenty-fold accelerations are reported over the reference implementation. This approach generalizes the notion of the phase-added stereogram, allowing for the calculatiion of an arbitrary number of Fourier coefficients per block, enabling high calculation speed with holograms of good visual quality, targeting minimal memory requirements. Introduction Numerical diffraction simulates how coherent light propagates through free space.Because of the properties of waves, these calculations are computationally costly: every point in the 3D scene can potentially affect every other point in space. This problem is especially important for holographic displays.They are considered to be the ultimate 3D display technology [1], because holograms can account for all human visual cues, including stereopsis, eye-focusing, parallax and no exhibit accommodation-vergence conflict.These displays require holograms to be calculated at video rates and at high resolutions, which require highly efficient Computer Generated Holography (CGH) algorithms. CGH algorithms come in many forms and have various trade-offs [1,2], such as: calculation time, what graphical effects are supported (occlusion, shadows, reflectivity, etc.) and what type of 3D scenes and models can be accounted for.They can be classified as point cloud methods [3][4][5][6][7][8], layer-based methods [9][10][11][12], polygon methods [13][14][15][16], RGB+depth methods [17,18], and ray-based methods [19][20][21][22][23].These categories are not sharply defined or mutually exclusive, as hybrid algorithms exist [24].This paper will focus on a subset of the point cloud algorithms, which we call "sparse CGH algorithms".Sparsity is a concept from signal processing literature; it is a measure of how few coefficients are needed to accurately describe a signal in particular basis.For examples, images such as natural photographs typically are sparse in wavelet space, making the wavelet transform highly suitable for compressing images [25], since most coefficients will be (almost) zero. Sparse CGH algorithms thus calculate the hologram in a well-chosen transform domain, so that most of the signal's energy is concentrated in a small number of coefficients, thereby requiring much fewer updates than the total hologram pixel count; this can speed up hologram calculation considerably.The challenge is to choose the right transform, balancing sparsity (i.e.minimizing the number of needed coefficients) with the complexity of updating individual coefficients and the calculation cost of the inverse transform, maximizing performance and quality. 
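To make the sparsity argument concrete, the energy of a Fresnel chirp (the 1D cross-section of a PSF) concentrates in a small set of block-wise Fourier coefficients, whereas in the spatial domain its magnitude is flat and the energy is spread over every sample. The sketch below is our own illustration, not code from the paper; the pitch, wavelength, depth and block size are arbitrary example values:

```python
import numpy as np

pitch, wavelength, z = 4e-6, 633e-9, 0.06      # illustrative hologram parameters
n_samples, block = 2048, 32
x = (np.arange(n_samples) - n_samples / 2) * pitch
psf = np.exp(1j * np.pi / (wavelength * z) * x**2)   # 1D Fresnel chirp (PSF cross-section)

def energy_in_top_fraction(coeffs, fraction):
    """Share of total energy carried by the largest `fraction` of coefficients."""
    e = np.sort(np.abs(coeffs.ravel()) ** 2)[::-1]
    k = max(1, int(fraction * e.size))
    return e[:k].sum() / e.sum()

stft = np.fft.fft(psf.reshape(-1, block), axis=1)    # block-wise DFT (rectangular-window STFT)
for domain, coeffs in [("spatial", psf), ("block-FFT", stft)]:
    print(f"{domain:>9s}: top 12.5% of coefficients carry "
          f"{100 * energy_in_top_fraction(coeffs, 0.125):.1f}% of the energy")
```

The spatial-domain figure stays at exactly the kept fraction (the chirp has constant magnitude), while the block-FFT figure is far higher, which is what a sparse CGH algorithm exploits.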
Presently, there are only a few examples of sparse CGH in literature [1].Although initially not described as such, it can be argued that the first instance of sparse CGH is the well-known Wavefront Recording Plane (WRP) [5], which calculates Point Spread Functions (PSFs) in an intermediate plane, which effectively corresponds to performing CGH in a convolved space.Because the propagation distance of a point to the intermediary plane is relatively short, the spatial PSF support is small, rendering them sparser in the spatial domain.After accumulating all the PSFs, the intermediary plane can be propagated efficiently using a convolution, such as the Fresnel transform or the Angular Spectrum Method (ASM) [26].This notion can be extended further by using multiple WRPs, allocating points to their respective nearest WRP [9,11]. Recently, another type of approach has been proposed that leverages the redundancies within a PSF in the wavefront plane instead of only relying on optimizing WRP depth placement.This was first proposed by Shimobaba et al. [27], by precomputing PSF coefficients in wavelet space using a Look-Up Table (LUT).Because neighboring PSF samples are highly correlated, much like in conventional imagery, fewer coefficients are needed to accurately represent PSFs.Later, sparse CGH was proposed using the Short-Time Fourier Transform (STFT), which was shown to yield even higher sparsity and better preservation of views at large angles [28].Both these approaches can be combined with WRPs to even further increase sparsity [28,29]. With this outlook, the phase-added stereogram [20] using the Fast Fourier Transform (FFT) [30] can be viewed as a special case of the sparse STFT CGH: for every PSF, a single STFT coefficient is updated in every block.Albeit fast, this approximation is relatively coarse, because the number of available discrete frequencies is limited.This will reduce the visual quality and hence limiting the application domain.Several extensions have been proposed [31] to improve quality, but these may significantly increase memory requirements by oversampling the FFT space by using blocks which are much larger than those of the basic method. However, typically sparse CGH algorithm rely on a LUT for their operation, which introduces limitations.To this end, we propose the first sparse CGH algorithm without a LUT.This makes the system much more flexible: (1) PSFs positions are not discretized anymore for a fixed set of 3D locations; (2) no need for precomputing and storing sizeable LUT or using overcomplete representations, enabling a wider range of applications such as FPGAs (Field Programmable Gate Arrays) and embedded systems; (3) it reduces memory-bound performance limitations, making the algorithm more cache-friendly.Hence, the proposed algorithm can be viewed as a generalization of the phase-added stereogram. The main novel contributions are the following: • A derivation of an analytical model for directly calculating STFT coefficients, and proposing an efficiently computable and accurate approximation. • An algorithm to selectively update a small subset of the highest-magnitude STFT coefficients, with a configurable sparsity level trading-off speed and quality. • A validation reporting a 20-fold speedup over the conventional algorithm using a C++ implementation tested on 3D point cloud models, and an error analysis measuring how much the proposed algorithm deviates from a reference PSF for various sparsity levels. 
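For context, the "conventional algorithm" used as the reference throughout is the direct spatial-domain accumulation of one Fresnel PSF per scene point (Eq. (2)). A minimal, deliberately unoptimized sketch of that baseline is given below; the grid size, pitch, wavelength, per-point random phases and the random point cloud are illustrative choices, not the authors' test configuration:

```python
import numpy as np

def reference_hologram(points, res=256, pitch=4e-6, wavelength=633e-9):
    """Brute-force CGH: sum exp(i*(gamma^2*((x-dx)^2 + (y-dy)^2) + phase)) over all points."""
    coords = (np.arange(res) - res / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    H = np.zeros((res, res), dtype=complex)
    for dx, dy, z, phase in points:                 # lateral offsets, depth, initial phase
        gamma2 = np.pi / (wavelength * z)
        H += np.exp(1j * (gamma2 * ((X - dx) ** 2 + (Y - dy) ** 2) + phase))
    return H

rng = np.random.default_rng(0)
pts = [(rng.uniform(-5e-4, 5e-4), rng.uniform(-5e-4, 5e-4),
        rng.uniform(0.02, 0.08), rng.uniform(0, 2 * np.pi)) for _ in range(100)]
H = reference_hologram(pts)
print(H.shape, np.abs(H).max())
```

The cost is proportional to (number of points) × (number of pixels), which is exactly the term the sparse STFT formulation reduces.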
The paper is organized as follows: In section 2, we first derive the analytical expression of the STFT coefficients for PSFs; then an accurate and efficiently computable approximation is proposed, followed by a technique to select which subset of coefficients to update.Then, in section 3, the experiments are divided in two parts: (1) measurment of the accuracy of the proposed algorithm for various sparsity levels, and (2) measuring the gain in calculation speed w.r.t. a reference method, including a comparison in visual quality.Finally, we conclude in section 4 and discuss potential avenues for future work. Methodology The Fresnel approximation of a PSF P, laterally displaced with a translation (δ, ) and at a depth z from the hologram plane (ignoring the scaling factor) is given by where i is the imaginary unit, γ 2 = π λz with wavelength λ.This expression can be used to calculate a hologram H by summing over a collection of M points P j , j ∈ {1, ..., M }, each with their own values (γ j , δ j , j ): Sparse CGH will compute the values of P j indirectly using a linear transform T .If T is well-chosen, the PSF P j will be sparse and can be computed using only a few coefficients.Because of its linearity, we get that meaning we can first accumulate all P j in the transform domain, and only need to apply the inverse transform T −1 once on the summed PSF coefficients.If T can be computed quickly compared to the summation, we can obtain a large speedup.Here, we choose T to be the STFT. The STFT divides the hologram into evenly-sized blocks, followed by a local Discrete Fourier Transform (DFT) on each block.The goal is to get an expression for directly calculating these coefficients for any given P(x, y) without relying on a LUT.To obtain the local Fourier coefficients of a hologram block of dimensions 2B × 2B, we have to evaluate the following integral f : Since both the PSF P as well as the Fourier transform are separable, so is f .We thus need to compute which are identical up to constants.The function f x cannot be written using only elementary functions, but it can be expressed in terms of the error function with t as the variable of integration.We can now rewrite f x (ω) = This expression can be simplified further.Firstly, we are not interested in the leading complex constant, since this constant phase delay and scaling will not affect the appearance of the hologram.Secondly, we can group the constants to obtain: where α = ω 2γ and x) is proportional to the error function evaluated on a diagonal of the complex plane, where c ∈ C can be chosen freely.Directly evaluating the integral from Eq. ( 6) to calculate E(x) would be too computationally costly, so this will be tackled in the next subsection. Efficiently approximating E(x) The aim is to find an efficiently computable, yet accurate approximation of E(x).This is needed if we want the sparse STFT coefficient computation to be faster than the conventional PSF computation in the spatial domain.To achieve this, an approximation was chosen with two use cases: one for small absolute values of x, and one for large values of |x|.Furthermore, c = (i − 1) π 2 was selected to simplify the subsequent expressions.For large values, we model the limit behavior of E using its asymptotic expansion: The series is truncated and simplified to a function E L for x → ±∞: where sgn(x) is the sign function, equal to 1 when x is positive and equal to −1 when x is negative. 
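Whichever approximation branch is used, the underlying 1D block integral f_x(ω) defined above can always be cross-checked by brute-force numerical quadrature. The small sketch below is our own ground-truth check, not part of the proposed algorithm; the wavelength, depth, pitch, block size, frequency bin and lateral offset are arbitrary example values:

```python
import numpy as np

def fx_quadrature(omega, gamma2, delta, B, n=20001):
    """Brute-force evaluation of f_x(omega) = integral over [-B, B] of
    exp(i*gamma2*(x - delta)^2 - i*omega*x) dx, via a fine Riemann sum."""
    x = np.linspace(-B, B, n)
    dx = x[1] - x[0]
    integrand = np.exp(1j * (gamma2 * (x - delta) ** 2 - omega * x))
    return integrand.sum() * dx

wavelength, z, pitch, N = 633e-9, 0.03, 4e-6, 32
gamma2 = np.pi / (wavelength * z)
B = N * pitch / 2                           # half width of one STFT block
omega = 2 * np.pi * 3 / (N * pitch)         # angular frequency of the 3rd block DFT bin
print(fx_quadrature(omega, gamma2, delta=1e-4, B=B))
```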
Given that E(x) is an odd function, namely erf(x) = −erf(−x), the same expansion can be reused for very small values when x → −∞.However, this approximation cannot be used for small values of |x|; as x approaches 0, the singularity in the denominator of the first term of E L (x) will dominate, leading to large deviations from E(x).Instead, we can use the Taylor series around x = 0. We get the following Taylor approximation E S (x) valid for small values of |x|: for the real and imaginary parts, respectively.Note that the terms are separated by powers of x 4 , meaning that x 4 needs only to be computed once and can be reused for evaluating the polynomials more efficiently.What remains to be done is to find out when E L or E S should be selected for any given value of x.This can be determined by looking at their errors w.r.t. to E, and finding the intersection point.On Fig. 1 we plot the Mean Squared Error (MSE) using a logarithmic plot scale.Numerically, the intersection point is found to be x ≈ 1.609.We thus define the approximation of E(x) to be The values of E(x) and Ẽ(x) are shown (for the imaginary part) on Fig. 2. x ≈ 1.609 MSE Fig. 1.MSE of the approximations for small values (E S (x)) and large values (E L (x)) compared to the reference E(x), on a logarithmic scale.The crossing point of both curves (at x ≈ 1.609) is chosen to select the approximation with the smallest error depending on x.Given the symmetry of E(x), the curves look exactly the same (but mirrored) for negative values of x. 2. Comparison of the (imaginary parts of) E(x) and Ẽ(x).The curves show good agreement (see Fig. 1 for the error analysis). Accurate approximations of the PSF Now that we have an expression f (ω, η) for directly computing the PSF in the STFT domain, a new question arises: how do we determine what subset of the coefficients to compute?The location of the highest-magnitude coefficients should be known a priori, since calculating all of them and finding the maximum would defeat the purpose of using a sparse representation. This problem can be addressed by analyzing the simultaneous time-frequency representation of signals (a.k.a.Wigner space or phase space).Most of the energy of a single STFT coefficient is concentrated within a bounded region in space and bandwidth, which can be represented graphically by Heisenberg boxes (cf.Fig. 3).Every column represents a single STFT block, and every row are all coefficients of the same frequency. The energy distribution of the STFT coefficients can be found by looking at the instantaneous frequency of PSFs.The instantaneous frequency ν(x) of a (one-dimensional) PSF P(x) is given by where ∠ is the complex argument (or angle).The curve ν(x) is a line with a slope inversely proportional to z and displaced by a lateral translation δ.Because of the non-stationary behavior of P(x) and the discreteness of the transforms, the energy won't be concentrated at a single coefficient, but rather leak and spread out to neighboring ones.Therefore, a group of STFT coefficients centered around the line ν(x) for every block are selected for calculation to maximize the energy capture, as shown in blue on Fig. 
3.The leftmost STFT coefficient index j x (or equivalently j y ) is taken so that the subblock is centered as closely around the expected peak as possible: clamping the value to ensure valid indices for any point position, and using the floor operator • (rounding down).The value b x is the x-coordinate of the STFT block, which is a multiple of N • p (for pixel pitch p).An example is shown on Fig. 4, which also confirms the contiguousness of the coefficients: the energy is localized in a small group of coefficients, leaving all others to be close to zero.Note that the proposed algorithm will also naturally suppress aliasing when the PSF frequency surpasses the Nyquist rate because of potentially too steep incidence angles of light, since it won't compute the aliased coefficients.Furthermore, changing the STFT block size N will trade-off time and frequency resolution.The best block size to maximize sparsity will largely depend on the distribution of point distances to the hologram plane, as analyzed more in detail in [28]. Finally, the block-wise inverse FFT should be performed after accumulating all PSF contributions to obtain the final hologram.This has a computational complexity of O(t log N) for t hologram pixels and block size N.Because N is a relatively small constant, the inverse STFT can be executed very fast.Moreover, individual blocks will generally completely fit in the cache, contrary to global transforms spanning over the whole hologram, further improving speed. Experiments This section consists of two parts.Firstly, we evaluate the accuracy of the proposed algorithm w.r.t. the reference approach, i.e. calculating the hologram in the spatial domain by directly evaluating Eq. ( 2).Secondly, we measure the calculation times and report the speedup and compare rendered views from the generated holograms. Measuring the accuracy of the sparse STFT algorithm For this experiment, a PSF is computed with dimensions of 512 × 512 pixels, p = 4 µm, λ = 633 nm and z = 3 cm.The proposed algorithm uses blocks of size N = 32, and is tested for subblock sizes ranging from n = 2 up to n = 32. In Fig. 5, several renditions are shown for different values of n, besides a reference PSF.Moreover, Fig. 6 depicts a few examples of sparse PSFs for n = 4 at distances z = 2 cm up to z = 8 cm.At higher n, the PSFs are visually indistinguishale from the reference, validating the adequacy of the method.At lower subblock size, block edge defects become increasingly visible, due to the properties of the block-wise DFT.These are reminiscent of JPEG artifacts at high compression rates.Nonetheless, the circular shape of the PSF is still well-preserved even at low n values.To quantify the magnitude of the error, the Normalized Mean Square Error (NMSE) is reported as well, namely where R is the original reference PSF and S is the sparsified PSF.The subscript j denotes the pixel index, and the norm is Euclidean.On Fig. 
7 the NMSE values are plotted for different values of n using the same settings.As expected, the error decreases progressively as n increases.Note that the error for n = 32 is not zero (≈ 3.0 • 10 −3 ), because of the approximation introduced by Ẽ(x) in section 2.1, and also because the DFT is an approximation of the analytical (continuous) Fourier transform.Because conventional point-cloud CGH algorithms are a linear combination of many individual PSFs, the distortion will be an accumulation of all PSF errors as well.Therefore these results largely generalize for multiple points or even complete 3D point cloud models. The gain in calculation time is reported for a collection of M = 10 5 points in Fig. 8, for n going from 2 up to 32.This is compared to the reference approach, taking up 557.9 seconds.As expected, the calculation time increases progressively as n grows larger.At n = 2, the calculation time is 9.8 s (a 56.8-fold speedup). When n ≥ 30, the reference method is faster, mainly because the expression for calculating STFT coefficients (Eq.( 7)) is more complex than the standard expression (Eq.( 1)). Generating holograms of 3D objects In this subsection, the aim is to compare how directly calculating the hologram of 3D objects in the spatial domain (the reference approach) compares to the proposed sparse algorithm.The reference implementation utilized the separability of the Fresnel transform instead of directly calculating P(x, y): first, the PSF was evaluated to compute the horizontal and vertical components, followed by an outer product to accumulate all hologram pixel values. The holograms were computed on a machine with an Intel Core i7 6700K CPU, 32GB RAM and ran using a C++17 implementation compiled with Microsoft Visual C++ 2017 on the OS Windows 10.The FFT was calculated using the FFTW3 library.STFT blocks of size N = 32 were used for the holograms of both 3D scenes. The first 3D scene was configured as shown on Fig. 9.To generate a point cloud, the "Ship" model surface was randomly sampled to create a point cloud consisting of M = 10 5 points.The hologram dimensions were 2048 × 2048 pixels, with p = 4 µm and λ = 633 nm.The subblock size was n = 4, leading to a sparsity of s = (n/N) 2 ≈ 1.56%. The reference implementation took 557.9 seconds, while the proposed implementation only took 26.4 seconds.We thus observe a 21-fold speedup over the default implementation.The STFT only took 13 ms (averaged over 10 runs), which is negligible compared to the total calculation time.Examples of rendered views can be seen on Fig. 10, showing that the overall quality is preserved from multiple viewpoints, even though we calculated only 1.56% if the total amount of coefficients. For the second 3D scene, a point cloud of the "Train" model was used (cf.Fig. 11), consisting of M = 2 • 10 5 points assigned with random phases.The hologram dimensions were also 2048 × 2048 pixels, with p = 3 µm and λ = 633 nm.In this case, several different values for n were used to compare differences in visual quality.where R is the reference image and S is the reconstruction from the sparsified representation; C is the total pixel count, j is the pixel index and R max is the maximum pixel value, which is 255 for 8-bit images. The results are reported in Table 1; the calculation times for the reference method is 1099.1 s.The rendered images are displayed in Fig. 12: almost no deterioration can be seen for n ≥ 4, but some noise and loss of detail is visible for n = 2. 
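Both quality metrics used in these experiments are straightforward to compute from the reference and sparsified data. A minimal sketch of the NMSE (for complex PSF fields) and the PSNR of Eq. (17) (for rendered 8-bit views), assuming NumPy arrays and synthetic toy data:

```python
import numpy as np

def nmse(reference: np.ndarray, sparse: np.ndarray) -> float:
    """Normalized mean square error ||R - S||^2 / ||R||^2 (complex fields allowed)."""
    return float(np.sum(np.abs(reference - sparse) ** 2) / np.sum(np.abs(reference) ** 2))

def psnr(reference: np.ndarray, reconstruction: np.ndarray, r_max: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for real-valued (e.g. 8-bit) images."""
    mse = np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2)
    return float(10 * np.log10(r_max ** 2 / mse))

# Toy usage with synthetic data.
rng = np.random.default_rng(1)
R = rng.random((64, 64))
S = R + 0.01 * rng.standard_normal((64, 64))
print(nmse(R, S), psnr(255 * R, 255 * S))
```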
Taking n = 4 as the baseline, we obtain a speedup factor of 20.6. Conclusion This paper presents an algorithm to analytically calculate the STFT coefficients of PSFs, thereby enabling sparse CGH that needs only a small subset of the total number of coefficients, without requiring a LUT. A speedup factor of 20 can be achieved w.r.t. the reference implementation with limited quality degradation. Because of the generality of Fresnel diffraction, this algorithm has uses beyond holography, in other applications requiring near-field numerical diffraction of 3D point clouds. It can be viewed as a generalization of the phase-added stereogram, trading off accuracy and calculation speed. Furthermore, this approach is orthogonal to many other CGH acceleration algorithms. Future work will involve combining this algorithm with WRPs and with input elements other than PSFs, and further analysis on tuning the algorithm's parameters to optimize quality and speed for any given 3D scene. Also, because of its low memory requirements, it is a good candidate for a parallelized implementation on specialized hardware such as FPGAs.
Fig. 3. Time-frequency charts of PSF curves superimposed on STFT coefficients. The red curves represent the instantaneous frequency of the two PSF instances, whose slope depends on z and whose lateral displacement depends on δ. The (Heisenberg) boxes correspond to individual coefficients, indicating where their energy is concentrated in space (or time) and frequency. The blue boxes are the coefficients that will be computed for this PSF. Both (a) and (b) have the same sparsity rate s, but (b) has a larger block size N, trading more frequency resolution for less spatial resolution, which is better for points with large |z| values.
Fig. 5. Various renderings of a PSF placed at distance z = 3 cm using different algorithm settings. The subfigures denoted with "reference" are rendered using the standard PSF expression P(x, y) in the spatial domain, while the others are generated with the proposed approximate STFT algorithm. n denotes the subblock size and s the corresponding sparsity; ℜ means the real part is shown, ϕ indicates that the argument or (wrapped) phase is shown. Note how the PSF progressively resembles a phase-added stereogram as n decreases.
Fig. 8. Calculation time for computing the hologram of the "Ship" point cloud, for various subblock dimensions n (for N = 32), compared to the reference method, shown by the full horizontal line on the graph.
Fig. 12. Reconstructed images of the "Train" hologram, for various values of subblock size n. The front focus is shown by taking the absolute value after backpropagating at z = −10.3 cm with the ASM. The back focus uses z = −11.2 cm.
The image quality can be measured using the Peak Signal-to-Noise Ratio (PSNR), defined by
$\mathrm{PSNR}(R, S) = 10 \log_{10} \dfrac{C \cdot R_{\max}^2}{\sum_{j=1}^{C} (R_j - S_j)^2} \quad (17)$
2019-08-23T12:12:39.531Z
2019-07-30T00:00:00.000
{ "year": 2019, "sha1": "6fdaf5ea0fca7ae7f77751284377adb071b6130d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.27.023124", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "33812f5b3570c6e38ecbf13857f0f61ebc23027f", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
253112261
pes2o/s2orc
v3-fos-license
Health Status Assessment of Diesel Engine Valve Clearance Based on BFA-BOA-VMD Adaptive Noise Reduction and Multi-Channel Information Fusion Regarding the problem of the valve gap health status being difficult to assess due to the complex composition of the condition monitoring signal during the operation of the diesel engine, this paper proposes an adaptive noise reduction and multi-channel information fusion method for the health status assessment of diesel engine valve clearance. For the problem of missing fault information of single-channel sensors in condition monitoring, we built a diesel engine valve clearance preset simulation test bench and constructed a multi-sensor acquisition system to realize the acquisition of diesel engine multi-dimensional cylinder head signals. At the same time, for the problem of poor adaptability of most signal analysis methods, the improved butterfly optimization algorithm by the bacterial foraging algorithm was adopted to adaptively optimize the key parameter for variational mode decomposition, with discrete entropy as the fitness value. Then, to reduce the uncertainty of artificially selecting fault characteristics, the characteristic parameters with a higher recognition degree of diesel engine signal were selected through characteristic sensitivity analysis. To achieve an effective dimensionality reduction integration of multi-channel features, a stacked sparse autoencoder was used to achieve deep fusion of the multi-dimensional feature values. Finally, the feature samples were entered into the constructed one-dimensional convolutional neural network with a four-layer parameter space for training to realize the health status assessment of the diesel engine. In addition, we verified the effectiveness of the method by carrying out valve degradation simulation experiments on the diesel engine test bench. Experimental results show that, compared with other common evaluation methods, the method used in this paper has a better health state evaluation effect. Introduction As the main power core of large mechanical equipment, the diesel engine is widely used in transportation, industrial production, agricultural machinery, the chemical industry, national defense and military equipment, and other fields [1][2][3]. Whether the diesel engine can work normally and healthily often directly affects the normal operation of the entire equipment system. Therefore, it is of great significance to carry out effective condition monitoring and health assessment for diesel engines. In engineering practice, with the prolongation of service time, the valve spring of a diesel engine may gradually deteriorate and deform, and the valve will wear and deposit carbon, which will lead to an abnormal increase of valve clearance, will reduce the efficiency of cylinder flow control, and will then cause the power of diesel engine to decrease. At the same time, the continuous abnormal increase of valve clearance may also cause vicious failures such as cylinder impact or valve breakage, causing huge economic losses and even threatening personal safety [4,5]. In the actual state monitoring process, it is very difficult to directly measure the valve clearance of the diesel engine, and the timeliness is low. The vibration signal of the cylinder head of the diesel engine contains rich state information brought by the inertial impact in the working cycle and various random excitations [6,7]. 
Therefore, in this study, the vibration signal of the diesel engine cylinder head was collected and its health status was monitored. The process of diesel engine health status assessment mainly includes signal monitoring, data preprocessing, feature extraction, and health status recognition. The first is the aspect of signal monitoring; most of the current monitoring of vibration signals of diesel engine cylinder heads usually only have a single sensor dimension. A large number of studies have shown that, in the state monitoring and state identification of equipment, multi-dimensional signal monitoring can often achieve better evaluation results than single-dimensional monitoring. Pan et al. [8] conducted an effective evaluation of the performance degradation process of wind turbine gearboxes based on multi-sensor fusion data. Dong et al. [9] achieved a high-precision health status diagnosis and a prediction of hydraulic pump equipment based on a hidden semi-Markov model by collecting multisensor signals of the hydraulic pump. Kamal Jafarian et al. [10] used vibration data captured by four sensors placed at different positions on the car engine and in different experimental environments to investigate engine failures, including misfire and valve clearance failures, and combined time-frequency domain analysis methods and neural networks to realize the engine failure mode diagnosis. In this regard, this paper built a multi-sensor monitoring system, conducted a simulated degradation experiment of diesel valve clearance, and collected multi-dimensional cylinder head vibration signals, covering the health status information of the diesel engine from multiple dimensions. Further, due to the complex structure of the diesel engine, the vibration signal of the cylinder head incorporates the vibration excitation of the whole engine, showing strong non-stationarity and nonlinearity. Therefore, the effective processing and analysis of the vibration signal of the diesel engine cylinder head are also one of the difficulties in its state evaluation process. In this regard, scholars have carried out effective and feasible research. Wei et al. [11] used time synchronization and least squares polynomial fitting to preprocess the original signal and obtained the diesel engine combustion noise transfer function according to the motor test and different injection strategies. Xi et al. [12] used the Stockwell transform to construct a time-frequency reference signal to guide the separation process of kernel-independent component analysis (ICA) to avoid artificial uncertainties in ICA. Shao et al. [13] combined the advantages of the manifold learning algorithm to process nonlinear data and carried out diversified preprocessing on the vibration signal of a marine diesel engine, which greatly improved the quality of feature extraction. Although the above research has achieved certain results, it cannot effectively self-adaptively separate and denoise the status information and noise interference components in the diesel engine cylinder head signal. In this regard, Wang et al. [14] used a new adaptive wavelet packet threshold function for vibration signal denoising, which can extract truly physically meaningful components from the signal. Wang et al. [15] combined power spectral entropy and variational mode decomposition (VMD) to adaptively process the vibration signal of the diesel engine and used Rihaczek distribution to obtain the time-frequency representation of the diesel engine with high aggregation. 
Liu et al. [16] proposed an adaptive Wigner-Ville distribution (WVD) and an improved fast correlation filter (FCBF) to solve the cross-term interference of WVD and the redundant control problem of fast FCBF, which effectively separated the vibration signal of the diesel engine's redundant interference information. Inspired by the above research, we first used the bacterial foraging algorithm (BFA) to improve the optimization performance of the butterfly optimization algorithm (BOA); then, the key parameters α and K in the VMD were optimized based on discrete entropy to realize the adaptive noise reduction of the vibration signal of the diesel engine cylinder head. In the aspect of signal feature extraction, the time-frequency characteristic analysis of the signal can show the effective state components in the equipment signal from the perspective of the excitation coupling mechanism. Tao et al. [17] used the time-domain statistical method to extract the time-domain features of the signal and obtained high-precision time-frequency domain features through high-resolution multi-synchronous compression transformation. Ahmad Taghizadeh-Alisaraei et al. [18] used time-frequency analysis methods such as short-term Fourier transform (STFT), Wigner-Ville distribution (WVD), and Choi-Williams distribution (CWD) to examine in detail the vibrations produced by a faulty engine. By comparing the time-frequency domain parameters of the vibration response of the normal and faulty injectors, the faulty injectors are effectively detected. Qin et al. [19] combined the extracted time-domain, time-frequency domain, and handcrafted time-domain statistical features and fed them into a multi-channel deep Siamese convolutional neural network to resist the influence of environmental noise and operating condition changes on the final diagnosis results. Although the above research methods can effectively realize the feature extraction of diesel engine cylinder head vibration signals, the selection of feature parameters is mostly artificial, which brings greater uncertainty to the accuracy of a diesel engine health status assessment. Therefore, we conducted a characteristic sensitivity analysis on the vibration signal of the cylinder head of the diesel engine and selected the characteristic parameters sensitive to the state of health of the diesel engine valve to improve the accuracy of the evaluation. On the other hand, the multi-channel sensor signal features have redundant information; thus, the effective dimensionality reduction of the features can improve the accuracy of the state evaluation and reduce a certain amount of calculation. In this regard, Yang et al. [20] introduced class information into the traditional non-negative matrix factorization (NMF) and developed an improved discriminative NMF method to achieve the effective dimensionality reduction of time-frequency images of diesel engine vibration signals. Hou et al. [21] used principal component analysis (PCA) to convert high-dimensional fault samples into low-dimensional samples to reduce the computational complexity for the problem of the high dimensionality of fault samples collected by multi-sensors. Chu et al. [22] proposed a visualization method based on texture-enhanced block non-negative matrix factorization (TE-BNMF). 
By performing time-frequency analysis on the vibration signal of the diesel engine cylinder head and then using the block non-negative matrix factorization (BNMF) algorithm to directly reduce the dimension to extract the feature parameters of the generated local binary feature map, the dimension reduction and enhancement of the feature map were realized. Although the above information fusion method plays the role of feature dimensionality reduction, it still cannot achieve the effect of deep fusion dimensionality reduction and effective state feature enhancement for effective information in multi-dimensional feature samples; therefore, the sparse autoencoder (SAE) with a strong data feature dimensionality reduction and representation ability was selected to perform deep dimensionality reduction learning on multidimensional feature data. SAE adds sparse constraints to the mapping process of the original autoencoder, has stronger data reconstruction and learning capabilities, and is widely used in various fields of military and industry (pattern recognition, semantic segmentation, image processing, etc.) [23][24][25]. Further, we stacked two SAEs together to form a stacked SAE (SSAE), which achieves a better feature dimensionality reduction and feature enhancement by increasing the parameter space and mapping calculation. Finally, in terms of health status recognition, since the feature samples after feature fusion are one-dimensional, we built a one-dimensional convolutional network (1DCNN) to achieve effective mapping between diesel engine feature samples and health status. A large number of studies have proven that, compared to traditional machine learning methods, deep networks have stronger data mining and analysis and learning capabilities [26,27]. Therefore, the evaluation method using 1DCNN as the identification model can accurately and effectively realize the health status evaluation of diesel engine valves. To sum up, this paper proposes a health status assessment of diesel engine valve clearance based on BFA-BOA-VMD adaptive noise reduction and multi-channel information fusion; then, the effectiveness of the proposed diesel engine state of health assessment method is proven by the diesel engine valve clearance preset simulation experiment. The main contributions and innovations of this paper are as follows: (1) We carried out a multi-dimensional sensor monitoring system, the valve clearance degradation preset simulation experiment of the diesel engine, and the acquisition of the multi-dimensional vibration signal of the diesel engine cylinder head was realized. (2) The optimization algorithm and the discrete entropy were combined to realize the adaptive noise reduction of the vibration signal of the diesel engine cylinder head, and then, through feature sensitivity analysis, effective sensitive feature parameters were selected. (3) We used SSAE with the stacked structure to realize the deep dimensionality reduction and feature reconstruction of multi-dimensional feature samples. Finally, the effective state evaluation of the diesel engine valve clearance was realized through the constructed 1DCNN. 
The rest of this paper is organized as follows: Section 2 details the relevant theory and evaluation process of health status assessment methods; Section 3 provides the implementation of the diesel valve clearance degradation preset failure experiment and the preparation of the data samples; Section 4 analyzes and discusses the results of the diesel engine clearance condition assessment; finally, Section 5 presents the conclusions of this study and an outlook for future work. Adaptive Noise Reduction Based on BFA-BOA-VMD In the process of the VMD decomposition of vibration signals, two parameters, the penalty factor α and the number of decomposition layers K, have a great influence on the effect of VMD decomposition. Therefore, we used the BOA algorithm improved by BFA to adaptively optimize the important parameters in the VMD, with discrete entropy as the standard, to realize the adaptive noise reduction of the vibration signal of the diesel engine cylinder head. Discrete Entropy Discrete entropy is a new algorithm used for measuring the complexity of time series that was proposed by Rostaghi and Azami in 2016 [28]; it overcomes the disadvantage that the arrangement entropy does not consider the magnitude of the amplitude and has the characteristics of good stability and fast calculation speed. The discrete entropy calculation process is as follows: (1) Normalize the sequence x = {x 1 , x 2 , · · · x N } to y = {y 1 , y 2 , · · · y N } using the normal distribution function as the nonlinear normalization function. Where N is the length of the sequence, y is the cumulative distribution value calculated using the mean and standard deviation of the sequence x as the values in the normal distribution of the corresponding parameters, and y satisfies the condition of y ∈ (0, 1). (2) Map y to integers in the range [1, c] by a linear algorithm, and obtain the sequence, where c is the number of categories, j is the j-th point in the sequence, and int is the rounding function. (3) Compute the embedding vector and scatter pattern w v 0 v 1 ···v m−1 (v = 1, 2, · · · , c), and compute the probability P for all scatter patterns: where z c i = v 0 ,z c i+d = v 1 , . . . ,z c i+(m−1)d = v m−1 ; w v 0 v 1 ···v m−1 is the scatter pattern, num(w v 0 v 1 ···v m−1 ) is the number of z m,c i 's mapped to scatter patterns, m is the embedding dimension, d is the time delay, and N − (m − 1)d represents the total number of embedding vectors. (4) Using the definition of information entropy, calculate the original sequence scatter entropy [29]. According to the calculation formula of scatter entropy, it can be found that the larger the scatter entropy value, the greater the complexity of the time series; conversely, the smaller the discrete entropy, the smaller the complexity of the time series and the higher the proportion of effective components. According to the entropy value theorem, the smaller the entropy value, the richer the effective information contained in the signal and the higher the signal-to-noise ratio; conversely, the higher the entropy, the more noise interference information in the signal and the lower the signal-to-noise ratio. Therefore, discrete entropy was used in this paper to determine the decomposition parameters of VMD. BFA-BOA Optimization Algorithm The butterfly optimization algorithm (BOA) is a new intelligent optimization algorithm proposed by Arora et al. [30] in 2019, which was inspired by the foraging and mating behavior of butterflies. 
Butterflies sense and analyze odors in the air to determine food sources and potential directions for mating partners. By observation, butterflies have very accurate judgments about the location of these sources. Additionally, they can identify different scents and perceive their intensity. A butterfly produces a scent of a certain intensity related to its fitness, which means that when a butterfly moves from one location to another, its fitness changes accordingly. When the butterfly senses that another butterfly is emitting more fragrance in the area, it will approach it, a stage known as the global search; alternatively, when the butterfly cannot perceive a scent larger than itself, it moves randomly, a stage called local search. The fragrance size f is expressed according to the stimulus intensity, and its calculation formula is: where I is the stimulus intensity, which is related to fitness; a is the power exponent, and the empirical value is 0.1; c is the sensory factor, and the empirical value is 0.01. The initial stage of the algorithm first randomly generates the position of each individual butterfly, and then, according to Formula (4), each individual butterfly generates fragrance at its respective initial position. Then, the algorithm enters the global search and local search stages. During the global search process, each individual butterfly moves towards the current global optimal position g * , which can be expressed as: where x t i represents the position vector of the kth butterfly at the t-th iteration, which is the individual cognitive flight part; r 1 is a random number in the range [0, 1]; f i represents the fragrance intensity of the i-th butterfly. The update process of local search can be expressed as [31]: where x t j and x t k represent the k-th and j-th butterflies randomly selected from the solution space; r 2 is a random number in the range [0, 1]. During the foraging process of individual butterflies, a switch probability P = 0.8 is set to switch between conventional global search and dense local search. In each iteration, a random number in the range [0, 1] is used to compare with the switching probability P to determine whether the individual butterfly's foraging mode is global search or local search. It can be found that, since the transition probability between the global search stage and the local search stage is generally fixed at 0.8, this makes most butterflies update their positions in the way of the global search stage. At the same time, the position update method in the global search phase is mainly to move toward the position with the strongest fragrance. If this update method is followed all the time, the diversity of the population will be affected in a later stage of the algorithm. Therefore, for the entire BOA algorithm, there are problems such as reduced population diversity, repeated search positions, and falling into local optimum in the later stage of the algorithm. In this regard, this study introduces the bacterial foraging algorithm (BFA) [32], which makes it possible for BOA to jump out of the local optimum in the later iteration of the algorithm by using the foraging mechanism of BFA. The biological basis of the bacterial foraging algorithm is the intelligent performance of Escherichia coli during the foraging process in the human intestine, i.e., the location of the bacteria is continuously updated through the three steps of chemotaxis, reproduction, and dispersal, so that the bacteria tend to be in a nutrient-rich place. 
Chemotaxis is characterized by the accumulation of Escherichia coli toward nutrient-rich regions, and dispersal is characterized by the departure of Escherichia coli from the original direction of movement due to a stimulus. The advantage-avoiding mechanism of Escherichia coli can be introduced into BOA. Consider a group of swimming Escherichia coli as the design variable in the solution space, regard the nutrient-rich source in the environment as the optimal solution of the problem to be solved, and define the unfavorable stimulus as some kind of satisfaction in the process of solving the optimization problem. In this way, the design variables in the solution space can adjust their motion behavior in time according to the solution situation of the optimal solution during the optimization process to continuously approach the optimal solution of the problem to be solved. The improved BOA algorithm based on bacterial foraging characteristics, if encountering unfavorable stimuli in the evolution process, implements the dispersal operation to move the individual butterfly, which provides the possibility for the algorithm to jump out of the local optimum. There are many ways to define unfavorable stimuli. In this study, they are defined by the process of optimization and solution: if the optimal value found for five consecutive generations changes within 0.01%, the algorithm is considered to be trapped in a local optimum. Satisfying this condition is considered an unfavorable stimulus, and the dispersing operation is performed on the solution variable. Combined with the BOA algorithm, the adjustment factor Z is introduced, and the formula used in the dispersal operation is: The formula used for the chemotaxis operation is: , the meaning of the other parameters is the same as the BOA algorithm Formulas (5) and (6); the subscripts k and d represent the k-th butterfly individual and the d-th dimension, respectively, and k = i. The learning factor c 3 is used to guide the individual movement of the butterfly and the inheritance weight of the local optimum. The value of c 3 is adjusted according to the selection of the values of c 1 and c 2 . It has been verified by many experiments that, in the case of min(c 1 , c 2 )/5 < c 3 < min(c 1 , c 2 ), better results can be obtained for the algorithm to jump out of the local optimum. To verify the improvement in the optimization ability of the improved BFA-BAO algorithm, we selected six performance test functions to conduct optimization simulation tests on BAO and BFA-BAO, respectively [33,34]. The population size per test and the maximum number of iterations of the algorithm were set to 50 and 500, respectively. The experimental results are shown in Figure 1. From the image of the performance test function, it can be concluded that, in addition to the maximum and minimum values, the test function also has dense and continuous local minima and local maxima, which can effectively test the global optimization ability and search ability of the optimization algorithm. It can be seen from the simulation test results of the BAO and BFA-BAO algorithms of the six types of test functions that the improved BFA-BAO algorithm completed the convergence of the algorithm more quickly and the final fitness function value was lower, i.e., the solution of the BFA-BAO algorithm was closer to the actual minimum value of the test function. As a result, the improved BFA-BAO algorithm had a better global optimization ability and convergence effect than the BFA algorithm. 
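The position-update rules above can be condensed into a compact iteration. The sketch below follows the standard BOA update forms (fragrance derived from stimulus intensity, a global move toward the best individual, a local move using two random peers, a fixed switch probability) and adds a simple dispersal perturbation when the best fitness stagnates, in the spirit of the BFA improvement described above. The intensity mapping, the stagnation test (no improvement for five iterations rather than a 0.01% change), the perturbation form and the test function are our illustrative simplifications, not the authors' exact implementation:

```python
import numpy as np

def bfa_boa_minimize(fitness, dim=2, pop=30, iters=200, p_switch=0.8,
                     c_sensor=0.01, a_power=0.1, seed=0):
    """Sketch of a BOA loop with a BFA-style dispersal step on stagnation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(pop, dim))
    fit = np.apply_along_axis(fitness, 1, x)
    best, prev_best, stall = x[fit.argmin()].copy(), fit.min(), 0
    for _ in range(iters):
        frag = c_sensor * (1.0 / (1.0 + fit)) ** a_power      # fragrance from (inverted) fitness
        for i in range(pop):
            r = rng.random()
            if rng.random() < p_switch:                        # global search toward the best
                x[i] += (r**2 * best - x[i]) * frag[i]
            else:                                              # local search among two random peers
                j, k = rng.choice(pop, 2, replace=False)
                x[i] += (r**2 * x[j] - x[k]) * frag[i]
        fit = np.apply_along_axis(fitness, 1, x)
        if fit.min() < prev_best:
            best, prev_best, stall = x[fit.argmin()].copy(), fit.min(), 0
        else:
            stall += 1
        if stall >= 5:                                         # 'unfavorable stimulus': disperse the worst half
            worst = np.argsort(fit)[pop // 2:]
            x[worst] += rng.normal(scale=0.5, size=x[worst].shape)
            stall = 0
    return best, prev_best

print(bfa_boa_minimize(lambda v: float(np.sum(v**2))))  # sphere test function as a stand-in benchmark
```

In the paper's pipeline, the benchmark test function would be replaced by the discrete-entropy fitness of the VMD decomposition described next.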
Adaptive VMD Noise Reduction

Taking the discrete entropy of the signal as the fitness function, the BFA-BOA optimization algorithm of Section 2.1.2 was used to adaptively optimize the penalty factor a and the number of decomposition layers K of the VMD. The IMF component carrying the largest proportion of effective health-status information (i.e., the smallest discrete entropy) was selected as the object of the subsequent feature sensitivity analysis. The parameter optimization process is shown in Figure 2.

(1) Set the algorithm parameters and initialize the position of each individual butterfly.
(2) Calculate the initial fitness value of each individual butterfly according to the fitness function.
(3) Select the nectar source. Arrange the fitness values calculated in step (2) in ascending order, select the butterfly position with the best fitness value as the nectar source position, and calculate its fragrance.
(4) Location update. Determine whether the current iteration performs a global search or a local search, and update the position of each butterfly accordingly.
(5) Calculate the fitness value of the updated position of each butterfly, and update the optimal position.
(6) Individual movement operation based on bacterial foraging characteristics. Judge whether the unfavorable stimulus condition is satisfied, i.e., whether the optimal value found over five consecutive generations has changed by less than 0.01%. If so, the algorithm is considered to be trapped in a local optimum, and the individuals with poor fitness values are subjected to the dispersal operation.
(7) Repeat the iterative process of steps (2) to (6). When the set maximum number of iterations is reached, the algorithm terminates and the global optimal solution is output, i.e., the IMF component with the lowest discrete entropy.
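A sketch of the fitness evaluation used inside this loop is given below. It assumes a helper vmd_decompose(signal, alpha, K) that returns the K IMF components (any VMD implementation can be wrapped this way) and uses a histogram-based Shannon entropy as a stand-in for the paper's discrete entropy; both are assumptions for illustration only.

import numpy as np

def discrete_entropy(x, bins=64):
    """Histogram-based Shannon entropy; stands in for the paper's discrete entropy."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def vmd_fitness(signal, alpha, K, vmd_decompose):
    """Fitness for BFA-BOA: entropy of the most informative IMF (smaller is better)."""
    imfs = vmd_decompose(signal, alpha=alpha, K=int(round(K)))   # assumed helper
    return min(discrete_entropy(imf) for imf in imfs)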
Feature Sensitivity Analysis

In the process of feature extraction, selection based on human experience is often required, which introduces considerable uncertainty into the assessment of the health status of the diesel engine and thereby affects the accuracy of the assessment. Therefore, we used common time-domain and frequency-domain features as a set of characteristic parameters and conducted a feature sensitivity analysis on the vibration signal of the diesel engine cylinder head, in order to select characteristic parameters that are sensitive to the condition of the diesel engine valve clearance and to improve the accuracy and reliability of the evaluation. The feature sensitivity s is quantified as

s = |f_i - f| / f

where s is the sensitivity, f_i is the eigenvalue under the target condition being analyzed, and f is the reference eigenvalue. In this paper, the parameter indexes under the normal valve clearance state of the diesel engine were used as the reference eigenvalues, and the parameter indexes under the other valve clearance conditions were used as the target analysis eigenvalues. The feature set included 14 common time-domain features and 4 common frequency-domain features. The 14 time-domain features comprised 9 dimensional features: maximum value (f_1), minimum value (f_2), mean value (f_3), mean square value (f_4), peak value (f_5), peak-to-peak value (f_6), absolute mean value (f_7), variance (f_8), and root mean square (RMS, f_9); and 5 dimensionless features: impulse factor (f_10), crest factor (f_11), shape factor (f_12), clearance factor (f_13), and margin factor (f_14). The 4 frequency-domain characteristic parameters were average frequency (f_15), frequency center (f_16), frequency variance (f_17), and root mean square frequency (f_18) [35-37]. The serial numbers and formulas corresponding to the time-domain and frequency-domain characteristic parameters are given in Table 1 (the time-domain signal is denoted x_i, N is the length of x_i, s_k is the spectrum value at the k-th point, and e_k is the frequency corresponding to the k-th spectrum point).
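As an illustration of how the features and the sensitivity s can be computed, a short sketch follows. Only a few of the 18 listed features are shown, and the relative-deviation form of s mirrors the definition above; the function and key names are chosen here for readability.

import numpy as np

def time_domain_features(x):
    """A few of the listed time-domain features (f1, f3, f8, f9)."""
    return {
        "max": float(np.max(x)),
        "mean": float(np.mean(x)),
        "variance": float(np.var(x)),
        "rms": float(np.sqrt(np.mean(x ** 2))),
    }

def sensitivity(target_value, reference_value):
    """Relative deviation of a feature from its normal-clearance reference value."""
    return abs(target_value - reference_value) / abs(reference_value)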
Deep Fusion of Multi-Channel Feature Values

After the adaptive noise reduction and feature sensitivity analysis of the multi-channel cylinder head vibration signals of the diesel engine, the sensitive features were selected and extracted. This section describes the stacked sparse autoencoder (SSAE) that was built to realize the deep fusion and dimensionality-reducing reconstruction of the multi-channel feature values, so that the feature vectors can be re-expressed at a higher level of abstraction.

Sparse Autoencoders (SAE)

An autoencoder (AE) is an unsupervised neural network model that includes an encoder and a decoder; structurally, it is composed of an input layer, a hidden layer, and an output layer (see Figure 3). An AE can map the data into a high-level parameter space to express it abstractly, and the data features can be reconstructed and enhanced. Compared with traditional dimensionality reduction methods, such as principal component analysis (PCA) and independent component analysis (ICA), an AE can not only reduce the data dimension but also preserve the integrity and invariance of the data features, which ensures the quality of the input feature samples for subsequent network training [38]. An SAE is consistent with an AE in terms of structural composition, but it adds a sparsity constraint to the AE loss function so that only some hidden layer nodes are "active" and the network as a whole becomes sparse. Assuming that the hidden layer uses the sigmoid activation function, a hidden layer output of 1 indicates that the node is "active", and an output of 0 indicates that the node is "inactive".
Based on this, the KL divergence is introduced to measure the similarity between the average activation of a hidden layer node and the sparsity parameter ρ. The average activation of the j-th hidden node is

ρ̂_j = (1/m) Σ_{i=1..m} a_j(x_i)

where x_i is the i-th training sample, m is the number of training samples, and a_j(x_i) is the response output of the j-th hidden layer node to the i-th sample. In general, the sparsity coefficient ρ is set to 0.05 or 0.1. The KL divergence between ρ and ρ̂_j is

KL(ρ || ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 - ρ) log((1 - ρ)/(1 - ρ̂_j)).

The larger the KL divergence, the greater the difference between ρ and ρ̂_j; a KL divergence equal to 0 means the two are exactly equal. The KL divergence is then added as a regularization term to the loss function of the AE to constrain the sparsity of the entire network [39,40]:

J_sparse = J + β Σ_j KL(ρ || ρ̂_j)

where J is the reconstruction loss of the AE and β is the weight coefficient of the sparsity constraint.
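A minimal PyTorch sketch of a single SAE with the KL sparsity penalty is shown below; the layer sizes, the MSE reconstruction term, and the default values of ρ and β are assumptions for illustration and do not reproduce the paper's hyperparameters (those are listed later in Table 6).

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One encoder/decoder pair with a sigmoid hidden layer."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-8):
    """KL divergence between the target sparsity rho and each unit's mean activation."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return torch.sum(rho * torch.log(rho / rho_hat)
                     + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat)))

def sae_loss(x, x_rec, h, beta=0.1, rho=0.05):
    """Reconstruction loss plus the beta-weighted sparsity penalty."""
    return nn.functional.mse_loss(x_rec, x) + beta * kl_sparsity(h, rho)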
Deep Fusion of Multi-Channel Feature Values Based on SSAE

On the basis of the SAE, a stacking method is used to combine two SAE networks into a stacked sparse autoencoder network (SSAE). Because the SSAE has a deeper network structure and a larger parameter space, it offers clear improvements over a single SAE in data fusion and feature reconstruction. In the fusion of the multi-channel feature values, assume there are m signal acquisition channels; after the vibration signal of the diesel engine cylinder head has undergone adaptive noise reduction and feature sensitivity analysis, n characteristic parameters are selected. The features of the different channels are then concatenated and normalized, so that the dimension of the feature vector before dimensionality reduction is m × n × 1. The feature output dimension after dimensionality reduction, l, is determined by the number of hidden layer nodes of the SSAE. The cylinder head vibration signal under each valve clearance state thus finally yields a feature sample of dimension l × 1. The multi-channel feature value fusion process is shown in Figure 4: after the multi-channel sensor signals have been processed by adaptive noise reduction and feature sensitivity analysis, the effective dimensionality reduction of the multi-source feature vectors is realized through the SSAE network to construct the feature samples.

One-Dimensional Convolutional Neural Network (1DCNN)

A convolutional neural network (CNN) is a typical feedforward neural network with a deep structure that includes convolution calculations. As a deep learning model, it is widely used in fields such as image recognition and speech recognition. Its classic network architectures include the LeNet-5 model, AlexNet, GoogLeNet, and VGG. The basic structure of a CNN usually consists of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer [41,42]. The CNN maps the input data into a high-dimensional nonlinear parameter space through the convolution calculations of the convolutional layers, the dimensionality-reducing feature mapping of the pooling layers, and the classification performed by the fully connected layers, realizing deep learning and recognition of the data. According to the input dimension of a CNN, input samples can be divided into two-dimensional image samples and one-dimensional matrix samples. Since the feature samples processed in Section 2.3 are one-dimensional matrices, a one-dimensional convolutional neural network (1DCNN) was used in this paper to evaluate the state of the diesel engine valve clearance. The 1DCNN consists of four network layers, namely two convolutional layers and two fully connected layers (counting only the layers with trainable weights), and its network structure is shown in Figure 5. After the multi-level operations of the convolutional layers, pooling layers, and fully connected layers, the feature samples constructed in this paper finally yield the prediction label of the classification layer, completing the status evaluation.
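The following PyTorch sketch shows a 1DCNN of the kind described above, with two convolutional layers and two fully connected layers. The channel counts, kernel sizes, and pooling are illustrative guesses, since the actual settings appear later in Table 7; the input is assumed to be a fused feature vector of length 25 with four health-state classes.

import torch
import torch.nn as nn

class ValveClearance1DCNN(nn.Module):
    """Two Conv1d layers + two fully connected layers; sizes are illustrative."""
    def __init__(self, n_features=25, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        flat = 32 * (n_features // 4)          # length after two halvings
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, n_features)
        return self.classifier(self.features(x))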
The structural parameter settings of the network are given in the network training part of the experimental verification. Figure 6 shows the flow of the diesel engine valve clearance state assessment method proposed in this paper.

Diesel Engine Valve Clearance State Assessment Flow

(1) The preset simulation experiment of diesel engine valve clearance degradation and the multi-channel sensor acquisition of the cylinder head vibration signal are carried out.
(2) Adaptive noise reduction is performed on the cylinder head vibration signal, and the target IMF component is selected according to the discrete entropy criterion.
(3) Based on the time-domain and frequency-domain feature sets, a feature sensitivity analysis of the target IMF components under the different valve clearance states is carried out, and the characteristic parameters most sensitive to the valve clearance states are selected to form the feature vectors.
(4) The multi-channel feature vectors are input into the SSAE network for deep fusion and dimensionality reduction, yielding the feature samples used as the network input.
(5) The feature samples are divided into a training set and a test set according to a fixed proportion and input into the 1DCNN to realize the training and assessment of the health status of the diesel engine valve clearance.

Preset Failure Experiments and Data Collection

This paper relied on a diesel engine condition monitoring test bench to carry out the valve clearance simulated degradation experiment and verify the effectiveness of the proposed method. The test bench system consists mainly of a diesel engine and a control panel, as shown in Figure 7; the ignition, shutdown, and power output of the diesel engine can be controlled through the control panel. The engine is a six-cylinder high-pressure common rail diesel engine, and its basic parameters are listed in Table 2. To realize the multi-channel acquisition of the vibration signal of the diesel engine cylinder head, six acceleration vibration sensors were installed in parallel on the cylinder head surface of the diesel engine and fixed with adhesive.
The vibration acceleration sensors and their installation locations are shown in Figure 8. Considering that an increase in valve clearance is the main problem in engineering practice, different clearance sizes of the intake valve were preset to simulate the clearance increase caused by valve degradation. The valve clearance of the diesel engine was adjusted with a feeler gauge to simulate the abnormal increase of clearance during valve degradation. In the process of adjusting the valve clearance, the cylinder head of the diesel engine was first disassembled, and then the cylinder for clearance adjustment was selected; in the experiment, the third cylinder was set as the preset simulation cylinder. The adjustment process of the valve clearance is shown in Figure 9. To adjust the valve clearance while the diesel engine is cold, the lock nut on the upper part of the third cylinder valve is loosened and the adjusting screw is turned with a screwdriver. The healthy valve clearance of a diesel engine is generally 0.25 mm to 0.3 mm for the intake valve and 0.3 mm to 0.35 mm for the exhaust valve. The experiment simulated the valve clearance degradation process with four preset health states, namely normal (healthy), slightly increased valve clearance (general), increased valve clearance (attention), and severely increased valve clearance (deteriorated); the settings are given in Table 3.
During the acquisition of the vibration signals, the sampling frequency was 20 kHz, the duration of a single acquisition was 10 s, and the interval between acquisitions was 15 s. The vibration signal data of the diesel engine cylinder head were initially collected in the ".tdms" format and then converted to the ".mat" format in Matlab for further processing. For the vibration signal under each valve clearance state, 5000 points were taken as one sample, and 54 samples were taken per state.

Adaptive Noise Reduction

First, adaptive noise reduction was performed on the cylinder head vibration signals under the different valve clearance states. In the VMD decomposition process, the value range of the penalty factor a is generally 1.5 to 2 times the number of sample points, i.e., [7500, 10,000], and the value range of the decomposition level K is generally [2, 7]. The number of individual butterflies and the number of iterations in the BFA-BOA algorithm were set to 20 and 100, respectively. The six-channel data under each valve clearance state were sampled uniformly, and Table 4 lists the averages of the optimized parameters. After adaptive noise reduction, the IMF component with the smallest discrete entropy was selected from each vibration signal sample for the subsequent feature sensitivity analysis.

Sensitive Feature Selection

A total of 50 groups of samples were selected from the IMF samples of the four valve clearance health states for feature sensitivity analysis. Due to space limitations, only the analysis results for the mean, variance, kurtosis, and root mean square are shown here (see Figure 10). From the trends of the characteristic parameters in Figure 10, it can be concluded that some features cannot distinguish the health states of the diesel engine valve clearance well. Therefore, the feature sensitivity s defined in Section 2.2 was used as the criterion, and the 14 time-domain and frequency-domain parameters with the highest sensitivity were selected to improve the accuracy of the state evaluation. After the sensitivity s was obtained, the sensitivities of the IMF components of all valve clearance states were normalized per characteristic parameter, and the results are shown in Figure 11. As can be seen from Figure 11, the 18 time-domain and frequency-domain eigenvalues can distinguish the four valve clearance states to a certain extent, but they do not yet allow a quantitative selection of the more sensitive feature set. Therefore, the sensitivities s were further summed and sorted, as shown in Table 5, and the 14 characteristic parameters with the highest summed sensitivity were selected as the feature extraction objects of the IMF to construct the feature vector. From the results in Table 5, the selected characteristic parameters are: f_2, f_1, f_6, f_8, f_7, f_9, f_3, f_18, f_15, f_13, f_11, f_10, f_4, f_5.
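The ranking-and-selection step just described can be sketched as follows; the array layout is an assumption made for illustration.

import numpy as np

def select_top_features(sensitivities, k=14):
    """Sum each feature's sensitivity over all clearance states and keep the top k.

    `sensitivities` is an (n_states, n_features) array of s values; returns the
    indices of the selected features, ordered by descending summed sensitivity."""
    totals = sensitivities.sum(axis=0)
    order = np.argsort(totals)[::-1]
    return order[:k], totals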
In this way, each IMF component sample yields a 14 × 1-dimensional feature vector.

Multi-Channel Feature Vector Fusion

After the feature analysis and extraction in Section 3.2.1, a 14 × 1-dimensional feature vector was obtained for each of the six channels under each valve clearance state. These six feature vectors were first concatenated and normalized to obtain an 84 × 1-dimensional feature vector. The concatenated feature vectors were then input into the SSAE network for fusion and dimensionality reduction. The hyperparameter settings of the SSAE are shown in Table 6. Since the number of hidden layer nodes of the second SAE is 25, the dimension of the feature vector after fusion and dimensionality reduction was 25 × 1. Finally, 54 feature vector samples of dimension 25 × 1 were obtained for each valve clearance health state of the diesel engine.

Health Status Assessment Using 1DCNN

The diesel engine valve clearance feature samples constructed in Section 3 were divided into a training set and a test set at a ratio of 5:1, i.e., 180 training samples and 36 test samples. The training samples were input into the 1DCNN for training; the structural parameter settings of the 1DCNN are shown in Table 7. Using the trained 1DCNN to evaluate the health status of the test samples, the accuracy over 10 evaluation runs was 100%, and the confusion matrix of the evaluation results is shown in Figure 12. It can be concluded that the proposed state-of-health evaluation method for diesel engine valve clearance achieves high evaluation accuracy, which proves its effectiveness.

Small Sample Analysis

To verify the effectiveness and stability of the evaluation method under small-sample conditions, the total number of samples was set to 216, 192, 168, 144, and 120, respectively; the corresponding training set size, test set size, and evaluation accuracy are shown in Table 8. From the variation of the evaluation accuracy with sample size in Figure 13, it can be clearly seen that the evaluation accuracy did not fluctuate strongly when the sample size was reduced and remained high. Even when the number of training samples was only 100, the average evaluation accuracy reached 97.5%. This demonstrates the effectiveness and small-sample robustness of the proposed evaluation method.
Optimization Algorithm Comparison Analysis

To verify the effectiveness of the BFA-BOA-based adaptive VMD noise reduction method in the signal preprocessing stage, common swarm optimization algorithms, including the particle swarm optimization algorithm (PSO) [43], the gray wolf optimization algorithm (GWO) [44], and the cuckoo search algorithm (CSA) [45], were used in this section to carry out the adaptive noise reduction of the signal. In addition, the plain BOA algorithm was included in the comparison to verify the effectiveness of the BFA-BOA algorithm, which incorporates the BFA stimulus mechanism. The rest of the evaluation process was kept identical to that of the proposed method, and the optimization ability of the different algorithms was judged by the final evaluation accuracy. The averages of 10 evaluation runs are shown in Table 9 and Figure 14. As can be seen from these results, compared with common swarm optimization algorithms such as PSO, GWO, and CSA, the improved BFA-BOA optimization algorithm proposed in this paper realizes the adaptive noise reduction of the diesel engine cylinder head vibration signal more effectively, and thus a very high state evaluation accuracy is obtained. In addition, the evaluation performance obtained with the BOA algorithm alone is worse than that of the BFA-BOA algorithm, which again shows that the global optimization ability of BOA is greatly improved after the BFA stimulus factor is taken into account.

Method Comparison Analysis

To further demonstrate the effectiveness of the proposed health status assessment method, it was compared with common health status assessment methods:

(1) The vibration signal of the diesel engine cylinder head was decomposed by VMD with fixed parameters, i.e., a penalty factor a of 8000 and a decomposition level K of 3. The last IMF component was then selected for feature parameter extraction, and the other processing steps were consistent with the method in this paper (M1).
(2) Only adaptive VMD decomposition was performed on the vibration signal, without the SSAE-based feature fusion. The parameter settings and other processing steps were consistent with the method in this paper (M2).
(3) Principal component analysis (PCA) was used for the fusion and dimensionality reduction of the feature vectors. In the dimensionality reduction, the leading eigenvalues with a cumulative contribution above 95% were selected as the feature samples for training, and the other processing steps were consistent with the method in this paper (M3) [46].
(4) The one-dimensional vibration signal of the first channel was directly input into the 1DCNN constructed in this paper for training and evaluation (M4).
(5) The feature samples constructed in this paper were directly input into a support vector machine (SVM) for training and evaluation (M5). The combination of the four key SVM parameters, i.e., the weight coefficient, the penalty factor, the radial basis kernel function parameter, and the insensitive loss function parameter, was [0.08, 2.80, 0.58, 0.01] [47].
(6) The feature samples constructed in this paper were directly input into a Softmax layer with a classification function for evaluation (M6) [48]. The parameter settings and other processing steps were consistent with the method in this paper.
(7) The feature samples constructed in this paper were directly input into a long short-term memory network (LSTM) for evaluation (M7) [49].
(8) The feature samples constructed in this paper were directly input into a deep belief network (DBN) for evaluation (M8) [50].

The method proposed in this paper is denoted as M9, and the total sample size was set to 216. The average evaluation results of 10 experiments are shown in Table 10 and Figure 15. From these evaluation results, the following conclusions can be drawn:

(1) Since the M1 method does not adaptively optimize the key parameters when the diesel engine cylinder head vibration signal is decomposed by VMD, the noise components and the effective health-status components of the signal cannot be separated effectively; its evaluation accuracy is therefore lower than that of the method in this paper.
(2) The M2 method does not fuse the feature vectors, so redundant information between the multi-channel signals drowns out the effective health-status information, while the M3 method achieves PCA dimensionality reduction but fails to deeply reconstruct and enhance the data features. Therefore, the evaluation accuracies of the M2 and M3 methods are lower than that of the method in this paper.
(3) The M4 method inputs the one-dimensional vibration signal directly into the 1DCNN network. Since the signal contains many noise components without noise reduction and feature extraction, the signal-to-noise ratio of the effective components is very low; a good mapping between the input and the evaluation result cannot be formed, resulting in a poor evaluation result.
(4) Although the M5 and M6 methods achieve an effective evaluation of the fused feature samples, with accuracies above 90%, neither the SVM nor the Softmax layer possesses the large parameter space and nonlinear mapping ability of the 1DCNN, so the evaluation performance of these two methods is still lower than that of the method in this paper.
(5) Because the feature sample size in this paper is small and the parameter spaces of the LSTM and DBN networks are large, these networks cannot be fully trained, which leads to evaluation accuracies lower than that of the method in this paper.

In summary, compared with other common state-of-health evaluation methods, the proposed method for evaluating the health status of the diesel engine valve clearance shows good accuracy and superiority.

Conclusions

The valve train is one of the key components of a diesel engine. If the valve clearance of a diesel engine continues to increase abnormally, it may cause serious failures such as cylinder collision or valve breakage, and it also threatens the safety of operators. Against this background, this paper proposed a health status assessment of diesel engine valve clearance based on BFA-BOA-VMD adaptive noise reduction and multi-channel information fusion, which effectively improved the accuracy of the assessment of the diesel engine valve clearance health state. Through a simulated valve clearance degradation experiment, multi-channel vibration signals of the diesel engine cylinder head were collected, and the BFA-BOA-VMD method was used to realize adaptive noise reduction of the signals and the selection of effective IMF components, improving the signal-to-noise ratio. The feature sensitivity analysis then selected the time-domain and frequency-domain characteristic parameters that are sensitive to the health state of the valve clearance. Feature fusion and dimensionality reduction based on the SSAE network further reconstructed and enhanced the data features, highlighting the state information. Finally, health status assessment experiments were conducted on the preset valve clearance experimental data set. The experimental results show that, compared with common state assessment algorithms, the proposed method has good evaluation accuracy and superiority. In summary, the method for assessing the health status of diesel engine valve clearance proposed in this paper, as an important technical means in equipment health management, has important application prospects and significance and provides new theoretical support and ideas for equipment health management and assessment. In future research, more effective signal noise reduction methods, feature fusion methods with stronger reconstruction capability, and deep network models with stronger learning ability can be further studied.

Data Availability Statement: The study data used in this article can be obtained by contacting the corresponding authors.

Conflicts of Interest: The authors declare that there are no known competing financial interests or personal relationships that could have influenced the work reported in this article.
Molecular characterization of Mycobacterium tuberculosis isolates from pulmonary tuberculosis patients in Khartoum, Sudan

Background: The aim of this study was to characterize the drug resistance profile and the specific lineages of Mycobacterium tuberculosis (MTB) strains isolated from patients with pulmonary TB in the state of Khartoum in Sudan.

Methods: Consecutive sputum samples and clinical data were collected from 406 smear-positive patients with pulmonary TB in 2007-2009. The samples were cultured, drug susceptibility testing (DST) was performed using the proportion method (PM) on solid Löwenstein-Jensen medium, and species were identified using biochemical methods at the National Reference Laboratory (NRL) in Khartoum. Deoxyribonucleic acid extracted from a total of 120 isolates, 60 suspected multidrug-resistant (MDR) isolates and 60 non-MDR isolates, was subsequently sent to the WHO supranational reference laboratory (SRL) in Stockholm at the Public Health Agency of Sweden for confirmation of the drug resistance profile, examination by line probe assay (LPA), and molecular epidemiology analysis with spoligotyping.

Results: The LPA results agreed 100% with the DST results obtained by PM at the NRL for the non-MDR strains and 62% for the suspected MDR strains. Two strains initially identified as MDR-TB using the PM were later shown by the Hain GenoType Mycobacterium CM/AS assay to belong to the Mycobacterium avium complex (Mycobacterium intracellulare); these two strains were excluded from the study material for further analysis. The remaining 58 MDR strains were analyzed using LPA, and 36 strains were confirmed as MDR, 10 as rifampicin-monoresistant, and eight as isoniazid-monoresistant. Spoligotyping of all 118 MTB isolates revealed a total of 115 patterns, in which four patterns represented major clusters comprising a total of 108 (91%) of the strains. The CAS1_Delhi family was the predominant type, detected in 62 isolates (52%), of which 26 were MDR and 36 were susceptible. It was followed by the H3 family with 19 (16%) strains, 11 Latin American Mediterranean 3 (LAM3) strains, 16 T2/T1 strains, and two strains each of the Beijing and S lineages.

Conclusion: Comparison of the DST results obtained using PM and LPA showed 100% agreement for the non-MDR strains but only 62% for the MDR strains. Taking into consideration the time, the risk of contamination, and the labour cost of identifying MDR-TB, the LPA has clear advantages over the PM in the early detection of MDR-TB. Additionally, in this study material, spoligotyping revealed CAS1_Delhi as the most predominant family, and no major difference in lineages was seen between MDR and non-MDR strains.

At the time of this study, the NRL, located in the capital city of Khartoum, was the only laboratory with the capacity to perform DST in a country with a population of over 40 million. In recent years, a significant reduction in TB-related mortality and a decline in the incidence and prevalence of all forms of TB have been observed. [1,2] The HIV positivity rate among TB patients in Sudan (7.7%) is the highest in the EMR, necessitating expanded HIV testing in all TB management units (TBMUs). In 2016, only 3659 (17%) of all notified TB cases were screened for HIV. Almost all, 279 (99%), of the patients who were also HIV positive started antiretroviral therapy. Today, the NRL has a molecular laboratory and, in addition to routinely performed DST with the proportion method on solid media (PM), also examines drug resistance with the line probe assay (LPA) and GeneXpert.
To upgrade the capacity to control TB countrywide, the national TB laboratory network comprises four regional TB culture laboratories equipped with GeneXpert and, in addition, 13 state laboratories with GeneXpert.

Methods

A total of 406 consecutive sputum samples were collected from smear-positive TB patients and cultured on Löwenstein-Jensen (LJ) medium in 2007-2009 at the Sudanese NRL in Khartoum. Biochemical species identification of the strains was done using nitrate and catalase tests. Confirmation of the DST results and further characterization of the drug resistance profile were done at the SRL for the 58 MDR and the 60 susceptible strains using GenoType MTBDRplus (Hain Lifescience, Nehren, Germany), according to the manufacturer's recommendations. [4] Molecular characterization to identify clusters was performed with spoligotyping as described by Kamerbeek et al. [5] The whole direct repeat region of the TB genome was amplified with the DRa and DRb primers. The amplified products were hybridized to a set of 43 oligonucleotides, and the hybridized polymerase chain reaction products were detected after incubation with streptavidin-peroxidase using the enhanced chemiluminescence system (Amersham) and visualized with a charge-coupled device camera (MF-ChemiBis 3.2). [5] Spoligotypes in binary and octal format were entered into an Excel spreadsheet and compared to an updated Spoligotype International Type-VNTR International Type (SITVIT) database of the Pasteur Institute of Guadeloupe. [6] The octal codes found for each strain were entered into the SITVIT2 database, which is the updated version of the previously released spoligotyping database. In this database, SIT designates spoligotypes shared by two or more isolates included in the database. The SITVITWEB incorporates multimarker data, which gives a global vision of MTB complex genetic diversity. The database contains clinical isolates from >62,000 patients coming from 105 different countries. Major phylogenetic clades were assigned according to the signatures provided in the database, which defines 62 genetic lineages/sublineages. The database contains around 3000 SITs with global genotyping information on about 54,000 clinical isolates from >153 countries of origin. [6,7] The patterns obtained were analyzed by visual examination and by sorting the results in Bionumerics software version 5.1 (Applied Maths, Keistraat, Belgium). [8] A spoligotype cluster was defined as two or more strains sharing identical patterns. If no matching spoligotype was identified in the database, the isolate was defined as orphan or unique. Finally, a dendrogram was constructed using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Jaccard's distance method.

Drug susceptibility testing and external quality assurance results

In total, 406 samples were cultured from patients with pulmonary TB and subjected to DST using the PM at the NRL. The results show that 41.4% (123/297) of the strains isolated from patients with no previous history of TB exhibited resistance to at least one anti-TB drug. In the previously treated patients, this proportion was 60.6% (66/109). The initially estimated rate of MDR was 14.8% (60/406); after exclusion of the two misclassified Mycobacterium intracellulare strains, it was corrected to 58/404 (14.4%), with 28/297 (9.4%) among the newly diagnosed patients and 30/109 (27.5%) among the patients with a relapse or treatment failure.
Spoligotyping results

Spoligotyping was performed on the 118 TB strains to examine lineages and clustering and showed 112 different patterns [Figure 1 and Table 2]. The Central-Asian 1 (CAS1) Delhi family was the most predominant, detected in 49 isolates (43%), 20 of which were MDR and 28 susceptible [Table 2 and Figure 2].

Discussion

Two strains previously misclassified as MDR-TB at the NRL were shown to be M. intracellulare, illustrating the risk of overestimating the MDR-TB problem in a setting where reliable species identification tools are missing. In addition, a number of isolates initially classified as MDR by phenotypic DST were shown by LPA to be RIF- or INH-monoresistant isolates. If these initial results had been trusted, several patients who could be treated with RIF or INH would have received suboptimal therapy with second-line drugs. With the molecular tests now used at the NRL in Sudan, these mistakes would not have happened, which illustrates that molecular tests offer not only a more rapid but also a more specific detection of MDR-TB. Since global reports indicate that MDR and extensively drug-resistant TB are on the rise, early identification of resistant strains and prompt initiation of correct treatment have a great impact on the control of TB. [9] Today, we have molecular tools which can be easily integrated into TB microscopy centers located in remote areas. Strengthening the microscopy centers and having a strong central NRL, which can give training and support, can improve the early detection of TB patients. These new rapid molecular tools, in combination with the WHO-recommended algorithms, can strengthen laboratory capacity to detect resistant strains faster. Indirectly, this will improve the cure rate of TB patients and limit the transmission of resistant strains. [10] Several studies have shown that molecular techniques, such as GenoType MTBDRplus, have clear advantages compared to time-consuming and laborious techniques such as the PM. [11,12] In this study, we used spoligotyping to identify the most dominant clones of MTB circulating in Sudan. Since half of the strains were MDR and half susceptible, we aimed to gain insight into the ongoing transmission among MDR and non-MDR TB cases [Figures 2-4]. A spoligotype study done in Sudan in 2002 reported the dominance of a single genotype, with more than half of the strains studied (29/49) sharing the same spoligotype. [13] Another study by the same author in 2011 found that 35% of the strains belonged to the CAS1_Delhi spoligotype family and 14% belonged to the same lineage but with different SIT numbers. [14] Spoligotyping of TB isolates from the Sudanese patients shows that highly diverse clades are circulating in the country. The CAS1_Delhi family was the predominant shared type, detected in 48 isolates, followed by the H3 family with 16 isolates, four isolates of the LAM3 family, eight isolates of the T2/T1 family type, and two isolates of the Beijing type. We found that 36 of the isolates were not present in the SpolDB database and were defined as orphans. In previously published papers from Saudi Arabia and Ethiopia, the CAS1_Delhi family was found to be the major family circulating in the region. [15] Visual inspection of the spoligotype patterns in Figure 3 (MDR) and Figure 4 (non-MDR) suggests that there are significant similarities within each group. This may indirectly indicate that there is some active transmission within both groups.
There is a need to further explore transmission rates among MDR and non-MDR patients in more in-depth studies. The main goal of this study was to compare the phenotypic results obtained at the NRL with those of molecular techniques. In addition, spoligotyping was used to see which patterns circulate in Sudan, to investigate possible differences between the MDR and non-MDR groups, and to increase knowledge of ongoing transmission in these two groups. Since the study material covered a limited number of patients, we cannot give a concrete picture of the transmission rate. To give a clearer idea of transmission rates and of which clones are dominant, a broader study material is therefore needed. Such study designs need information on all detected TB cases, including names of frequent contacts, sites of residence, healthcare, work, and social activities. In this study material, we found only two Beijing strains, one MDR and the other non-MDR, which suggests that the Beijing type is not yet widely spread in the country.

Conclusion

Spoligotyping of TB isolates from the Sudanese patients shows that highly diverse clades are circulating in the country. The CAS1_Delhi family was the predominant shared type, detected in 48 isolates, followed by the H3 family with 16 isolates, four isolates of the LAM3 family, eight isolates of the T2/T1 family type, and two isolates of the Beijing type. We found that 36 of the isolates were not present in the SpolDB database and were defined as orphans [Figure 4]. In previously published papers from Saudi Arabia and Ethiopia, the CAS1_Delhi family was found to be the major family circulating in the region. [15]

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
2018-09-16T05:47:12.671Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "be873924fe7faebb2829398edbf52cf9a1068c29", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijmy.ijmy_82_18", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "619ca8f8ce848436ff45e6ed5e752d9ce7f7a445", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
245249304
pes2o/s2orc
v3-fos-license
Juridical Review of Criminal Acts of Sexual Violence Against Women (Case Study in the Women and Children Service Unit of the Tangerang Police)

Abstract

Violence against women has increased in recent years. Violence occurs anywhere, anytime, and to anyone. Sexual violence against women often occurs because of a value system that places women as weak and inferior beings compared to men, and likewise because of the view that the female body is a medium or tool to satisfy male lust. Given the large number of cases of sexual violence, the Women and Children Service Unit of the Tangerang Police tries to resolve all cases without any backlog. This study aims to determine how cases of sexual violence against women are handled in the Women and Children Service Unit of the Tangerang Police, and to identify the obstacles in the process of handling such cases. The researchers used the juridical-empirical method, also called field research, in which the main source of data is obtained from the field. The results of the study reveal that the Women and Children Service Unit carried out the handling in accordance with Articles 20 and 21 of the Regulation of the National Police Chief Number 14 of 2012 concerning Management of Criminal Investigations.

Preliminary

Sexual violence against women is not only a domestic or personal problem, but has become a societal problem. Currently, sexual violence against women can happen anywhere, in the form of sexual harassment, rape accompanied by torture and murder, and so on. 1 Although the issue of sexual violence against women has been revealed as a serious social problem, it still does not receive an adequate response, whether from the government, law enforcement officials, or the community in general. Law enforcement efforts carried out by the government cannot be separated from the police, in accordance with the main tasks of the Indonesian National Police regulated in Article 13 of Law Number 2 of 2002 concerning the Indonesian National Police, namely maintaining public security and order, upholding the law, and providing protection, shelter, and services to the community. The program mentioned above has been followed up by the Criminal Investigation Unit (Satuan Reskrim) of the Tangerang Police and its staff, but the journey to success does not always run in a straight line toward the intended goal. The fact is that the rate of crimes of sexual violence against women continues to increase. The high crime rate is suspected to be driven by economic and social problems in the community. In addition to economic problems, it is also suspected that the lack of public attention has encouraged perpetrators to commit these crimes. The level of sexual violence against women can also be influenced by economic and social problems, which have a very large bearing on these crimes.

1 Tenripadang Chairan, "Tentripandang, Analisis Yuridis Tindak Kekerasan Seksual Terhadap Permpuan," jurnal legalitas 8, no. 2 (2010): 2.
Research methods

Legal research as sociological (empirical) research can be realized in research on the effectiveness of the law currently in effect or research on legal identification. 2 This research is also often referred to as research on the workings of law (law in action) in society. 3 The approach in this empirical legal research is a socio-legal approach. This approach draws on various social and legal disciplines to examine the existence of positive law (state law), because it is able to provide a more holistic view of legal phenomena in society. 4

Handling of cases of sexual violence against women in the Women and Children Protection Unit of the Tangerang Police

Cases of sexual violence against women occurring in the jurisdiction of the Tangerang Police fall within the scope of the Women and Children Service Unit, in particular the Women and Children Protection Unit of the Tangerang Police Criminal Investigation Unit, whose task, as described above, is to handle and resolve all criminal cases of this kind that occur in the jurisdiction of the Tangerang Police. In analyzing the problems in this thesis, the author uses the theory of management of criminal investigations as regulated in the Chief of Police Regulation No. 14 of 2012, which consists of planning, organizing, implementation, and supervision and control. Based on this theory, the author describes the process as follows:

1. Planning

In carrying out their duties, all reports filed by the community that enter the Women and Children Service Unit of the Criminal Investigation Unit require a Sprin Lidik (investigation warrant) and a Sprin Sidik (investigation order). Once the Sprin Lidik and Sprin Sidik have been issued, an investigation plan is drawn up in order to determine the next steps and goals. The Women and Children Service Unit of the Tangerang Polresta stated that: "For every report that comes in, we will definitely carry out an inquiry and investigation so that the case can be handled and resolved quickly, but we must also always make an investigation plan so that our activities are directed and every step we take is done according to the rules." 5 Thus, the activities of the Women and Children Service function of the Tangerang Criminal Investigation Unit, in this case the Women and Children Service Unit, do not deviate from the specified steps, so that neither the Head of the Sub-unit nor its members are exposed to pretrial challenges from the community. Each activity plan is delivered both verbally and in writing; an example of verbal delivery is the morning assembly of the functional unit or the Leadership Direction Event when carrying out activities. In addition to verbal delivery, orders and plans are also delivered in writing by entering them into the operational activity tabulation data panel.

5 Interview with the KANIT for Women and Children Services at the Tangerang Police, IPTU Suharjo, S.H., M.Si., n.d.
In every activity carried out by the Women and Children Service Unit described above, the author concludes that the unit acts in accordance with the planning activities contained in the Regulation of the National Police Chief Number 14 of 2012 concerning the management of criminal investigations, namely making an inquiry plan and an investigation plan.

2. Organizing

For organizing, every 5 (five) days a member of the Women and Children Service Unit carries out a joint picket with the picket of another function, whether General Crimes, Special Crimes, Corruption Crimes, or Crim. Reports that come in through the Integrated Police Service Center guard are then passed to the respective unit functions in accordance with the existing functional areas. Once a case has entered the respective function, such as the Women and Children Service Unit, the picket function on duty at that time is responsible for handling the case until it reaches P21 status. The Women and Children Service Unit of the Tangerang Police stated that: "Usually the division of tasks assigned by the Head of the Sub-unit for an incoming case is charged to the picket on duty at that time, so that the member who receives the case really understands what happened, making it easier to resolve the case. However, policewomen are usually prioritized to accompany and examine child and female victims. For now, there is a shortage of one Polki, so at the time of arrest they ask for help from members of Unit 1." 6 The organization carried out by the Head of the Sub-unit for Women and Children Services at the Tangerang Police Criminal Investigation Unit, in assigning members to carry out inquiries and investigations, is in accordance with Articles 20 and 21 of the Regulation of the National Police Chief No. 14 of 2012 concerning Management of Criminal Investigations, which stipulate that organizing activities are carried out by superiors. In the Women and Children Service Unit of the Tangerang Police, the handling of each incoming case is organized on the basis of picket rosters. According to the author, such an arrangement is not in accordance with Article 21 of the National Police Chief Regulation No. 14 of 2012, which stipulates that assignment must be based on the ability and competence of investigators. Assignment per picket may not produce maximum results, because the officer on picket duty on a given day does not necessarily have the competence to resolve the cases that come in on that day.

3. Implementation

The handling of cases of sexual violence against women is carried out when a report is submitted to the Women and Children Service Unit, whether by victims who have experienced the criminal act or by family members of victims of sexual violence against women. The Service Unit for Women and Children of the Tangerang Police stated that: "We handle every case that comes in, especially cases of sexual violence against women, when we get reports from families or from victims who have experienced acts of sexual violence. In order to obtain complete and maximum information without any cover-up, we assign policewomen to the examination of child and female victims." 7 By assigning a policewoman to a case, especially a case of sexual violence against women, the leadership hopes that testimony will be given openly and without awkwardness about everything that has happened, even though it is a disgrace, in order to expedite the process of resolving existing cases.
A member of the Women and Children Service Unit of the Tangerang Police Criminal Investigation Unit stated: "In handling cases of sexual violence against women, the complainant, whether the victim or the victim's family, is served by the members of the Women and Children Service Unit who receive the complaint. After complete testimonies and evidence have been collected, we proceed with the arrest of the suspect." 8 From the interviews conducted with the Tangerang Police Criminal Investigation Unit, IPTU Suharjo, S.H., M.Si., and with a member of the Women and Children Service Unit of the Tangerang Police, IPTU Maskuri, it emerges that the settlement of cases begins with receiving reports from the Integrated Police Service Center, investigating the case to determine whether or not it occurred, preparing investigators, issuing a Sprin (warrant), examining victims, obtaining the visum (forensic medical report), summoning witnesses, arresting suspects, holding case presentations, and compiling and sending the case files to the Public Prosecutor. In addition to the above procedure, the Women and Children Service Unit also issues a Notification Letter on the Progress of Investigation Results, which is sent to the victim and to the victim's family who made the report. The Women and Children Service Unit of the Tangerang Polresta stated: "Every time we conduct an investigation, we from the police provide this letter to the victim so that the victim feels that the case they reported is still being worked on." 9

4. Supervision and Control

Supervision and control, commonly called Wasdal, is the leadership's analysis and evaluation of the duties of members. The supervision and control carried out by the Criminal Investigation Unit is a form of participation and a form of control exercised by superiors over the implementation of tasks, holding members accountable for the results of their handling of cases of sexual violence against women. The Criminal Investigation Unit performs analysis and evaluation of the results of the members' work twice a week. 10 The above findings did not reveal any violations committed by members of the Women and Children Service Unit of the Tangerang Police Criminal Investigation Unit in handling cases of sexual violence against women. From all these explanations regarding the management of investigations of cases of sexual violence against children by the Women and Children Service Unit of the Tangerang Police, consisting of planning, organizing, implementation, and supervision and control, the handling is in accordance with the investigation management guidelines contained in the National Police Chief Regulation No. 14 of 2012 concerning Management of Criminal Investigations.

Barriers faced in handling sexual violence against women in the Women and Children Service Unit of the Tangerang Police

The handling of cases of sexual violence against women by the Women's Protection Unit of the Tangerang Police Criminal Investigation Unit does not always run smoothly; there are, of course, obstacles during the investigation process. The obstacles experienced by the Women and Children Service Sub-unit of the Polresta are described below.

9 Interview with the KANIT for Women and Children Services at the Tangerang Police, IPTU Suharjo, S.H., M.Si.
10 Interview with the KANIT for Women and Children Services at the Tangerang Police, IPTU Suharjo, S.H., M.Si.
Based on the results of interviews conducted by the author with investigators at the Women and Children Protection Unit of the Tangerang Police, the author found several obstacles faced by investigators in uncovering criminal acts of sexual violence against women. The first obstacle is that the perpetrator of the crime learns that he has been reported by the victim to the police. Perpetrators who have been reported will usually run away and hide in certain areas or cities before being caught by investigators. Based on the interviews, investigators experience problems when perpetrators of crimes of sexual violence against women flee to another city. The second obstacle is that the lack of information about the perpetrator also makes it more difficult for investigators to find him. Investigators have difficulty tracking the whereabouts of perpetrators who have fled when they do not know the perpetrator's face and the signal of his cell phone is inactive. The information obtained by the investigators is often limited to the perpetrator's physical characteristics, home address, telephone number, and temporary whereabouts, so that investigators find it difficult to establish his location clearly and to match it with the results of the investigation in the field. In the summary of data from early January, there were 21 cases of violence. The evidence used for the crime of sexual violence against women consists of the clothes worn by the victim at the time the crime of sexual violence occurred, the visum (forensic medical examination) carried out on the victim, and the evidence confiscated from the victim. At the Tangerang Police, investigators from the Women and Children Protection Unit often encounter some of the obstacles listed in the discussion above, starting from the first problem. The following explains the efforts of investigators to uncover criminal acts of sexual violence against children. The efforts made are as follows. In the first effort, the investigators cooperate with the police from various regions to find the whereabouts of the perpetrators and secure them. If the perpetrator is in a location that is quite dangerous, the investigators bring sufficient personnel to help secure the area when the perpetrator is arrested. The next effort concerns women who are victims of criminal acts of sexual violence, especially in cases of rape, who have experienced severe physical and psychological trauma: the investigator provides assistance from a psychologist. Assistance by a psychologist, the victim's family, a lawyer, or someone trusted by the victim is very helpful to the victim during the ongoing investigation process, so as not to cause fear. The second, preventive effort carried out by the Tangerang Police investigators is to carry out socialization activities in collaboration with various urban villages, sub-districts, villages, universities, institutions, and non-governmental organizations in the city of Tangerang. The purpose of socialization about sexual violence against women is for the public to understand and know about these crimes, and to increase community participation and public legal awareness of the dangers of criminal acts of sexual violence against women, which have occurred frequently, by providing counseling, putting up posters in public places such as malls, train stations, and terminals, and cooperating with the mass media.
As a third effort, the investigators also carry out search activities by visiting places in certain areas where the crime can occur. Such locations become vulnerable points, because crimes of sexual violence against women can also arise from environments like these. The search activity is also carried out routinely. 11

Closing

Based on the description of the results and discussion, the following conclusions are drawn: 1. The handling of cases of sexual violence against women in the Women and Children Service Unit of the Tangerang Police is in accordance with the investigation management guidelines contained in the National Police Chief Regulation No. 14 of 2012 concerning the Management of Criminal Investigations (Das Sollen). However, in the handling process there are several factors that are not in accordance with that management, namely the difficulty of questioning perpetrators and witnesses, which makes it hard for investigators to obtain information and, in turn, to complete the criminal case file (Das Sein). 2. The obstacles faced by investigators in handling criminal acts of sexual violence against women are the difficulty of extracting information
2021-12-17T16:32:49.116Z
2021-12-15T00:00:00.000
{ "year": 2021, "sha1": "4f402b51285a20987566546c8aacf1969e126b7d", "oa_license": "CCBYSA", "oa_url": "https://jurnal.untirta.ac.id/index.php/nhk/article/download/12417/8281", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4f358aaabe457fed97058881680eaabf239f8911", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [] }
57742372
pes2o/s2orc
v3-fos-license
Hepatitis C-related membranoproliferative glomerulonephritis in the era of direct antiviral agents

Abstract

Membranoproliferative glomerulonephritis (MPGN) is the most typical Hepatitis C virus (HCV)-associated glomerulopathy, and the available data on the use of direct-acting antivirals (DAA) in HCV-associated glomerulonephritis are inadequate. We evaluated the renal and viral response in two cases of HCV-related MPGN; the first was caused by cryoglobulinemia while the second was cryoglobulin-negative. Both patients received immunosuppression besides DAA in different regimens. They achieved partial remission but remained immunosuppression-dependent for more than 6 months after DAA despite sustained virological response, which enabled safer but incomplete immunosuppression withdrawal. Both patients were tested for occult HCV in peripheral blood mononuclear cells and found to be negative. Hence, the treatment of HCV-related MPGN ought to be guided by the clinical condition and the effects of drug therapy. It is important to consider that the renal response can lag behind the virological response.

Introduction

Hepatitis C virus (HCV) can cause extra-hepatic manifestations including nephropathy and mixed cryoglobulinemia (MC). Glomerular diseases are the most common type of nephropathy related to HCV 1 . The most common form of HCV-associated glomerulopathy is immune-complex-mediated membranoproliferative glomerulonephritis (MPGN), which is linked to type II MC. Less commonly, it can also occur in the absence of cryoglobulinemia 2 . Generally, patients with HCV infection have a greater risk of end-stage renal disease (ESRD) (4.3/1000 person-years) compared to patients without HCV infection (3.1/1000 person-years) 3 . Timely and effective management of HCV-related glomerular disease is paramount to improve the outcome. Various approaches have been recommended, including immunosuppressive therapy (corticosteroids, cytotoxic agents, and monoclonal antibodies) and antiviral therapy. These regimens should be considered according to the level of proteinuria and kidney failure 4 . According to the Kidney Disease: Improving Global Outcomes (KDIGO) recommendations, patients with mild or moderate forms of HCV-associated glomerulonephritis, stable renal function, and non-nephrotic proteinuria ought to be treated first with a direct-acting antiviral (DAA) regimen.
Patients with significant cryoglobulinemia or serious glomerular disease caused by HCV (i.e., rapidly worsening renal dysfunction or nephrotic-range proteinuria) should be considered for immunosuppressive therapy, with or without plasmapheresis, plus DAA therapy. Furthermore, patients who do not improve with, or cannot tolerate, DAA have to be managed with an immunosuppressive regimen 1 . Inadequate evidence exists on the use of DAAs, as well as on the safety and efficacy of mycophenolate mofetil (MMF), in patients with HCV-associated glomerular disease 5,6 . Therefore, we set out to evaluate two cases of HCV-related MPGN with acute worsening of renal function who were managed with immunosuppression and DAA treatment, and to assess the effects of their management on renal function and HCV viral load.

Case studies

Case 1: A 42-year-old woman presented with a 1-month history of lethargy, low-grade fever, large-joint arthritis, a vasculitic lower-limb skin rash, bilateral ankle edema, and facial puffiness. Her creatinine had increased from 1.8 mg/dL (eGFR = 34 mL/min/1.73 m²) to 2.3 mg/dL at presentation. She had proteinuria of 1275 mg/d and microscopic hematuria. Serology for HBV and HIV was negative, with positive HCV antibodies and a viral load of 29,300 copies/mL. Cryoglobulins were positive, with consumed C3 and C4 and a positive rheumatoid factor. Kidney biopsy revealed MPGN with fibrocellular crescents. Creatinine and proteinuria then progressively declined to 1.6 mg/dL and 1.33 gm/d, respectively, and the patient achieved partial remission by the 6th dose of cyclophosphamide (Figure 1). She also achieved a sustained virological response (SVR), with undetectable viral RNA 12 weeks after DAA therapy. However, when a reduction of the prednisone dose to 10 mg/day was attempted, the patient relapsed, with a rise in proteinuria to 6.3 gm/day. This was 4 months after DAA therapy and 1 week after the last cyclophosphamide dose. Prednisone was then increased to 30 mg/day and MMF was instituted (1 gm bid). Proteinuria then declined to 1.56 gm/day and partial remission was resumed. Five months later, she had another relapse, again during steroid withdrawal. Occult HCV (HCV-RNA in peripheral blood mononuclear cells (PBMCs)) and cryoglobulins were undetectable when tested about 10 months after DAA therapy. After a follow-up period of 2 years and 6 months, the patient is on 5 mg/day prednisone and 1500 mg/day MMF.

Case 2: A 62-year-old woman presented with acute deterioration of kidney function while being evaluated for HCV treatment with DAA, associated with lethargy, pallor, an extensive vasculitic eruption over her lower limbs, and marked splenomegaly. The patient had been diagnosed with HCV in 2004 and had a skin biopsy of the vasculitic rash revealing leukocytoclastic vasculitis. She was then treated with a combination of standard interferon and ribavirin, which failed to clear the virus. Her creatinine, which had been 1.05 mg/dL, increased in 2013 to 1.7 mg/dL (eGFR = 33.4 mL/min/1.73 m²) and continued to increase gradually thereafter. In 2015, during the current presentation, creatinine increased from 2.05 mg/dL to 3.18 mg/dL over 5 months. She started immunosuppression with tapering steroids (methylprednisolone 500 mg/d for 3 days, then prednisone 60 mg/d) and MMF (1 gm bid) (Figure 2). Creatinine declined to 2.19 mg/dL (eGFR = 23.9 mL/min/1.73 m²) and proteinuria reached 1814 mg/day within the first month. Gradual steroid withdrawal was then implemented.
The MMF dose was reduced and then switched to mycophenolate sodium (720 mg bid) due to resistant diarrhea. Over the next 6 months, her creatinine remained stable, with proteinuria declining to 488 mg/d. However, the patient relapsed when steroid reduction to a daily dose of 15 mg prednisone was attempted. Her creatinine increased to 2.9 mg/dL and the proteinuria to 3823 mg/d, so the previous daily dose of 20 mg was resumed and successfully brought her kidney function and proteinuria back to the previous values. Nine months after starting immunosuppression, she started DAA therapy with ombitasvir (12.5 mg), paritaprevir (75 mg), and ritonavir (50 mg) (2 tablets od; "Querivo®"). Ribavirin (RBV, 200 mg od) was added initially and discontinued after 2 months due to severe resistant anemia despite erythropoietin therapy, frequent blood transfusions, and RBV dose reduction to 200 mg every other day. The patient's HCV RNA was undetectable by the end of DAA therapy and after 12 weeks. After the antiviral therapy, the patient went into partial remission with stable kidney function (creatinine 1.96 mg/dL; eGFR 27.1 mL/min/1.73 m²) and proteinuria of 446 mg/day. She relapsed when a reduction of the mycophenolate dose was attempted 6 months after the antiviral therapy. However, careful steroid withdrawal was feasible until alternating daily doses of 5 and 7.5 mg were reached. Occult HCV was also undetectable when tested in PBMCs.

Discussion

We report the cases of two patients with HCV-related MPGN; the first had MPGN caused by cryoglobulinemia while the second was cryoglobulin-negative. Both patients had an indication for treatment with immunosuppression in addition to antiviral therapy, and both subsequently achieved partial remission. However, the patients remained immunosuppression-dependent for more than 6 months after DAA despite achieving SVR, which enabled safer but incomplete immunosuppression withdrawal. The beneficial impact of viral clearance after antiviral therapy on proteinuria and kidney function in patients with HCV-related MPGN has been well established 7,8 . However, clinical recovery allowing immunosuppression withdrawal is expected to be delayed relative to the viral response, given that clearance of cryoglobulins can lag behind viral clearance by 4-6 weeks 9 . Moreover, studies agree that the attainment of HCV RNA negativity does not always confer resolution of HCV-related cryoglobulinemia. According to the case series reported by Sise et al. 8 , cryoglobulin levels decrease, reaching a nadir at a median of 4.6 months (range 22 days to 13 months). Six of seven patients with HCV-related cryoglobulinemia complicated by MPGN achieved SVR, and 3 of them had persistent cryoglobulin positivity. Patients with active GN experienced an improvement in eGFR and a reduction in proteinuria after successful treatment with DAA therapy. Six of the seven patients received immunosuppression before initiation of DAA therapy and only one received immunosuppression concurrently with antiviral therapy 8 . Another case series reported five patients with HCV complicated by MC who had persistence of cryoglobulins, with the resulting untoward clinical ramifications, despite achieving SVR after triple antiviral therapy. More interestingly, as in our first case, clearance of cryoglobulins did not necessarily ensure the resolution of clinical symptoms 10 . There is no clear mechanism to explain the lag of the immunologic and/or clinical response behind the viral clearance.
It can be argued that it takes some months for significant declines in cryoglobulin concentrations to follow successful antiviral therapy 11 . This period may be longer in patients with more advanced chronic HCV, who may have a decreased ability to clear immune complexes 10 . Another explanation is the presence of occult HCV infection despite a negative serum viral load. A strong association was evident between occult HCV and immune-mediated GN when patients with negative serum HCV-RNA were tested for occult HCV-RNA in PBMCs or in serum after ultracentrifugation 12 . Similarly, the persistence of MC after HCV clearance is associated with the detection of occult HCV in PBMCs 13 . Interestingly, MPGN could even be induced by the persistence of HCV antigen in the kidney in patients with a negative HCV viral load and negative occult HCV in hematopoietic cells. In 3 patients with MPGN associated with MC and a history of HCV infection confirmed by the presence of serum anti-HCV antibodies with a negative viral load, no evidence of occult HCV infection was found in PBMCs or in the cryoprecipitate, but HCV-NS3 antigen was present in the kidney biopsy of one of them 14 . Our patients were tested for occult HCV as a putative mechanism driving the ongoing immune activity, and viral RNA was undetectable in PBMCs 15 . We suggest that some form of immune dysregulation may be responsible for the persistent immune activity after viral clearance. The presence of a "point of no return" in the natural history of such a lymphoproliferative disorder, with progressive independence from the etiologic agent, cannot be excluded 13 . On the other hand, prescribing immunosuppression for patients with severe forms of HCV-related glomerular disorders leads to an improved outcome. In a prospective study, rituximab combined with Peg-IFN-α/ribavirin in HCV-associated MC had a better outcome than Peg-IFN-α/ribavirin alone 16 . Patients with nephrotic-range proteinuria and/or rapidly progressive kidney failure or an acute flare of cryoglobulinemia should receive plasmapheresis, rituximab, or cyclophosphamide in conjunction with methylprednisolone and concomitant antiviral therapy 17 . The choice of immunosuppressive regimen should be individualized, taking into account many factors: age, the severity of liver and renal involvement, extra-renal manifestations, any previous treatment, contraindications or adverse events, and the balance between immunosuppression and virus activity 6 . Rituximab is recommended as the first line and cyclophosphamide as the alternative immunosuppressant 1,17 , while the use of MMF has been reported as an effective and safe alternative in a few cases 5,6 . In our first patient, we used cyclophosphamide as the initial immunosuppressant and then used MMF as the alternative when the patient had a late relapse. In the second, we used MMF as the initial immunosuppressant given her older age, prolonged history of chronic hepatitis C, long duration of kidney involvement, and the presence of chronic anemia; the clinical decision was made to avoid the high-dose cyclophosphamide regimen. In both patients, we found MMF to be safe and effective, with no major side effects except recurrent upper respiratory tract and urinary tract infections in the first patient and diarrhea in the second, which settled upon replacing MMF with mycophenolate sodium. In conclusion, the management of HCV-related MPGN should be tailored to the clinical severity and the response to treatment.
The renal response can lag behind the virological response. Therefore, it is critical to begin immunosuppression with or before the antiviral treatment in severe cases, and to maintain immunosuppression even after the HCV viral load becomes negative.
2019-01-09T14:06:37.564Z
2018-05-01T00:00:00.000
{ "year": 2021, "sha1": "a9e43ef572f478cf6b1244995d1d8086ff2073ac", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/jbn/a/B68Rq3qPdNzwry49Zn6thhx/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5b71a8a25ed3d838a7db8bff81ca931edd75d693", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
242385246
pes2o/s2orc
v3-fos-license
REMOTE LEARNING IN THE CONTEXT OF SCHOOL EDUCATION: CHALLENGES AND OPPORTUNITIES

The COVID-19 pandemic brought a revolution in the teaching-learning process at all levels of education, and school education is no exception. Its impact was felt all around in a variety of ways. Educational institutions, teachers, and teacher educators responded to this situation actively, which ultimately required students and parents to make a transition from the conventional mode of classroom teaching-learning to the online mode of remote teaching-learning. A virtual learning environment was created for the students to manage the situation, as there were no other options available. The present piece of work is an attempt to assess the situation of online education (teaching and learning practices at a distance) at the school level in Odisha. The findings of the study reflect a comprehensive view of the various stakeholders (teachers, students, and parents) associated with the education of children at the school level and the need for an effective and efficient pedagogy for online education. They also establish the need to strengthen the resources required for making the online teaching-learning process effective from the point of view of all stakeholders. The findings further reveal how and why the integration (blending) of technology is essential for capacity building of teachers and to support teaching-learning in the context of school education in Odisha. The implications of this study are also very important for the design, development, and implementation of online education for sustainable quality improvement in school education.

Introduction

With the spread of the coronavirus, schools, colleges, and universities all around the globe were closed. Millions of students are locked in their homes. There are no options other than the online virtual mode, including for schools in our country. Now everybody is looking to create a network of online resources and infrastructure. Every school is working with its management, head teacher, teachers, and parents to interact with its children. The question before the teaching community is: how to continue the teaching-learning process at a distance? Quality instruction (teaching-learning) is very important for the overall cognitive and affective development of students. The need of the hour is to have a complete and comprehensive digital infrastructure to ensure effective learning of students at a distance in the context of school education. With the closure of schools across the country, institutions and individuals are exploring alternative ways to provide education to students using ICTs such as the Internet, TV, and radio. However, access to ICT technologies is limited and a distant dream for many in our country, especially among poor households and in rural and remote places. All educational institutions and individuals are experimenting with various e-learning tools and platforms to deliver content online effectively. The emphasis is on active learning approaches with appropriate instructional design to deliver the content online. Many institutions are now able to provide quality inputs online to students, to a certain extent, while many others are still finding a way to use the platforms effectively. India, with more than 260 million students each year, has the largest school system in the world.
Teaching at a distance is a new experience for teachers: understanding the concept of remote teaching-learning, communicating with students in real time, making use of a variety of e-resources, assigning individual work and/or facilitating collaborative work, assessing learners' understanding, making formative and summative assessments, providing feedback to individual students for further improvement, and so on. The present study comprehensively covers all these aspects in order to understand the whole picture and to suggest a road map for making this initiative a sustainable model of development in the teaching-learning process. COVID-19 has exposed a huge digital divide among our students. In order to learn online via these platforms, a student needs access to the platform and guidance from parents. In terms of access, the prerequisites for learning via an online platform are the internet and a device such as a computer, laptop, tablet, or smartphone. In urban areas, there are complaints that children have to share devices and cannot fully utilize the lessons online due to slow internet connections or a lack of devices. In rural and remote areas, such as the Orang Asal villages in Peninsular Malaysia, Sabah, and Sarawak, internet coverage is limited; some villages do not even have access to electricity, or it is limited to nighttime usage. In order to help and guide the children, parents would need to be educated. One of the unintended consequences of this crisis is its effect on the mental well-being of children. As highlighted by Winthrop, there is a need to support children to ensure their well-being and reduce anxiety during a period that is restless both physically and emotionally. It is therefore important to ensure that there are strategies to help them cope with this "new normal" so that they can express their feelings about this new experience.

About the Study

The whole system of education at all levels, from school to university, has collapsed due to the COVID-19 pandemic. It has adversely affected nearly 320 million students in India, who have transitioned to e-learning. There is a huge digital divide among students and parents. With huge regional, local, and household disparities in access to technology and the internet, this transition has resurfaced long-standing issues of inequality that need to be addressed first to make digitalization fruitful for all. Access to the online platform and appropriate guidance from parents in using the digital platform are very important for students to learn online. The present study is an attempt to assess the situation of providing education to school children in Odisha during the pandemic. It would help in decision-making to strengthen the online platform for providing quality education, and also in preparing a road map for effective and efficient implementation of the New Education Policy 2020 with optimum utilization of technology. An attempt has been made to collect both quantitative and qualitative data in the study: the perceptions of various stakeholders (head teachers, teachers, students, parents, and PRI members) on teaching-learning at a distance, and the issues and concerns of field functionaries in the effective use of technology for the delivery of course content.
It is essential to have a holistic overview of online teaching-learning activities during the lockdown for school children and to establish a linkage between changes in policy, administration, and implementation, so as to strengthen the ground with basic infrastructure and to plan training, orientation, and capacity building activities for teachers in the new paradigm of a technology-mediated world. Assessment and evaluation of ground realities is important for providing basic inputs to policy makers and educational administrators, so as to have a sustainable system of technology-mediated platforms for facilitating teaching-learning at a distance in the school education system of our country.

Review of Related Literature

The sudden imposition of lockdown due to the COVID-19 pandemic and the rapid transition of the whole teaching-learning process from regular face-to-face classrooms to online remote learning made us face a number of challenges, issues, and constraints in real-life situations. At the same time, the lockdown during the COVID-19 pandemic is considered to be full of opportunities in terms of the wise use of ICTs in the teaching-learning process under the domain of online education. Issues like 'emergency remote teaching' (Bozkurt and Sharma 2020) or 'emergency eLearning' (Murphy 2020), and difficulties associated with poor online teaching infrastructure, lack of exposure and experience of teachers, information gaps, and the complex environment at home (Zhang et al. 2020), are some of the common issues before all stakeholders. At the same time, lack of mentoring and support (Judd et al. 2020) and issues pertaining to teachers' competencies in the use of digital instructional formats (Huber and Helm 2020) have also been identified. In the context of teacher education, how institutions and various stakeholders adapted to the new situation created by the COVID-19 pandemic (Bao 2020; Flores and Gago 2020; Quezada et al. 2020; Zhang et al. 2020), as well as training strategies and experiences of innovation (Ferdig et al. 2020), have been reported. For meaningful and productive online education, it is important to learn more about its potential and use; examining how the present context has forced all institutions to move to an online mode may provide a comprehensive understanding of the adopted practices. It is necessary to ensure that these practices are effective and useful. It is therefore crucial to study the present practices in order to have a road map with a plan of action for the future. The major emphasis was on online environments that enable teachers to focus on teaching and to interact with students to provide them a variety of learning experiences in online mode. Therefore, issues of agency, responsibility, flexibility, and choice are key elements, as are 'careful planning, designing and implementation to create an effective learning ecology' (Bozkurt and Sharma 2020). As such, teaching and learning online entail a specific process which is visible in the roles, competencies, and professional development approaches (Ní Shé et al. 2019) as well as in the curriculum, pedagogy, assessment, and the nature of interaction among participants. It is therefore important to highlight how online education (the technology-mediated teaching-learning process) was designed, developed, and implemented for children at the school level, and to explore its implications, particularly in the context of rural and remote areas of Odisha, in order to have a sustainable quality school education system for all.
Rationale of the Study

With the imposition of a nationwide lockdown, all educational institutions in general and schools in particular were forced to choose the online mode for providing educational opportunities to students at their doorsteps. Though this was a burden on all stakeholders, there were no other options before them. However, teachers, students, and parents gradually adapted to these practices as a regular and routine system of education, in spite of differences, issues, and concerns of all kinds, as it was not an option for them but a compulsion and the only option. It was therefore necessary to examine how feasible and effective these measures and practices are for the students. To what extent do the students benefit from these practices? What comes in the way of their effective implementation? At the same time, it was felt essential to identify the issues and difficulties faced by the various stakeholders (teachers, students, and parents) in the whole process. A study was therefore undertaken in the context of Odisha to obtain the ground realities and to further strengthen future practices. The present study is an attempt to find the online strategies and interventions that work and do not work in different parts of Odisha, and at the same time to understand the characteristics, processes, outcomes, and implications of online interventions for making the teaching-learning process qualitatively meaningful. Thus, the study would show how online education occurred in the context of school education in rural and remote areas of Odisha, and how to make it more effective and efficient in the future with a blended approach as a means of sustainable quality enhancement in the state.

Research Questions

The findings of the study would answer the following research questions:
1. In view of the COVID-19 pandemic, what is the status of implementation of the online mode of education for school children in Odisha?
2. What are the perceptions of various stakeholders (teachers, students, and parents) associated with the transition of education from face-to-face classroom practices to the online remote mode of teaching-learning at the school level in Odisha?
3. What are the fundamental issues and challenges faced by the stakeholders in adapting to online education during the COVID-19 pandemic?

Objectives

The following objectives are set forth in the present study:
1. To find the overall status of the online mode of teaching-learning adopted during the COVID-19 pandemic in various schools.
2. To study the perceptions of various stakeholders (teachers, parents, and students) on online education during the COVID-19 pandemic.
3. To identify the issues and challenges faced by the stakeholders in adapting to online education during the COVID-19 pandemic.

Methodology of the Study

Method: The descriptive survey method was used in the present study. Descriptive research is used to describe the characteristics of a population (stakeholders) or phenomenon being studied. It is meant to address the "what" question, not to answer questions about how, when, or why the characteristics occurred. This method of study aims to describe a population, situation, or phenomenon accurately and systematically, and can use a wide variety of research methods to investigate one or more variables.

Sample: The study was conducted in three rural districts, Koraput, Kalahandi, and Kondhamal, and three coastal districts, Ganjam, Khorda, and Cuttack, of Odisha.
From each district, 100 students (50 from the secondary level, 30 from the upper primary level, and 20 from the primary level), 50 parents (25 parents of secondary-level students, 15 of upper primary and 10 of primary), 50 teachers (25 teachers at the secondary level, 15 at upper primary and 10 at primary), and 20 head teachers (10 from secondary schools, 5 from upper primary and 5 from primary) were identified randomly for the present study. Accordingly, 600 students (300 from the secondary level, 180 from the upper primary level and 120 from the primary level), 300 parents (150 parents of secondary-level students, 90 of upper primary and 60 of primary), 300 teachers (150 teachers at the secondary level, 90 at upper primary and 60 at primary), and 120 head teachers (60 from secondary schools, 30 from upper primary and 30 from primary) were identified randomly for the present study from the six districts of Odisha.

Sampling Strategy: A multistage stratified random sampling technique was adopted for the selection of the sample of the study. Multistage sampling divides large populations into stages to make the sampling process more practical. A combination of stratified or cluster sampling and simple random sampling is usually used for the convenience of the study.

Tools & Techniques: A separate questionnaire (Google Form) for each stakeholder group (students, teachers, head teachers, parents, and PRI members) was prepared for the collection of relevant information. In addition, telephonic interviews were conducted with 360 stakeholders (60 from each district: 20 students, 10 teachers, 10 head teachers, 10 parents, and 10 PRI members) on various aspects of the teaching-learning process and on issues and concerns, to get their feedback. A focus group discussion was conducted in virtual mode with each kind of stakeholder separately, to allow direct interaction with the stakeholders and to assess the situation with more clarity and better understanding.

Statistical Technique: Mean and percentage were used to analyze the whole situation as per the objectives, pertaining to the various aspects of technology-mediated interventions made during the critical situation of COVID-19.

Analysis and Interpretation

A comprehensive analysis helps us understand the data by describing general trends and pointing out differences and similarities among data points. Interpretation is a means of relating the data to the objectives of the study. Analysis and interpretation address what the data say about the sample of the study; they are an attempt to address the research questions comprehensively. In the present study, the analysis and interpretation have been made under three different heads, in view of the types of stakeholders associated with the study.

A. Perspectives of Head Teachers and Teachers of Various Schools

i) Management of available resources and monitoring: With the lockdown and abrupt closing of schools, colleges, universities, and all other educational institutions, there was a feeling of fear and uncertainty among all stakeholders at all levels about the education of their wards. It was a great challenge for educational administrators, policy makers, and all other stakeholders associated with the education of children at different levels. There was no planning, no preparation, no resources, no training, and no movement all around. At the same time, there was no alternative other than to switch from the face-to-face classroom situation to the virtual (online) mode.
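As stated under Statistical Technique above, the findings reported below are simple percentage breakdowns of the questionnaire responses. A minimal sketch of that kind of tabulation is given here; the raw response counts are hypothetical (chosen only so that they reproduce the percentages reported in the next paragraph), since the study reports percentages rather than counts.

```python
from collections import Counter

def percentage_breakdown(responses):
    """Return the percentage of respondents choosing each option, rounded to one decimal."""
    counts = Counter(responses)
    total = len(responses)
    return {option: round(100 * count / total, 1) for option, count in counts.items()}

# Hypothetical head-teacher responses to the item on switching to the online mode;
# only the resulting percentages (43.6 / 18.2 / 38.2) are reported in the study.
responses = (
    ["switched fully to virtual mode"] * 48
    + ["arrangements for selected classes only"] * 20
    + ["no shift to online mode"] * 42
)
print(percentage_breakdown(responses))
# {'switched fully to virtual mode': 43.6, 'arrangements for selected classes only': 18.2,
#  'no shift to online mode': 38.2}
```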
About 43.6 percent of head teachers of various schools stated that they switched to the virtual mode to continue their teaching-learning process, and about 18.2 percent stated that they made some arrangement to provide interventions to students of selected classes only. The remaining 38.2 percent of head teachers stated that no such shift to the online mode was made; instead, inputs were often provided to students through telephonic conversation and, to some extent, one-to-one interaction through home-based care. From the beginning of the lockdown (third week of March 2020) until about a month later (end of April 2020), around 97.2 percent of head teachers (out of those who could switch to the online mode) did not provide services of any kind, due to the unavailability of resources, lack of proper directions and guidelines from their department, and lack of knowledge and understanding about the new (online) mode of education. Privately managed schools, their administrations, head teachers (principals), and teachers were found to be one step ahead in this respect, in terms of mutual sharing of information and mutual learning about the ways and means of teaching and learning. School fees were not an area of concern for the parents of children studying in Govt. schools. However, they were a big concern for the parents of children in private schools: on the one hand there is the closure of schools, with no classes, no activities or other interventions, and no plan or schedule for the alternative mode; on the other hand there are substantial monthly fees, and some private schools even increased their fees in the new session. The time is challenging for school administrations, which must plan, generate resources, and implement, and for teachers, who must acquire new teaching-learning skills to reach their children. The major issue in private schools is paying the salaries of school staff. About 89.7 percent of head teachers expressed that it is difficult to manage staff salaries, as these are met entirely out of school fees, and very few parents (less than 30 percent) made their fee payments. There is no other source of income or grants for private schools to manage their expenses, and it becomes increasingly difficult to meet recurring expenses. Nevertheless, they are all trying and taking steps to arrange resources, provide some sort of training to teachers, and gradually manage the system of online classes with a structured schedule for each class. However, the situation in Government schools is quite different in a real sense. There is hardly any means of a synchronous online process/mode to deal with the children. There are many other conditions and factors standing in the way of planning, designing, and implementing online sessions in Govt. schools, as came out during the telephonic discussions and focus group interactions with various stakeholders. Some of the issues and concerns are genuine, such as geographical conditions, the financial conditions of parents, and the unavailability of resources at the schools. Besides, decision-making at multiple levels, the competencies of stakeholders, and interest, motivation, and enthusiasm are also major hurdles in the way of implementation and of taking initiatives with new thinking at a new level. About 53.8 percent of head teachers made provisions for technology-mediated communication (discussion and interaction) with their teachers during the lockdown period; in the case of private schools it was 74.6 percent, and in the case of Govt. schools it was only 33 percent.
This shows the extent of the communication gap between head teachers and teachers during the lockdown. Guidance and counseling for students and their parents on mental health and hygiene is considered very important and a key factor in ensuring sound mental health, as the system, structure, and interventions all came to an end abruptly. There is no other means of guidance and services for parents and their wards about their education and to help them understand their course of action. Only 7 percent of head teachers stated that they made some provision to communicate with parents and students to provide feedback and guidance on sound mental health and hygiene. These aspects should have been given more emphasis and considered a prerequisite for all school children, but they were found to be almost neglected. The situation is very poor, almost nil, so far as students and parents of Govt. schools are concerned. There are no specific guidelines or COVID-19 protocol for schools to prevent the spread of the novel coronavirus. Only 27.4 percent of head teachers pointed out that they are aware of the general guidelines and protocols released by the Govt. of India and the Govt. of Odisha to prevent COVID-19. They are of the opinion that the government should release specific guidelines for schools indicating roles, responsibilities, and preventive measures to be taken at all levels by all stakeholders. At the same time, about 69.7 percent of head teachers stated that network connectivity is a big issue in implementing online sessions to provide academic support services to learners. Teachers and parents are even ready to spend on devices; however, the ultimate utility and effectiveness depend on connectivity. Therefore, the role of the Govt. is very important in ensuring connectivity speed at all locations. More than 86.8 percent of students and parents in rural and remote districts (particularly Koraput, Kalahandi, and Kondhamal) did not have a smartphone to access the online sessions. This is a reality and another major hurdle in implementing online sessions in the context of Odisha. Implementation of online education requires a specific set of skills and competencies in the design, development, and implementation of course content, and teachers have to be trained and given exposure in these areas. Managing a face-to-face classroom is different from managing an online classroom; this was clearly pointed out by more than 91.3 percent of head teachers. They all require a different set of skills, training, and orientation to make their classes effective and more efficient. They all agreed that COVID-19 taught them to make use of technology for the teaching-learning process and for making it more qualitative. The facilities and exposure provided to teachers in private schools by their management, and the initiatives taken for students of private schools, should also be made available in Govt. schools to address the issue of inequality in school education.

ii) Understanding obstructions, feasibility and comfort in imparting education online: Schools, colleges, universities, and many other kinds of educational institutions have faced numerous challenges during this critical pandemic situation. More than 67 percent of private school teachers gradually managed the situation, adapting with relative ease to the new form of teaching-learning using technology, even without formal training and orientation.
However, the situation in Government schools was very poor. Only 8 percent of the teachers said they could manage the new form of the teaching-learning process using technology-mediated interventions. The major causes cited were the non-availability of devices (appropriate resources) and internet connections, the lack of an institutional approach, and the lack of motivation from higher authorities. It was also found that more than 21.4 percent of teachers from Govt. schools did not respond to this item. It is a fact that more than 90 percent of teachers from Govt. schools have been struggling to cope with the situation. A similar situation was found among teachers of private schools in rural and remote areas, compared with teachers from cities and towns. Training, orientation and capacity building are very important for the effective and efficient use of technology in the teaching-learning process. It was found that very few teachers explored their own ways and means to learn and equip themselves with knowledge and skills in using technology during the COVID-19 pandemic. However, in some places teachers tried to manage the situation with asynchronous interventions, without any direct interaction with students for months. Content development (e-content) is very important for the effective and efficient use of technology-mediated interventions. Designing and developing e-content is an art that requires specialised skills built up through training and orientation, yet such facilities and provisions are hardly ever made for school teachers. About 89 percent of teachers from Govt. schools and 67.5 percent of teachers from private schools said that it is very time consuming and that they require training and orientation to handle it. Very few teachers from private schools (about 9.82 percent) and government schools (7.1 percent) expressed willingness to get involved in e-content development and its implementation through online classes and other technology-mediated interventions. Institutional initiatives and organised efforts in the design and development of e-content and its implementation were not found, which is a serious issue and concern as far as the quality of the online teaching-learning process is concerned, since the success of online and technology-mediated teaching depends on the team, not on an individual. These aspects should be explained to teachers, and school administrators should internalise them to make use of them in the present context of the teaching-learning process. The availability of appropriate resources (devices) and sound internet connectivity is the central requirement for the success of the online/technology-mediated teaching-learning process. At many Govt. schools (more than 77.4 percent) these are not available; at many private schools (more than 76.9 percent) the resources are available but are yet to be utilised optimally for teaching and learning. This shows that in-service teacher education programmes need to be redesigned for teaching and learning through technology-mediated interventions. This appears to be the new norm of teacher education, with a paradigm shift in the teaching-learning process. There should also be programmes for trainers, master trainers and teacher educators to train and orient them for the online teaching-learning process.
The basics of the four-quadrant approach to the online teaching-learning process should be part of teacher education programmes (in-service and pre-service) at all levels. Most of the teachers from Govt. schools (71.3 percent) and private schools (94.3 percent) have a smartphone for day-to-day use; however, they had never used the smartphone for teaching and learning. The use of smartphones for online classes and other means of teaching and learning was a new experience for all teachers and a big surprise to them (teachers, head teachers and administrators alike) at the beginning, because online classes had never been thought of or visualised as a normal routine practice for the education of children at the school level. The success of any new initiative depends on an effective and efficient monitoring mechanism. The various aspects and dimensions of online education should therefore be thoroughly examined and monitored, as it is a new learning experience for all stakeholders. Participation of learners and their interaction is the most important aspect of the online teaching-learning process. It was found that more than 90 percent of students from private schools participated in online sessions, whereas in Govt. schools this was found to be less than 34 percent. This reflects the level of preparedness of parents and teachers of Govt. schools for online education; the gap needs to be addressed to minimise the learning gap among children. Non-availability of resources was considered to be the biggest hurdle for students and parents of government schools. At the same time, more than 42 percent of parents stated that there is a lack of organisational planning and execution for developing awareness among children and parents. This needs to be addressed so as to have a workable plan of action that takes into account the issues and problems of the students and the resources available around them. In Govt. schools, there were no online sessions (in synchronous mode) planned and organised by teachers for the routine teaching-learning process; WhatsApp and SMS were the only means of communication between teachers (more than 96 percent) and their students. In private schools, by contrast, WhatsApp and SMS were some of the means used to supplement the online sessions and interactions planned through open education resources. In private schools, only 21.7 percent of parents were concerned about safety and security measures when their email addresses and mobile numbers were shared during online sessions. More than 43.3 percent of parents were not aware of the issues related to security and safety. About 35 percent of parents were found to be casual about these areas, as their priority was the education of their children and they saw no other alternatives.
B. Perspectives of Students
Students of Govt. schools (more than 86.5 percent) were found to be deprived of the facilities of online education and failed to have direct discussion and interaction with their teachers for a long period; in many schools this situation has remained in place since March 2020. This is a serious issue for students who were to appear for the 10th and 12th board examinations in 2021. Though there are provisions for teaching and learning through television, 77.8 percent of students said the programmes were not aligned with their syllabus and textbooks. Secondly, such broadcasts require pre-broadcast and post-broadcast discussion to help students understand the concepts and clear their queries.
Continuity of content and subject matter is very important, but this was hardly found in the programmes broadcast on television and radio, as reported by more than 61.4 percent of students; nearly 30 percent of students did not respond to such items. Only 11.2 percent of students from Govt. schools reported that they watched the programmes broadcast on television. Using online interventions to deliver course content is a new experience for all stakeholders (teachers, students and parents) at school level in general. About 69.2 percent of students from private schools and 88.3 percent from Govt. schools said they prefer the offline (face-to-face) classroom process for teaching-learning activities. Very few students (12.8 percent from private schools and 3.6 percent from Govt. schools) expressed satisfaction with the online system of education and stated that, in view of the current situation, they would prefer online sessions as these offer a variety of experiences. At the same time, about 18 percent of students from private schools and 8.1 percent from Govt. schools preferred an integration of online and offline modes (blended mode). This shows that the preferred mode of learning for students was the regular face-to-face classroom. This may be due to users' limited understanding of how to use technology-mediated components effectively, and to the lack of planning for the delivery of online content and resources as a means of teaching and learning. It also shows that all teachers require specialised, skill-based training to handle the online platform and its components, so as to meet the expectations of learners and make the whole process more meaningful and productive. About 57.6 percent of students stated that their doubts and queries were taken up and addressed properly through the online mode. At the same time, about 31.4 percent stated that it is difficult to clear doubts and queries online because of multiple problems and hindrances (loss of connectivity, audio or video) during online sessions. There is no appropriate and systematic strategy or mechanism for raising queries and getting answers; although various options are available, they are not being utilised or integrated by teachers to address individual queries during online sessions. About 11 percent of students stated that it is too difficult or even impossible to gain clarity on the content in terms of clarification of doubts and queries. This shows that considerable effort is needed to equip teachers to use the available facilities to facilitate and promote one-to-one discussion and interaction with students during online sessions. The planning of sessions should be made more comprehensive, covering all components of teaching and learning, to promote interaction between students and teachers. This is important for establishing the effectiveness of online sessions by enabling students to develop the feel of a classroom-like situation. About 48.3 percent of students said they were comfortable making the basic arrangements and preparations needed for online sessions. However, a large proportion of students, about 50.7 percent, said they were under stress in making these arrangements and preparations, for multiple reasons such as financial problems in arranging resources, lack of knowledge and expertise to use the device or system, and the non-availability of onsite support.
It was found that more than 62.8 percent of students from private schools were most concerned about exemption from school fees, the purchase of books, and covering the syllabus, whereas only 8.9 percent of students from Govt. schools were concerned about these areas. Most students (about 79.2 percent) from government schools were more concerned about the non-availability of resources such as devices, internet connections and their use, while very few students from private schools (about 12.8 percent) were concerned about these aspects. Only 39.7 percent of students from private schools and 14.9 percent from Govt. schools were comfortable with the e-content. A majority of students from each category of school (51.4 percent from Govt. and 77.9 percent from private schools) stated that they need hard copies for preparing the content and for evaluation purposes.
C. Perspectives of Parents
Parents are important stakeholders in the whole teaching-learning process. During the pandemic in particular, it was the parents who had to learn first in order to help their children learn from the online interventions. The analysis found that parents of private school children were somewhat more comfortable than parents of Govt. school children, because the online initiatives and interventions started early at the private schools while the Govt. schools started them at a later stage. The issues faced by parents of the two types of schools were also different. Parents of private school children were found to have more options for handling their issues, queries and difficulties, whereas for parents of Govt. school children these options were very limited, owing to a lack of initiative at the school level and inordinate delays in implementation at the grassroots level. About 89.3 percent of Govt. school parents stated that the teachers were also handicapped because there were no clear directions and guidelines for them from the higher authorities; this was not an issue for teachers of private schools, as stated by most of the parents (more than 90 percent) of private school children. About 77 percent of private school parents were comfortable in their understanding of online learning, whereas this proportion was very low (about 21.3 percent) among parents of Govt. school children. Herein lies the difference in the adequacy of the guidance and support services that parents could provide for their children's education during the COVID-19 pandemic. This is an important indicator and a deciding factor in the extent of support services to children for their studies, and also in the stress levels of students. Parents were of the opinion that students are comfortable with and ready to adjust to the online mode and have interest in and willingness towards it. The major problem, however, is providing them with adequate resources, guidance and onsite support, which requires appropriate skills. Some of the problems are local and can be handled gradually by the parents, but others, such as connectivity, are beyond their control. About 61.9 percent of private school parents reported these issues, and they were equally present among parents of private school children in rural areas. What is required, therefore, is to strengthen the foundation with adequate resources and to resolve the issues related to internet connectivity. More than half of the students in private schools were left out due to this particular issue.
The issue is more prominent in rural and hilly areas. More than 80 percent of parents in rural areas do not have the resources to access online sessions. Very few, about 7 percent, of Govt. school parents in rural areas have access to a mobile device for online sessions; among private school parents in rural areas the situation is slightly better, with about 15 percent having devices. In rural areas, about 83.2 percent of private school parents stated that managing the resources for the online mode of education places a financial burden on them. For parents of private school children the problem remains much the same, although the situation is slightly better: about 72 percent of private school parents stated that managing the resources for the online mode is an additional burden for them. In urban areas, 48.9 percent of private school parents said that managing these resources is a financial burden on the family, while only 27.8 percent of private school parents in urban areas said the financial burden on the family increased only slightly due to the online mode of education. An important finding of the study was that parents (more than 64 percent) agreed and were willing to spend a little out of their routine monthly family expenses on resources and internet connections for the online mode of education; in return, however, they expect quality inputs and better academic services of all kinds from the school management, as well as the involvement and participation of their children in the teaching-learning process. About 71.2 percent of parents felt that it was just a one-way flow of information from teachers to students, with some directions and assignments, while many other components of the teaching-learning process, such as discussion, involvement, participation and real-time interaction, were missing and neglected. This is a serious concern that demoralises students and makes them feel isolated in the whole process. It shows the need for training, orientation and capacity building of teachers by the school administration in handling online resources for education, and in designing, developing and implementing e-text effectively and efficiently to make online education more participatory and interactive.
Overall Analysis
On the basis of the findings of the present study, it was found that only a small proportion of parents, about 23 percent (31 percent from private schools and only 15 percent from Govt. schools), were fully satisfied with the online education provided by the schools for their children. About 47 percent of parents (52.1 percent from private schools and 41.9 percent from Govt. schools) were partially satisfied with the interventions provided by the schools during the COVID-19 pandemic. About 19 percent of parents (12 percent from private schools and 26 percent from Govt. schools) were not at all satisfied with the present form of online interventions, and about 11 percent of parents (6 percent from private schools and 16 percent from Govt. schools) did not respond to this particular item. From this it is concluded that a meticulous plan needs to be made to cater to the needs and expectations of all stakeholders so as to make online education and the various online interventions more meaningful and productive for learners. While interacting with parents, it was found that about 68.9 percent of parents (51.3 percent from private schools and 86.5 percent from Govt.
schools) did not have any idea or clarity about the blended approach, and about 21.2 percent of teachers did not respond to this aspect. This shows the level of knowledge and understanding among the various stakeholders who are responsible and accountable for implementing online interventions for learners at different levels of the school education system. The Ministry of Education, Govt. of India, took a range of initiatives to provide online services to students to enable them to continue learning during the lockdown period. Various learning platforms such as SWAYAM, DIKSHA, SWAYAM Prabha, e-Pathshala and NROER (National Repository of Open Educational Resources) have been designed and developed under the Govt. of India. Many of these platforms are run by the National Council of Educational Research and Training (NCERT), NIOS, IGNOU, NITTT and other autonomous organisations of the Ministry of Education. In the present study, an attempt was made to assess the level of awareness of various stakeholders about the initiatives of the Govt. of India for facilitating online education and technology-mediated quality interventions at the school level. It was found that only 41.3 percent (54.3 percent from private schools and 28.3 percent from Govt. schools) of headmasters of various schools are aware of the SWAYAM portal of the Govt. of India. SWAYAM Prabha channels are important for reaching students at their doorsteps through DTH channels. About 38 percent of parents (51.6 percent from private schools and 24.4 percent from Govt. schools) are aware of this online initiative of the Govt. of India. Though the content delivered through these channels is very important and useful for students, it was found that there is limited awareness among parents and many other stakeholders, such as teachers and students in rural areas, about this initiative and its use for school children and teachers. The DIKSHA portal is an important online platform for training and orientation of school teachers. About 68.7 percent of teachers (66.3 percent of teachers from private schools and 71.1 percent from Govt. schools) are aware of this portal, as it is used by the state Govt. for online training and orientation of teachers from time to time; however, it is yet to be utilised optimally in the state for teachers of all schools. Similarly, NISHTHA is a capacity building programme for improving the quality of school education through integrated teacher training, with the ultimate objective of enhancing the competencies of teachers and head teachers at the elementary school level at state, district, block and cluster levels. The findings of the study revealed that only 33.6 percent of teachers from Govt. schools and 16.4 percent from private schools are aware of this particular initiative. At the same time, it was found that teachers from rural areas (about 76.7 percent) are not aware of it, and that the matter had never been discussed during the training sessions organised at cluster and block levels. This is truly a great setback to the important initiatives of the Govt., particularly the digital initiatives meant for grassroots-level functionaries for quality improvement in school education.
Discussions and Conclusion
With the sudden lockdown in March 2020 due to COVID-19, educational institutions had no option other than shifting from their routine classroom process to remote learning through the online mode.
This was a great challenge at all levels of education, as the transition to the online mode happened quickly and there was no time to think through and understand the dynamics of its use. Teachers in the higher classes (IX, X, XI and XII) somehow managed, as there was no other option for them, but teachers of the lower classes in general, and teachers of the lower classes in Govt. schools in particular, struggled and started very late. In private schools this transition happened much faster than in Govt. schools. It is true that this unexpected transition from the conventional face-to-face mode to a new online mode placed administrators, policy makers, teachers, students and parents at a great disadvantage and also at great risk. School education suffered a great deal, as most schools were not properly organised for it. The present paper assessed and described the issues and difficulties in remote learning in the wake of the COVID-19 pandemic. Like many other studies (Bao, 2020; Henaku, 2020), the present study found poor internet connectivity to be one of the main issues in remote learning. It is a major problem in rural areas of Odisha in terms of availability, speed and stability, with telecommunication systems and ICT not being properly developed (Aboagye et al.). The findings of other studies (Coleman, 2011; Henaku, 2020) corroborate the further result of this study that inadequate learning resources are a serious hurdle to making online learning effective and efficient for students. As a result, students failed to participate, get involved and interact optimally in the online teaching-learning process. It also came to notice that the financial burden on parents increased. This corresponds to the findings of Matswetu et al. (2013), who reported the financial problems faced by students in Zimbabwe in a distance learning set-up. The situation arising from COVID-19 has made things more difficult for parents who lost their jobs and are struggling to find work, having to support their families on the one hand and meet the expenses of their children's education on the other. The situation is worse for poor families in rural and remote places during the outbreak due to an unprecedented economic shutdown (Adle, 2020). Students and parents also raised the issue of electric power and its interruptions during online learning, an inevitable problem in virtual classroom set-ups (Castillo, 2020). Students located in rural, remote and hilly areas find it not only difficult but at times impossible to stay connected online due to a lack of electricity as well as poor internet connectivity. Next come the issues of design, development and implementation of e-content, which are also major constraints on learning and challenges for teachers and head teachers. In the name of online learning, almost all teachers used mere textbooks, meant for the regular classroom situation, as "learning content". This shows the lack of understanding, experience, training and exposure of teachers in remote learning (Chen et al., 2020). It is therefore high time to train and orient teachers to produce appropriate supplementary materials for students' online learning (Burgess & Sievertsen, 2020). At the same time, it came to notice that teachers simply assigned more and more assignments and questions to students in each and every subject in the process of online learning, which leads to greater stress and anxiety among students.
Students find little discussion, involvement or participation in the online mode. Similar findings were obtained in the study of Sarvestani et al. (2019), where students spoke of the extensive and large number of modules they needed to complete as part of the online teaching-learning process. Students are having a hard time coping with online learning because of poor communication between them and their teachers, which affects their motivation and interest in learning at a distance. An appropriate learning environment is very important for ensuring students' active participation in the teaching-learning process, and this is equally required in the online mode. The sudden shift to the online mode hardly considered this aspect, which ultimately affects the motivation, interest and performance of students. It is also a fact that online learning schedules hardly take into account the general home responsibilities of students. This may not be an issue in metros and cities, but it is a major issue in remote and rural areas, where children routinely engage and participate in household activities. Students' engagement in household responsibilities negatively affects their academic achievement (Poncian, 2017; Amali, Bello & Adeoye, 2018). Students spend almost the whole day in online classes and answering assignments and activities, leaving little or no time to engage in physical activities, and thus experience the strain of spending about 6-8 hours per day in online sessions (Sundarasen et al., 2020). On the matter of the subjects covered, almost all schools focused merely on the core subjects (Science, Mathematics and English) in the online mode. Co-scholastic activities, which are very important for the all-round and integrated development of children and essential for their socialisation, were among the worst affected during COVID-19 with the implementation of the online mode of education. It was found that many teachers (most teachers from Govt. schools) struggled with real-time interaction with students in the online mode. This may be due to the lack of exposure and proper training in online interventions. This is a learning point for administrators and policy makers, indicating the need for a shift in teacher training programmes in the future. It is essential to equip teachers with the knowledge and competencies to use online interventions effectively and efficiently, not merely as an alternative to the face-to-face mode of teaching and learning but in integration with the conventional face-to-face mode. The majority of students and parents preferred the conventional face-to-face mode of learning over the online mode. Implementation of the blended mode is therefore the need of the hour, and awareness programmes for parents and workshops for teachers should be organised at different levels for the effective and efficient use of the online mode of education and to help them appreciate its interventions. Assessment is an important aspect of the whole teaching-learning process, and how online tools and techniques of evaluation can best be used by teachers (for formative and summative evaluation) is an important question. Formative assessments are essential to assess the effectiveness of teaching strategies, plan future teaching-learning activities, keep track of each learner's progress, and provide students with useful feedback for improvement.
Google Forms can be used for summative assessment, as it provides a variety of question types and allows teachers to design concept-appropriate questions. Teachers can even use images, graphs, audio and videos from YouTube and design questions around these media. This would help teachers assess and evaluate learners' understanding in ways that were not possible with conventional modes of assessment and evaluation. Online tools, along with Google's Classroom app, could help teachers streamline and automate much of the administrative work, so that they can concentrate on the design, development and implementation of online tools for their classroom process. The development of a good online support system, with an effective network linking all educational institutions for mutual support, is essential for quality enhancement and for addressing issues of inequality. Teachers should learn to transform the difficulties and challenges into opportunities for their personal and professional growth; this is essential for the effective and efficient use of online education in the context of our system of education. There should be a common platform where teachers can discuss and interact to resolve their queries and address the challenges they face. At this point in time, it is not so important to complete the syllabus that had been planned at the beginning of the academic year; it is more important to learn and become comfortable with the new tools and methods required for learning at a distance. A variety of online activities should be planned during the course of online deliberations; this is a means of alleviating the emotional and mental stress of students and maintaining a positive outlook. Teachers require technical support and assistance to make the teaching-learning process meaningful and productive in this new mode of imparting education. The success of online education is a team effort, with equal responsibility and accountability on the part of all stakeholders: the school management, head teachers, teachers, students and parents.
2021-09-29T15:26:11.961Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "3cccf3e9a73c4a9a3c6d2c9f77a9f53525d06fcd", "oa_license": null, "oa_url": "https://doi.org/10.46587/jgr.2021.v07i02.018", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e0a585fde1039029992d97cab07c816f8e775528", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
265160043
pes2o/s2orc
v3-fos-license
Strategies for addressing conflicts arising from blue growth initiatives: insights from three case studies in South Africa
South Africa has vigorously embraced the concept of the 'blue economy' and is aggressively pursuing a blue growth strategy to expand the ocean economy, create jobs, and alleviate poverty. However, many of these 'blue initiatives' are leading to conflicts amongst various stakeholders with different histories, relationships with resources and areas, worldviews, and values. Investment in the ocean economy is being prioritized by government, and planning, environmental assessment, and decision-making processes are being fast-tracked. Consequently, historical inequities as well as environmental and social justice considerations are not being given due consideration, and communities are not being effectively consulted. This has resulted in tensions and conflicts amongst proponents of these projects and local communities living in areas affected by these initiatives. We examine the drivers of conflict and then explore the strategies that local communities and their social partners have employed in these case studies to challenge contentious developments, defend coastal and marine areas, and make their voices heard. The cases involve conflicts over air quality in an expanding marine industrial zone at Saldanha Bay, prospecting and mining applications in the vicinity of the Olifants Estuary in the Western Cape, and the expansion of the Richards Bay Port, mining activities, and conservation initiatives in KwaZulu-Natal. The barriers and potential opportunities to opening up deliberative spaces, shifting values and views, and co-producing knowledge, in contexts that are characterised by structural inequality, poverty, and power asymmetries, are discussed.
Introduction
The concept of the 'blue economy' has been enthusiastically embraced by politicians and pro-growth proponents. The concept of the 'blue economy' or 'blue growth' has been associated with the notion of sustainable development, which promotes the idea of balancing social, economic, and ecological goals in development (Eikeset et al. 2018). The Third International Conference on Sustainable Development in Rio de Janeiro (Rio+20) acknowledged the potential of oceans to contribute to national economies and revitalise coastal economies, and proposed some form of international co-operation around blue economy strategies (Senaratne and Zimbroff 2019). Amongst the prominent narratives underpinning support for blue growth strategies are 'opportunities for growth, development and job creation', 'social equity', and 'protection of threatened and vulnerable species'. As such, blue growth narratives and strategies are said to promote 'triple-benefit' solutions or 'triple bottom line' objectives that tackle economic development, environmental sustainability, and social equity/justice, where everyone (i.e. coastal communities, the environment, and the economy) is meant to win (Voyer et al. 2018). However, despite enthusiastic adoption of the blue economy by various stakeholders and politicians in many countries, the concept, narratives, and vision remain ill-defined and contested (Silver et al. 2015; Eikeset et al. 2018; Voyer et al. 2018). In addition, while the notion of 'triple-benefit' goals of blue economy strategies is enticing, implementation of these blue growth initiatives has led to environmental and social injustices, tensions, and conflict amongst different actors and sectors (Fisher et al. 2018; Tafon 2019; Bennett et al. 2019; Österblom et al. 2020).
This is not surprising, since governance of these marine resources and spaces usually involves a constellation of actors and sectors with competing and conflicting interests, claims, values, and worldviews, and unequal power relations amongst actors (Chuenpagdee and Jentoft 2009; Voyer and van Leeuwen 2019; Bennett et al. 2019). Poor and marginalised communities are usually left out of the planning and decision-making process or, when included, have limited voice and no power to influence decisions (Bond 2019; Bennett et al. 2019). There is an increasing literature critiquing the blue economy concept, narratives, and strategies at the global scale (Eikeset et al. 2018; Adjei and Overå 2019; Bennett et al. 2019; Cohen et al. 2019; Österblom et al. 2020; Ertör and Hadjimichael 2020). Various scholars argue that these accelerated blue growth plans and strategies, embedded within current neo-liberal economic paradigms and power asymmetries, do not promote fair and equitable outcomes, nor do they produce sustainable jobs and deliver local benefits (Silver et al. 2015; Österblom et al. 2020; Ertör and Hadjimichael 2020). In fact, there is increasing evidence that these initiatives often exacerbate economic inequality and loss of access to resources and lead to displacement of communities and cultural impacts, as well as environmental damage and biodiversity loss (Bennett et al. 2019; Bennett et al. 2021; Bond 2019; Cohen et al. 2019; Tafon 2019). Furthermore, those with the financial means to invest in blue economy initiatives tend to be influential and well-resourced actors, mainly driven by economic interests (Cohen et al. 2019). As ocean territories and resources have been allocated and reallocated to private investors, the rights of local resource users to access and control ocean space, resources, and coastal land have been undermined (Barbesgaard 2018; Childs and Hicks 2019; Jentoft et al. 2022; Bennett et al. 2019; Bennett et al. 2021). This has been termed ocean, coastal, and/or blue grabbing (Bennett et al. 2015; Bavinck et al. 2017; Barbesgaard 2018) and refers to the commodification and privatization of ocean spaces and common pool resources. These government-supported 'blue growth' initiatives often negatively impact poor coastal communities and deprive groups such as small-scale fishers and farmers of their rights to resources and a fair share of ocean benefits (Adjei and Overå 2019; Bennett et al. 2015; Cohen et al. 2019; Österblom et al. 2020; Bennett et al. 2021). These inequities, together with the general exclusion of local communities from planning and decision-making processes regarding the allocation of ocean resources and spaces, have led to rising tensions, conflicts, and even violence (Bavinck et al. 2014; Tafon et al. 2022).
This paper examines the conflicts arising from the implementation of blue economy initiatives, using three case study examples from the coast of South Africa. The cases involve conflicts over air quality in an expanding marine industrial zone in Saldanha Bay, conflicts linked to prospecting and mining applications on land near traditional fishing grounds, and conflicts arising from coastal communities being 'squeezed out' by expansion of the Richards Bay Port, mining expansion, and conservation initiatives. The paper explores the strategies that local communities and actors employ to challenge blue growth proposals, plans, and decisions that threaten their environments, livelihoods, and culture. The barriers and potential opportunities to open up deliberative spaces, shift values and views, and co-produce knowledge, in contexts that are characterised by structural inequality, poverty, and power asymmetries, are discussed.
Methods informing the study
The data that informed this paper were gathered by the authors, who are all involved in research on various aspects of the blue economy in each of the case study sites (see Fig. 1). In the case of the Olifants Estuary and Saldanha Bay, the researchers have been involved in research and in providing technical support to the communities for several years; thus, long-standing relationships exist with community members and some of the organisations and stakeholders concerned with blue economy projects and plans in the area. While research has been conducted on community access and benefit sharing in relation to mining in the Richards Bay area (Mbatha and Wynberg 2014), research with the Dube community only commenced in the past 4 years. For the Olifants Estuary case study, data were gathered from participation in six community meetings conducted over the 5-year period 2018-2022 and attendance at the Olifants Estuary Management Forum (OEMF) meetings, usually held four times per year, where stakeholders discuss issues related to management of the estuary and surrounding environment, including mining applications and operations. In cases where the researcher could not be present at the OEMF meetings, minutes of the meeting were reviewed. In addition, individual meetings were held with community members during this 5-year period to discuss responses to mining applications in the vicinity of the estuary. Informal discussions were also held with local fishers from the Ebenhaeser, Papendorp, and Doring Bay communities in relation to their views on the mining proposals and plans for the area. In the case of Saldanha Bay, data were collected as part of several engagements with stakeholders involved in the GCRF Mine Dust and Health Network (www.minedust.org, EP/T003588/1) from 2018 to 2023. These included 5 discussion sessions with all stakeholders, as well as individual ad hoc meetings with various community representatives and community members. In addition, two focus groups were organised with vulnerable groups from low-income communities, including unemployed youth, who were identified as missing from the previous events. Several hundred people have taken part in the engagements and interviews over the last 5 years.
Research on the impacts of blue economy activities within coastal communities around Richards Bay has been ongoing for over a decade (Mbatha and Wynberg 2014). The data collection process for this study focused on the Dube community, located within the wider Richards Bay area. This research was initiated through long-standing relationships with small-scale fishers and other local stakeholders who have been affected by blue economy activities, including mining, for a long period of time. Building on these existing relationships was important for establishing trust between researchers and community members, since coastal-related conflicts are extremely politically charged in this area. Pilot visits were conducted in 2021, before fieldwork commenced, in order to introduce the research project to local communities and to gain support for the study from relevant local leaders. Researchers also engaged with non-governmental organisations operating in the area, as well as private sector organisations, during the pilot phase. The data that informed this study were largely drawn from semi-structured interviews with local knowledge holders and workshop attendance, as well as key informant interviews with local non-governmental organisations and stakeholders involved in blue economy activities, i.e. mining and port development.
Transforming ocean conflicts - rhetoric and reality
Ongoing debates in the natural resource governance literature suggest that both scarcity and an abundance of natural resources can catalyse or fuel conflict (Mildner et al. 2011; Fisher et al. 2018). Various scholars have offered analyses of how an abundance of resources, especially where extractive resources are concerned, perpetuates the current economic paradigm and power structures and leads to conflict (Le Billon 2001; Mildner et al. 2011; Fisher et al. 2018). In such contexts, powerful economic players, including foreign investors supported by national government, drive the narrative and processes, while local communities are often not consulted and bear the burden of environmental and social harm resulting from these projects (Masie and Bond 2018; Bennett et al. 2019; Tafon 2019). Consequently, tensions and conflicts have arisen amongst proponents of these extractive industries and those living adjacent to marine environments and reliant on ocean resources for food and livelihoods (Masie and Bond 2018; Tafon 2019). However, these conflicts are taking place in a variety of different contexts and at various scales, and are made more complex by a host of historical, socio-economic, political, and, increasingly, environmental change factors (Fisher et al. 2018; Tafon et al. 2022). There is thus an increasing consensus amongst scholars that conflict is multi-faceted, multi-causal, and multi-level and involves multiple actors (Bavinck et al. 2014; Tafon et al. 2022). Several large collaborative interdisciplinary projects on conflict have documented key insights and lessons learned regarding root causes and drivers of conflict, contextual factors that exacerbate or reduce conflict, and governance processes that enable a shift from conflict to co-operation (Bavinck et al. 2014; Berry et al. 2018). Another key focus has been on how to transform conflicts and build resilience (Ratner et al. 2013; Fisher et al. 2018; Matin et al. 2018; Tafon et al. 2022).
A common theme throughout much of the resource conflict literature is the need for appropriate and effective governance approaches, in particular inclusionary and democratic processes, that are mindful of power imbalances and provide an enabling environment for interested and affected parties to interact and deliberate on contentious issues (Scholtens and Bavinck 2018; Flannery et al. 2018; Kelly et al. 2019). However, there is an emerging realisation that conventional conflict resolution approaches and techniques will not resolve deep-rooted conflicts, which are characterised by structural inequalities, inequities, and injustices and are usually hidden (Tafon et al. 2022). A current collaborative research project, OCEANSPACT, seeks to understand and transform ocean conflicts by adopting a more radical approach of agonistic knowledge co-production and conflict transformation into 'constructively co-produced' and 'institutionalizable' yet contestable and provisional knowledge-action (Tafon et al. 2022). In their framework, assessing root causes, promoting meaningful knowledge co-production amongst actors, and governance approaches that support both top-level and local contributions are seen as key actions for transforming conflicts and moving towards sustainability (Tafon et al. 2022). While the promotion of meaningful co-production of knowledge through iterative collaborative processes has the potential to create an environment for transforming interactions amongst conflicting groups and fostering respect for different knowledge sources and values, getting 'everyone to the table' and having diverse voices heard may not be feasible in contexts where historical injustices have not been addressed, marginalisation of poor communities persists, and power differentials between actors remain skewed in favour of privileged actors. This paper examines the realities and responses of coastal communities confronted by a rapidly expanding blue growth agenda in three cases in South Africa and explores the strategies they adopt to counter and challenge processes and decisions that disregard their rights and potentially undermine their livelihoods and way of life.
South Africa's blue economy initiative - an increasing site of conflict
According to protagonists of South Africa's blue economy agenda, its 3500-km coastline and resource-rich waters have the potential to create thousands of jobs and boost the country's economy (Potgieter 2021). The country's oceans are said to have the potential to contribute as much as R177 billion to the gross domestic product (GDP) and provide up to a million employment opportunities, although this claim is being increasingly questioned (Potgieter 2018; Masie and Bond 2018; Bond 2019). In 2014, the South African government introduced Operation Phakisa as the country's blue economy strategy, aimed at increasing employment opportunities, promoting social equity, and alleviating poverty by 2033 (Findlay 2018). Derived from the local language Sesotho, 'Phakisa' translates into 'hurry up' in English and highlights the urgency of delivering fast results to grow the economy, create jobs, and alleviate poverty. Inspired by Malaysia's 'big fast methodology', Operation Phakisa is an expansive multi-sectoral program spanning marine transport and manufacturing, coastal and marine tourism, offshore oil and gas, construction, and aquaculture, through to marine spatial planning. While coastal marine mining is not explicitly included in Operation Phakisa or the more recent Ocean Economy Master Plan (DFFE 2022), the rapid increase in prospecting and mining applications and operations in the marine environment qualifies it as a blue growth sector in every respect, and it has therefore been included in our analysis. Increasingly, Operation Phakisa projects and the modus operandi regarding their approval and implementation are being severely criticised and challenged by communities, local residents, researchers, non-governmental organisations (NGOs), and community-based organisations (CBOs) across South Africa. In particular, South Africa's rapid pursuit of economic growth and foreign direct investment is at odds with the country's international climate change commitments and with the social justice and environmental sustainability imperatives that underpin South Africa's Constitution, as well as various environmental policies and laws (Bond 2019; Masie and Bond 2018; Potgieter 2018; Rogerson and Rogerson 2019; Isaacs 2019). Masie and Bond (2018:320) have criticised Operation Phakisa for adopting planning methodologies that are 'helter-skelter, nonconsultative, elite, navel-gazing, and ultimately unrealistic… devoid of awareness of the capitalist crisis bearing down on South Africa's two oceans'. Other critiques highlight that the fast-track methodology underpinning Operation Phakisa has undermined the effectiveness of Environmental Impact Assessments (EIA), failed to incorporate the values and views of local communities, and failed to centre social justice and equity considerations in decision-making (Satgar 2018; Sunde 2022). Fishing communities are concerned about the rapid increase in mining because of their dependence on coastal and marine resources and their need to gain access to their fishing grounds. The recognition of the rights of small-scale fishers was only formalised in 2012, when a policy for small-scale fishers in South Africa was promulgated (Sowman and Sunde 2021). The allocation of rights to this sector of fishers has been slow due to various legal and other bureaucratic delays (Sowman and Sunde 2021), and by mid-2023 fishing rights had been allocated to fishers in three of the four coastal provinces, although the outcome of the rights allocation process remains contested.
As the effects of South Africa's blue economy proposals and plans become more apparent, conflicts amongst different stakeholders have become more acute, with local communities becoming more aware of their rights and challenging government decisions. This paper explores the underlying causes of conflict, the actors involved, and the strategies adopted by local communities in response to blue economy proposals in three coastal sites in South Africa. In the following section, we provide an overview of three cases from South Africa where tensions and conflicts have arisen due to blue economy proposals and plans, as well as the decision-making processes that have failed to address the values, rights, and priorities of local communities. In each case, we outline the context, the drivers of conflict, and the main actors involved in addressing the conflict. Thereafter, we explore the strategies that different actors have employed to challenge the proposals and decisions that they regard as environmentally unsound and socially inequitable.
Case study context
The Olifants River Estuary is located on the west coast of South Africa, approximately 350 km northwest of Cape Town (see Fig. 1). It is one of the largest estuaries in the country, comprises a unique and productive ecosystem, and is considered an area of high conservation value (Turpie and Clark 2007). The people now residing in Ebenhaeser and Papendorp have a long history of fishing in the Olifants River (Sowman 2009). Today, approximately 120 fishing families rely on fishing for food and as a source of livelihoods (Williams 2013). This community was forcibly removed from their farmlands near the town of Lutzville in 1926, and due to poor soils and lack of water at the resettlement sites, many people became increasingly reliant on fishing as a main source of food and livelihoods (Sowman 2009). Fishers use simple row boats and gillnets to catch fish, mainly mullet (Liza richardsonii), commonly known as 'harders'. For many of the people of Ebenhaeser and Papendorp, fishing in the estuary is not only a source of food and livelihood but is integral to their lives, culture, and identity (Sowman 2017). Over the past 25 years, traditional small-scale fishers at the Olifants estuary have faced threats from government scientists and conservationists to close the gillnet fishery. However, with support from researchers and non-governmental organisations (NGOs) (hereafter social partners), the community reached an agreement with government in 2013 to continue fishing in the estuary and to work with government and other stakeholders to declare a community conservation area. The process of demarcating and declaring this conservation area is still underway but has been delayed due to various administrative and legal barriers.
Since about 2014, mining for heavy mineral sands on coastal land and on beaches to the north of the Olifants estuary by an Australian mining company, Mineral Sands Resources (MSR), which operates the Tormin mine in South Africa, has gained pace. A decision to allow an expansion of the existing Tormin mine in 2018 led to an appeal to the Minister of Environmental Affairs. As the appeal was not upheld, an environmental NGO, the Centre for Environmental Rights (CER), lodged an administrative appeal against the decision to approve the expansion of the Tormin Mine, as well as an application for judicial review in the high court to set aside the Minister of Environment's refusal to uphold the appeal and grant environmental authorization. While these activities were taking place north of the Olifants estuary, fishers raised concerns regarding the impacts of mining on beaches, marine habitats, and fishery resources, as well as on access to coastal areas. A further prospecting application by MSR in April 2016, on land adjacent to the north bank of the Olifants Estuary, measuring approximately 40,000 ha in extent and bordering on the estuary for approximately 15 km upstream, meant that MSR would potentially hold mining rights for nearly 80 km of coastline. In addition, a large area of this coastal land has been categorised as a critical biodiversity area (CBA). Despite appeals from NGOs, local communities, and researchers, the Minister of Environmental Affairs has supported the expansion of prospecting and mining along this coast and in the vicinity of the sensitive Olifants Estuary. These approvals by the Ministers of both the Department of Mineral Resources and Energy (DMRE) and the Department of Forestry, Fisheries and the Environment (DFFE) have angered the fishers, who are particularly concerned that prospecting and mining activities will affect their environment, local livelihoods, and plans for conservation.
Then, in May 2022, diamond mining activities commenced on a beach in the vicinity of Doringbaai, a coastal town south of the Olifants estuary, which is an important fishing and recreational area for fishers and local people living in Doringbaai and the Olifants River communities. They were aggrieved that they had not been consulted and were concerned about the impact of the beach and nearshore mining activities on the environment, their livelihoods, access to resources, and way of life. At community meetings and during informal discussions with one of the researchers, they expressed concern that there had been no environmental impact assessment (EIA) process and that they had not been consulted before the decision was taken to reissue mining rights for a further 30 years. Other NGOs and researchers were equally concerned and began gathering information to challenge this decision. In November 2022, fishers from Doringbaai and the Olifants Estuary joined Protect the West Coast, a not-for-profit organisation, as co-applicants and lodged a semi-urgent application for an interdict to stop mining at Doringbaai until the decision had been reviewed (PTWC and 4 others vs the Minister of Mineral Resources and Energy and 7 others 2022). The matter was due to be heard in the Cape High Court in August 2023, but an out-of-court agreement was reached just prior to the hearing. The resulting court order required that certain conservation-worthy areas, including a portion of the sensitive Olifants Estuary in the vicinity of the river mouth, be protected from mining. The agreement also confirmed that an updated and amended Environmental Management Plan (EMPr), including a fishery specialist study, would be prepared to address the interests of small-scale fishers in the area, and that the fishing communities and other stakeholders would be given an opportunity to comment on the draft EMPr prior to finalisation. Despite this court order, the ongoing ad hoc approval of an increasing number of prospecting and mining applications along the west coast of South Africa has angered local fishing communities, who are of the opinion that their concerns are not being heard and their rights are being disrespected. The weak socio-economic circumstances of the Olifants River fisher communities, where poverty is deep and unemployment is high (Williams 2013; Sowman 2017), mean that some residents are vulnerable to projects that promise jobs and improved socio-economic conditions regardless of environmental and social impacts. While a few community members do support mining, our research shows that an overwhelming majority of fishers are against mining on the beaches and in the vicinity of the Olifants Estuary. There are thus some tensions between community members who support mining and those against it.
Key actors and relationships amongst actors
The key actors involved in this mining conflict are the local fishing communities living at Ebenhaeser and Papendorp, who depend on the estuary for food and livelihoods and have a strong cultural connection to the estuary and coastal environment (Sowman 2021). They have been working collaboratively with researchers from the University of Cape Town (UCT) for several years on different issues related to the fishery (Sowman 2009, 2017; Rice 2021), as well as with various NGOs including the Masifundise Development Trust (MDT), Abalobi, the Legal Resources Centre, and more recently Protect the West Coast, and have built strong and trusting relationships with these social partners over an extended period of time. Then there are the various environmental departments at provincial level, namely Cape Nature and the Department of Environmental Affairs and Development Planning (DEADP), as well as the local environmental officer within the local municipality, who have expressed concerns and written objections regarding the prospecting and mining applications. Local farmers and recreational users, although less organised and vocal, have also expressed concerns about mining in the vicinity of the Olifants Estuary at various community workshops and meetings with EIA consultants over the research period. On the other hand, a number of actors regard mining as an opportunity to improve the regional and local economy and provide jobs for rural communities, and are supportive of the expansion of mining in this region (DMRE 2022). These include the mining companies, certain departments in the local municipality, the national and provincial DMRE, and the Minister of Environmental Affairs. Local fishers and farmers are largely distrustful of government, especially given the high levels of corruption that have been exposed over the past 10-15 years, often referred to as 'state capture' (Madonsela 2018; Chipkin et al. 2018), as well as the slow pace of socio-economic reform in rural areas.
Current status and strategies employed to challenge decisions
The conflict has largely been between the local fishing communities, working with their social partners, and the mining companies and their consultants, who are supported by the DMRE and the Minister of Environmental Affairs. The conflict between these groups was evident at a public meeting in Ebenhaeser in 2020, when fishers became angry with the environmental consultants and mining representatives and eventually stormed out of the meeting. Fishers, with support from researchers and NGOs, have written objections to the DMRE and lodged appeals with the Minister of Environmental Affairs against these various prospecting and mining applications. One of the appeals, concerning prospecting on the northern bank of the Olifants estuary, was upheld, and the Minister of Environmental Affairs required the applicants to conduct further studies and meaningful public participation. Although the communities were dissatisfied with the quality of the additional reports and the participation process, the appeal delayed the decision, forced the consultants to recognise the community as a key stakeholder, and enabled the community to strategize on next steps. Fishers have also voiced their concerns at various national fisher forums, on social media platforms, and at workshops and conferences, including the 4th World Small-Scale Fisheries Congress held in Cape Town in November 2022.
In the case of diamond mining on the beach near Doringbaai, fishers have sought legal advice, prepared affidavits, and joined the non-profit organisation Protect the West Coast (PTWC) in its application for an interdict and review of the decision to renew mining rights for a further 30 years without environmental authorisation (Protect the West Coast (PTWC) and 4 others vs Minister of Mineral Resources and Energy (MRE) and 7 others 2022). There have also been some tensions within the communities, as some members supported prospecting and mining due to its potential for job creation. However, as more information became known about the actual number of jobs, the skills required for these jobs, and the impacts of mining, communities have become more united in their opposition to mining.

Case study context
Located on the Indian Ocean coast of South Africa, Richards Bay is one of the central business districts of the uMhlathuze Local Municipality in the KwaZulu-Natal province. The town and surrounds are zoned as an industrial area and are home to heavy-duty industries including (i) a mining company, Richards Bay Minerals (RBM), a subsidiary of the Australian trans-national mining company Rio Tinto, which is mining titanium off the coast of the Richards Bay area; (ii) Transnet, a parastatal managing the Richards Bay port/harbour; and (iii) Foskor, an industry responsible for producing phosphates and phosphoric acid. These industrial developments are taking place adjacent to marginalised rural coastal communities who have lived in the area and relied on the coastal and marine environment for generations.

Since 1976, the Richards Bay Estuary has developed into South Africa's largest cargo-handling port and includes associated industrial facilities such as a coal multi-purpose terminal, as well as a small craft harbour. The bay continues to function as an estuary of high biodiversity value and has been described as a unique and productive ecosystem that supports complex food webs and functions, including vital spawning grounds for a diverse range of marine fish and estuarine organisms (Van Niekerk and Turpie 2012). Under Operation Phakisa expansion plans, developments including marine aquaculture, a ship repair terminal, and a dry-docking facility, all within the geographical boundaries of the Richards Bay harbour, have been developed.
Over the past decade, the livelihoods and way of life of coastal communities in the Richards Bay area have faced uncertainty and insecurity due to the rapid industrial expansion as well as plans to expand the harbour and extend mining operations. The extension of mining operations south of Richards Bay is an issue of grave concern to the Dube community, who have relied on rich coastal and estuarine resources in the Richards Bay area for food and livelihoods for generations. Commercial agriculture, subsistence farming, some tourism, and small-scale fishing are key livelihood activities of this community. In the 1970s, a portion of the estuary was converted into a deep-water harbour, now the port, while the remaining estuarine area was left undeveloped. Although the Dube community has a long history of fishing in the estuary, on the lake, and in coastal waters, since the establishment of the port they are limited to fishing in the lake only, as access to their traditional fishing grounds has been restricted due to developments related to the port, increased fishery regulations, and coastal mining. The environmental authorization for these developments in the harbour has been approved with great speed and without adequate public participation and consideration of the environmental and social impacts on resource-dependent communities. Interviews with members of the Dube community conducted in 2022 revealed that they have never been consulted about the port and the extension of coastal mining activities.

RBM has been mining north of Richards Bay since 1976, but, with a limited amount of area left to mine, it is expanding its operations to the south of Richards Bay, in the Dube and Mkhwanazi areas. Research conducted by Mbatha and Wynberg (2014) demonstrated that the cumulative impacts of RBM mining have been detrimental to local livelihoods and that the benefits promised by RBM have not been realized. The impacts and implications of this extension southwards threaten access to resources as well as the livelihoods and way of life of the Dube community. There is considerable uncertainty about whether mining expansion will result in the relocation of local communities and how their rights will be protected. For example, the community utilizes sacred mountains within the mining lease area as burial sites as well as for specific rituals and cleansing ceremonies. In addition, the indigenous trees and plants are a source of edible and medicinal leaves, fruit, and herbs. As expected, the uncertainties regarding community resettlement have caused tensions and resulted in community mobilisation and activism against the developers, leading to threats against local activists. The murder of a prominent activist opposing the Dube relocation in 2018 highlights the dangers facing community members and raises questions about local people challenging plans and decisions that affect their lives and livelihoods.
Key actors and relationships amongst actors
There are three categories of actors involved in the Richards Bay conflict. The first category is government: the Department of Mineral Resources and Energy (DMRE), the parastatal agency Transnet (which is responsible for the port and its expansion), the national fishery authority DFFE, and the provincial conservation agency, Ezemvelo KZN Wildlife. The second category is the private sector, including RBM and the developers of a commercial aquaculture project. The small-scale fishing community at Dube is the third group of actors and the group most likely to be impacted by the proposals and expansions. However, at this stage, it is not clear how their traditional fishing rights will be affected by the expansion of these blue economy activities. There are coalitions between national government departments and the private sector, which both support industrial development in the area. While RBM obtained the rights to mine the area from the government during the apartheid era, they are required to negotiate with the Dube traditional authority and community regarding the proposals and explain how the community will benefit from mining. This has not occurred yet, and the community remains in the dark regarding how the mining proposals will impact their livelihoods. The community is poor and rural and lacks the resources, capacity, and skills to engage with those managing blue economy projects and those in decision-making positions.

Current status and strategies employed to challenge decisions
Conflict in the Dube area between coastal communities, the mining company, Transnet, and government is mounting as blue economy interventions are being implemented and expanded. The Richards Bay harbour is one of the strategic Operation Phakisa projects in the KwaZulu-Natal province as it is significant for mineral exports, but the Dube community is increasingly insecure about the implications of the port expansion and of the mining and aquaculture developments for their land ownership, their coastal livelihoods, and their use of and benefit from these coastal resources. Fishery managers have excluded the community from fishing in the estuary and have labelled them as 'illegal fishers', while the community claims to have a long history of fishing in the area. Local people are fearful of challenging these blue economy plans and decisions in view of the threats and dangers experienced by community activists. These developments have a disproportionate impact on vulnerable coastal communities, in particular those facing relocation and those reliant on resources for their livelihoods.

The Dube community is a poor, rural community that has lacked the capacity, resources, and external support to engage with and challenge government and private sector actors regarding blue economy plans and activities that affect their lives and livelihoods. Civil society presence in this area is weak. Although a local fishery co-operative has made efforts to mobilise and organise community members, local people have indicated that it is difficult to sustain these efforts because of poverty and marginalisation and also due to fear of intimidation and violence. Thus, the community has not been very effective in having their voices heard and influencing planning and decision-making.
In 2021, the community reached out to one of the authors, who had started working in the area, requesting information and assistance with access to public participation processes and links to NGOs working with small-scale fishers in the area. At about the same time, an NGO operating in Durban, the South Durban Community Environmental Alliance (SDCEA), opened an office in Richards Bay and started working with communities affected by blue economy interventions. Our research team has engaged with SDCEA on various issues regarding blue economy initiatives and recently explored possible interventions to support the Dube community. Based on SDCEA's long history of advocating for the rights of marginalised communities living in the industrial area south of Durban, lessons learned from challenging environmentally harmful developments in this area are being shared with communities in the Richards Bay area. SDCEA has organised several workshops and community exchange visits in the area in order to raise awareness and build solidarity and trust amongst communities concerned about expansion in the Richards Bay area. One of the authors has worked with SDCEA to facilitate engagements with the community and provide research inputs that can strengthen the activism work done by the NGO.

Case study context
Saldanha Bay and Langebaan Lagoon are situated on the west coast of South Africa approximately 120 km northwest of Cape Town (Fig. 1). The Saldanha Bay-Langebaan Lagoon system consists of a natural deep-water port at Saldanha Bay, with the Langebaan Lagoon extending 17 km to the southeast. Saldanha Bay Municipality has a population of approximately 100,000 people (Statistics South Africa 2012) and includes the towns of Saldanha Bay, Langebaan, and Vredenburg, and the area is recognised for its conservation, tourism, fishing, and industrial importance. The southern section of the lagoon system includes the West Coast National Park, parts of which were declared a Ramsar site in 1988. The Langebaan Lagoon also has a long history of supporting small-scale fishing communities (Sunde 2014). The Bay also hosts a sea-based Aquaculture Development Zone (ADZ), established as part of the Operation Phakisa initiatives, which is set to undergo further expansion (Clark et al. 2020).

Industrial development in the area increased significantly with the construction of a deep-water export port between 1973 and 1976. The port was intended to create a regional node for economic development with the opening of the bulk iron ore terminal in 1976. In the last decade, diversification of the Saldanha economy has been a priority, with the listing of the region as one of the presidential priority development regions (The Presidency 2012) due to its strategic location to serve the oil and gas sector along the west coast of Africa. With the launch of Operation Phakisa in 2014, Saldanha Bay was identified as one of the government's Strategic Integrated Projects (SIPs) with the aim of fast-tracking development and growth, which will result in further industrialisation of this area (South African Government n.d.).
The iron ore terminal (IOT) currently has the capacity to handle approximately 60 million tonnes per year, and plans are in progress to upgrade the infrastructure to increase the throughput tonnage to 80 million tonnes. Additional ore exports, including lead, zinc, copper, and manganese as well as zircon and rutile from heavy mineral sand mining, have also increased exponentially from the Multi-purpose Terminal (MPT) over the last decade. South Africa holds approximately 78% of the world's identified manganese resources (Steenkamp et al. 2018), and the Multi-purpose Terminal handles 15% of the total manganese exports from South Africa.

The combination of conservation, tourism, fishing, and industrial development has resulted in years of conflict between different stakeholders. This conflict is particularly evident in the air quality space, with the iron ore terminal being one of the most contentious issues as a result of the dust generated by the transport, handling, and stockpiling of the ore. The different sectors operating in the town are regarded by many actors as incompatible. For many, industrial activities make the region an unattractive tourist destination, while health concerns from poor air quality make the town an unsuitable place to live. The fishery and aquaculture sectors have raised concerns about water contamination from the ore (Clark et al. 2018) and other shipping-related discharges. Many stakeholders are concerned that environmental and health issues are not being taken seriously and not being incorporated into decision-making (WSP 2018).

A number of community members have organised themselves as the Red Dust Action Group with the main aim of getting the polluter (Transnet) to pay for the damage to buildings caused by the dust from the iron ore that is transported, stockpiled, and handled in the port (Red Dust Action Group 2021). In addition, several community members believe that their health is also negatively impacted by the port operations, despite dust from iron ore not being regarded as toxic. The recent proposal to increase throughput of other minerals from the Multi-Purpose Terminal has resulted in additional conflicts due to a distrust of the environmental authorisation process and what many residents believe to be inadequate monitoring of dustfall and ambient concentrations of particulate matter (WSP 2018).

Low-income communities in the region who are dependent on the jobs generated from the port activities are more likely to be severely impacted by the dust generated from the handling and transporting of the ore. They are exposed to the dust in an occupational setting, as well as potentially along the railway lines running through their communities in the low-income settlements. Members from these poor communities are least likely to voice any concerns, as they are dependent on employment from the port. Lack of participation stems in part from not being informed about proposed developments and also from a lack of confidence to attend and participate in meetings to address air quality concerns. The more affluent residents and stakeholders from other industries (e.g. aquaculture) are the most likely groups to voice their concerns about future development plans.
Key actors and relationships amongst actors
The key actors in the Saldanha Bay air quality space include the government, industry, conservationists, and local communities. The government includes the three spheres of government involved in driving the BE strategy and those departments monitoring and enforcing environmental legislation and regulations. These spheres include the Saldanha Bay local municipality, the West Coast District Municipality, and the provincial Western Cape Government, as well as the national departments of DFFE and DMRE. With regard to the management of air quality, the roles of national, provincial, and municipal government are well-defined in the National Environmental Management: Air Quality Act (DEAT 2004). However, an important change was made to the regulations in 2014, when the designated authority for approving all atmospheric emission licences (AELs) for all state-owned enterprises was changed from the municipal level to the national level. This change was viewed by many in the community as the national government taking further control of driving economic growth in the area.

The application for environmental authorisation from Transnet in 2017 to increase storage of manganese ore at the Multi-Purpose Terminal from 90,000 to 200,000 tonnes added fuel to the conflict amongst key actors. The national DFFE ruled that an EIA was not necessary as there was no additional infrastructure being constructed and that only an AEL was required (Malaza 2017). DFFE approved the AEL based on the Air Quality Impact Assessment (AQIA) conducted by an environmental consultant (WSP 2017). A number of concerned residents and local businesses, such as the Bivalve Shellfish Farmers Association and the Saldanha Bay Water Quality Forum (SBWQF), an NGO involved in water quality monitoring, as well as representatives of local and provincial government, raised objections in response to the AQIA (WSP 2018).

Current status and strategies employed to challenge decisions
The conflict around dust generated by the iron ore terminal has been ongoing for years. The recent increase in other ores that are potentially more toxic (e.g. manganese) has aggravated the conflict. The AEL was issued in 2019 despite receiving extensive comments on the AQIA from various parties (WSP 2018). This prompted the opposing parties to lodge an appeal against the provisional approval for the increased manganese storage with the Minister of DFFE. The appellants included a wide range of actors and local residents, as well as the local Saldanha Bay Municipality and the West Coast District Municipality. The Minister upheld the grounds for appeal and set aside the provisional AEL in January 2020 (Creecy 2020).

However, the withdrawal of the AEL has resulted in stockpiling of the ore on privately owned land next to the port. This has led to an increase in the number of applications for approval for smaller quantities of ore storage by different operators. Concerns regarding the cumulative impact of the handling and storage of the ore are not being taken into consideration. Transnet has again applied for increased storage of manganese ore on the MPT (Jones and Armstrong 2022), and a number of stakeholders have raised concerns about the increase in open stockpiling of manganese ore (Jones and Armstrong 2022).
The local residents and organisations that are aware of these applications and associated impacts are well-networked and are very involved in the public participation processes that are mandated by law. The same people are also very actively involved in the Environmental Stakeholders Forum, which is a requirement as part of the Environmental Management Plan (EMP) dustfall monitoring plan for the Iron Ore Terminal. However, many of them expressed concern regarding the inadequacy of the public participation process for the AQIA undertaken in 2017. In addition, many workers involved in the handling and storage of iron ore and other minerals, as well as other marginalised groups such as the small-scale fishers, are largely unaware of the applications and processes underway and the potential impacts of these activities on the environment and health.

Various strategies are being employed by concerned citizens working with researchers and NGOs to address the conflicts over deteriorating air quality in the Saldanha Bay area. Researchers are working closely with the municipal officials, some local residents, and the port authority (Transnet) to determine how cumulative impacts of the increased quantities of ore shipments can be assessed, and to improve the monitoring capability and reduce the potential exposure of people and the environment in the region. The GCRF Mine Dust and Health Network (GCRF MDHN, www.minedust.org) organised a stakeholder event (including Transnet) in March 2022 to raise awareness and explore strategies to address the environmental and health concerns. However, representation from workers at Transnet and the poorer communities, including local fishers, has been limited. Two recent workshops with researchers from UCT in the Health Sciences, Environmental and Geographical Sciences, and Chemical Engineering Departments and youth from the area provided a forum to raise awareness regarding the environmental and health concerns associated with an increase in mine dust and the platforms that exist to learn more about plans and projects for the area, as well as procedures for submitting comments and objections.

The dialogue initiated by the GCRF MDHN has created a mediated safe space for stakeholders to take part in an initiative that relies on credible data and information with the express goal of solving the dust problem. This dialogue is still developing, but it is clear from these initial engagements that different stakeholders have diverging ideas of what the solutions might entail. Despite the differences, there appears to be a willingness to engage and find a mutually agreed-upon strategy, facilitated by independent researchers, to address the conflict.

Findings and discussion
An examination of the root causes and drivers of these conflicts suggests that while there are context-specific factors in each case that have exacerbated the tensions and conflicts, there are a number of general root causes that pertain to all cases (see Table 1). Firstly, the structural inequalities and injustices associated with South Africa's colonial and apartheid past have left a legacy that continues to render marginalised coastal communities vulnerable to plans and decisions that ignore their current socio-economic conditions and vulnerability context (Sowman and Raemaekers 2018). Mining is a major driver of the South African economy (Broadhurst et al.
2014) and, despite its legacy of environmental degradation and social injustice, it remains a key focus of government's economic growth and recovery plan. Thus, the national Department of Mineral Resources and Energy (DMRE) is actively encouraging investors to apply for rights to prospect, explore, and develop these mineral resources, promising jobs and a boost to economic growth through this sector as well as the other blue economy initiatives.

There are clearly divergent narratives across actors regarding the blue economy in South Africa. Government and the private sector are aligned in their narratives regarding the benefits of this fast-track economic growth model for South Africa and, in particular, for poor unemployed communities (Bond 2019; Potgieter 2021). Local communities, local businesses, and various NGOs are much more cautious and opposed to this economic model, especially in relation to the growth of sectors such as oil and gas and mining (Masie and Bond 2018; Bond 2019; PTWC and 4 others vs Minister of MRE and 7 others 2022; Christian Adams and Others versus the Minister and Others 2022; Sowman 2022; Sunde 2022). The focus on oil and gas as a growth sector and the support for mining and industrial expansion do not align with South Africa's global commitments to reduce its carbon emissions, yet government has developed a narrative that defends its growth model and assures the public it will meet its climate targets (Bond 2019; DFFE 2022).

While there is a policy and legislative framework in place to regulate the mining sector and safeguard environmental and human rights, the new amended procedure for assessing impacts and fast-tracking approvals, referred to as the One Environmental System (Humby 2015), has raised concerns about the adequacy and robustness of these procedures and decision-making processes. An added concern is that DMRE has both the mandate to facilitate the exploitation of mineral and oil and gas resources and the authority to approve or reject the EIA conducted for an application. If civil society is aggrieved by the decision granted by DMRE, they may lodge a formal appeal with the Minister of Environmental Affairs. However, based on the environmental minister's responses to prospecting and mining appeals over the last 2 years, it would appear that DFFE is fully behind the blue growth agenda regardless of concerns expressed by local people and potential environmental and social impacts.
Public participation, and in particular the involvement of local and indigenous communities in these planning and environmental assessment processes, has been weak. In the case of the Olifants and Richards Bay communities, local people were not directly consulted and only learned about the prospecting and mining applications via their social partners. In Saldanha Bay, while there is an active group commenting on proposals and plans, the workers at the MPT and poorer sectors of society have not been adequately consulted about the increased ore being transported and stockpiled in and around the port, nor have they been informed of the environmental and health impacts of this expansion. The public meetings held to consult interested and affected parties are usually dominated by the applicant and his/her consultants, who present the project in a very favourable light. These public participation sessions tend to be information-giving sessions and do not provide a forum for meaningful engagement or a safe space for communities to raise concerns. Furthermore, consultants and applicants often present information in a manner that is not accessible to local communities, who may feel intimidated by the data presented and reluctant to ask questions or voice concerns.

Communities are also distrustful of the information provided by consultants, and in particular of the benefits promised and their assessment of environmental and social impacts. A deep distrust of applicants and their consultants has developed amongst communities due to the failure to take account of their views and inputs during the planning and assessment processes. This lack of meaningful consultation has angered community members, who are of the opinion that their concerns and rights are not being heard and respected. They are also concerned about the government's lack of transparency regarding information about current and future planned blue economy projects and the ongoing allegations of corruption in awarding tenders and approving projects. The weak socio-economic circumstances and high unemployment in these communities mean that some residents are vulnerable to projects that promise jobs and improved socio-economic conditions regardless of environmental, social, and health impacts. While a few community members do support mining and growth of other sectors (e.g. oil and gas), this research, as well as our involvement in various other workshops and forums with coastal fisher groups, suggests overwhelming opposition to coastal mining and rampant expansion of the blue economy.

Failure to undertake meaningful consultation with local and indigenous communities prior to decision-making associated with blue economy plans and projects has been identified as a major shortcoming by various scholars investigating the impacts and implications of implementing such projects and initiatives across the world (Jentoft et al. 2022; Bennett et al. 2021, 2022; Sunde 2022). Adoption of a blue growth agenda without due consideration of the rights, socio-economic needs, and voices of local communities has exacerbated ocean grabbing, displacement, and social inequity, especially amongst poor and vulnerable communities (Childs and Hicks 2019; Cohen et al. 2019; Tafon 2019; Das 2023). While various papers and technical reports have put forward principles and recommendations for improving participation of poor and vulnerable groups in planning and decision-making regarding blue growth (Bennett et al. 2022; Österblom et al.
2020; FAO 2022), translation of these principles and guidelines remains challenging in many countries. The power asymmetries amongst protagonists of blue economy projects and those affected by these developments also reduce the potential for conflict resolution through deliberative and collaborative processes (Bennett et al. 2019; Bennett et al. 2022; Ertör and Hadjimichael 2020; Tafon et al. 2022).

Based on our research, it was evident that communities are employing an array of strategies to challenge mining proposals and related industrial expansions which they consider harmful to the environment, including their socio-economic environment, cultural heritage, and health. Different strategies are being employed by communities and civil society depending on their resources, capabilities, and skills, as well as their networks and the strength of relationships with social partners. A summary of the strategies employed by communities and civil society in the three case studies is presented in Table 2. Where communities and other local stakeholders have developed trusting partnerships with researchers and NGOs in the case studies examined, they are kept abreast of plans and proposed developments and are informed about opportunities for public involvement. In these contexts (Olifants Estuary and Saldanha Bay), social partners are able to communicate via WhatsApp or cell phone with community leaders or members of local forums (e.g. the Environmental Stakeholder Forum in Saldanha Bay) or community structures (e.g. a local fishing committee) to inform them of new proposals or developments and assist with access to documentation, preparation of objections, and appeals to challenge information presented or decisions they consider unfair or unsustainable. These social partners are also able to link communities to legal experts who can assist in advising them on their rights and appropriate legal strategies to consider. Where communities are isolated and have not developed strong partnerships and networks (such as in Dube), their ability to participate in these assessment and decision processes has been very limited, and decisions are taken without their involvement or consent, even where customary rights are at stake. These relationships with social partners provide the community with access to information, expertise, resources, and networks. Through these relationships and interactions, communities become aware of their rights and familiar with the procedures for participation in planning and decision-making processes.

The important role of NGOs, researchers, and human rights organisations in working with communities and civil society groups to challenge blue growth that is deemed to be unjust and unsustainable, and to facilitate conflict resolution, has been well-documented by researchers (Bennett et al. 2022; Jentoft et al. 2022; Sunde 2022). Reports from various national and international workshops and high-level panels have also highlighted concerns about the focus on economic growth at the expense of social equity and environmental integrity (Masifundise 2021; Österblom et al. 2020). In particular, several scholars have highlighted the failure to take proper account of social equity considerations in blue growth plans and initiatives and the unfair burden that these projects place on poor communities (Bond 2019; Cohen et al. 2019; Tafon 2019; Voyer et al. 2018; Das 2023). Proposals and recommendations to place social equity at the centre of blue economy initiatives are key to facilitating fair and equitable outcomes and reducing conflict.
The current environmental regulatory framework for prospecting and mining applications and expansions provides opportunities for interested and affected parties to raise their concerns and provide comments on the environmental assessment reports. Researchers are working with communities to enhance their understanding of the issues and impacts associated with proposals and are encouraging them to draw on their local knowledge to provide a more integrated and place-based understanding of how these proposed developments will impact on local communities and their livelihoods. Through these knowledge exchange and co-production processes (e.g. Olifants and Saldanha Bay), the comments and appeals submitted reflect the deep knowledge that local people have about their environment. Through these submissions, the rights, needs, and priorities of communities are also communicated. However, despite these efforts, these concerns are seldom integrated into revised plans and the finalisation of EIA reports in South Africa. Communities and civil society are thus increasingly forming coalitions with other concerned stakeholders in the area and enlisting assistance from legal NGOs to challenge decisions. Aside from the potential to overturn a decision or delay the start of a mining project, being involved in these legal processes provides a space for communities and their social partners to collaborate, share, and co-produce knowledge and form strong alliances. While litigation is often regarded as the last resort in a conflict situation, in a context of historical injustices, structural inequalities, asymmetrical power relations, and weak public participation processes, legal mobilisation by local communities and their social partners is a necessary strategy in response to government's approach to blue growth. This has been the response of NGOs and local fishers in the mining case explored in the Olifants Estuary case study. The increase in the number of legal cases being brought to the courts in South Africa by civil society regarding unsustainable development is indicative of the failure of more consultative and deliberative processes to address coastal conflicts (see, for example, Christian Adams and others vs Minister of MRE and others 2022 and PTWC and 4 others vs Minister of MRE and 7 others 2022). Communities are essentially having to employ these strategies to safeguard their environments, fishing livelihoods, and cultural heritage. Our research suggests that while conflict can be a catalyst for initiating transformation, in the context of historical and ongoing structural inequalities and injustices, poverty, and exclusion from decision-making processes, communities are forced to challenge several blue economy processes and decisions through strategies such as protest action, networking with other communities, strengthening partnerships with researchers and NGOs, and taking legal action. Through these engagement processes, new alliances are forged amongst groups and individuals who share a common vision for their local environment and who may previously have been at odds regarding access to and use of the marine environment. However, engagement with the proponents (often inclusive of certain sectors of government) of blue economy projects is not a feasible approach in contexts of structural inequities, poverty, and power asymmetries. Certainly, for local communities, engaging in discussions with proponents and their consultants about these projects without a full understanding of what is being proposed and of what the
implications of such proposals may be for them places them at risk of being co-opted into supporting plans and projects that promise benefits and underplay environmental and social impacts.

While the strategies adopted in the cases outlined in this paper are locality specific, they signal a growing social movement of coastal communities working collaboratively with social partners (academics, researchers, NGOs, professionals, legal experts) to challenge blue economy proposals, plans, and decisions that exclude them and may lead to significant impacts on their lives and livelihoods.

Conclusion
This paper has set out to explore the strategies that local communities and actors employ to challenge blue economy plans and projects that they regard as unsustainable and unjust. We argue that tackling conflict in contexts characterised by a history of structural inequality, oppression, marginalisation, and significant power imbalances requires local communities to employ innovative strategies to challenge processes and decisions, forcing opponents to acknowledge them as key players in the ocean space with rights, needs, and priorities that need to be respected. Only then can communities consider engaging with proponents of these blue economy projects in an equal and meaningful way. Strategies employed by communities in these cases have fostered local partnerships across actors that have not previously worked together (e.g. local fishers and landowners) and forged alliances amongst groups (e.g. local fishers, conservation departments, landowners, local businesses) that share a common vision for the environment under threat. Building networks with social partners, including researchers, legal experts, NGOs, and other civil society organisations, has enhanced knowledge sharing amongst these partners and strengthened the capacity of local communities to engage more confidently in these processes. Furthermore, challenging plans, environmental assessments, and decisions in the form of protests and submissions of comments, as well as media releases, often slows down these decision-making processes and enables civil society to strategize with social partners regarding next steps. In South Africa, legal mobilisation has increasingly been employed to challenge unfair processes and decisions. Threatening and taking legal action have required proponents of blue economy projects to acknowledge local communities as key players in the ocean economy and to recognise that their rights (substantive and procedural) need to be acknowledged and respected. We conclude that for communities to be heard and empowered to engage in planning and decision-making processes, the various strategies employed in these cases, including building awareness about rights (human and environmental), engaging in protest action, fostering trusting partnerships, forging alliances and networks with social partners, and legal mobilisation, can be effective in slowing down decision-making processes, demanding recognition, and levelling the playing fields. These strategies are especially necessary in a context where structural inequality, inequitable access to resources, and extreme power imbalances amongst blue economy actors persist.
Fig. 1 Location of case study sites

Table 1 Root causes of conflict in 3 case studies in South Africa

Root causes | Evidence from cases
Structural inequalities | Legacy of apartheid persists; marginalisation and lack of services, facilities, and social protection in all 3 cases; limited education and social support
Social injustices | Govt. encourages extractive industries without adequate consideration of social, cultural, and environmental impacts, leading to conflicts, protests, and litigation
Policy mismatches | Contradictions across policy arenas; e.g. govt. supports growth of oil and gas sector despite strong opposition from civil society and commitments to uphold climate change commitments
Divergent narratives of the BE | Govt. claims BE is addressing poverty, unemployment, and inequality; communities claim BE is leading to infringement of human rights and social and environmental harm
Political agendas and alliances | Govt. collaborates with private sector to fast-track BE projects; main focus is economic growth and increasing revenue flows; increasing concerns regarding corruption in awarding tenders
Lack of consultation | Public participation is inadequate, especially where poor and marginalised communities are concerned; views of local communities are seldom heard and integrated into planning and decision-making
Unequal power relations | Govt. and private sector voices powerful in meetings with communities; decisions taken unilaterally
Knowledge/data disputes | Communities distrustful of govt. and consultants' data (e.g. in EIAs); govt. fails to take account of local and indigenous knowledge in decisions
Distrust amongst actors | Historical factors mitigate against fostering trusting relations; govt. has not delivered on promises of jobs, employment, and improved quality of life for poor coastal communities

Table 2 Strategies employed by actors to address conflict in study sites
Implementation of a cystic fibrosis lung transplant referral patient decision aid in routine clinical practice: an observational study

Background: The decision to have lung transplantation as treatment for end-stage lung disease from cystic fibrosis (CF) has benefits and serious risks. Although patient decision aids are effective interventions for helping patients reach a quality decision, little is known about implementing them in clinical practice. Our study evaluated a sustainable approach for implementing a patient decision aid for adults with CF considering referral for lung transplantation.

Methods: A prospective pragmatic observational study was guided by the Knowledge-to-Action Framework. Healthcare professionals in all 23 Canadian CF clinics were eligible. We surveyed participants regarding perceived barriers and facilitators to patient decision aid use. Interventions tailored to address modifiable identified barriers included training, access to decision aids, and conference calls. The primary outcome was >80% use of the decision aid in year 2.

Results: Of 23 adult CF clinics, 18 participated (78.2%) and 13 had healthcare professionals attend training. Baseline barriers were healthcare professionals' inadequate knowledge for supporting patients making decisions (55%), clarifying patients' values for outcomes of options (58%), and helping patients handle conflicting views of others (71%). Other barriers were lack of time (52%) and needing to change how transplantation is discussed (42%). Baseline facilitators were healthcare professionals feeling comfortable discussing bad transplantation outcomes (74%), agreeing the decision aid would be easy to experiment with (71%) and use in the CF clinic (87%), and agreeing that using the decision aid would not require reorganization of the CF clinic (90%). After implementing the decision aid with interventions tailored to the barriers, decision aid use increased from 29% at baseline to 85% during year 1 and 92% in year 2 (p < 0.001). Compared to baseline, more healthcare professionals at the end of the study were confident in supporting decision-making (p = 0.03) but continued to report inadequate ability to support patients in handling conflicting views (p = 0.01).

Conclusion: Most Canadian CF clinics agreed to participate in the study. Interventions were used to target identified modifiable barriers to using the patient decision aid in routine CF clinical practice. CF clinics reported using it with almost all patients in the second year.

Background
Cystic fibrosis (CF) is one of the most common inherited fatal diseases [1]. Although survival has improved dramatically over the last 50 years, many patients with CF develop end-stage lung disease in young adulthood or middle age [1]. For patients with end-stage lung disease, lung transplantation can improve exercise tolerance, quality of life, and survival [2]. However, there are significant risks or inconveniences with lung transplantation, including infection, transplant rejection, survival beyond 5 years limited to 50%-60%, relocation to only one of five transplant centers in Canada, and the need to build a relationship with a CF transplantation healthcare team [3]. Adults with CF experience considerable difficulty making the decision about lung transplantation [4,5]. To help these patients, we previously developed the CF lung transplant referral patient decision aid based on the Ottawa Decision Support Framework and the International Patient Decision Aid Standards [6,7].
Elements in the decision aid include a focus on an explicit decision, best available evidence on treatment options for end-stage CF lung disease (transplant versus supportive care), probabilities of benefits and risks, an explicit values-clarification exercise, and structured guidance in making the decision. This patient decision aid includes a one-page summary report to facilitate sharing the patients' knowledge, values, and preferences with healthcare professionals. This one-page report can be filed on the patients' health record [8]. We conducted a randomized controlled trial to evaluate this CF lung transplantation referral patient decision aid at nine CF clinics in Canada and five in Australia [9]. Compared to usual care, patients randomized to the patient decision aid had greater knowledge, more realistic expectations about the benefits and risks of lung transplantation, lower decisional conflict, and higher satisfaction with the decision-making process. Our findings were consistent with other trials of patient decision aids [10]. Our findings were disseminated through conference presentations and journal publications, and by adding the patient decision aid (English, French) to the international A to Z patient decision aids inventory [http://decisionaid.ohri.ca/decaids.html] [4,9,11,12]. Despite positive patient outcomes from using the patient decision aid and the use of passive dissemination of the study findings, there was no evidence indicating that it was routinely being used in adult CF clinics. Common barriers consistently interfering with implementation of patient decision aids across multiple studies were that healthcare professionals had inadequate training regarding their use, were indifferent about using them, lacked confidence in the patient decision aid content, and were concerned about disrupting established workflows [13]. Another systematic review showed that healthcare professional training increased shared decision-making and use of patient decision aids [14]. Importantly, research has shown that successful implementation of evidence into clinical practice requires targeted and tailored interventions based on identified barriers to changing healthcare professionals' behaviors [15,16]. The purpose of this study was to evaluate a sustainable approach for implementing the lung transplant referral patient decision aid into clinical practice in adult CF clinics. Specific objectives were: a) to monitor the change in use of the CF lung transplant referral patient decision aid after exposure to the interventions targeted to overcome identified barriers; and b) to assess change in healthcare professionals' perceived barriers to using the patient decision aid with CF patients.

Methods
A prospective pragmatic observational study was conducted from September 2010 to August 2012 and was guided by the Knowledge-to-Action Framework [17]. This framework is designed to enhance uptake of evidence-based innovations in clinical practice. After identifying an evidence-practice gap (i.e., the CF lung transplant referral patient decision aid was not being used in clinical practice), the next steps in the framework involve: a) assessing for adaptations, barriers, and facilitators to using the patient decision aid; b) choosing implementation interventions to overcome identified barriers; and c) monitoring use, including sustained use, of the patient decision aid.
Sustainability is most often measured 2 years or more after initial implementation to determine continued use after initial efforts to increase adoption [18,19]. Our study was approved by The Ottawa Hospital Research Ethics Board.

Setting and participants
All 23 accredited adult CF clinics within eight different provincial healthcare systems in Canada were eligible to participate. Patients with CF attend these clinics routinely (e.g., every 3 to 4 months) and have their lung function measured at each clinic visit. Most CF clinics have one specialized physician and one nurse coordinator. Usually, both are responsible for counseling patients making decisions about referral for lung transplantation. Other healthcare professionals such as pharmacists and social workers may be involved in some clinics. Healthcare professionals who routinely counseled patients from these 23 clinics were invited to participate in our study. To increase awareness of the study, a 1-h presentation on shared decision-making was provided to Canadian CF healthcare professionals at the North American CF Conference in October 2010.

Surveys
We conducted two types of surveys: patient decision aid use and barriers assessment (see Figure 1). At the start of the study, CF clinics were emailed to determine the number of patients in their clinic during the year prior to study initiation (September 2009-August 2010) who had: a) engaged in a lung transplant referral discussion; b) received a lung transplant referral; and c) received the patient decision aid. Whether or not healthcare professionals participated in the study interventions, participating CF clinics were re-surveyed from September 2011 to August 2012 to determine these same outcomes. At participating clinics, tracking logs were used to monitor all potential referrals to transplant centers and whether or not the CF lung transplant referral patient decision aid was used. CF nurse coordinators sent the tracking logs to the study coordinator (KV) when referral for lung transplantation was discussed with a new patient. The tracking log included date of the discussion, use of the patient decision aid (yes/no), timing of patient decision aid use, date of follow-up discussion, and any general comments. This survey and the tracking logs provided data on the monitoring use phase of the Knowledge-to-Action Framework.

The barriers assessment survey and a copy of the patient decision aid were sent to nurse coordinators and CF physicians at the start of the study (September-November 2010). The 'barriers survey' measured healthcare professionals' perceived barriers and facilitators towards using the patient decision aid using five items at the level of the healthcare professionals (e.g., knowledge, confidence, skills) and six items at the level of the organization (e.g., adequate time, fit with workflow, ease of use in practice). Respondents were instructed to rate each statement on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) with neutral in the center. The survey items were previously validated in a study of physicians using principal component analysis [20] and subsequently validated in studies of nurses and other non-physician healthcare professionals [21,22]. At the end of the study, the survey was re-administered to those who responded to the baseline survey (July-August 2012).
The barriers assessment survey was conducted to be consistent with the Knowledge-to-Action Framework, and previous studies were more successful with implementing shared decision-making when barriers were assessed [14].

Implementation interventions
Results from the barriers survey and evidence on effective interventions were used to tailor the interventions for implementing the patient decision aid into routine clinical practice [23]. Table 1 maps interventions onto the barriers, and Figure 1 presents the timeline schematic for delivery of these interventions.

A five-hour workshop was provided to increase healthcare professionals' knowledge and skills in using patient decision aids and addressed several of the barriers. An expert in patient decision aids and training healthcare professionals facilitated the workshop (DS). The workshop objectives are available at https://decisionaid.ohri.ca/training.html. Two role play scenarios were used by all participants to build skills and increase their confidence in using the patient decision aid with patients. Informal performance feedback was provided within the role playing exercises using the Decision Support Analysis Tool [24]. The workshop was free of charge, but participants were reimbursed for travel costs and received an honorarium ($200 Canadian). At the end of the workshop, participants were asked to complete a knowledge test and satisfaction survey. The knowledge test administered post-workshop was the same ten questions used in the online tutorial [25]. Consistent with other training programs, the satisfaction survey aimed at assessing participants' overall rating as well as whether or not the workshop achieved the learning objectives and provided adequate information, new information, and enough role play exercises [26]. A similar workshop was shown to improve healthcare professionals' knowledge and skills compared to a control group [27,28].

Figure 1 Interventions and timeline illustrating change in use of the patient decision aid over the 2-year study period (n = 15 CF clinics). Each arrow indicates the timing of a separate study procedure or intervention.

The Ottawa Decision Support Tutorial in English and French was offered to healthcare professionals who were unable to attend the workshop [25]. This online tutorial provides a series of ten modules with review questions for each module. It takes about 90 min to complete. The tutorial has similar knowledge content to the workshop described above but did not include the skills-building exercises or group discussions. The tutorial has been shown to improve healthcare professionals' knowledge [27,28].

Access to the patient decision aid was enhanced by sending participating centers printed copies of the patient decision aid in English and French at the beginning of the study and anytime during the 2-year period when additional copies were requested. Online access to the patient decision aid was also provided [http://decisionaid.ohri.ca/decaids.html]. To facilitate implementation, use of the CF lung transplant referral patient decision aid was encouraged for patients with CF having a forced expiratory volume in one second (FEV1) of 30 to 40% predicted or when they experienced a rapid decline in lung function necessitating a hospital admission.
The patient decision aid was implemented by having a healthcare professional ask patients to complete the patient decision aid on their own, discuss patients' responses to the patient decision aid at the subsequent encounter, and collect the completed one-page summary report for inclusion on their clinic record. Given the content and structured decision-making guidance in the patient decision aid, it was also an intervention that could enhance healthcare professionals' knowledge and skills in supporting patients facing this decision and change the way clinic time is used for discussing decisions [10].

Conference calls were used to reinforce learning and provide ongoing support. The calls were consistently structured by asking participants to share their positive and negative experiences with using the patient decision aid and discussing strategies to address implementation issues. Calls occurred every 3 months in the first year and every 6 months in the second year. During the calls, notes were taken to summarize the discussion. By encouraging participants to share their experiences using the patient decision aid, the calls were able to address several identified barriers (see Table 1).

Statistical analysis
Characteristics of healthcare professionals were assessed using frequency distributions and univariate descriptive statistics. The primary outcome, sustained use of the patient decision aid by healthcare professionals for 80% of eligible patients at the end of year two, was analyzed using McNemar's test to assess annual trends in use of the patient decision aid. Barrier survey responses were reclassified into agree (strongly agree, agree), disagree (strongly disagree, disagree), and neutral. These data were then analyzed using Friedman's test, taking into account that the baseline and post-intervention scores were not independent. All analyses were conducted using SAS software, version 9.1 (SAS Institute, Inc., Cary, North Carolina). Content analysis was used for qualitative survey feedback and conference call notes. An illustrative sketch of these paired analyses is given after the baseline barriers results below.

Results
Of the 23 adult Canadian CF clinics, 18 agreed to participate, 3 agreed to provide data on the use of the patient decision aid only, and 2 clinics did not participate because they routinely referred all their CF patients with poor lung function for a lung transplant assessment without eliciting patients' informed preferences. For the three clinics that agreed to provide data on patient decision aid use, no rationale was given for not participating in the study interventions, and they subsequently reported that the patient decision aid was not used with any of the 20 patients who were considering referral for lung transplantation.

Baseline barriers survey
Thirty-one healthcare professionals completed the barriers survey, with at least one from each of the 18 participating CF clinics. There were 18 nurses, 12 physicians, and 1 pharmacist, and their characteristics are presented in Table 2. The main barriers to using the patient decision aid in clinical practice were: a) healthcare professionals' lack of knowledge and skills in supporting patients making decisions, clarifying values for outcomes of options, and helping patients handle conflicting preferred options of others; b) lack of time; and c) needing to change how transplantation is discussed as a team (Tables 3 and 4).
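The paired analyses described in the Statistical analysis section were run in SAS 9.1, and the study data are not reproduced here. As a rough, non-authoritative illustration only, the following Python sketch shows how the two named tests (McNemar's test for paired yes/no decision aid use and Friedman's test for repeated barrier ratings) could be applied; all counts, scores, and the added mid-study measurement are hypothetical, and scipy's Friedman implementation requires at least three related samples, so a third invented rating occasion is included.

# Illustrative only: hypothetical data, not the study's data (which were analyzed in SAS 9.1).
import numpy as np
from scipy.stats import friedmanchisquare
from statsmodels.stats.contingency_tables import mcnemar

# McNemar's test on paired yes/no use of the decision aid at two time points.
# Rows = baseline (not used, used); columns = follow-up year (not used, used).
use_table = np.array([
    [4, 40],   # not used at baseline -> (not used, used) at follow-up
    [2, 16],   # used at baseline     -> (not used, used) at follow-up
])
mcnemar_result = mcnemar(use_table, exact=True)  # exact binomial test on the discordant pairs
print(f"McNemar p-value: {mcnemar_result.pvalue:.4f}")

# Friedman's test on repeated Likert ratings (1-5) of one barrier item from the same
# respondents, accounting for the scores not being independent across occasions.
baseline = [4, 5, 3, 4, 2, 4, 5, 3, 4, 4]
midstudy = [4, 4, 3, 3, 2, 4, 4, 3, 3, 4]
endstudy = [3, 4, 3, 3, 2, 3, 4, 2, 3, 3]
stat, p = friedmanchisquare(baseline, midstudy, endstudy)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")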
Facilitators at baseline were healthcare professionals feeling comfortable discussing bad transplantation outcomes, agreeing the patient decision aid would be easy to use in the CF clinic, and indicating that using it was unlikely to require reorganization of the CF clinic (Tables 3 and 4). Modifiable barriers are listed in Table 1 along with the multi-faceted interventions tailored to address these barriers. Training From the 18 CF clinics, 13 clinics had healthcare professionals participate in training. Twelve clinics had healthcare professionals (15 nurses, 1 pharmacist) attend the training workshop in November 2010 and two clinics had healthcare professionals (2 nurses) complete the online tutorial. Fifteen healthcare professionals completed the knowledge test post workshop and two completed the knowledge test post tutorial. Their median knowledge score was 7 out of 10 (mean 6.9; 1.85 SD). Twelve workshop participants completed the satisfaction survey indicating that the workshop met the objectives (n = 11-12 out of 12) by providing just the right amount of information (n = 10) and role play exercises (n = 11), with new information about decision support (n = 12). The overall rating of the workshop was excellent or good (n = 9; n = 3). In the open satisfaction survey questions, workshop participants indicated that they would determine who was ready for transplant discussions, inform team members of who should be targeted, use the patient decision aid, and make sure to also consider patient values. Conference calls Twelve CF clinics had nurse coordinators participate in one or more conference calls with the principle investigators of the study (SA, KV, DS). For each conference call, 6-12 nurses participated and they varied across calls according to who was available when the call was scheduled. Nurses described the decision-making process as the CF physician typically introduced the topic of lung transplant referral and the CF nurse coordinator provided and discussed the patient decision aid with the patients. At least one site discussed having filed the patient decision aid one-page summary report on the patients' clinic chart for future reference. Nurses also discussed the sensitivities about timing for introducing the patient decision aid. For example, if patients were feeling positive about their ability to manage their CF despite poor lung function, nurses reported that the CF team held off on introducing the patient decision aid at that clinic visit. Several nurses shared the advantages of using the patient decision aid with patients during a hospital admission. For example, hospitalized patients were described as sicker and more likely to be reflecting upon their disease progression. As well, nurses stated that there was time to have longer discussions with patients who were hospitalized and it was easier to have followup discussions within 1-2 days of introducing the patient decision aid in hospital. Sustained use of the patient decision aid Of the 18 participating clinics, 15 responded to the baseline patient decision aid use survey and provided tracking logs for the 2 years of the study and three participating clinics did not provide tracking logs. At baseline, the 15 CF clinics reported that the patient decision aid was used by 18 of 62 CF patients (29%) who were considering referral for lung transplantation within the previous year. 
After initiating our implementation interventions, the 15 CF clinics reported on tracking logs that the patient decision aid was used by 58 of 68 CF patients (85%) considering lung transplantation referral during the first year of the study and 54 of 59 (92%) during the second year of the study (Figure 1; Table 5). There was a statistically significant increase in uptake in the first year (p < 0.001), and this uptake was sustained in the second year (see Figure 1).

Change in perceived barriers to using patient decision aids

Twenty-eight of 31 healthcare professionals completed the survey at baseline and the end of the study (90%), and three completed the survey at baseline only (10%). Tables 3 and 4 show the changes in perceived barriers to using patient decision aids. Significant changes in barriers at the end of the study compared to baseline were: more healthcare professionals felt confident in their ability to support patients making decisions (84% pre to 96% post; p = 0.03) but were more likely to need to enhance their ability to support patients handling conflicting views about the decision (71% pre to 50% post; p = 0.01). Other remaining barriers were: the need to further enhance their knowledge (55% pre to 50% post) and skills (58% pre to 43% post) in supporting patients making health decisions, and perceived lack of time (48% pre to 50% post). Although most (>88%) thought the patient decision aid would be easy to use without requiring reorganization of the CF clinic, only two-thirds thought it was likely to be used by most of their colleagues.

Discussion

Our pan-Canadian study systematically evaluated the implementation of a patient decision aid in clinical practice. There was good across-Canada participation, with 78% of CF clinics involved and an additional 13% that provided data on patient decision aid use without participating in the implementation interventions. Our study demonstrated that implementation interventions tailored to overcome identified barriers helped us reach our goal of ensuring regular and sustained use of the patient decision aid for 80% or more of adults with CF considering referral for lung transplantation. In fact, there was sustained use of the patient decision aid, with over 90% of eligible patients using it during the second year for the 15 CF clinics that provided tracking logs. Our implementation intervention used training to enhance healthcare professional knowledge and skills in using patient decision aids. At the end of the study, more healthcare professionals reported that they felt confident in their ability to support patients making health decisions. However, some participants continued to identify the need to further enhance their knowledge and skills. The median knowledge score of 70% is consistent with trials of healthcare professionals but lower than those completing the test as part of a university-based training program [29]. Our findings are similar to other studies describing the challenges of implementation and the difficulty determining what interventions and intensity of interventions are actually required [13,23]. The differences between our study and previous randomized controlled trials were that healthcare professionals' performance in our study was not objectively measured using simulated patients, and we added use of conference calls to provide ongoing support beyond the educational workshop [21,28].
These conference calls are similar to reinforcement sessions shown to be effective in other studies focused on implementing shared decision-making [23]. During calls, nurses discussed how the patient decision aid was being used within the CF clinic, ways of introducing the patient decision aid to the patient as part of the process of care, and the timing of follow-up patient discussions. Sharing their experiences provided ideas on how to facilitate patient decision aid use and how to better support patients making this decision. Our findings appear to support the use of the Knowledge-to-Action Framework. According to this framework, patient decision aids are third-generation knowledge translation tools aimed at presenting secondgeneration knowledge in user-friendly implementable formats [30]. Second-generation knowledge uses synthesis to aggregate first-generation knowledge of individual studies. The Knowledge-to-Action Framework hypothesizes that implementation is more successful when using a systematic process that includes adapting patient decision aids to the local context and tailoring implementation interventions to overcome known barriers. Healthcare professionals participating in our study identified barriers to using the patient decision aid that appeared to be addressed by the interventions. Another key assumption of this framework is that planning for sustainability is initiated when implementation interventions are being chosen [31]. However, remaining barriers identified at the end of the study require additional interventions to support ongoing implementation. Despite improved use of the patient decision aids with 15 participating CF clinics, the other 8 CF clinics in Canada that did not fully participate in the study had no or unclear use of the patient decision aid. Therefore, the tailored interventions were not adequate for all eligible CF clinics and qualitative research could be helpful for exploring reasons for non-participation and/or non-use of the patient decision aid. Other countries have policy level interventions to facilitate implementation of patient decision aids. For example, in the United States, there is legislation requiring use of patient decision aids for elective surgical procedures and this would include lung transplantation [32]. The National Health Service in the United Kingdom has a large initiative to implement patient decision aids to enhance shared decision-making across a wide range of health conditions [http://sdm. rightcare.nhs.uk/]. However, there has been no evaluation of the influence of these new health policy-driven initiatives on uptake of patient decision aids. There are four key limitations of our study. First, there is the possibility of reporting bias. We had no mechanism to independently verify the information that was submitted in the tracking logs by the nurse coordinators. Our original intent was to survey the adult CF patients in each clinic after the first and second year of the study; however, this was not feasible because of privacy and ethical issues. Second, we were unable to engage 100% of CF clinic in Canada. Our intervention appeared to be successful in clinics that fully participated in the study, but some clinics that did not participate in the interventions reported no use of the patient decision aids. Further research should be conducted to explore why some clinics provided data but chose not to participate in the interventions and why other clinics did not participate. 
Third, use of English language primarily may have limited recruitment and delivery of the study interventions. Although our patient decision aid, study measures, and online tutorial were available in French, the face-to-face workshop and conference calls were in English only. Future implementation studies designed to reach sites across Canada need to ensure all implementation interventions are available in both official languages. Finally, the interventions mostly targeted healthcare professionals and it would be helpful to better monitor the influence of organizational level factors and patients' responses. Conclusion Despite having presented and published our previous work demonstrating that the CF lung transplant referral patient decision aid improved quality of decisions made by patients, these forms of passive dissemination were inadequate for ensuring uptake of this patient decision aid in routine clinical practice. Our findings suggest the value of systematically investigating the process of implementation of patient decision aids with CF healthcare professionals in order to identify and address barriers perceived to interfere with using the patient decision aid. Our findings indicate that providing tailored interventions to specifically target the modifiable barriers appeared to result in more healthcare professionals feeling confident in their ability to support patients making health decisions and uptake of the patient decision aid within the participating Canadian CF clinics. Our findings also appeared to show sustained use of the patient decision aid during the second year of the study. Further evaluation is required to better understand barriers influencing routine use of patient decision aids in clinical settings resistant to participate.
Chronic Alcohol-Induced microRNA-155 Contributes to Neuroinflammation in a TLR4-Dependent Manner in Mice Introduction Alcohol-induced neuroinflammation is mediated by pro-inflammatory cytokines and chemokines including tumor necrosis factor-α (TNFα), monocyte chemotactic protein-1 (MCP1) and interleukin-1-beta (IL-1β). Toll-like receptor-4 (TLR4) pathway induced nuclear factor-κB (NF-κB) activation is involved in the pathogenesis of alcohol-induced neuroinflammation. Inflammation is a highly regulated process. Recent studies suggest that microRNAs (miRNAs) play crucial role in fine tuning gene expression and miR-155 is a major regulator of inflammation in immune cells after TLR stimulation. Aim To evaluate the role of miR-155 in the pathogenesis of alcohol-induced neuroinflammation. Methods Wild type (WT), miR-155- and TLR4-knockout (KO) mice received 5% ethanol-containing or isocaloric control diet for 5 weeks. Microglia markers were measured by q-RTPCR; inflammasome activation was measured by enzyme activity; TNFα, MCP1, IL-1β mRNA and protein were measured by q-RTPCR and ELISA; phospho-p65 protein and NF-κB were measured by Western-blotting and EMSA; miRNAs were measured by q-PCR in the cerebellum. MiR-155 was measured in immortalized and primary mouse microglia after lipopolysaccharide and ethanol stimulation. Results Chronic ethanol feeding up-regulated miR-155 and miR-132 expression in mouse cerebellum. Deficiency in miR-155 protected mice from alcohol-induced increase in inflammatory cytokines; TNFα, MCP1 protein and TNFα, MCP1, pro-IL-1β and pro-caspase-1 mRNA levels were reduced in miR-155 KO alcohol-fed mice. NF-κB was activated in WT but not in miR-155 KO alcohol-fed mice. However increases in cerebellar caspase-1 activity and IL-1β levels were similar in alcohol-fed miR-155-KO and WT mice. Alcohol-fed TLR4-KO mice were protected from the induction of miR-155. NF-κB activation measured by phosphorylation of p65 and neuroinflammation were reduced in alcohol-fed TLR4-KO compared to control mice. TLR4 stimulation with lipopolysaccharide in primary or immortalized mouse microglia resulted in increased miR-155. Conclusion Chronic alcohol induces miR-155 in the cerebellum in a TLR4-dependent manner. Alcohol-induced miR-155 regulates TNFα and MCP1 expression but not caspase-dependent IL-1β increase in neuroinflammation. Introduction According to the WHO the harmful effects of alcohol are major public health concerns across the world [1]. The effects of alcohol on the brain include neuroinflammatory and neurodegenerative changes mediated partially via innate immune responses [2,3]. Recently microRNAs (miRNAs) have been implicated in the pathogenesis of predominantly neurodegenerative or neuroinflammatory diseases, such as Alzheimer's or neuroviral infections [4,5]. MiRNAs are evolutionally conserved, small non-coding RNAs which are involved in various biological processes such as development, differentiation, innate and adaptive immune responses [6]. Mature miRNAs regulate posttranscriptional gene expression mainly via repressing translation or inducing mRNA degradation [7]. Recently other mechanisms, such as posttranslational stabilization of mRNA enabling increased translation, have also been proposed, however the exact mechanism is not fully understood [8]. MiR-155 (miR-155) plays an important role in inflammatory conditions and malignant cell growth [9] and is upregulated in the brain in multiple sclerosis and a cerebral ischemia model [5,10]. 
Many miR-155 targets are pro-apoptotic and anti-or proinflammatory, and miR-155 expression leads to cell survival and modification of inflammation [9]. At present, there is an ongoing debate whether miR-155 plays a pro-or anti-inflammatory role, but the studies agree that miR-155 does play an important regulatory role in inflammation. Among many anti-inflammatory proteins, miR-155 targets phosphatidylinositol-3,4,4-triphosphate 5 phosphatase-1 (SHIP1) (a negative regulator of TNFa) and suppressor of cytokine signaling-1 (SOCS1) (a negative regulator of cytokines), which subsequently leads to increased inflammatory responses [11]. Furthermore, miR-155 is induced in macrophages, dendritic cells, B-and T-cells after Toll-like receptor (TLR) stimulation [9,12]. A recent report has shown miR-155 induction upon lipopolysaccharide stimulation in a microglia cell line [6]. However there is evidence that the effect of miR-155 is not solely pro-inflammatory, in dendritic cells miR-155 silencing resulted in increased IL-1b production [12]. Pro-inflammatory targets of miR-155 include myeloid differentiation primary response gene (88) (MyD88) and transforming growth factor beta-activated protein kinase-1 binding protein-2 (TAB2) [13], which are upstream of nuclear factor-kB (NF-kB), their inhibition by miR-155 can lead to decreased NF-kB activation [14]. Conversely, in vitro NF-kB inhibition could prevent alcohol-induced upregulation of miR-155 in Kupffer cells [8]. NF-kB activation has recently been proven to be involved in the pathogenesis of alcohol-induced neuroinflammation [2]. The transcriptional activity of the most abundant form of NF-kB heterodimer, p50/p65, is increased by phosphorylation of its p65 subunit [15]. NF-kB is known to induce the transcription of proinflammatory cytokines and chemokines, like tumor necrosis factor-a (TNFa), monocyte chemotactic protein-1 (MCP1) and interleukin-1-beta (IL-1b) [16,17], all of which are increased in alcohol-induced neuroinflammation [2,3,18]. Posttranslational cleavage of pro-IL-1b to mature IL-1b is required for its functional activity and is executed by the inflammasome via caspase-1 activation [19]. TLR activation via danger or pathogen associated molecular patterns (DAMPs and PAMPs) leads to NF-kB activation and consequently increased cytokine production [16]. TLR4 is one of the major pathways involved in alcohol-induced neuroinflammation [3,18]. The aim of our study was to examine the role of miR-155 in the pathogenesis of alcohol-induced neuroinflammation in vivo. Our novel results suggest that chronic alcohol consumption induces miR-155 in the cerebellum in a TLR4-dependent manner. Furthermore, alcohol-induced TNFa and MCP1 production is miR-155-dependent. Animals This study was approved and conducted according to the regulations of the Institutional Animal Care and Use Committee (IACUC) of the University of Massachusetts Medical School (Worcester, MA). Six to eight weeks old female C57/BL6J wild type (WT); miR-155 knock-out (KO) and toll-like receptor-4 (TLR4) KO mice (backcrossed on a C57/BL6J background) were used. For 5 weeks the animals received 5% (v/v) ethanol (36% ethanol-derived calories) containing Lieber-DeCarli diet (EtOH) or pair-fed diet (PF) with an equal amount of calories where the alcohol-derived calories were substituted with dextran-maltose (Bio-Serv, Frenchtown, NJ) [20]. The daily consumption of the diet was the same in the WT and all KO mouse strains, approximately 10 ml/animal. 
Sample Collection

Blood was collected and animals were sacrificed by cervical dislocation. Cerebella and cerebra were immediately isolated and were snap frozen or stored in RNAlater (Qiagen GmbH, Maryland, USA) for protein or messenger ribonucleic acid (mRNA) and miRNA evaluation, respectively. Serum and brain samples were stored at −80°C.

Cells

Primary microglia from the brains of adult WT mice were isolated similarly to Frank et al. [21]. Briefly, after cheek bleeding, the whole brain was washed in ice-cold PBS containing 2% FBS and 0.2% glucose, minced in a Petri dish, and homogenized in a Tenbroeck homogenizer (Wheaton Industries, Millville, NJ). The homogenate was filtered through a 40 µm cell strainer (BD Biosciences, Bedford, MA) into a 50 ml conical tube and was centrifuged at 1250 rpm for 5 min at room temperature (RT). The supernatant was discarded and the pellet was resuspended in 3 ml 70% Percoll and transferred to a 15 ml conical tube. 6 ml 50% Percoll followed by 2 ml phosphate-buffered saline (PBS) containing 2% fetal bovine serum (FBS) and 0.2% glucose were layered on top of the 70% Percoll cell suspension and centrifuged at 2400 rpm for 30 min at RT. The layer containing enriched microglia was collected from the interface between the 70% and 50% Percoll phases, washed twice with 1 ml PBS containing 2% FBS and 0.2% glucose, and centrifuged at 1250 rpm for 5 min at RT. Prior to plating, microglia from two mice were pooled together. Isolated brain microglia were suspended in RPMI containing 10% FBS and plated in 96-well plates at a density of 10^5 cells/100 µl/well. Nonadherent cells were removed by washing cells with PBS one hour after plating. The purity of microglia was evaluated by FACS analysis; 91.2% of the cells were positive for CD11b staining (data not shown). An immortalized mouse microglia cell line, generated from WT animals, was also employed [22]. The microglia cell line was plated on 6-well plates at a density of 1 × 10^6 cells/1 ml/well. The cell experiments were performed a minimum of two times, at least in triplicate.

In vitro Immune Stimulation

Cells were incubated with media alone or media containing 50 mM ethanol (EtOH) and/or 100 ng/ml lipopolysaccharide (LPS) (Sigma, St. Louis, MO) at 37°C, 5% CO2, and harvested 1, 6, or 18 hours after stimulation. Samples were run in triplicate for each condition. At the end of each incubation, media was collected and centrifuged at 1250 rpm for 5 min at 4°C to remove floating cells, and supernatants were stored at −80°C. After washing cells twice with PBS, nuclear and cytoplasmic extracts were isolated, or cells were lysed in QIAzol Lysis reagent (Qiagen, Maryland, USA) and stored at −80°C for further mRNA and miRNA extraction.

Polymerase Chain Reaction (PCR)

RNA was extracted using the RNeasy kit (Qiagen, Maryland, USA). cDNA was transcribed from 1 µg of total RNA using the Reverse Transcription System (Promega Corp., Madison, WI) in a final volume of 30 µl. SYBR-Green-based real-time quantitative PCR was performed using the iCycler (Bio-Rad Laboratories Inc., Hercules, CA). The comparative threshold cycle (Ct) method was used to calculate expression relative to WT control groups. The final results were expressed as fold changes between the sample and the controls corrected with the internal control, 18S [23]. Primers used for the experiments are listed in Table 1.

Enzyme-activity Assay

A caspase-1 colorimetric assay was used to determine the enzymatic activity (R&D Systems, Inc., Minneapolis, MN) from cerebellar tissue lysates.
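The comparative Ct calculation described above can be made concrete with a short sketch. This is a generic illustration of the 2^-ΔΔCt method with made-up Ct values, normalizing to the 18S internal control and expressing results relative to the control group; it is not the authors' analysis code, and all values and names are hypothetical.

```python
# Hypothetical Ct values for one target gene and the 18S internal control.
control = {"target_ct": 24.0, "ref_ct": 12.0}   # pair-fed control sample
treated = {"target_ct": 22.5, "ref_ct": 12.1}   # ethanol-fed sample

def fold_change(sample, calibrator):
    """Relative expression by the comparative Ct (2^-ddCt) method."""
    d_ct_sample = sample["target_ct"] - sample["ref_ct"]            # normalize to 18S
    d_ct_calibrator = calibrator["target_ct"] - calibrator["ref_ct"]
    dd_ct = d_ct_sample - d_ct_calibrator                           # relative to control
    return 2 ** (-dd_ct)

print(f"Fold change vs. control: {fold_change(treated, control):.2f}")
```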
Electromobility Shift Assay (EMSA)

End labeling of the double-stranded NF-kB oligonucleotide, 5′-AGTTGAGGGGACTTTCGC-3′, was accomplished by treatment with T4 polynucleotide kinase in the presence of γ-32P-ATP (PerkinElmer, Waltham, MA), followed by purification on a polyacrylamide copolymer column (Bio-Rad). Microglial nuclear extract (2.5 µg) or cerebellar whole cell lysate (5 µg) was incubated with 1 µl labeled oligonucleotide (50,000 cpm), 4 µl dI-dC (Affymetrix Inc., Santa Clara, CA), and 5X gel buffer (containing 20 mM HEPES, pH 7). For supershift analysis, 2 µg of anti-p65 antibody (Santa Cruz Biotechnology Inc., Santa Cruz, CA) was included in the binding reaction 30 minutes prior to labeling. For the cold competition reaction, a 20-fold excess of specific unlabeled double-stranded probe was added to the reaction mixture 20 minutes prior to adding the labeled oligonucleotide. Samples were incubated at room temperature for 20 minutes. Reactions were run on a 4% polyacrylamide gel. Gels were then dried and exposed to an X-ray film at −80°C for 6 hours or overnight where appropriate. A Kodak X-OMAT 2000A Processor was used for film development in the darkroom. The films were scanned and densitometry was performed on the images using Multi Gauge Ver.3.2 image software (Fujifilm Corp., USA) [24].

Statistical Analysis

Since the data were not normally distributed, statistical analysis was performed using the Kruskal-Wallis nonparametric test. Data are shown as average ± standard error of the mean (SEM) and differences were considered statistically significant at p ≤ 0.05. The experiments were performed a minimum of two times.

Pro-inflammatory Cytokines and microRNAs are induced in Alcohol-fed Mice in the Cerebellum

Previous reports have shown that neuroinflammation is present and proinflammatory cytokines are upregulated in chronic alcoholic brains in mice as well as humans [2,3,18]. We found significant induction of TNFa, MCP1 and IL-1b protein in chronic alcohol feeding compared to control mice in the cerebellum (Figure 1A-C). MicroRNAs (miRNAs) are small non-coding RNAs with regulatory function including modulation of inflammation and cytokine production [25]. MiR-125b, -132, -146a and -155 have been shown to be altered in the LPS-induced inflammatory pathway [25,26]. We found a significant increase in miR-155 and miR-132, but no change in miR-125b or miR-146a, in chronic alcohol feeding compared to control mice in the cerebellum (Figure 1D).

MicroRNA-155 Deficiency Protected Mice from Ethanol-induced Proinflammatory Cytokine Increase in the Cerebellum

MiR-155 can increase TNFa mRNA half-life in the RAW macrophage cell line, contributing to inflammation [8]. To evaluate the effect of miR-155 on alcohol-induced neuroinflammation, we employed miR-155 deficient mice. In contrast to WT mice, alcohol-fed miR-155-KO mice showed no increase in TNFa and MCP1 at either mRNA or protein levels compared to pair-fed controls (Figure 2A-D).

Inflammasome Activation and IL-1b Increase is Independent of miR-155 in Alcohol-fed Mouse Cerebellum

Recently, we showed inflammasome activation and consequent IL-1b production in the brain of alcohol-fed mice [18]. Interestingly, alcohol-fed miR-155 KO mice showed similar induction of caspase-1 activation and IL-1b protein increase to WT mice (Figure 3A-D), suggesting that caspase-1 and IL-1b are regulated independently of miR-155. However, there was no change in pro-IL-1b mRNA expression in ethanol-fed miR-155-KO compared to control mice (Figure 3E).
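For the group comparisons above, the authors used the Kruskal-Wallis nonparametric test across the feeding/genotype groups. A minimal Python sketch of that kind of comparison on hypothetical cytokine measurements is shown below; the numbers and group names are illustrative, not study data.

```python
from scipy.stats import kruskal

# Hypothetical cytokine levels (arbitrary units) in four feeding/genotype groups.
wt_pf   = [10.2, 11.5, 9.8, 10.9, 11.1]
wt_etoh = [15.4, 17.2, 16.1, 18.0, 15.9]
ko_pf   = [10.5, 9.9, 11.0, 10.4, 10.8]
ko_etoh = [11.2, 10.7, 11.8, 10.9, 11.5]

# The Kruskal-Wallis H-test compares rank distributions across all groups at once.
h_stat, p_value = kruskal(wt_pf, wt_etoh, ko_pf, ko_etoh)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```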
This observation was similar to the protection from TNFa and MCP1 protein induction but incomplete (partial) protection of caspase-1 activation and IL-1b protein in TLR4-KO mice [18]. MicroRNA-155 Deficiency Protected Alcohol-fed Mice from NF-kB Activation in the Cerebellum To evaluate the mechanism by which miR-155 regulates cytokine and chemokine production, we evaluated NF-kB activation. NF-kB is a major regulator in proinflammatory pathways and can upregulate multiple proinflammatory cytokines and chemokines, including TNFa, MCP1 and pro-IL-1b [16,17]. In addition, NF-kB activation mediates induction of miR-155, in turn miR-155 can decrease NF-kB activity [8,13]. Activation of NF-kB occurs by phosphorylation of p65, part of the p50/p65 NF-kB heterodimer and its translocation to the nucleus. [15]. We found increased NF-kB DNA binding (measured by EMSA) in the cerebellum of ethanol-fed WT mice compared to controls but no increase in alcohol-fed miR-155-KO mice compared to their PF controls ( Figure 4A-B). Furthermore, supershift analysis with p65 antibody showed increased p65 DNA binding in alcohol-fed wildtype but not in miR-155-KO mice compared to pair-fed controls ( Figure 4C-D). Consistent with increased NF-kB DNA binding, phosphorylated-p65 levels were also increased in the brains of alcohol-fed WT mice, but there was no increase in the brains from alcohol-fed miR-155-KO mice compared to appropriate controls ( Figure 4E-F), while alcohol-feeding did not change total-p65 levels in the brain ( Figure 4G-H). Induction of miR-155 is TLR4-dependent in Cerebella from Chronic Alcohol-fed Mice DAMPs and PAMPs are major inducers of inflammation via receptors, like the Toll-like receptor (TLR) family [27]. Previous reports have shown that TLR4 can activate NF-kB [16] and can also induce miR-155 upregulation [9,12]. Furthermore alcohol- induced neuroinflammation can be triggered by TLR4 [3]. Here we tested whether TLR4 was required for alcohol-induced upregulation of miR-155. Alcohol-fed TLR4 KO mice had no increase in miR-155 compared to control mice in the cerebellum ( Figure 5A). Furthermore, TLR4-KO mice had no NF-kB activation indicated by phosphorylated-p65 compared to control mice in the cerebellum ( Figure 5B-C), and there was no change in total-p65 levels (Figure5 D-E). Moreover, recently we showed that alcohol-induced TNFa and MCP1 production was prevented in TLR4-KO mice in the cerebellum [18]. Induction of miR-155 is TLR4 Dependent in Mouse Microglia In a previous study microglia cell line stimulation with TLR4ligand, LPS, resulted in increased miR-155 expression [6]. We tested whether TLR4 stimulation could directly induce miR-155 in microglia as we found increased mRNA expression of the microglia markers, CD68 and ionized calcium binding adaptor molecule-1 (Iba1), in chronic ethanol-fed WT mice compared to WT control-fed mice in cerebellum, but no change in TLR4-KO mice ( Figure 6A-B). The WT immortalized mouse microglia cell line showed increased miR-155 expression upon stimulation with the TLR4-ligand, lipopolysaccharide (LPS) ( Figure 6C). Similar results were found in primary mouse microglia ( Figure 6D). Ethanol alone decreased miR-155 levels ( Figure 6C-D). However ethanol treatment resulted in greater fold induction of miR-155 by LPS in immortalized microglia (from 9.4 to 38.6) and in primary microglia (from 3.8 to 5.85) when compared to cells in media alone, suggesting that alcohol augments LPS-induced miR-155 induction. 
Discussion Chronic ethanol feeding results in neuroinflammatory changes in cortical, hippocampal and cerebellar brain regions [2,3,18]. Increasing evidence suggests that long-term neurodegenerative changes in the cerebellum of alcoholics are not solely due to lack of dietary factors [18,28]. The neuroinflammatory changes include induction of pro-inflammatory cytokines and chemokines, DAMPs, NF-kB, inflammasome and inducible nitric oxide synthase (iNOS) activation, nictoinamide adenine dinucleotide phosphate (NADPH)-oxidase and reactive oxygen species mediated pathways [2,3,18]. The role of miRNAs in the pathogenesis of neurological diseases is gaining increased attention [4,5]. MiRNAs are involved in the modulation of innate and adaptive immune responses and regulate inflammatory pathways [6]. MiR-155 and miR-132 have broad pro-inflammatory effects, while miR-125b and miR-146a are negative regulators of inflammation in most cell types [29,30]. Here we found miR-132 and miR-155 upregulation and no changes in miR-125b or miR-146a levels in the cerebellum after chronic alcohol feeding suggesting that miR-132 and miR-155 are involved in the pathophysiology of alcoholinduced neuroinflammation. We show for the first time that miR-155-KO mice were protected from alcohol-induced TNFa and MCP1 induction in the cerebellum. This is consistent with other reports where silencing of miR-155 in LPS treated microglia cell line resulted in decreased TNFa induction, whereas IL-1b levels remained unaffected [6]. Consistent with our findings, miR-155 over-expressing mice showed increased TNFa production upon LPS challenge [30]. MiR-155-KO mice have been reported to have immune deficiencies, including impaired T and B cell development and antigen presentation by dendritic cells. These mice are also prone to developing lung fibrosis and are less resistant to certain bacterial challenges [31,32]. However, miR-155-KO mice also showed resistance to rheumatoid arthritis and experimental autoimmune encephalomyelitis [33,34]. These observations together with our results suggest that miR-155 is an important molecular regulator of neuroinflammation induced by alcohol. A common element in regulating pro-inflammatory gene expression is the activation of NF-kB [16,17]. NF-kB has binding sites on the promoter regions of IL-1b, TNFa and MCP1 genes [16,17]. Furthermore, miR-155 is induced by NF-kB activation and we previously reported that in liver resident macrophages, Kupffer cells, miR-155 was induced by chronic alcohol [8]. Moreover, miR-155 induction in Kupffer cells by ethanol or by stimulation with the TLR4 ligand, LPS, was NF-kB dependent [8]. Here we found that in contrast to WT mice, TLR4-KO mice had no induction in miR-155 expression upon alcohol feeding. These observations indicate that the miR-155 regulated pathway is TLR4-dependent and the TLR4-mediated inflammatory pathway is likely miR-155 mediated in our model. Our current data also suggest that miR-155 induction in the brain is TLR4-dependent and involves NF-kB activation. The baseline level of NF-kB activation was somewhat higher in miR-155-KO mice compared to WT, but it did not increase upon alcohol-feeding, and did not affect the baseline protein levels of TNFa, MCP1 or IL-1b supporting the notion for the miR-155-dependent induction by alcohol. Interestingly, in some studies, miR-155 was found to down-regulate NF-kB activation [14]. 
While miR-155 deficiency protected from alcohol-induced TNFa and MCP1 increase in the brain, it failed to prevent alcohol-induced IL-1b production and caspase-1 activation. Recently we showed that alcohol-fed TLR4-KO mice had similar protection from TNFa and MCP1 protein induction and lack of protection from caspase-1 activation and IL-1b protein increases [18]. Unlike TNFa and MCP1, IL-1b increase and caspase-1 activation were not prevented by miR-155 deficiency, which is consistent with reports showing increased IL-1b production in dendritic cells after miR-155 inhibition [12]. Pro-IL-1b mRNA levels were not increased in miR-155-KO mice, most likely due to the lack of NF-kB activation. While IL-1b protein production is largely dependent on caspase-1 activation, pro-IL-1b mRNA induction is NF-kB mediated [35]. Inflammasome activation is induced in alcohol-fed mouse brains via PAMPs and DAMPs, like high mobility group box protein 1 (HMGB1) [18]. IL-1b protein level was not affected by the deletion of either TLR4 or miR-155, which might indicate that the inflammasome-mediated pathway has a regulatory pattern distinct from that of TNFa and MCP1. Previous reports show activation of microglia and astrocytes along with neuronal changes and cell death in alcoholic brains in both humans and animals [3,36]. We found upregulation of miR-155 in an LPS-stimulated microglia cell line as well as in primary microglia isolated from pair-fed or alcohol-fed mice. Consistent with our findings, others have shown that miR-155 is increased upon LPS stimulation in an N9 microglia cell line, reducing its target gene SOCS1 [6].

Figure 3. MicroRNA-155 KO mice are not protected from alcohol-induced IL-1b increase in the cerebellum. WT (n = 6 or 7) or miR-155-KO (n = 5 or 10) mice were fed with control (PF) or EtOH diet for 5 weeks, respectively. The inflammatory cytokine IL-1b was measured by specific ELISA on whole cerebellar lysates and corrected with total protein (A). Mature IL-1b protein of whole cerebellar lysates was assessed by Western blot using b-actin as loading control (B), and further quantified by densitometry (C), which represents six to ten samples per group. The inflammasome activity was measured by caspase-1 colorimetric assay from whole cerebellar lysates and corrected with total protein (D). Pro-IL-1b mRNA was assessed by real-time PCR from whole cerebellar RNA extract, corrected with 18S (E). Bars represent mean ± SEM (*, #: p value < 0.05 relative to appropriate PF or WT controls, respectively, by Kruskal-Wallis non-parametric test). doi:10.1371/journal.pone.0070945.g003

Figure 4. MicroRNA-155 deficiency protects from alcohol-induced NF-kB activation in mouse cerebellum. WT (n = 6 or 7) or miR-155-KO (n = 5 or 10) mice were fed with control (PF) or EtOH diet for 5 weeks, respectively. NF-kB activity of whole cerebellar lysates was assessed by EMSA for NF-kB (A-B) and supershift with anti-p65 antibody (C-D), loading equal amounts of protein, using an EtOH-fed cerebellar sample for cold competition control (ctr), and further quantified by densitometry. Phosphorylated-p65 (E-F) and total-p65 (G-H) protein of whole cerebellar lysates was assessed by Western blot, using b-actin as loading control, and further quantified by densitometry, which represents six to ten samples per group. Bars represent mean ± SEM (*, #: p value < 0.05 relative to appropriate PF or WT controls, respectively, by Kruskal-Wallis non-parametric test). doi:10.1371/journal.pone.0070945.g004
Ethanol alone decreased miR-155 in microglia, but ethanol treated cells or cells from ethanol-fed animals had higher miR-155 fold-induction by LPS, suggesting a sensitization to PAMPs and potential TLR4-inducing DAMPs. The fact that miR-155 was decreased in primary microglia from alcohol-feeding might be attributable to the 18 hours incubation period with media only, which is devoid any of the DAMPs or PAMPs that would be present in vivo. To address this question, processing of miRNA immediately after microglia isolation would be necessary. Another plausible explanation is that other cells, for example astrocytes, may be involved in miR-155 induction in the brain, but this awaits further investigation. In summary, we report for the first time that miR-155 is induced in alcohol-fed mice in the brain. The induction of miR-155 in the cerebellum is TLR4-dependent. Furthermore, cerebellar induction of TNFa and MCP1 is miR-155-dependent, however, induction of mature-IL-1b is miR-155 independent in chronic alcohol feeding. We propose that miR-155 silencing might have a therapeutic role in the improvement of alcohol-induced neuroinflammation and further work on this field is warranted.
Tricuspid regurgitation improvement in relation to the amount of pulmonary artery pressure reduction. Background: Given the common concomitance of tricuspid regurgitation (TR) with significant mitral stenosis, we aimed at exploring the relation between TR severity and pulmonary artery hypertension (PAH) in patients who underwent mitral balloon valvotomy (MBV). Methods: We analyzed the echocardiography data of 133 consecutive patients (82.0% female, mean age 44.68 ± 12.56 years) with different degrees of TR severity that underwent MBV between April 2006 and March 2008. The pulmonary artery systolic pressure (PAPs) > 35 mmHg was considered as PAH. Results: Before MBV, 36.20% of the patients had moderate to severe TR, 92.5% PAH, and 18.0% right ventricular (RV) dilation (RV dimension ≥ 33 mm). After MBV, TR severity improved in 41.4%, worsened in 8.3%, and did not change in 50.4%. Before and after MBV, PAPs was significantly correlated with TR severity, and the mean PAPs change in patients with improved TR was significantly more than that of patients without TR improvement (p value = 0.042). Tricuspid regurgitation severity and mean PAPs (from 52.83 ± 18.82 to 35.89 ± 9.39 mmHg) decreased significantly after MBV (both p values < 0.001); this reduction was significantly correlated to the amount of PAPs decrease. A cut-off point of ≥ 19 mmHg reduction in PAPs had a specificity of 71.79% and sensitivity of 52.73% to show TR severity improvement (by Receiver-Operative-Characteristics analysis). The mean of RV dimension decreased from 28.94 ± 5.43 to 27.95 ± 4.67 mm (p value < 0.001). In contrast to patients with RV dilation, TR reduced significantly in patients without RV dilation (p value < 0.001). Conclusion: Improvement in TR severity was directly correlated with the amount of PAPs reduction after MBV. More studies are needed to better define a cut-off value for PAPs reduction related to TR severity improvement. Introduction Rheumatic heart disease is the most common cause of mitral stenosis. 1 In patients with long-standing left-sided valve disease, pulmonary hypertension is commonly present, which may thus give rise to the development of tricuspid regurgitation (TR). 2 In such cases, there have been reports of up to 28% of functional TR, 3 whose development is directly associated with increased morbidity and mortality. 4 Indeed, even moderate TR negatively impacts survival, regardless of left ventricular function or pulmonary arterial pressure. 5 It is thought that secondary TR decreases or even disappears Original Article after surgical correction of mitral valve disease. 6 It has also been demonstrated that significant TR in patients with severe mitral stenosis and severe pulmonary artery hypertension regresses after successful mitral balloon valvotomy (MBV), 7 as pulmonary arterial pressure reduction lessens the right ventricular (RV) pressure and consequently eliminates the TR. 8 The aim of this study was to analyze pre-and postinterventional TR severity in patients with structurally normal tricuspid valves who underwent MBV to find out whether TR severity improves after intervention. We also investigated the relation between TR severity and pulmonary artery hypertension to determine whether a reduction in pulmonary artery systolic pressure (PAPs) has an impact on TR. Methods We considered the echocardiography data of 170 patients with different degrees of TR severity that underwent MBV in our center between April 2006 and March 2008. 
From these patients, 37 were excluded due to missing data, leaving 133 patients for this study. None of the patients had tricuspid valve rheumatismal involvement. In the evaluation of TR severity, two-dimensional echocardiography with color Doppler study was done using a 3-MHZ phased array sector scanner (Vivid 3 or Vivid 7 GE, USA) within one month before and during the first month after MBV. In all standard views, including the apical 4-chamber, parasternal short axis, RV inflow, and subcostal views, TR severity was observed and graded. TR was graded as 0 for no regurgitation, 1+ for trivial, 2+ for mild, 3+ for moderate, 4+ moderately severe, and 5+ for severe regurgitation. Left ventricular ejection fraction was measured by the modified biplane Simpson method, and RV systolic function was evaluated by the tricuspid annular plane systolic excursion (TAPSE) and tissue Doppler imaging study (RVsm). Additionally, PAPs was estimated by obtaining the peak TR Doppler jet from multiple windows until peak velocities were consistent and reproducible plus right atrial pressure (which was estimated in regard to JVC size and respiratory collapse). Pulmonary artery hypertension was defined as PAPs > 35 mmHg, 9, 10 although the current definition of pulmonary artery hypertension is based on mean PAPs > 25 mmHg. 11,12 Left ventricular ejection fraction < 50 % and RV dimension ≥ 33 mm were considered as left ventricular systolic dysfunction and RV dilation, respectively. In addition, TR severity reduction ≥ 1 grade was defined as TR severity improvement. Mitral balloon valvotomy was performed via the transvenous, transseptal approach from the right femoral vein in accordance with the stepwise Inoue balloon technique, as was first described by Inoue et al. 13 The Inoue balloon was passed across the atrial septum before being flow guided to the left ventricle. The septotomy was subsequently enlarged with a 14F vessel dilator before segmental inflation of the balloon was performed within the mitral valve. The balloon size was determined according to the height of the patient ([Height (cm) / 10] + 10 = recommended balloon size). 14 Immediately after the procedure, hemodynamic measurements were repeated and recorded. The numerical variables are presented as mean ± SD (standard deviation), while the categorical variables are summarized by absolute frequencies and percentages. The continuous variables were compared using Student's t-test or one-way analysis of variance (ANOVA) across the groups defined by TR severity grade, and the categorical variables were compared using the chi-square or Fisher's exact test, when more than 20% of the contingency table cells had expected cell frequencies less than five. The continuous variables were compared by the paired t-test, ordinal variables by the Wilcoxon signed ranks test, and categorical (binary) ones by McNemar's test, prior to and after percutaneous MBV. The Receiver Operating Characteristic (ROC) curve analysis was performed to find an optimal cut-off point for PAPs reduction to show at least a one-grade regress in TR severity. For statistical analysis, the statistical software SPSS version 13.0 for Windows (SPSS Inc., Chicago, IL) was used. All p values were 2-tailed, with statistical significance defined by p value ≤ 0.05. Results The demographic and clinical data are presented in Table 1. The study population consisted of 133 consecutive patients with different degrees of TR severity (Table 2) who underwent MBV because of significant mitral stenosis. 
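The PAPs estimate described in the methods above (peak TR jet velocity plus estimated right atrial pressure) reflects the standard simplified Bernoulli relation, PAPs ≈ 4·v² + RAP, assuming no pulmonic stenosis. The sketch below applies that relation to hypothetical values; the function name and numbers are illustrative and are not measurements from the study.

```python
def estimate_paps(tr_peak_velocity_m_s: float, ra_pressure_mmhg: float) -> float:
    """Estimate pulmonary artery systolic pressure (mmHg) from the peak
    tricuspid regurgitation jet velocity using the simplified Bernoulli
    equation: PAPs = 4 * v^2 + RAP (assumes no pulmonic stenosis)."""
    return 4.0 * tr_peak_velocity_m_s ** 2 + ra_pressure_mmhg

# Hypothetical example: TR jet of 3.3 m/s with an estimated RAP of 10 mmHg.
paps = estimate_paps(3.3, 10.0)
print(f"Estimated PAPs: {paps:.0f} mmHg")  # ~54 mmHg, above the >35 mmHg PAH cut-off used here
```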
Prior to MBV, 92.5% of the patients had pulmonary arterial hypertension (PAPs > 35 mmHg), which decreased significantly to 44.4% after MBV (p value < 0.001). The mean PAPs reduced markedly from 52.83 ± 18.82 mmHg to 35.89 ± 9.39 mmHg after intervention (p value < 0.001). Before and after intervention, PAPs was significantly correlated with TR severity: the higher the PAPs, the higher the degree of TR (both p values < 0.001). Changes of mean PAPs in the patients with improved TR grade were significantly higher than those in the patients without TR improvement (p value = 0.034); higher rates of PAPs reduction were accompanied by higher chances of TR improvement. The ROC curve analysis demonstrated that a cut-off point of ≥ 19 mmHg reduction in PAPs had a specificity of 71.79% and sensitivity of 52.73% to show at least a one-grade regress in TR severity. In 86.4% of the patients with moderate to severe TR, improvement in TR severity occurred by this cut-off point. Although the patients with mild or trivial TR did not demonstrate significant improvement in TR post MBV compared to pre MBV, 34.5% of them showed TR improvement by the same cut-off point (p value < 0.001). The mean ejection fraction was 53.87 ± 4.87% in the whole study population prior to MBV. The mean pre-procedural ejection fraction was not different between the patients with and without TR improvement. The patients who showed a worsening of TR grade after intervention had a significantly lower mean ejection fraction compared to those whose TR severity had improved or not changed (p value = 0.003 and p value = 0.002, respectively). The mean of mitral valve peak pressure (MVPP) reduced from 19.86 ± 7.11 mmHg to 10.35 ± 3.45 mmHg. The mean of mitral valve mean pressure (MVMP) reduced from 11.98 ± 5.30 mmHg to 5.32 ± 2.51 mmHg before to after MBV, respectively. Both of these changes were statistically significant (both p values < 0.001). No relation was, however, found between the difference of mean MVPP, mean MVMP, and TR changes. The mean RV dimension decreased from 28.94 ± 5.43 mm before to 27.95 ± 4.67 mm after MBV (p value < 0.001). Twenty-three (17.7%) patients had RV dilation (RV dimension ≥ 33 mm) before intervention, which decreased to 19 (14.6%) after intervention. Tricuspid regurgitation grade improved in 44% of the patients without RV dilation and in 29.2% of the patients with RV dilation; these findings were not statistically significant. The presence or absence of atrial fibrillation before MBV had no relationship with TR changes after MBV.

Discussion

This study revealed an association between the amount of PAPs reduction and the degree of TR severity regression after MBV in patients with significant mitral stenosis. Right ventricular dilation due to RV ischemia or infarction, mitral valve disease, and pulmonary vascular disease is a more common cause of functional TR than primary valvular disease 15 and it has been demonstrated that the presence of TR is associated with pulmonary hypertension. 16 Song et al. 17 reported that TR was resolved in 32% of patients who underwent percutaneous mitral valvoplasty. In their study, all the patients had significant moderate to severe TR before intervention. They showed that patients with significant functional TR had more severe mitral stenosis, which could be diminished if the transmitral pressure gradient was sufficiently relieved with percutaneous mitral valvoplasty.
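The ≥ 19 mmHg cut-off above comes from a ROC analysis of PAPs reduction against at least one-grade TR improvement. A small Python sketch of how such a cut-off could be selected (here with the Youden index) on hypothetical data is given below; the arrays, and any cut-off they yield, are purely illustrative and are not the study's data or necessarily its exact cut-off selection rule.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: PAPs reduction (mmHg) per patient and whether TR improved (1) or not (0).
paps_reduction = np.array([5, 8, 10, 12, 15, 18, 20, 22, 25, 28, 30, 35])
tr_improved    = np.array([0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(tr_improved, paps_reduction)

# Youden's J statistic picks the threshold maximizing sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"Optimal cut-off: >= {thresholds[best]:.0f} mmHg, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```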
In the group of patients who showed resolution of TR grade, the mean PAPs reduced from 57 ± 19 mmHg before percutaneous mitral valvoplasty to 35 ± 16 mmHg at follow-up. It is worthy of note that the authors did not demonstrate any relation between the amount of PAPs reduction and TR regression. In the Hannoush et al. 7 investigation, of 53 patients with significant TR, 27 showed TR regression after MBV. The study showed that PAPs was initially higher and decreased (from 70.7 ± 23.8 to 36.5 ± 8.3 mmHg) at follow-up in those patients. Right ventricular dimension was also significantly reduced after intervention. In contrast, Shafie et al. 8 demonstrated that in 8 patients with severe TR who underwent successful closed commissurotomy for severe mitral stenosis, no TR improvement was observed at follow-up. Also, Sagie et al. reported that although RV systolic pressure fell by more than 10 mmHg, RV dimension did not decrease and TR did not resolve after percutaneous balloon mitral valvotomy. 18 As was mentioned above, there are some studies investigating TR severity after intervention with the aim of resolving mitral stenosis. In this study, we compared TR severity before and after MBV. Tricuspid regression occurred in 41.4% of the patients, of whom 54.5% had moderate or greater degrees of TR before intervention; there was, however, no significant TR improvement in patients with lower grades of TR before MBV (p value = 0.073). In our study population, the mean of MVPP and mean of MVMP decreased significantly after MBV. The reduction in the transmitral pressure gradient was concomitant with TR reduction, as was described before. 17 Be that as it may, there was no relation between the difference of mean MVPP, mean MVMP, and TR changes. Berger et al. 19 reported that TR could be identified in a large number of patients with pulmonary artery hypertension, especially when PAPs rose to 50 mmHg. In their study, 39 out of 49 patients with PAPs ≥ 35 mmHg had TR, while only 2 out of 20 patients with PAPs < 35 mmHg presented with TR. Some investigators have shown that severe functional TR with a dilated annulus can be eliminated spontaneously after reducing PAPs, 3 but other investigators have demonstrated that elevated PAPs has no significant determinable role in functional TR. 20 Most investigators currently believe that patients with mitral stenosis associated with moderate or more than moderate TR, especially when there is tricuspid valve annulus dilation, are not good candidates for percutaneous transvenous mitral commissurotomy and that they should be referred for mitral valve surgery plus tricuspid valve annuloplasty. In the present study, before and after MBV, PAPs was significantly correlated with TR severity; higher PAPs was seen concomitantly with higher degrees of TR. After MBV, the mean PAPs and the number of patients with pulmonary artery hypertension decreased significantly; TR severity was significantly correlated to the amount of PAPs reduction after MBV. We sought to determine a cut-off point in PAPs reduction to determine how much decrease in PAPs warrants TR improvement after intervention. In the final analysis, we found that a cut-off point of ≥ 19 mmHg reduction in PAPs had a specificity of 71.79% and sensitivity of 52.73% to predict TR improvement. In our sample, 86.4% of the patients with higher degrees of TR and 34.5% with lower grades of TR before MBV showed TR improvement with this cut-off point.
Previous studies have revealed that the grade of TR is correlated with left ventricular ejection fraction and TR is strongly associated with RV dilation. 16,21 Our results showed a significant correlation between ejection fraction and TR changes only in the patients with worsened TR after intervention. On the other hand, pre MBV, ejection fraction < 50% showed a relation with worsening of TR severity after MBV. The present study has some limitations, the most prominent amongst which is the fact that only immediate post-interventional echocardiography data were utilized. Moreover, longer follow-up periods are required to evaluate changes in TR severity and to determine cut-off points in PAPs reduction with higher specificity and sensitivity. Conclusion In this study, the severity of TR regressed after MBV and this regression was related to the amount of PAPs reduction. A cut-off point of ≥ 19 mmHg reduction in PAPs, with a specificity of 71.79 % and sensitivity of 52.73%, showed TR improvement after MBV. Patients with an ejection fraction < 50% at the time of MBV had a greater chance of TR severity worsening after MBV.
Efficacy and Safety of Andexanet Alfa for Bleeding Caused by Factor Xa Inhibitors: A Systematic Review and Meta-Analysis Direct oral anticoagulants (DOAC) including factor Xa inhibitors are associated with bleeding events which can lead to severe morbidity and mortality. Reversal agents like andexanet alfa (AA) and 4F-PCC (Four-factor prothrombin concentrate complex) are available for treating bleeding that occurs with DOAC therapy but a comparison on their efficacy is lacking. In this study, we analyzed the efficacy and safety of patients treated with andexanet alfa for bleeding events from DOAC. Databases were searched for relevant studies where AA was used to determine efficacy and safety in bleeding patients who were on factor Xa inhibitors. Published papers were screened independently by two authors. RevMan 5.4 (The Cochrane Collaboration, 2020) was used for data synthesis. Odds ratio (OR) and mean difference (MD) was used to estimate the outcome with a 95% confidence interval (CI). Among 1245 studies were identified after a thorough database search and three studies were included for analysis. AA resulted in lower odds of mortality compared to 4F- PCC (OR, 0.37; 95% CI, 0.20-0.71) among patients with intracerebral hemorrhage. There was no difference in thrombotic events between patients receiving AA and 4F-PCC (OR, 2.40; 95% CI, 0.36-15.84). No differences in length of hospital stay and intensive care unit (ICU) stay were seen between patients receiving AA and 4F-PCC. In conclusion, andexanet alfa reduced in-hospital mortality in patients who had bleeding due to factor Xa inhibitors compared to 4F-PCC. However, there were no differences in thrombotic events, length of ICU, and hospital stay between patients treated with AA and 4F-PCC. Introduction And Background Direct oral anticoagulants (DOAC) have been increasingly used in patients for the prevention of systemic embolization in atrial fibrillation as well as treatment and prevention of deep vein thrombosis (DVT) and venous thromboembolism (VTE). As a result, the indications of DOAC have significantly expanded in the last decade [1][2][3][4][5]. Predictable pharmacokinetics and pharmacodynamics, rapid onset and offset of action, few drug interactions, and absence of need for regular laboratory monitoring provide an advantage to oral factor Xa inhibitors over traditional Vitamin K antagonists [6]. Factor Xa inhibitors also reduce fatal and intracranial hemorrhage compared with vitamin K antagonists [7,8]. However, fatal bleeding has been reported with oral factor Xa inhibitor use [8,9]. Before the introduction of andexanet alfa (AA), off-label use of 4 factor-prothrombin concentrate complex (4F-PCC) was advised and was used in the situation of life-threatening bleeding [10]. Prothrombin complex concentrates (PCCs) are isolated from fresh frozen plasma (FFP) and contain Vitamin K-dependent factors II, VII, IX, and X [11]. In May 2018, AA received FDA approval for use in patients treated with rivaroxaban and apixaban in the setting of life-threatening or uncontrolled bleeding following ANNEXA-A and ANNEXA-R trials in healthy participants [12,13]. AA is a modified recombinant, catalytically inactive form of human factor Xa, which binds and sequesters factor Xa inhibitor molecules that reduce anti-factor Xa activity rapidly in the body [14]. A multicenter, prospective, open-label, single-group study ANNEXA-4 was done in bleeding patients following FDA approval, which showed the drug's good efficacy and safety profile [15]. 
Randomized controlled trials have not been done, given the risks of using a placebo in acutely bleeding patients. However, some retrospective observational studies and case series studying the efficacy and safety of AA in bleeding patients have been published. In addition, some studies have compared efficacy and safety with 4F-PCC. We have conducted this systematic review and meta-analysis to analyze the effectiveness and safety profile of AA in bleeding caused by factor Xa inhibitors. Review Methods We used Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines for the systematic review of available literature [16]. The study protocol was registered in the International prospective register of systematic reviews (PROSPERO) CRD42021244219. Literature search We searched PubMed, PubMed Central, Scopus, Embase, and Cochrane library for relevant studies published till February 2021. Searches were conducted using the keywords like "andexanet alfa", "andexanet", "andexanet alpha", "bleeding", "factor Xa inhibitor," and "factor Xa inhibitors" and appropriate boolean operators. Details of the search strategy are available in Supplementary Material 1. A. Types of Studies We included studies done to determine the efficacy and safety of andexanet alfa in patients who had bleeding in the setting of factor Xa inhibitor use. As randomized controlled trials were not available, we included prospective and retrospective studies and case series with more than ten patients. AA was used to determine efficacy and safety in bleeding patients on factor Xa inhibitors in qualitative analysis. In addition, the studies with both treatment and control groups were included in the quantitative synthesis. B. Types of Participants The studies required patients to be more than 18 years of age and had bleeding in the setting of Factor Xa inhibitor use. C. Types of Interventions Andexanet alfa was taken in the treatment arm, while 4F-PCC or other blood products were included in the control arm. D. Types of Outcome Measures Our outcome of interest was hemostatic efficacy, mortality within 30 days, the incidence of thrombotic events, and length of hospital and ICU stay following treatment with AA or other blood products. We excluded types of studies with the following characteristics: meta-analysis, reviews, in-vitro studies, studies done on healthy subjects, case reports, editorials, opinions, letters, protocols, abstracts/presentations, dissertation, and animal studies. Case series with fewer than ten patients, articles where full-text articles were not available, ongoing studies, and studies with incomplete data were also excluded. Data extraction and management Titles, abstracts, and full texts were screened for study and report characteristics that matched eligibility criteria. Studies were independently screened by two reviewers (AA and SS) using Covidence (Covidence systematic review software, Veritas Health Innovation, Melbourne, Australia) and data were extracted for both quantitative and qualitative synthesis. The conflicts were resolved by taking the opinion of the third reviewer (NK). The data extraction sheet was created using Microsoft Excel software. One reviewer collected the data from all articles; the second reviewer verified the data for accuracy and highlighted discrepancies; the third reviewer resolved any disagreements and carried out a thorough evaluation to ensure that only the outcomes of interest were taken into account. 
The following variables were included: first author, type of design, site of study, year of publication, sample size, mean age, percentage of male and female patients, indication for anticoagulation, hemostatic efficacy, mortality within 30 days, length of hospital stay, length of ICU stay, and incidence of thrombotic events.

Risk of Bias
We used the Joanna Briggs Institute (JBI) critical appraisal checklist for cohort studies and case series for quality and risk of bias assessment (Tables 1-2).

Statistical Analysis
RevMan 5.4 (The Cochrane Collaboration, 2020) was used for statistical analysis. Odds ratio (OR) and mean difference (MD) were used to estimate the outcomes with a 95% confidence interval (CI).

Assessment of Heterogeneity
The statistical heterogeneity among the studies was calculated and assessed with the I² test based on previously recommended stratifications. In the presence of heterogeneity, an inverse-variance random-effects model was used. Finally, we evaluated sensitivity by rerunning the analysis to assess any unrevealed differences.

Results
A total of 1245 studies were identified after thorough database searching, and 351 duplicates were removed. Titles and abstracts of 894 studies were screened, and 860 irrelevant studies were excluded. The full-text eligibility of 34 studies was assessed, and 24 studies were excluded for definite reasons (Figure 1). A total of 10 studies were included in the qualitative summary (Table 3), and three studies were included in the quantitative analysis.

Quantitative Analysis
Only three studies compared AA with 4F-PCC among patients with intracerebral hemorrhage (ICH), and these were used in the quantitative synthesis.

Discussion
Our meta-analysis is the most comprehensive to date evaluating the effect of andexanet alfa in bleeding caused by factor Xa inhibitors, assessing mortality, length of hospital stay, length of ICU stay, and thrombosis in comparison to 4F-PCC. The major finding of our study was that andexanet alfa decreased mortality in patients who had intracerebral bleeding due to factor Xa inhibitors compared to 4F-PCC. There were 105 deaths among 865 patients (12.13%) receiving andexanet alfa across the ten studies. In a recent meta-analysis of 4F-PCC for factor Xa inhibitor-associated bleeding, the overall mortality rate was 18%, compared with 12.13% in our analysis and 14% in the ANNEXA-4 trial [27]. The studies by Ammar et al. and Barra et al. showed mortality rates of 39% and 22%, respectively, which are higher than those of the other studies because these studies included only ICH patients [17,18]. Mortality was also high in the study by Culbreth (40%), as 14 out of 15 patients had ICH. The Ammar et al. study showed a similar mortality rate in the andexanet and 4F-PCC groups (39% and 38%, respectively), while the Barra et al. study showed higher mortality in the 4F-PCC group (63.6%) than in patients receiving andexanet (22.2%) [17,18]. Patients in the 4F-PCC group in the Barra et al. study had lower baseline GCS and higher baseline hematoma volume, which might have contributed to the higher mortality [18]. We found no difference in the incidence of thrombotic events with AA in comparison to 4F-PCC for the reversal of bleeding caused by factor Xa inhibitors. A recent meta-analysis of seven studies including 240 patients showed a thrombotic event rate of 4% with the use of 4F-PCC [27]. In contrast, we found 48 thrombotic events among 523 patients in the nine studies included in our analysis. A prior meta-analysis done by Rodrigues et al.
had estimated the risk of thrombosis with andexanet alfa and Idarucizimab at 5.5%, however, the analysis just included three studies for evaluation of thrombosis risk associated with AA and evaluated the cumulative risk of thrombosis associated with both andexanet alfa and Idarucizimab [28]. [22]. Clinical benefit of AA use was observed in bleeding due to factor Xa inhibitors in our analysis; however, the cost of stocking AA in most hospitals might be prohibitive for the immediate use for reversible DOAC related life-threatening bleeding. The median projected cost of andexanet alfa was $22,120/patient compared to $5670/patient for 4F-PCC. 4F-PCC currently is more widely available and less expensive, but that may change if the cost for AA comes down in the future [32]. 4F-PCC and andexanet alfa have not been compared in a prospective randomized clinical trial, and results of such studies are needed to inform clinical practice in DOAC related bleeding events. There is an ongoing randomized, multicenter clinical trial evaluating the efficacy and safety of andexanet alfa versus the usual standard of care in patients with ICH anticoagulated with a DOAC, which may be completed in 2023 [33]. Limitations of the study Most of the studies included were case series and retrospective observational studies. Only one prospective study, the ANNEXA-4 trial, was included. There were control groups in only three of our studies which were all retrospective. The sample size was less in our studies. Therefore, there was a moderate to high risk of bias in our studies. ANNEXA-4 trial had wide exclusion criteria: planned surgery within 12 hours after andexanet alfa administration, ICH with GCS less than 7, hematoma volume more than 60 cc, expected survival less than one month, use of VKA, dabigatran, PCC, WB, or plasma in last seven days. Giovino's study also excluded patients with GCS less than 7 and hematoma volume >60 ml [24]. However, patients requiring surgical intervention, patients who received other blood products before AA administration, unknown time of the last factor Xa inhibitor dose, patients with low GCS and higher hematoma volume were included in other studies. In real clinical practice, patients with low GCS and expected mortality of less than one month required AA administration and were included in other studies. Knowledge about the administration of other blood products and time since the last factor Xa inhibitor was not feasible due to the retrospective nature of some studies and were thus included. Culbreth et al. 2019 included patients with bleeding due to factor Xa inhibitor who required emergent surgery. Conclusions Andexanet alfa reduced in-hospital mortality in patients who had bleeding due to factor Xa inhibitors compared to 4F-PCC. There was no difference in thrombotic events, length of ICU, and hospital stay between andexanet alfa and 4F-PCC. Thus, AA is a promising therapeutic agent for the reversal of factor Xaassociated bleeding. However, the cost of stocking AA in most hospitals might be prohibitive for the immediate use for reversible of DOAC related life-threatening bleeding. 4F-PCC currently is more widely available and less expensive, but that may change when the cost for AA decreases. More studies are required in the future to determine the effect of AA as compared to 4F-PCC in patients with DOAC-related bleeding other than intracranial bleeding.
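As a concrete illustration of the pooled-effect estimation described under Statistical Analysis, the following minimal Python sketch reproduces an inverse-variance (DerSimonian-Laird) random-effects pooling of odds ratios with an I² heterogeneity estimate. It stands in for the calculation that RevMan 5.4 performs; the 2×2 counts are placeholders for illustration only, not the data of the included studies.

```python
import numpy as np

def pooled_or_random_effects(events_tx, total_tx, events_ctl, total_ctl):
    """Inverse-variance (DerSimonian-Laird) random-effects pooling of odds ratios."""
    a = np.asarray(events_tx, float)
    b = np.asarray(total_tx, float) - a
    c = np.asarray(events_ctl, float)
    d = np.asarray(total_ctl, float) - c
    log_or = np.log((a * d) / (b * c))              # per-study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d             # variance of each log OR
    w = 1 / var                                     # fixed-effect (inverse-variance) weights
    q = np.sum(w * (log_or - np.sum(w * log_or) / w.sum()) ** 2)        # Cochran's Q
    df = len(log_or) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))    # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0                 # I^2 statistic
    w_re = 1 / (var + tau2)                         # random-effects weights
    pooled = np.sum(w_re * log_or) / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    return np.exp(pooled), (lo, hi), i2

# Placeholder counts for three hypothetical AA vs 4F-PCC comparisons (not the real study data).
or_pooled, (lo, hi), i2 = pooled_or_random_effects(
    events_tx=[8, 5, 12], total_tx=[60, 40, 90],
    events_ctl=[15, 11, 20], total_ctl=[55, 45, 85])
print(f"Pooled OR {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```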
2021-12-25T16:10:23.682Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "29920ad85c755574e5b81955beff0fab1626c574", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/73906-efficacy-and-safety-of-andexanet-alfa-for-bleeding-caused-by-factor-xa-inhibitors-a-systematic-review-and-meta-analysis.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f9714450003e6ee752b59271870e3ca30ad0395c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
222289174
pes2o/s2orc
v3-fos-license
Commentary on Endometriosis Recurrence Management

Medical treatment of endometriosis ranges from symptomatic control to therapies aimed at suppressing the ovarian production of estrogen. Almost all the treatment strategies are suppressive rather than curative, so that the discontinuation of therapy leads to recurrence of symptoms. In 2009, a systematic review of the literature estimated the recurrence rate of endometriosis to be 21.5% and 40%-50% within two and five years, respectively [1], which is much more prevalent than previously believed. Regrowth of residual lesions and de novo lesion formation are possible pathogenic mechanisms leading to the recurrence of endometrial lesions. Radical surgery means the elimination of all possible endometriosis implants detected in the pelvic and abdominal cavity, but surgery is sometimes insufficient to radically remove these lesions; therefore, lesions often reappear postoperatively. Medical treatment options such as the application of gonadotropin-releasing hormone agonists (GnRH-a) play an essential role in the management of endometriosis by reducing estrogen levels in order to promote the progressive atrophy of an ectopic endometrium [2].

Commentary
Our objective was to introduce a less invasive and low-risk management strategy to prevent the recurrence of endometriosis through combination therapy. In this novel management approach, GnRH-a pre-treatment is used to reduce inflammation, alongside diagnosis and staging of endometriosis through laparoscopy.

Combination Therapy
Our presented technique is based on minimal surgery with ovarian suppression according to endometriosis stage. Combination therapy has been adopted as an approach for the management of endometriosis over the past 25 years [3]. Given the lack of evidence on how estrogen levels affect endometriosis management, second- and third-look laparoscopy is suggested to follow the changes in endometriosis lesions three and six months after GnRH agonist treatment, which is helpful to decrease endometriosis spots and lesions, deep penetration, and frozen pelvis [4]. GnRH-a pre-treatment leads to the resolution of lesions and decreases operation time and the occurrence of complications, which makes it possible to completely clear seriously infiltrated tissue or small retroperitoneal lesions through laparoscopic surgery.

Recurrence Results
Our data indicated that the duration of GnRH agonist therapy is highly dependent upon the stage of endometriosis.
Medical treatment is the preferred option, and a main advantage of this method appears to be the elimination of residual lesions so that no spot remains to lead to regrowth. In stage I, after three months of GnRH agonist therapy, a majority of lesions disappeared. In stages II and III, after the first-look diagnostic laparoscopy, GnRH agonist was injected for six and nine months, respectively, and the prescription of the new dosage was dependent upon the second observational laparoscopy following this period. Interestingly, all stage I and II endometriosis patients were cured with 3- and 6-month GnRH-a treatment, respectively, and there was no report of recurrence after five years of follow-up using OCP (oral contraceptive pill). However, in stage III endometriosis, 6-month GnRH-a treatment was not sufficient to eliminate all lesions, but after 9 months most of them disappeared.

Discussion
This advanced method for early definitive diagnosis of endometriosis by performing laparoscopy, instead of blindly administering initial medical and drug therapies, could be a clinical advancement in the treatment of endometriosis. The treatment cost as well as the recurrence rate is lower than with other therapeutic approaches, with little damage or few surgical complications. It seems that the presented protocol is useful in the prevention of endometriosis recurrence via complete elimination of endometriosis lesions [5]. Despite the improvement of surgical techniques and interventions, we believe that for endometriosis management, "The Enemy" must be well defined and an appropriate weapon selected against it.
2020-05-21T00:11:04.430Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "3254216873be78cca05874139364443104e89dfc", "oa_license": "CCBY", "oa_url": "https://www.authorea.com/doi/pdf/10.22541/au.158880119.94299332", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "34d45a1de47866ddb85306b5eadfbb0ef6c9ea43", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12706908
pes2o/s2orc
v3-fos-license
Differentiation of Eight Commercial Mushrooms by Electronic Nose and Gas Chromatography-Mass Spectrometry Volatile profiles of eight mushrooms were characterized by gas chromatography-mass spectrometry and electronic nose analysis. Volatile compounds including 11 alcohols, 11 ketones, 15 aldehydes, 3 sulfur compounds and alkenes, 8 terpenes, 7 acid and esters, 5 heterocyclic compounds, 20 aromatic compounds, and 4 other compounds were identified. The overall aroma properties of the mushrooms were analyzed by the electronic nose. Results indicated that the e-nose sensors have the ability to accurately respond to different mushrooms with similar fingerprint chromatograms. The relationship between the GC-MS data and e-nose responses of different mushrooms was modeled by principal component analysis and partial least squares regression. This combination for the volatile analysis with chemometric methods can be applied to distinguish different mushrooms successfully. Furthermore, it is concluded that the volatile composition of commercial mushrooms could benefit a finger spectrum by e-nose to identify the species of edible fungi. Introduction Mushrooms are fleshy and fruiting bodies containing a wide range of edible fungi, such as Lentinus edodes (shiitake), Pleurotus abalonus, Agrocybe aegirit, Hericium erinaceus, Pleurotus eryngii.Because of their attractive tastes, flavors, and nutritional characteristics, mushrooms are commonly used as food ingredients and also as one of the fundamental components in traditional Chinese medicines [1].Generally, the specific pleasant odors of mushroom species and their products are described as almond-like or anise-like odors, floral or herb odors, or fruity odors [2].For instance, the fruity flavor is typical of some species such as Armillaria mellea, Ceriporiopsis subvermispora, and Dichomitus squalens [3].The fragrant flavor is achieved from Pleurotus sapidus and Stereum sanguinolentum, whereas the pleasant and anise are considered as characteristic flavors of Phaeolus schweinitzii and Gloeophyllum odoratum [2]. Although the quality of mushrooms is highly associated with numerous factors including aroma, taste, color, and texture, the aroma of the mushroom plays a major role in sensory attributes and consumer acceptance [1].Because the unique mushroom flavors correspond to species, this could be employed to discriminate different mushroom species [4].As one of the main compounds accounting for the unique mushroomy flavor, 1-octen-3-ol was first discovered in Tricholoma matsutake [5], and subsequently a series of C8 aliphatic components was reported to be responsible for the mushroom flavor, such as 3-octanol and 3-octanone [6].Gas chromatography-mass spectrometry (GC-MS), gas chromatograph-flame ionization detector (GC-FID), and headspace-gas chromatograph (HS-GC) analysis have been widely applied in the analysis of mushroom volatile components [7].Currently, approximately 150 different volatile compounds have been identified in mushrooms and classified into several categories such as alcohols, aldehydes, alkanes, aromatics, sulphur compounds, lower terpenes, and others [8].Malheiro et al. [9] demonstrated the potential of using volatile components to discriminate six mushroom species, using GC-MS combined with e-nose. 
Sensory evaluation is a common method in the flavor analysis of foods.However, there are a number of disadvantages in sensory evaluation including high cost of training panelists, panelist subjectivity, incapacity of online monitoring, and time-consuming.As an alternative approach, electronic nose (e-nose) combining with GC-MS is an innovative and emerging technology for odor analysis with the powerful capability in qualitative and quantitative determination of trace volatile components in food samples [20].This method exhibits great advantages such as rapid detection, high objectivity, high sensitivity (suitable for tiny amount of samples), long-term routine application, simplicity, and ease of use.Feng et al. [21] analyzed the volatile compounds of Mesona Blumes gum/rice extrudates using GC-MS and enose, and the results showed that this was able to effectively distinguish Mesona Blumes gum/rice (MBG) extrudates at different MBG content.Wang et al. [20] also demonstrated that the e-nose sensors combining with GC-MS were capable of clearly and rapidly distinguishing the flavor differences among synthetic milk, natural milk, and the enzyme-induced milk.Furthermore, this method has been also utilized in the identification of the geographical origin of propolis [22]. Traditionally, e-nose can be just used as a discrimination tool to differentiate various samples.However, it still suffers from detailed information regarding the difference between the discriminated samples.It is well known that sensors on e-nose could have different stimuli to different chemical compounds, which might be used as a typical approach to correlate the chemical compounds and sensors of e-nose.Therefore, it can be used as a finger spectrum to characterize the concrete chemical compound.However, little information has been reported in odor analysis of mushrooms by GC-MS and e-nose.The major objectives of this work were to (1) study the feasibility of electronic nose sensors for discriminating the different mushrooms; (2) investigate the volatile compositions of mushrooms using GC-MS analysis; (3) conduct the correlation analysis between aroma compounds and electronic nose responses for the interpretation of sensor properties using multivariate analysis of principal component analysis (PCA) and partial least squares regression (PLSR). Materials. The eight dried commercial edible mushrooms of Pleurotus abalonus, Agrocybe aegirit, Hericium erinaceus, Grifola frondosa, Coprinus comatus, Boletus edulis, Lentinula edodes, Pleurotus eryngii were purchased from a supermarket of Tesco, Shanghai, China.The species of the mushrooms were identified by the manufacturers and labeled in the package bags.After arrival, the samples were redried at an 80 ∘ C oven for 4 h to achieve same moisture content.The dried mushrooms were crushed in a disintegrator (Dianjiu Traditional Medicine Machinery Manufacture Co. Ltd., Shanghai, China) and the powders were packaged in PVC bags and kept in a dry and dark place at −18 ∘ C for further use. Preparation of Mushroom Extracts. 
The mushroom powders were sieved through 80 mesh griddles and about 25 g was transferred into a 2 L round bottom flask.Solvent of deionized water was added in the flask at a solid-liquid ratio of 1 : 10 and steam distilled for 2.5 h.After cooling to ambient temperature, the distillation extract was collected and equal volume of anhydrous ethyl ether was added for extracting the flavor compounds.The extract was dried over anhydrous sodium sulphate, maintained at freeze temperature of −18 ∘ C to remove water (as ice crystals), and then concentrated to 1 mL prior to further analysis. GC-MS Analysis. GC-MS analysis was conducted using an Agilent 7890N gas chromatography-5975 mass selective detector (GC-MS) (Agilent Technologies Inc., Palo Alto, CA), equipped with a HP-INNOWAX column (60 m × 0.25 mm × 0.25 m).The carrier gas was used as helium at a constant flow rate of 1.0 mL/min.The injector port was heated to 250 ∘ C, using the splitless injection mode.The initial oven temperature was maintained at 40 ∘ C for 3 min, then raised to 150 ∘ C at a rate of 5 ∘ C/min and held for 1 min, and finally raised to 220 ∘ C at a rate of 10 ∘ C/min and maintained for 2 min.The temperatures of injector and detector were 250 ∘ C and 220 ∘ C, respectively.The mass spectra were captured in the electron impact (EI) ionization mode, with an ionization voltage of 70 eV and a scanning range of m/z 40-400.Other parameters included the ion source of 230 ∘ C and mass spectrometry interface of 280 ∘ C. Each measurement was performed in triplicate and repeated three times. Identification and Quantification of Volatile Compounds. The identification of volatile compounds was based on computer matching with the mass spectra in the NIST 05, WILEY and ADAMS libraries, as well as by comparison of the mass spectra and retention indices (RI) according to those reported in the literatures [1,[10][11][12][13][14][15][16][17][18][19]23].In addition, a home-made library in the Shanghai Institute of Technology, based on the analysis of reference oils and commercially available standards, was also used for the identification and quantification. Electronic Nose Analysis of Volatile Compounds 2.4.1.The Preparation of the Sample for Electronic Nose.For e-nose analysis, the mushroom powders were sieved through 40 mesh griddles.About 0.2 g powders was put into a 10 mL vial and kept in a chamber at controlled temperature (37 ∘ C) and humidity (50%) [24] for further use. Electronic Nose Detection.An e-nose of AlphaMOS FOX 4000 (AlphaMOS, Toulouse, France) was applied to study the volatile compounds.The device was composed of eighteen metal-oxide sensors with a headspace autosampler HS100, e-nose unit, and e-nose software.18 different metal oxide sensors could be divided into three chambers [25], which were three types of sensors, that is, six "LY" type sensors, five "T" type sensors, and seven "P" type sensors.The response characteristics of the gas sensors varied depending on their types [26].Types P and T sensors are based on tin dioxide (SnO 2 ) but have different sensor geometries.LY sensors are chromium-titanium oxides (Cr 2 −xTixO 3 −y) and tungsten oxide (WO 3 ) sensors [25].Various types of sensors were used in instruments to ensure sufficient sensitivity and selectivity. Each sample vial was heated to 50 ∘ C and then agitated at 500 rpm for 900 s immediately prior to injection.The sample headspace volume of 2.5 mL was drawn from the vial at 500 L/s, using a syringe maintained at 60 ∘ C. 
The sample was injected into the e-nose at a speed of 500 μL/s and delivered to the sensors with a purified air carrier gas (O2 + N2 > 99.95%, O2 = 20 ± 1%, H2O < 5 ppm, CO2 < 5 ppm, CnHm < 5 ppm) at a flow rate of 150 mL/min. Sensor resistances were recorded for 120 s, and a delay of 600 s was used to allow the sensors to return to baseline values before the next injection.

Statistical Analysis. The GC-MS profiles of the mushrooms were analyzed by PCA, and PLS2 was used to explain correlations between the GC-MS and e-nose data sets. Partial least squares regression (PLSR) [25] was performed with the GC-MS and e-nose data. For determining the predictability of e-nose sensors from GC-MS data, PLS1 regression was performed with the GC-MS data as the X-variables and the e-nose data as the Y-variable. Regression coefficients were analyzed by jack-knifing. All variables were centered and standardized so that each variable had unit variance and zero mean before applying PLS analysis. All PLSR models were validated using full cross-validation. Statistical analysis was performed using the Unscrambler v.9.7 (CAMO ASA, Oslo, Norway).

Volatile Compounds in Different Mushrooms. The volatile compounds in the mushrooms were extracted by steam distillation and then analyzed by GC-MS. Table 1 lists the 88 tentatively identified compounds, of which 31 compounds were identified using the Wiley MS Library Database and 51 were identified by comparison of the retention time and the MS spectrum of pure chemical standards. These include 11 alcohols, 11 ketones, 15 aldehydes, 3 sulfur compounds and alkenes, 8 terpenes, 8 acids and esters, 5 heterocyclic compounds, 20 aromatic compounds, and 4 other compounds.
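To make the chemometric step concrete, the sketch below mirrors the PLS modelling described under Statistical Analysis using scikit-learn instead of the Unscrambler: the GC-MS peak-area block is taken as the X-variables and the 18 e-nose sensor responses as the Y-variables, with autoscaling and full (leave-one-out) cross-validation. The arrays are random placeholders rather than the measured data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
# Placeholder blocks: 24 observations (8 mushrooms x 3 replicates),
# 88 GC-MS peak areas (X) and 18 e-nose sensor responses (Y) -- synthetic, not the real data.
X = rng.random((24, 88))
Y = rng.random((24, 18))

# Centre and scale each variable to zero mean and unit variance, as described above.
Xs = StandardScaler().fit_transform(X)
Ys = StandardScaler().fit_transform(Y)

# PLS2: one two-component model relating the GC-MS block to all 18 sensors at once.
pls2 = PLSRegression(n_components=2).fit(Xs, Ys)
print("R^2 of the two-component PLS2 model:", pls2.score(Xs, Ys))

# PLS1 with full (leave-one-out) cross-validation for a single sensor.
y_cv = cross_val_predict(PLSRegression(n_components=2), Xs, Ys[:, 0], cv=Xs.shape[0])
print("Cross-validated predictions for the first sensor:", np.ravel(y_cv)[:3])
```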
comatus, with the aroma description of green and magnolia, implying that this compound could be a flavor marker of these mushroom species. Aldehydes were the third most representative chemicals in the tested mushrooms, with 15 compounds being identified.About 35.8% of the total flavor compounds in A. aegirit and about 48.2% in H. erinaceus were aldehydes (Figure 1).Among the identified aldehydes, hexanal (5.7%), nonanal (7.2%), 6-nonenal (4.2%), and (2E, 4E)-2, 4-decadienal (12.2%) had the highest concentration in the species of P. eryngii, C. comatus, A. aegirit, and H. erinaceus, respectively.In addition, octanal and (E)-2-octenal had the highest concentrations in H. erinaceus and P. eryngii (Table 1).Of interest, no aldehyde was detected in the mushroom of L. edodes.A homologous series of n-aldehydes from C-5 to C-10 and simple unsaturated aldehydes from C-7 to C-10 were observed in the samples (Table 1).These compounds could be derived from the products of degradation or oxidation of the lipid in mushrooms [32].(E)-2-Heptenal, 2-octenal, and (2E, 4E)-2, 4-decadienal was observed in all species except in L. edodes.It was suggested that the aldehydes of 5-methyl-2-phenyl-2-hexenal, benzaldehyde, and phenyl acetaldehyde were generated from the Maillard reaction pathway [33].Chen and Wu [16] also demonstrated the presence of 5methyl-2-phenyl-2-hexenal in mushroom of Agaricus subrufescens.Volatile compounds of aldehydes generally displayed coarse and heavy aromas of raw fish [34].Different types and levels of aldehydes in different mushrooms might also be used to discriminate the mushroom species.It was noted that the dried commercial mushrooms underwent the drying process and generated some compounds such as 1octen-3-ol.The redrying process aims to offer better store samples, and steam distillation is used to extract the volatile compounds from mushroom.It is well-known that the mushroom flavor can be enhanced after cooking or heating treatment, because of the increases in the concentrations of some compounds such as 1-octen-3-ol [35].Therefore, the "artifacts of volatile compounds" produced from Maillard reaction and lipid oxidation also are recognized as the intrinsic flavor compounds of mushroom. Of the components identified, aromatic compounds were one of the important groups in all mushrooms, for example, P. abalonus (15.3%), A. aegirit (34.4%), H. erinaceus (17.6%),G. frondosa (37.3%), C. comatus (20.2),L. edodes (3.6), B. edulis (45.1%), and P. eryngii (20.7%).Benzaldehyde, phenyl acetaldehyde, anethole and benzeneacetic acid, and methyl ester were the most abundant components (Table 1).The existence of such a large amount of aromatic components may be the cause of "almond-like" aroma during blending of these mushrooms [16].The high content of these compounds and their similarities in structure indicate that the aromatic compounds may have a common origin.The formation of benzaldehyde and benzyl alcohol could be increased to a significant extent if benzoic acid was blended with fresh mushrooms, suggesting that the occurrence of enzymes could be responsible for the reduction of benzoic acid or its derivative into benzaldehyde and others [16]. 
Table 1 also showed that 7 compounds were found in the group of acids and esters, including octyl formate, methyl cinnamate, and nonanoic acid.The acids and esters have been reported to be the major odors of fruit and grass flavors, such as octyl formate rich in blackberry with strong odors of orange fruit and rose [42].Methyl cinnamate was reported to be abundant in the volatile compounds of pine-mushroom [1], and it was noted that this compound could prevent from the attack of the mycophagous collembolan to mushrooms [43]. An azulene-type compound was identified in C. comatus.To our best knowledge, this compound has only been isolated previously in the mushroom of Lactarius salmonicolor [44].DL-Menthol with strong inhibitory activity against fungi of Trichoderma [45] was also detected in the tested mushrooms.This implied that some mushrooms have natural biofungicide activity. Principal component analysis (PCA) of mushroom volatile compounds is as follows: to discriminate different mushroom species according to the GC-MS identified compounds, a PCA was performed in Figure 2 the projection of the GC-MS data for all samples.It was shown the differences among them.PCA provided a separation of the samples with 41% and 20% of the variation that accounted for PC1 and PC2, respectively.The contribution rate of the accumulative total of variance of the 2 factors in PCA is 61% representing that two PCs can explain 61% of the whole mushroom volatiles.The eight mushroom species were distributed according to their respective major compounds, where 3-octanol (1), 3-octanone ( 12), 2-octanol (2), 1-octen-3-ol (3), 1, 2, 4-trithiolane (39), octanal (28), 1octen-3-one (14), anethole (53), 6-nonenal (30), and phenyl acetaldehyde (49) had the higher power of discrimination. Discrimination of Different Mushrooms by Electronic Nose (E-Nose). For better visualization of data, PCA was performed to identify the patterns of correlation with individual composition variables in the discrimination among different mushroom samples by using signals corresponding to three repeated exposures of each sample in Figure 3.The clearly different distributing results of different mushroom samples in PCA analysis in Figure 3 confirmed that the e-nose sensors have the ability to accurately respond to different mushrooms with similar fingerprint chromatograms.In addition, the differences between groups have been visualized by PCA plots more clearly (Figure 3). As shown in Figure 3, each group was clearly distinguished from the other groups by using PCA analysis.There was a main separation among different mushroom samples and all the mushroom samples were separated into eight groups.Even the same species such as P. abalonus and P. eryngii can be separated clearly.The score plot for the first two principal components (PC1 and PC2) is shown in Figure 3.The score plot reveals the separation along PC1 accounting for 72.87% of the total data variance in the set, while separation along PC2 accounted for 15.34% of the variation in the sample set. 
The results indicated that eight mushroom samples can be distinguished on the basis of different odors by e-nose with PCA method.Therefore, according to the obtained results, the e-nose can be used as a useful tool for quickly distinguishing the mushrooms, taking into account the concentration of volatile compounds.The findings from analyses carried on mushroom samples were in agreement with the results obtained by means of GC-MS as they both separated mushroom samples successfully.Hence, e-nose could be used as an identification tool for mushroom. Relationship between GC-MS Profiles and E-Nose Analysis. To study relationships between GC-MS data and e-nose responses, two data sets were analyzed by PLS.The GC-MS data were selected from all GC-MS profiles, according to the results in Figure 2, which located in the ring.Figure 4(a) shows two factors loading plot for GC-MS data (-matrix) and 18 sensors (-matrix).Four sensors clusters located in three quadrants and 25 GC-MS peaks are placed in three locations.The derived PLSR model included two significant PCs successfully explaining 74% of the cross validated variance.Figure 4(b) indicates a typical response of the e-nose sensors for the measurement of the eight mushroom samples. Each curve represented the maximum response value of the mushroom volatiles on the sensors during parsing time.Although they had a roughly identical trend on the 18 sensors, there were significant differences between eight mushrooms on sensors, for instance, P30/2, P40/2, P30/1, and PA/2. Conclusions In this study, it was the first time that the volatile compounds of 8 different edible mushroom species were characterized by using both GC-MS and e-nose.Based on the GC-MS analysis, a total of 88 volatile compounds were identified and differences in the composition of volatile components from eight mushrooms were observed.It was feasible to classify the mushroom samples into eight groups by using GC-MS and enose.Elementary results confirmed the usefulness of GC-MS and electronic nose for classification purpose of mushroom.This combination for the volatile analysis with chemometric methods can be applied to distinguish different mushrooms successfully.Furthermore, this study results about the volatile composition of commercial mushrooms could help to set up a finger spectrum by e-nose to identify the species of edible fungi. Figure 3 : Figure 3: Plot of the first two principal components of the PCA model built with the electronic nose data related to the eight mushroom samples. Table 1 : Volatile profile of eight commercial mushroom species expressed in normalized chromatographic peak area.
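A minimal sketch of the e-nose discrimination workflow described above (maximum sensor responses, autoscaling, PCA score plot) is given below. The response matrix is a synthetic placeholder rather than the FOX 4000 measurements, so the explained-variance figures will not match the 72.87% and 15.34% reported for PC1 and PC2.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Placeholder matrix: 8 species x 3 repeated exposures = 24 rows, 18 sensor maximum responses.
responses = rng.random((24, 18))
species = np.repeat(np.arange(8), 3)

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(responses))
print("Variance explained by PC1 and PC2:", pca.explained_variance_ratio_)

# Group centroids in the PC1-PC2 plane, one per mushroom species.
for sp in range(8):
    pc1, pc2 = scores[species == sp].mean(axis=0)
    print(f"species {sp}: PC1 = {pc1:.2f}, PC2 = {pc2:.2f}")
```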
2018-04-03T00:31:13.465Z
2015-03-19T00:00:00.000
{ "year": 2015, "sha1": "e87fa12b43654835ac8b509c0167a5f72d38f66c", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/js/2015/374013.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e87fa12b43654835ac8b509c0167a5f72d38f66c", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Computer Science" ] }
53427006
pes2o/s2orc
v3-fos-license
Croton gratissimus leaf extracts inhibit cancer cell growth by inducing caspase 3/7 activation with additional anti-inflammatory and antioxidant activities Background Croton species (Euphorbiaceae) are distributed in different parts of the world, and are used in traditional medicine to treat various ailments including cancer, inflammation, parasitic infections and oxidative stress related diseases. The present study aimed to evaluate the antioxidant, anti-inflammatory and cytotoxic properties of different extracts from three Croton species. Methods Acetone, ethanol and water leaf extracts from C. gratissimus, C. pseudopulchellus, and C. sylvaticus were tested for their free radical scavenging activity. Anti-inflammatory activity was determined via the nitric oxide (NO) inhibitory assay on lipopolysaccharide (LPS)-stimulated RAW 264.7 macrophages, and the 15-lipoxygenase inhibitory assay using the ferrous oxidation-xylenol orange assay. The cytotoxicity of the extracts was determined on four cancerous cell lines (A549, Caco-2, HeLa, MCF-7), and a non-cancerous African green monkey (Vero) kidney cells using the tetrazolium-based colorimetric (MTT) assay. The potential mechanism of action of the active extracts was explored by quantifying the caspase-3/− 7 activity with the Caspase-Glo® 3/7 assay kit (Promega). Results The acetone and ethanol leaf extracts of C. pseudopulchellus and C. sylvaticus were highly cytotoxic to the non-cancerous cells with LC50 varying between 7.86 and 48.19 μg/mL. In contrast, the acetone and ethanol extracts of C. gratissimus were less cytotoxic to non-cancerous cells and more selective with LC50 varying between 152.30 and 462.88 μg/mL, and selectivity index (SI) ranging between 1.56 and 11.64. Regarding the anti-inflammatory activity, the acetone leaf extract of C. pseudopulchellus had the highest NO inhibitory potency with an IC50 of 34.64 μg/mL, while the ethanol leaf extract of the same plant was very active against 15-lipoxygenase with an IC50 of 0.57 μg/mL. A linear correlation (r<0.5) was found between phytochemical contents, antioxidant, anti-inflammatory and cytotoxic activities of active extracts. These extracts induced differentially the activation of caspases − 3 and − 7 enzymes in all the four cancerous cells with the highest induction (1.83-fold change) obtained on HeLa cells with the acetone leaf extract of C. gratissimus. Conclusion Based on their selective toxicity, good antioxidant and anti-inflammatory activities, the acetone and ethanol leaf extracts of C. gratissimus represent promising alternative sources of compounds against cancer and other oxidative stress related diseases. Electronic supplementary material The online version of this article (10.1186/s12906-018-2372-9) contains supplementary material, which is available to authorized users. Background Oxidative stress results from an imbalance between the production of free radicals and the ability of the body to counteract or detoxify their harmful effects through neutralization by antioxidants [1]. The free radical theory of aging developed by Denham Harman is based on the concept that damage accumulates throughout the entire lifespan and causes age dependent disorders including diabetes, atherosclerosis, neurodegenerative diseases and cancer [2,3]. Cancer development is characterized by redox imbalance with a shift towards oxidative conditions. 
In fact, free radicals can bind through electron pairing with macromolecules such as proteins, phospholipids and DNA in normal cells to cause protein and DNA damage along with lipid peroxidation [1]. Consequently, the accumulation of these cellular disorders may cause mutation and lead to various disturbances in the cell metabolism, which can result in deregulated cell growth, and finally carcinoma [4]. Antioxidants are helpful in reducing and preventing damage caused by free radicals because of their ability to donate electrons, which neutralize the radicals without forming another. This property has led to the hypothesis that antioxidants, with their ability to decrease the level of free radicals, might lessen the radical damage causing chronic diseases, and even radical damage responsible for aging and cancer. Antioxidant phytochemicals found in vegetables, fruits and medicinal plants have been reported to be responsible for health benefits such as the prevention and treatment of chronic diseases caused by oxidative stress [5]. Many antioxidant phytochemicals have been associated with anti-cancer activities, and this includes curcumin from turmeric, genistein from soybean, tea polyphenols from green tea, resveratrol from grapes, sulforaphane from broccoli, isothiocyanates from cruciferous vegetables, silymarin from milk thistle, diallyl sulfide from garlic, lycopene from tomato, rosmarinic acid from rosemary, apigenin from parsley, and gingerol from gingers [6]. During the last two decades, it has been revealed that oxidative stress can lead to chronic inflammation, which in turn could mediate most chronic diseases including cancer. Chronic inflammation is usually associated with an increased risk of several human cancers [7]. Indeed, the relationship between inflammation and cancer has been suggested by epidemiological and experimental data, and confirmed by the fact that anti-inflammatory therapies were also efficient in cancer prevention and treatment [8,9]. The genus Croton belongs to the family Euphorbiaceae, and is a diverse and complex group of plants ranging from herbs and shrubs to trees. Croton species can be found in different parts of the world, and some of the most popular uses include treatment of cancer, constipation, diabetes, digestive problems, dysentery, external wounds, intestinal worms, pain, ulcers and weight loss [10]. Croton sylvaticus Hochst. is a fast-growing and decorative tree, which is widely used in the management of inflammatory conditions, infections and oxidative stress related diseases. In Tanzania and Kenya, the decoction of the leaves and root bark of C. sylvaticus is used in traditional medicine against tuberculosis (TB), inflammation, as a purgative, as a wash for body swelling caused by kwashiorkor or by tuberculosis, and for the treatment of malaria [11]. Previous reports showed the acetylcholinesterase inhibitory activity of the ethyl acetate leaf extract of C. sylvaticus and isolated compounds [12]. Other compounds isolated from this plant have antiplasmodial activity [13], and low to high toxicity observed in the brine shrimp larval lethality test [11]. Croton gratissimus Burch. (synonym C. zambesicus Müll.Arg.) is native to tropical west and central Africa, and is used to treat fever, dysentery and convulsions [14]. The leaf decoction is used in Benin as anti-hypertensive, anti-microbial (against urinary infections) and to treat malaria-linked fever [15]. 
Some compounds, named cembranolides isolated from leaf extracts of Croton gratissimus, have moderate activity against ovarian cancer cell lines and Plasmodium falciparum [16,17]. Croton pseudopulchellus Pax, originating from southern Africa, is widely distributed in tropical East and West Africa. This Croton species is used in southern and central parts of South Africa against TB symptoms such as coughs, fever and blood in sputum [18]. Based on their diverse uses in traditional medicine against various diseases in which excess production of free radicals or inflammation is implicated, the present study aims to evaluate the antioxidant, anti-inflammatory and cytotoxic properties of three Croton species extracted using different solvents. Plant material and extraction Fresh leaves of the three Croton species were collected at the Lowveld Botanical Gardens, Nelspruit, Mpumalanga (South Africa) in January 2016. The plant materials were dried at room temperature in a well-ventilated room for two weeks. The dried materials were ground to fine powder and stored in honey jars in the dark until use. Herbarium specimens for each of the plant species were prepared, and identification was made by Mrs. Elsa van Wyk and Ms. Magda Nel of the HGWJ Schweickerdt Herbarium (PRU), University of Pretoria. The identification numbers of plant species are presented in Table 1. Powder (100 g) from each plant was extracted by maceration in 1000 mL of different solvents (water, acetone and ethanol). The mixtures were covered and left overnight at room temperature. Each mixture was filtered through Whatman No.1 filter paper into pre-weighed honey jars and the filtrates obtained from acetone and ethanol extraction were concentrated under reduced pressure using a rotary evaporator at 40°C to obtain a residue which constituted the crude extract. The water filtrate was dried in a ventilated oven at 50-55°C until complete evaporation of water. The extraction process was repeated three times with fresh solvent. The honey jars containing the crude extracts were weighed again to determine the percentage yield of the crude extracts ( Table 1). The dried extracts were stored in a cold room (4°C) until use. Phytochemical analysis Total phenolic content The total phenolic content (TPC) of different extracts was determined using the Folin-Ciocalteu method adapted to a 96-well microplate as described by Zhang et al. [19]. The reaction mixture was prepared by adding respectively 20 μL of each extract (5 mg/mL in DMSO), 100 μL of Folin-Ciocalteu reagent (1 mL of Folin-Ciocalteu reagent in 9 mL of distilled water), and 80 μL 7.5% Na 2 CO 3 solution in deionized water. The mixture was then incubated in the dark at room temperature (25°C) for 30 min, and the absorbance was read at 765 nm on a microplate reader (Epoch, BioTek). The total phenolic content was estimated from a gallic acid (GA) calibration curve (10-100 mg/L; y = 0.6886x + 0.0884; R 2 = 0.9901), and results were expressed as milligram of gallic acid equivalent (GAE) per gram of extract. Total flavonoid content The total flavonoid content (TFC) of different extracts was determined using the aluminium chloride spectrophotometric method based on the formation of aluminium-flavonoid complexes [20]. The reaction mixture was prepared by mixing 2 mL of each extract (0.3 mg in 1 mL of methanol), 0.1 mL of aluminium chloride hexahydrate solution (10% aqueous AlCl 3 solution), 0.1 mL of 1 M potassium acetate and 2.8 mL of deionized water. 
The mixture was shaken and incubated at room temperature (25°C) for 10 min, and 200 μL of each mixture was transferred to a 96-well microplate. The absorbance was measured at 415 nm using a microplate reader (Epoch, BioTek). A calibration curve was plotted from the absorbance of quercetin (0.005-0.1 mg/mL; y = 9.0545x − 0.0142; R² = 0.9999), and the total flavonoid content was expressed as milligram of quercetin equivalent (QE) per gram of extract.

Antioxidant assays
The 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay
The technique described by Brand-Williams et al. [21] with some modifications was applied for the determination of the DPPH scavenging capacity of the extracts. Briefly, the extracts (40 μL) were serially diluted with methanol on a 96-well plate, followed by the addition of the DPPH solution (160 μL) prepared at 25 μg/mL. The mixture was incubated at room temperature in the dark for 30 min and the absorbance was measured at 517 nm using a microplate reader (Epoch, BioTek). Ascorbic acid and trolox were used as positive controls, methanol plus DPPH as negative control, and sample without DPPH as blank. The DPPH scavenging capacity was calculated at each concentration according to formula (1) below:

Scavenging capacity (%) = [(A_control − A_sample)/A_control] × 100 (1)

where A_control is the absorbance of the negative control and A_sample is the blank-corrected absorbance of the reaction mixture containing the extract. The inhibitory concentration (IC50) was determined by plotting a non-linear curve of the percentage DPPH scavenging capacity against the logarithm of the different concentrations of the extract.

The 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assay
The method described by Re et al. [22] with some modifications was used for the determination of the ABTS radical scavenging capacity of the extracts. Firstly, the reaction solution was prepared by mixing a solution of ABTS (7 mM) with a solution of potassium persulfate (2.45 mM) at room temperature for 12 to 16 h. The optical density of the reaction solution containing the ABTS radical produced was calibrated to 0.70 ± 0.02 at 734 nm before use. Secondly, the extracts (40 μL) were serially diluted with methanol, followed by the addition of the ABTS radical solution (160 μL), and the optical density was measured after 5 min at 734 nm using a microplate reader (Epoch, BioTek). Two positive controls (trolox and ascorbic acid) were used. Methanol plus ABTS radical was used as negative control, while extract without ABTS was considered as the blank. The percentage of ABTS scavenging capacity was calculated at each concentration according to formula (1) above, and the inhibitory concentration (IC50) values were determined as indicated in the previous paragraph.

Anti-inflammatory assays
Nitric oxide inhibitory assay
The method published by Dzoyem and Eloff [23] was used to determine the nitric oxide inhibitory activity of the extracts. The RAW 264.7 macrophages were obtained from the American Type Culture Collection (ATCC) (Rockville, MD, USA), and were grown at 37°C with 5% CO2 in a humidified environment in Dulbecco's Modified Eagle's Medium (DMEM) high glucose (4.5 g/L) containing L-glutamine (4 mM) and sodium pyruvate (Hyclone™) supplemented with 10% (v/v) fetal bovine serum (Capricorn Scientific Gmbh, South America) and 1% penicillin-streptomycin-fungizone (PSF). Nitric oxide (NO) production by RAW 264.7 macrophages was measured using the Griess reagent (Sigma Aldrich, Germany) after 24 h of lipopolysaccharide (LPS) stimulation in the presence or absence of the extracts or quercetin used as positive control.
Briefly, the RAW 264.7 macrophages were inoculated at a density of 2 × 10 4 cells per well in 96 well-microtitre plates, and the cells were left overnight to allow attachment to the bottom of the plate. The cells were treated with different concentrations of the extracts dissolved in DMSO with the final concentration of DMSO not exceeding 0.5%. Thereafter, the cells were stimulated by addition of LPS at a final concentration of 1 μg/mL per well. The cells treated with only LPS were considered as the negative control. After 24 h of incubation at 37°C with 5% CO 2 in a humidified environment, the supernatant (100 μL) from each well of the 96-well microtitre plates were transferred into new 96-well microtitre plates, and an equal volume of Griess reagent (Sigma Aldrich, Germany) was added. The mixture was left in the dark at room temperature for 15 min, and the absorbance was determined at 550 nm on a microplate reader (Synergy Multi-Mode Reader, BioTek). The quantity of nitrite was determined from a sodium nitrite standard curve. The percentage of NO inhibition was calculated based on the ability of each extract to inhibit nitric oxide production by RAW 264.7 macrophages compared with the control (cells treated with LPS without extract). In addition, the cell viability was determined using the 3-(4,5-dimethythiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay [24]. The culture medium was aspirated from the plates, and replaced by fresh medium (200 μL) with 30 μL of thiazolyl blue tetrazolium bromide (5 mg/mL) dissolved in phosphate buffered saline. After incubation for 4 h, the medium was gently aspirated, and the formazan crystals were dissolved in 50 μL of DMSO and kept in the dark for 15 min at room temperature. The absorbance was measured spectrophotometrically at 570 nm on a microplate reader (Synergy Multi-Mode Reader, BioTek). Inhibition of soybean 15-lipoxygenase (15-LOX) enzyme The assay was performed according to the procedure of Pinto et al. [25] with slight modifications to the microtitre plate format. The assay is based on the formation of the complex (2) below. The IC 50 values of extracts or quercetin, which represent the concentration leading to 50% inhibition were calculated using the non-linear regression curve of the percentage (15-LOX) inhibition against the logarithm of concentrations tested. Cytotoxicity assay Cell culture The four cancer cell lines (MCF-7: human breast adenocarcinoma cells; HeLa: human cervix adenocarcinoma cells; Caco-2: human epithelial colorectal adenocarcinoma cells; A549: human epithelial lung adenocarcinoma cells) were obtained from the American Type Culture Collection (ATCC) (Rockville, MD, USA). These cells were grown at 37°C with 5% CO 2 in a humidified environment in Dulbecco' s Modified Eagle' s Medium (DMEM) high glucose (4.5 g/L) containing L-glutamine (4 mM) and sodium pyruvate (Separations, RSA) supplemented with 10% (v/v) fetal bovine serum (Capricorn Scientific Gmbh, South America). Non-cancerous African green monkey (Vero) kidney cells (obtained from ATCC) were maintained at 37°C and 5% CO 2 in a humidified environment in Minimal Essential Medium (MEM) containing L-glutamine (Lonza, Belgium) supplemented with 5% fetal bovine serum (Capricorn Scientific Gmbh, South America) and 1% gentamicin (Virbac, RSA). Cell treatment and assay procedure The cells were seeded at a density of 10 4 cells per well on 96-well microtitre plates, and were left overnight to allow attachment. 
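The nitrite quantification from the sodium nitrite standard curve and the resulting percentage NO inhibition described above can be sketched as follows; the standard-curve values and well absorbances are placeholders for illustration, not measured data.

```python
import numpy as np

# Sodium nitrite standard curve: absorbance at 550 nm versus concentration (uM).
# All values below are placeholders, not the measured standards.
std_conc = np.array([0.0, 6.25, 12.5, 25.0, 50.0, 100.0])
std_abs = np.array([0.05, 0.11, 0.17, 0.30, 0.55, 1.05])
slope, intercept = np.polyfit(std_abs, std_conc, 1)   # concentration as a function of absorbance

def nitrite_um(absorbance):
    """Convert Griess absorbance readings to nitrite concentration via the standard curve."""
    return slope * np.asarray(absorbance, float) + intercept

# Triplicate wells: LPS-only control versus LPS + extract at one test concentration (placeholders).
control = nitrite_um([0.82, 0.80, 0.85]).mean()
treated = nitrite_um([0.41, 0.44, 0.39]).mean()
print(f"NO inhibition: {(1 - treated / control) * 100:.1f}% of the LPS-stimulated control")
```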
After this, the cells were treated with different concentrations of extracts dissolved in dimethyl sulfoxide (DMSO), and further diluted in fresh culture medium. In each experiment, the highest concentration of DMSO (negative control) in the medium was 0.5%. After incubation for 48 h at 37°C with 5% CO 2 , the culture medium was discarded, and replaced by fresh medium (200 μL) with 30 μL of thiazolyl blue tetrazolium bromide (5 mg/mL) dissolved in phosphate buffered saline. The medium was gently aspirated after 4 h of incubation, and the formazan crystals were dissolved in 50 μL of DMSO, and kept in the dark for 15 min at room temperature. The absorbance was measured spectrophotometrically at 570 nm on a microplate reader (Synergy Multi-Mode Reader, BioTek). The viability of cells treated with the extracts was calculated for each concentration compared to the negative control. The 50% inhibitory concentrations (IC 50 ) for cancer cell lines and the 50% lethal concentrations (LC 50 ) for the non-cancerous cells were determined by plotting the non-linear regression curve of percentage of cell survival versus the logarithm of concentrations of each extract. The selectivity index (SI) values were calculated for each extract by dividing the LC 50 of the non-cancerous cell against the IC 50 of each cancer cell type in the same units. Evaluation of the induction of apoptosis on cancer cells The induction of apoptosis by the most active extracts from each plant was evaluated by measuring the caspase 3/7 activity on different cancer cell lines with the Caspase-Glo® 3/7 assay kit (Promega). All four cancer cell lines were seeded at a density of 10 4 cells per well on 96-well microtitre plates, and were allowed to adhere overnight. These cells were treated with the extracts at different concentrations (½ × IC 50 , IC 50 and 2 × IC 50 ) or DMSO (0.5%) as negative control, and the plates were incubated at 37°C with 5% CO 2 for 24 h. After treatment, the Caspase-Glo® 3/7 was prepared according to manufacturer' s guidelines, and 100 μL of the reagent was added per well and incubated for 1 h at room temperature in the dark. Following this incubation, the luminescence was measured on a microplate reader (Synergy Multi-Mode Reader, BioTek). The data was analysed, and expressed as percentage of the untreated cells (control) and fold change. Statistical analysis All experiments were performed in triplicate, and the results are presented as mean ± standard error of mean (SEM) values. Statistical analysis was carried out with GraphPad Instat 3.0 software. The Student-Newman-Keuls test was used to determine P-values for the differences observed between the extracts while Dunnett's test was used to compare the extracts with the control. Results were considered significantly different when P< 0.05. Yield of extraction and phytochemical content of crude extracts The voucher specimen numbers (PRU) and the yield of extraction of each plant material in a particular solvent are summarized in Table 1. The highest yield of extraction was observed with C. pseudopulchellus with all the three solvents used. Extraction with ethanol had the highest yield of extraction among the plant species. The phytochemical content of all extracts is presented in Table 2, and significant differences have been noted between total phenolic content (TPC) and total flavonoid content (TFC) of the plant materials extracted with the three solvents used. Organic solvents (acetone and ethanol) extracted more of these compounds compared to water. 
The acetone leaf extract of C. gratissimus had the highest TPC with 222.29 mgGAE/g whereas the highest TFC was obtained with the acetone and ethanol leaf extracts of C. sylvaticus with 82.76 and 84.54 mgQE/g respectively. Antioxidant activity of extracts Two antioxidant assays which involved the measurement of colour disappearance caused by free radicals such as DPPH and ABTS were used. As expected, the free radical scavenging activity of the extracts was concentration-dependent (data not shown) and the IC 50 values determined are presented in Table 2. The antioxidant activity varies within extracts from the same plant and between extracts from different plants. It should be noted that a lower IC 50 value indicates a stronger antioxidant potency of the sample tested. Therefore, the ethanol leaf extracts from all the three plants have good antioxidant potency when compared with acetone and water extracts from the same plant. Among all the extracts from the three plants, the ethanol leaf extract of C. gratissimus had the highest antioxidant potency with IC 50 values of 32.18 and 34.95 μg/mL respectively for the DPPH and ABTS radical scavenging activity. Ascorbic acid and trolox, known as potent antioxidant compounds, had the best antioxidant potency with IC 50 values of 1.92 and 3.92 μg/mL (ascorbic acid); 2.21 and 4.64 μg/mL (trolox) respectively for the DPPH and ABTS radical scavenging activity ( Table 2). Anti-inflammatory activity of extracts The anti-inflammatory activity of leaf extracts was determined using the nitric oxide (NO) and 15-lipoxygenase (15-LOX) inhibitory assays. Nitric oxide inhibitory effect of extracts on LPS-stimulated RAW 264.7 macrophages All the extracts from the three Croton species had inhibitory activity on NO production in a concentration-dependent manner ( Fig. 1a and b). Water leaf extracts of the three plants had the lowest NO inhibitory effect except for the water extract from C. gratissimus that had a good inhibitory activity. Acetone and ethanol leaf extracts of the plants had the highest NO inhibitory activity compared with their respective water leaf extracts. The IC 50 values were calculated, and are presented in Table 2. Acetone leaf extracts from the three plants had the lowest IC 50 values, which are not significantly different from the IC 50 values obtained for the ethanol leaf extracts. However, the acetone leaf extract of C. pseudopulchellus had an IC 50 value (34.64 μg/mL) significantly (P < 0.05) lower than the IC 50 of the ethanol extract (53.49 μg/mL) from the same plant. The acetone leaf extract of C. pseudopulchellus therefore had the highest NO inhibitory potency. Quercetin, used as positive control, had the highest NO inhibitory potency with IC 50 of 5.82 μg/mL. The cell viability of LPS-stimulated RAW 264.7 macrophages after treatment with the extracts and quercetin is presented in Fig. 1c. The acetone and ethanol leaf extracts as well as quercetin were slightly cytotoxic on LPS-stimulated RAW 264.7 macrophages with percentage of cell viability varying between 62 and 96%. The water leaf extracts were less cytotoxic with cell viability greater than 76% at the highest concentration (100 μg/mL) tested. 
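For clarity, the percentage NO inhibition values summarized above follow directly from the Griess absorbance readings and the sodium nitrite standard curve described in the methods; the sketch below illustrates the calculation with hypothetical absorbance values.

```python
# Minimal sketch of nitrite quantification and % NO inhibition from Griess readings;
# the standard-curve points and sample absorbances below are hypothetical.
import numpy as np

# Sodium nitrite standard curve: absorbance at 550 nm vs nitrite concentration (uM)
std_conc = np.array([0, 12.5, 25, 50, 100])
std_abs = np.array([0.05, 0.12, 0.20, 0.36, 0.68])
slope, intercept = np.polyfit(std_conc, std_abs, 1)     # linear standard curve

def nitrite_um(absorbance):
    return (absorbance - intercept) / slope

a550_lps_only = 0.52      # LPS-stimulated control (no extract)
a550_treated = 0.21       # LPS + extract at a given concentration

no_control = nitrite_um(a550_lps_only)
no_treated = nitrite_um(a550_treated)
percent_inhibition = (1 - no_treated / no_control) * 100
print(f"NO inhibition: {percent_inhibition:.1f}%")
```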
Lipoxygenase inhibitory activity of extracts
The ferrous oxidation-xylenol orange (FOX) assay was used to determine the 15-lipoxygenase inhibitory activity of different extracts from the three Croton species, and the IC50 values were determined using the non-linear regression curves (Additional file 1: Figure S1); the results are presented in Table 2. All the extracts except the water extracts had better inhibitory activity against 15-lipoxygenase when compared to the positive control (quercetin). The IC50 values of the active extracts (acetone and ethanol) from the three plants varied between 0.57 and 11.64 μg/mL, which is significantly (P < 0.05) different from quercetin (24.60 μg/mL). Ethanol leaf extracts were more active than acetone leaf extracts from the same plant species, thus suggesting that ethanol extracted more anti-lipoxygenase compounds than acetone. The highest lipoxygenase inhibitory activity was obtained with the ethanol leaf extract of C. pseudopulchellus (IC50 of 0.57 μg/mL).
Selective cytotoxic effect of extracts on a non-cancerous cell versus cancerous cells
Different extracts were tested for cytotoxicity against four cancerous (A549, Caco-2, HeLa and MCF-7) cell types as well as the non-cancerous African green monkey (Vero) kidney cells, and the graphs of cell viability against the concentrations tested are presented in Additional file 2: Figure S2, Additional file 3: Figure S3, Additional file 4: Figure S4, Additional file 5: Figure S5 and Additional file 6: Figure S6, respectively. The LC50 and IC50 values of extracts were determined from the concentration-dependent graphs, and are presented in Table 3. Water leaf extracts had the lowest cytotoxic effect on both non-cancerous and cancerous cells, with LC50 or IC50 greater than 533.33 μg/mL and 200 μg/mL, respectively. An exception was observed with the water leaf extract of C. sylvaticus, which had good cytotoxicity (IC50 of 45.62 μg/mL) on MCF-7 cells with a promising selectivity index greater than 21.92 (see Table 3). On the other hand, ethanol leaf extracts of C. pseudopulchellus and C. sylvaticus were more cytotoxic on both cancerous and non-cancerous cells (Table 3). In addition, the ethanol leaf extract and acetone leaf extract of C. sylvaticus were highly selective against A549 and MCF-7 cells, with SI of 4.70 and 2.12, respectively. The same observation was made with the acetone leaf extract of C. pseudopulchellus.
Induction of caspase-dependent apoptosis by active extracts on cancerous cells
In this assay, acetone leaf extracts of the three Croton species were used based on their high selectivity indexes or lower cytotoxicity to non-cancerous cells compared to other extracts. The activation of caspase-3 and -7 enzymes was differentially observed in all four cancerous cell types treated with the active extracts compared to the untreated controls (see Fig. 2). Caspase-3 and -7 enzymes were better activated after treatment with acetone leaf extracts of the three plants on HeLa and MCF-7 cells. The activation of these enzymes was also observed on A549 and Caco-2 cells only after treatment with the acetone leaf extracts of C. pseudopulchellus and C. gratissimus (Fig. 2b and c). These two extracts significantly (P < 0.05) induced caspase-3 and -7 activity in all cancerous cells at concentrations of ½ × IC50 (1.24 to 1.56-fold change). A non-significant increase of the activity of caspase-3 and -7 was noted after treatment with acetone leaf extracts of C. sylvaticus on A549 and MCF-7 cells (1.10 to 1.13-fold change).
The acetone leaf extract of C. gratissimus induced activation of caspase − 3 and − 7 activity in a concentration-dependent manner on HeLa cells (Fig. 2c), and the highest induction (1.83-fold change) was obtained at the concentration of 2 x IC 50 . Discussion Our study aimed to evaluate the antioxidant, anti-inflammatory and cytotoxic activities of three Croton species. The ethanol leaf extracts of the three plants were highly active in all experiments (except the NO inhibitory activity) compared to acetone and water leaf extracts. These results suggested that the antioxidant, anti-inflammatory and cytotoxic compounds extracted from the three plants are more concentrated in the ethanol leaf extract than in the acetone or water leaf extracts. We also investigated the potential relationship between the antioxidant, anti-inflammatory and cytotoxic activities of the active ethanol and acetone extracts. This relationship was analysed by determining the Pearson correlation coefficients (r) after plotting a linear curve with IC 50 values of each cancer cell on the y-axis against phytochemical content or IC 50 values of the antioxidant power (DPPH, ABTS) and anti-inflammatory activity (NO, 15-LOX) on the x-axis (Table 4). A linear correlation (r<0.5) existed between antioxidant, anti-inflammatory and cytotoxic activities, although this correlation was considered to be less strong. In fact, free radicals are well known to play a major role in the development of oxidative stress that can lead to many illnesses including cardiovascular diseases, diabetes, inflammation, degenerative diseases, and cancer [26]. Nitric oxide (NO), a molecule playing a crucial role in inflammatory response, can react with free radicals such as superoxides to produce peroxynitrites that can cause irreversible damage to cell membranes leading to the promotion of tumor growth and proliferation [27]. In addition, natural inhibitors of lipoxygenases have been shown to suppress carcinogenesis and tumor growth in a number of experimental models [28]. Moreover, several scientific reports have suggested that antioxidant and anti-inflammatory agents could be beneficial in the prevention and treatment of cancer [29]. Our results therefore suggest that the antioxidant or anti-inflammatory activities of extracts may contribute moderately to their cytotoxic activity. Phenolics and flavonoids are known for their contribution either directly or indirectly to the cytotoxic activity. In our study, we noted that the acetone and ethanol extracts of C. gratissimus which had the highest total phenolic contents (222.29 and 180.61 mgGAE/g respectively) were selectively cytotoxic to cancerous cells compared to non-cancerous. Indeed, due to their anti-and pro-oxidant potential, phenolics (which also include flavonoids) may have cytotoxic activity against different human cancer cells with little or no effect on normal cells. This selectivity in the cytotoxicity properties of phenolics has strengthened interest in formulating novel and less toxic anticancer products based on these types of compounds [30,31]. The goal of any chemotherapeutic treatment is to selectively attenuate or destroy pathogenic micro-organisms or cancerous cells with minimal side effects to the host cells [32]. This principle, known as selective toxicity, is the key to all chemotherapeutic treatment. In this study, the acetone and ethanol extracts of C. 
gratissimus were more selective with SI ranging between 1.91 and 6.25, and it therefore indicates that these extracts may be useful in the search for anticancer compounds. A cembranolide isolated from stem bark of Croton gratissimus had moderate activity against PEO1 and PEO1TaxR ovarian cancer cell lines [16]. In the present work, four cancerous (A549, Caco-2, HeLa, MCF-7) cells and a non-cancerous (Vero) cell line were used to evaluate the antiproliferative activity of the crude extracts from three Croton species. The use of these cancerous cells with the non-cancerous (Vero) cell line as cell models has been reported for comparison and determination of the selectivity indexes [33,34]. However, the cytotoxic effect on this non-cancerous (Vero) cell line of animal origin needs to be confirmed on other non-cancerous cells of human origin. The selective toxicity of acetone and ethanol extracts of C. gratissimus also suggested that the active compounds interact with special cancer-associated receptors or cancer cell special molecule (not found in non-cancerous cells), thus activating some mechanisms that cause cancer cell death [35]. The activation of caspase − 3 and − 7 enzymes was observed in all four of the cancer cell types treated with the active extracts compared to the untreated cells, which therefore reveals that apoptosis has taken place in the treated cells. Indeed, caspases − 3, and − 7 are known as "executioners" of apoptosis since they serve as substrates for initiator caspases in extrinsic or intrinsic apoptotic pathways [36]. It will be important to comprehensively investigate the mechanism of the activity, and this aspect will be addressed once the compounds responsible for the activity have been isolated. The aim of the current study was to explore the possibility that extracts have inhibitory activity on cancer cell growth. According to the United States National Cancer Institute, a crude extract is generally considered to have in vitro cytotoxic activity if the IC 50 is lower than 30 μg/mL [37]. Based on this statement, acetone and ethanol extracts of C. pseudopulchellus and C. sylvaticus were considered as more active on both cancerous A549 and MCF-7 cells. Differences in the selectivity indexes of these extracts on these two cancerous cells may be ameliorated through the isolation of active compounds which might reduce the toxic effects of the crude extracts. Studies are ongoing to isolate active compounds from these active extracts. Conclusion In summary, due to their selective toxicity between noncancerous and cancerous cells, with beneficial antioxidant and anti-inflammatory activities, the acetone and ethanol leaf extracts of Croton gratissimus may be useful against cancer and other oxidative stress related diseases. The isolation of active compounds from this extract will be of great interest to fully understand the mechanism of anticancer activity. In addition, acetone and ethanol extracts of C. pseudopulchellus and C. sylvaticus, which were cytotoxic to both cancerous and non-cancerous cells, may be further explored as sources of new cytotoxic compounds.
Detection of the DNA methylation of seven genes contribute to the early diagnosis of lung cancer Background Low-dose Computed Tomography (CT) is used for the detection of pulmonary nodules, but the ambiguous risk evaluation causes overdiagnosis. Here, we explored the significance of the DNA methylation of 7 genes including TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2 in the blood cfDNA samples in distinguishing lung cancer from benign nodules and healthy individuals. Method A total of 149 lung cancer patients [72 mass and 77 ground-glass nodules (GGNs)], 5 benign and 48 healthy individuals were tested and analyzed in this study. The lasso-logistic regression model was built for distinguishing cancer and control/healthy individuals or IA lung cancer and non-IA lung cancer cases. Results The positive rates of methylation of 7 genes were higher in the cancer group as compared with the healthy group. We constructed a model using age, sex and the ΔCt value of 7 gene methylation to distinguish lung cancer from benign and healthy individuals. The sensitivity, specificity and AUC (area under the curve) were 86.7%, 81.4% and 0.891, respectively. Also, we assessed the significance of 7 gene methylation together with patients’ age and sex in distinguishing of GGNs type from the mass type. The sensitivity, specificity and AUC were 77.1%, 65.8% and 0.753, respectively. Furthermore, the methylation positive rates of CDO1 and SHOX2 were different between I-IV stages of lung cancer. Specifically, the positive rate of CDO1 methylation was higher in the non-IA group as compared with the IA group. Conclusion Collectively, this study reveals that the methylation of 7 genes has a big significance in the diagnosis of lung cancer with high sensitivity and specificity. Also, the 7 genes present with certain significance in distinguishing the GGN type lung cancer, as well as different stages. Supplementary Information The online version contains supplementary material available at 10.1007/s00432-023-05588-z. Introduction Lung cancer is the leading cause of cancer-related mortality globally, with about 2.2 million incidences and 1.8 million deaths in 2020 (Hughes et al. 2022).Late diagnosis is largely responsible for its extremely high mortality rate (Ji et al. 2023).The 5-year survival rate for patients with stage I disease is about 81%-85% while it decreases in 15%-19% for patients with higher stages (Begum et al. 2011;Blandin Knight et al. 2017).Therefore, early diagnosis of lung cancer is important, which can help improve the outcome of patients. Low-dose Computed Tomography (CT) is widely used for detection of pulmonary nodules, but the ambiguous risk evaluation often causes overdiagnosis and radioactivity.To this end, researchers have made efforts all the time in seeking blood markers for early diagnosis of lung cancer, with the most intensively investigated biomarkers including squamous cell carcinoma antigen (SCC-Ag), cytokeratin 19 fragment (CYFRA 21-1), carcinoembryonic antigen (CEA), and neuron-specific enolase (NSE) (Hu et al. 
2023).Regretfully, the low sensitivity decreases the performances of those biomarkers.With the increased research focusing on the field of epigenetics, which regulates gene expression without altering the DNA sequence, its crucial roles in the diagnosis of lung cancer have been largely uncovered.DNA methylation is a well-known epigenetic alteration that involves the covalent addition of a methyl group to the cytosine residue of CpG dinucleotides, leading to transcriptional repression (Ansari et al. 2016).In lung cancer, several studies have published their data to support the potentially high values of gene methylation in the early diagnosis of lung cancer using the circulating free DNA (cfDNA) samples.For instance, Hu et al. (2023) developed a "7-DMR model" (7 differentially methylated genes (HOXB4, HOXA7, HOXD8, ITGA4, ZNF808, PTGER4, and B3GNTL1) to distinguish lung cancers from benign nodules, achieving the sensitivities of 89%/92%, specificities of 94%/100%, and accuracies of 90%/94% in the discovery cohort and validation cohort.Chen et al. (2020) demonstrated that the combination of CDO1, SOX17, and HOXA7 had the ability in distinguishing the smallest lung nodules among 1.1-2.0cm (sensitivity 74%; specificity, 93%), while the combination of CDO1, TAC1, and SOX17 was best in tumor sizes < 1.0 cm (sensitivity 71%; specificity, 82%).However, the performance of these models needs to be improved, leaving the combination of gene methylation panel as a problem demanding prompt solution. Sample collection From July 2022 to December 2022, 237 blood samples collected from 237 individuals were included in this study, including 92 patients with mass diseases, 92 patients with groundglass nodules (GGNs) and 53 healthy individuals.Inclusion criteria: aged > 18 years old; individuals with pulmonary nodule (for mass and GGN individuals); signed the informed consent; Exclusion criteria: combined with other tumors. Lung cancer patients who received any pretreatment therapy, including chemotherapy or radiotherapy, or had a history of other malignancies were not included.All patients received curative-intent resection.The blood sample was obtained from each patient prior to surgery and was immediately processed to isolate plasma.All patients with pathologically confirmed malignant lesions were staged according to the revised TNM guidelines classification criteria (Detterbeck et al. 2017).Patients with lung cancer were included as cancer group, those with histologically benign lesions as the control group.Plasma samples of 49 healthy volunteers were also considered as the control group. DNA isolation and quantitative multiplex methylation-specific PCR (qMSP) 3 mL plasma was collected from each individual and the cellfree nucleic acid was extracted using the plasma-free DNA extraction kit (Shanghai Rightongene Biotechnology Co. Ltd., Shanghai, China) based on the manufacturer's directions.Then, the DNA was eluted by 60 μL eluent buffer, which was used as a template for subsequent experiments.DNA concentration and purity were evaluated using a Qubit 3.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA).DNA was bisulfite-converted using the DNA Methylation kit (Shanghai Yuanqi, Shanghai, China).For methylation analysis, EpiTect MethyLight Master Mix (Qiagen) was used, together with fluorescent dye- (Chen et al. 
2020) labeled probes, 50 ng of bisulfite-converted DNA and 100-300 nM of each primer. The DNA methylation of 7 genes (TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2) was detected in three multiplex qMSP assays, with β-actin (ACTB) as the reference gene. ΔCt was calculated as follows: ΔCt = Ct(target gene) - Ct(reference gene). The mixture DNA sample extracted from NCI-H596 and NCI-H460 at a ratio of 1:1 was used as a positive control. Buffy-coat gDNA extracted from the blood samples of healthy individuals and verified by Sanger sequencing was used as the negative control. The primers for DNA methylation were synthesized according to an applied patent (No. 2022114063829) and the number of CpGs covered is listed in Supplementary Table 1. The sample was considered as successfully detected when the Ct value of the reference gene (ACTB) was < 35. Based on this, a gene was defined as methylated when its Ct value was < 42. The Ct value was defined as 45 for a negative (unmethylated) gene in a sample, as the cycle number of the PCR assay was set at 45.
Construction of models for lung cancer diagnosis and IA stage prediction
A lasso-logistic regression model was built for distinguishing cancer from control/healthy individuals, or IA lung cancer from non-IA lung cancer cases. The model was visualized by receiver operating characteristic (ROC) curves and assessed through the area under the curve (AUC). Logistic regression analysis was performed in R open-source software version 4.0.2 and the pROC package was implemented for ROC analysis. Considering the variables including age, sex, and the ΔCt values or methylation status of the 7 genes, we constructed the model with the best performance to distinguish lung cancer patients from the benign and healthy individuals, with 5 benign and 48 healthy individuals as the control group. The formula was as follows: pre = -0.055613 × age + 0.044842 × ΔCt(TAC1-FAM) + 0.033004 × ΔCt(HOXA9) + 0.055091 × ΔCt(ZFP42) + 0.014456 × ΔCt(RASSF1A) - 0.021013 × ΔCt(SHOX2). For the IA stage prediction model, the formula was as follows: pre = 0.02510 × age - 0.57283 × sex(male = 1) + 0.37636 × CDO1(positive = 1) + 0.376358 × ZFP42(positive = 1) + 0.17867 × SOX17(positive = 1) - 0.16980 × RASSF1A(positive = 1) + 0.59558 × SHOX2(positive = 1).
Statistical analysis
All other statistical analyses were performed in IBM SPSS Statistics software for Windows version 24.0 (IBM Corporation, Armonk, NY, USA). Reported P values were 2-sided. P < 0.05 was considered to be significantly different. * represents P < 0.05, ** represents P < 0.01, and *** represents P < 0.001.
Patient characteristics
A total of 237 blood samples collected from 237 individuals were included in this study, including 92 patients with mass diseases, 92 patients with GGNs and 53 healthy individuals, among which 202 samples were tested successfully and included in the subsequent analysis. In detail, 74 cases of the mass group were tested successfully, including 72 patients with lung cancer and 2 patients with benign nodules; 80 cases of the GGNs group were tested successfully, including 77 patients with lung cancer and 3 patients with benign nodules; and 48 cases of the healthy group were tested successfully (Fig. 1). In total, 149 lung cancer patients were included, comprising 72 patients from the mass group and 77 patients from the GGNs group. As shown in Table 1, 51 (70.8%) and 21 (29.2%) male cases were found among the lung cancer patients from the mass and GGNs groups, respectively. Compared with the mass group, more female patients were in the GGNs group (53.2% vs.
29.2%, P = 0.006), together with a lower average age (60.7 ± 10.6 vs. 67.5 ± 8.8, P < 0.0001). In addition, more squamous carcinoma cases were found in the lung cancer patients from the mass group, together with fewer adenocarcinoma cases according to the histopathology (P < 0.0001). Moreover, 24 cases (35.8%) were diagnosed with IA stage among lung cancer patients from the mass group, while this increased to 47 (70.1%) for lung cancer patients from the GGNs group (P < 0.0001). Taken together, the clinical features such as sex, age, histopathology and pTNM stage were significantly different between the lung cancer patients from the mass and GGNs groups.
The value of 7 gene methylation in the diagnosis of lung cancer
Then, we explored the diagnostic value of the DNA methylation status of 7 genes including TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2 in lung cancer. The positive rates of the methylation of all 7 genes were significantly higher in the cancer group as compared with the healthy group (Fig. 2A). All 5 cases (100%) with benign nodules were positive for TAC1 methylation, while 2 (40.0%), 1 (20.0%), 3 (60.0%), 3 (60.0%), 2 (40.0%) and 0 (0%) of the 5 cases were positive for the methylation of CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2, respectively (Fig. 2A). In addition, no obvious difference in the DNA methylation status of the 7 genes was observed in lung cancer patients with different histopathology, as shown in Fig. 2B. Subsequently, we assessed the performance of each single gene in the diagnosis of lung cancer. ROC curves showed the AUC of a single gene was not good (0.546-0.716), as shown in Supplementary Fig. 1A. Thus, we constructed a model to distinguish lung cancer patients from benign and healthy individuals using the methylation status of the 7 genes. The 5 benign and 48 healthy individuals were considered as the control group. Using logistic regression, the model was constructed using the ΔCt values of the 7 genes together with patient's age and sex (male = 0, female = 1), achieving a sensitivity, specificity, and AUC of 86.7%, 81.4% and 0.891, respectively (Fig. 3A). These results revealed a potential role of 7 gene methylation in the diagnosis of lung cancer.
The value of 7 gene methylation in distinguishing the GGNs type of lung cancer from mass type
Moreover, we compared the DNA methylation status of the 7 genes (TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2) in lung cancer cases from the mass and GGNs groups. Compared with lung cancer of GGN origin, patients with mass-type lung cancer showed higher positive rates of CDO1 (P = 0.006) and RASSF1A (P = 0.08) methylation (Fig. 2C). Also, we assessed the performance of each single gene in distinguishing the two groups. ROC curves showed the AUC of a single gene was not good (0.421-0.789), as shown in Supplementary Fig. 1B. We then constructed a model to predict whether a lung cancer patient belonged to the mass or GGN group. Using logistic regression, the model was constructed using the methylation status of the 7 genes together with the patient's age and sex (male = 0, female = 1), with a sensitivity, specificity, and AUC of 77.1%, 65.8% and 0.753, respectively (Fig. 3B). These results revealed a potential role of the methylation of the 7 genes in distinguishing GGNs type lung cancer from mass type.
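To make the use of the published coefficients concrete, the sketch below applies the diagnostic formula given in the methods to hypothetical patients and evaluates discrimination with an AUC. The example ΔCt values and labels are assumptions, the published formula has no stated intercept or threshold, and the original analysis was performed in R with the pROC package rather than in Python.

```python
# Sketch of applying the reported lasso-logistic score and computing an AUC;
# the patient data below are hypothetical and no intercept/threshold was published.
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnosis_score(age, dct):
    """Linear predictor from the published formula (dct maps gene -> ΔCt)."""
    return (-0.055613 * age
            + 0.044842 * dct["TAC1"]
            + 0.033004 * dct["HOXA9"]
            + 0.055091 * dct["ZFP42"]
            + 0.014456 * dct["RASSF1A"]
            - 0.021013 * dct["SHOX2"])

# Hypothetical cohort: (age, ΔCt values, label) with label 1 = lung cancer
cohort = [
    (67, dict(TAC1=8.2, HOXA9=7.5, ZFP42=9.1, RASSF1A=10.3, SHOX2=11.0), 1),
    (59, dict(TAC1=14.8, HOXA9=13.9, ZFP42=15.2, RASSF1A=16.1, SHOX2=15.5), 0),
    (72, dict(TAC1=7.9, HOXA9=8.8, ZFP42=8.4, RASSF1A=9.7, SHOX2=12.2), 1),
    (55, dict(TAC1=15.5, HOXA9=14.2, ZFP42=16.0, RASSF1A=15.8, SHOX2=16.3), 0),
]
scores = np.array([diagnosis_score(a, d) for a, d, _ in cohort])
labels = np.array([y for _, _, y in cohort])

# pROC chooses the score direction automatically; with scikit-learn, an AUC below 0.5
# indicates the score orientation is reversed relative to the labels.
print("AUC on the toy cohort:", roc_auc_score(labels, scores))
```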
The value of 7 gene methylation in the diagnosis of the early stage of lung cancer
Prediction of the tumor size concerns the resection range; thus, it is essential to accurately predict the tumor size before surgery. Here, we compared the DNA methylation status of the 7 genes including TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2 in lung cancer cases with I-IV stages. A total of 91, 28, 15 and 1 patients with I, II, III and IV stages of lung cancer were included in this analysis, respectively. The results showed that the methylation rates of CDO1 and SHOX2 differed significantly between the I-IV stages of lung cancer (Fig. 2D). In addition, we compared the methylation status of these 7 genes in patients with IA and non-IA stage. The positive rate of CDO1 methylation was significantly higher in the non-IA group as compared with the IA group (42.2% vs. 21.1%, P = 0.014), while the methylation status of the other 6 genes showed no significant difference (Fig. 4). This result suggested that gene methylation may help identify lung cancer patients at different stages, which may have guiding value for the resection range of lung cancer.
Discussion
In this study, we explored the significance of the DNA methylation of 7 genes including TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2 in blood cfDNA samples in distinguishing lung cancer from benign nodules and healthy individuals. Our results first reveal that the methylation of these 7 genes has considerable value in the diagnosis of lung cancer and yielded a diagnostic model with high sensitivity and specificity. With the improvement in CT scanners and the increasing awareness of physical examination, more pulmonary nodules are identified, in 1.6 million patients per year in the US (Mazzone and Lam 2022). At least 95% of all pulmonary nodules identified are benign, most often granulomas or intrapulmonary lymph nodes (Sun et al. 2020). This, together with the radiation exposure caused by CT, drives the development of ideal biomarkers, which are expected to be found in biological fluids for the non-invasive diagnosis of cancers, including lung cancer (Li et al. 2022b). Some lung cancer-related markers, including CEA, carbohydrate antigen 125 (CA125), cytokeratin 19 fragment (CY211), NSE, and SCC, have been widely reported. Among these biomarkers, the combination of CEA, CA125, CY211 and SCC showed the best performance with a sensitivity of 83.3%, a specificity of 62.9% and an AUC of 0.867 (Yang et al. 2018). In addition, Muller et al. (2017) constructed a model that includes the variables related to smoking history and nicotine addiction, medical history, family history of lung cancer, and lung function (forced expiratory volume in 1 s [FEV1]) with excellent discrimination (concordance (c)-statistic = 0.85). Ajona et al. (2021) developed a diagnostic model based on the quantification in plasma of complement-derived fragment C4c, CYFRA 21-1 and C-reactive protein (CRP) with an AUC of 0.86 and a specificity of 92%. Among the multiple biomarkers, DNA methylation shows good performance (P. Li et al. 2022a; Magenheim et al. 2022; Liang et al.
2021).SOX17, TAC1, CDO1, HOXA9 and ZFP42 were the 5 genes that were identified in the Cancer Genome Atlas (TCGA) with highly prevalent DNA methylation in lung squamous and adenocarcinoma, but not in normal lung tissue (Cancer Genome Atlas Research, 2012;Wrangle et al. 2014;Diaz-Lagares et al. 2016).Hulbert et al. (Hulbert et al. 2017) reported that the combination of CDO1, TAC1 and SOX17 in plasma showed a sensitivity, specificity and AUC of 86%, 78% and 77% in the diagnosis of non-small cell lung cancer with stage I and IIA from the individuals with noncancer.Abou-Zeid et al. (Abou-Zeid et al. 2023) reported that the methylation level of HOXA9 was significantly higher in NSCLC patients than controls (P > 0.001).Liu et al. (Liu et al. 2017) used the Mate-analysis through the systematic literature search yielded a total of 33 studies including a total of 4801 subjects (2238 patients with lung cancer and 2563 controls) and covering 32 genes.Their findings demonstrated that SOX17 (sensitivity: 84%, specificity: 88%), CDO1 (sensitivity: 78%, specificity: 67%), ZFP42 (sensitivity: 87%, specificity: 63%) and TAC1 (sensitivity: 86%, specificity: 75%) were the superior genes.In addition, Gao et al. (Gao et al. 2022) reported that the promoter methylation level of SHOX2 and RASSF1A was significantly higher in tumor samples at stage I-II than that in normal samples.Thus, the 7 genes (TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2) were included in this study and considered as the study subjects.Recently, Hu et al. (Hu et al. 2023) constructed a noninvasive 7-DMR model (7 differentially methylated genes, HOXB4, HOXA7, HOXD8, ITGA4, ZNF808, PTGER4, and B3GNTL1) to discriminate lung cancers and non-lung cancers including benign lung diseases and healthy controls, with a sensitivity of 81% and a specificity of 98%.Herein, we explored the value of other 7 genes in the diagnosis of lung cancer and achieved an increased sensitivity (from 81% to 86.7%) as compared with the 7-DMR model.We focused on the model's sensitivity to distinguish lung cancer and benign lung diseases and healthy controls, as this model aimed to find the potential cancer patients whom were recommended for further examination to confirm cancers. Recently, the increased number of GGNs attracted unprecedented attention.GGNs can be further classified into pure GGN (pGGN) and part-solid nodule according to the presence of solid components.About 20% of lung adenocarcinomas manifested as pGGN and showed favorable prognosis as compared with solid lung cancer (Mazzone and Lam 2022; Chang et al. 2013;Heidinger et al. 2017).Thus, the identification of the solid or pGGN is of importance.Herein, we demonstrated the value of 7 genes together with patients' age and sex in distinguishing GGN and solid lung cancer. In addition, correct prediction of the size of pulmonary nodule is crucial for the following surgery, which directly concern the extent of surgical resection.To this end, we compared the DNA methylation status of 7 genes including TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2 in lung cancer cases with different stages.The results showed that the methylation rate of CDO1 and SHOX2 showed significantly different between the I-IV stages of lung cancer.In addition, the positive rate of CDO1 methylation was significantly higher in the non-IA group as compared with the IA group.These results indicated the CDO1 and SHOX2 methylation have a certain significance for tumor staging of lung cancer. 
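The IA versus non-IA comparison of CDO1 positivity discussed above can be re-checked with a standard two-by-two test; in the sketch below the group counts are only back-calculated approximations from the reported percentages (21.1% of 71 IA cases and 42.2% of 78 non-IA cases), not the study's raw data.

```python
# Approximate re-check of the CDO1 positivity comparison (IA vs. non-IA);
# counts are back-calculated from the reported percentages and are illustrative only.
from scipy.stats import fisher_exact, chi2_contingency

ia_total, non_ia_total = 71, 78            # 71 IA cases and 78 non-IA cases (149 total)
ia_pos = round(0.211 * ia_total)           # ~15 CDO1-methylation-positive IA cases
non_ia_pos = round(0.422 * non_ia_total)   # ~33 positive non-IA cases

table = [[ia_pos, ia_total - ia_pos],
         [non_ia_pos, non_ia_total - non_ia_pos]]
odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"Fisher exact P = {p_fisher:.3f}, chi-square P = {p_chi2:.3f}")
```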
Collectively, this study reveals that the methylation of the 7 genes (TAC1, CDO1, HOXA9, ZFP42, SOX17, RASSF1A and SHOX2) is of considerable value in the diagnosis of lung cancer, yielding a diagnostic model with high sensitivity and specificity. The 7 genes also show some value in distinguishing the GGN type of lung cancer from the mass type, as well as different stages. Further studies with larger sample sizes will be carried out to further explore the significance of DNA methylation in distinguishing the various stages of lung cancer.
Fig. 1 Flowchart for finding lung cancer candidate diagnostic biomarkers
Fig. 2 The positive rate of 7 gene methylation in different groups. A Benign, healthy and cancer patients. B Lung cancer patients with different histopathology. C Lung cancer of mass and GGN types. D Lung cancer patients with different stages. Genes marked in red show a significant difference in methylation positive rate between groups (P < 0.05)
Fig. 3 Evaluation of the accuracy of the diagnostic model of the combination of the DNA methylation of seven genes in lung cancer. ROC curves showed the sensitivity, specificity and AUC of these 7 genes
Fig. 4 The DNA methylation status of seven genes in different groups. The positive rate of CDO1 methylation was significantly higher in the non-IA group as compared with the IA group (42.2% vs. 21.1%, P = 0.014)
Table 1 The clinical information of the 149 lung cancer patients
A platform-independent AI tumor lineage and site (ATLAS) classifier
Histopathologic diagnosis and classification of cancer plays a critical role in guiding treatment. Advances in next-generation sequencing have ushered in new complementary molecular frameworks. However, existing approaches do not independently assess both site-of-origin (e.g. prostate) and lineage (e.g. adenocarcinoma) and have minimal validation in metastatic disease, where classification is more difficult. Utilizing gradient-boosted machine learning, we developed ATLAS, a pair of separate AI Tumor Lineage and Site-of-origin models from RNA expression data on 8249 tumor samples. We assessed performance independently in 10,376 total tumor samples, including 1490 metastatic samples, achieving an accuracy of 91.4% for cancer site-of-origin and 97.1% for cancer lineage. High confidence predictions (encompassing the majority of cases) were accurate 98–99% of the time in both localized and remarkably even in metastatic samples. We also identified emergent properties of our lineage scores for tumor types on which the model was never trained (zero-shot learning). Adenocarcinoma/sarcoma lineage scores differentiated epithelioid from biphasic/sarcomatoid mesothelioma. Also, predicted lineage de-differentiation identified neuroendocrine/small cell tumors and was associated with poor outcomes across tumor types. Our platform-independent single-sample approach can be easily translated to existing RNA-seq platforms. ATLAS can complement and guide traditional histopathologic assessment in challenging situations and tumors of unknown primary.
1 Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA. 2 Department of Human Oncology, University of Wisconsin, Madison, WI, USA. 3 Carbone Cancer Center, University of Wisconsin, Madison, WI, USA. 4 Department of Medicine, University of Wisconsin, Madison, WI, USA. 5 Department of Biostatistics and Medical Informatics, University of Wisconsin, Madison, WI, USA. 6 Department of Pathology and Laboratory Medicine, University of Wisconsin, Madison, WI, USA. 7 Wisconsin State Laboratory of Hygiene, University of Wisconsin, Madison, WI, USA. 8 William S. Middleton Veterans Hospital, Madison, WI, USA. e-mail: shgzhao@humonc.wisc.edu
Histopathologic assessment has been the primary modality for the diagnosis of human cancers since the 19th century, and to this day remains the mainstay of diagnosis, risk stratification and staging. While the field has made countless advances, the art of pathology relies heavily on subjective visual inspection, with considerable levels of inter-observer variability in diagnosis [1][2][3], which can impact treatment decisions. Tumors are molecularly complex and even pathologic specimens that appear visually similar may have widely different clinical behaviors. Furthermore, the origin of metastatic tumors is sometimes difficult to ascertain using traditional histopathologic approaches due to heterogenous features or tumor de-differentiation. Immunohistochemistry, in situ hybridization, as well as other techniques have emerged to augment morphology alone, and are routinely used clinically in identification of both the site of origin (e.g. prostate, breast, lung) and cancer lineage (e.g. adenocarcinoma, squamous cell cancer (SCC), etc.). However, there is a limit on how many stains can be applied, requiring a priori selection. Furthermore, the number of pathologists in the US has decreased by 18% between 2007 and 2017, while cancer cases have increased by 17%, which has yielded a 41% increase in workload for pathologists 4. This shortage can greatly impact cancer care unless new methodologies to assist pathologists can be implemented. In recent decades, next generation sequencing (NGS) of DNA, RNA, and the epigenome have transformed our understanding of the alterations that define and drive carcinogenesis. NGS represents an extension of the histologic techniques described above and can be thought of as an indirect microscopy at the molecular scale. Rather than relying on fluorescence and visual assessment to identify and quantify macromolecules, quantitative NGS approaches can capture molecular features that are undetectable visually. NGS and other -omics techniques have exponentially increased the amount of data collected on cancer patients over the past decade, and numerous commercial assays are now used in the clinic. Interpretation of this quantity of data poses its own challenges, and computational techniques such as machine learning (ML) have emerged to turn data into useful clinical tools. However, the utility of these clinical tools depends strongly on the datasets on which the classifier is validated, and which clinical features of the tumors can be identified. While there are published tissue of origin prediction tools available, they lack sufficient validation on metastatic samples and neglect the critical diagnostic component of independent assessment of site of origin and cancer lineage. These models rely on a diverse range of data on which to train a classifier, such as DNA alterations [5][6][7][8], DNA methylation [9][10][11][12][13], and mRNA [14][15][16][17][18][19][20][21][22][23] or microRNA 24 expression. DNA alterations (mutation status, copy number alteration (CNA)) are widely assessed, but unfortunately, many oncogenes and tumor suppressors are altered across multiple cancer types, which can be a limiting factor of mutation-based cancer of origin ML models [5][6][7]. Despite these limitations, ML models using DNA alterations have achieved accuracies up to 88% across 24 cancer types on independent validation 6. DNA methylation is an epigenomic alteration that regulates gene expression, with certain alterations being highly cancer type specific. Most DNA methylation ML models have only been validated in small institutional cohorts or in hold-out test sets, not true independent validation cohorts, limiting our ability to assess their generalizability. Expression of certain mRNAs and microRNAs have also been found to be tumor type specific, and ML models built on the expression of each have been shown to be highly accurate. One large study (TOD-CUP) 23 achieved an accuracy in independent validation of 91% across 4 cancer types in 1029 TCGA microarray samples, 94% across 4 cancer types in 2277 non-TCGA primary tumor samples, and 94% accuracy across 5 cancer types in 141 metastatic samples. A more recent deep learning-based model 15 achieved an accuracy in independent validation of 91.4% across 18 cancer types in 2085 samples from the ICGC dataset, including an accuracy of 88.1% in 395 metastatic samples. While these results represent an improvement over DNA alteration-based strategies, the vast majority are validated in primary tumor samples, with limited data on performance in metastatic samples, where site of origin is likely more difficult to predict due to tumor evolution and de-differentiation. In addition, cancer lineage (e.g. adenocarcinoma vs SCC) is a critical component of diagnosis and treatment planning
but is often left out or paired with the site of origin, rather than being assessed as an independent axis. In this study, we created AI Tumor Lineage and Site (ATLAS) classifiers, trained on NGS from 8249 samples, that predict cancer site of origin (22 classes) as well as cancer lineage (8 classes).This independent classification is distinct from prior studies and improves clinical utility.This bimodular framework allows for separate evaluation of both important axes, for a total of 176 different possible combinations, and allows evaluation of lineage de-differentiation into more anaplastic or neuroendocrine forms.We then independently assessed the performance of our models on 10,376 tumor samples, including 1490 metastatic samples, the largest such validation of an expression-based classifier to date, especially in metastatic disease.In addition, our single-sample approach is platform-independent and agnostic to how the sample was collected and processed, producing accurate and interpretable predictions that can be applied to any existing RNA-seq platform.As tumor RNA-seq becomes routine, this tool can be readily integrated into pathologic clinical decision-making and provide objective and quantitative orthogonal information to help guide pathologic diagnosis. Modeling workflow and data overview To build the most comprehensive genomic classifier of cancer site of origin and lineage to date (Fig. 1a) we utilized 8249 samples from the Cancer Genome Atlas Program (TCGA, N = 7196) and the Cancer Cell Line Encyclopedia (CCLE, N = 1053) for ATLAS model training.The validation cohort consisted of 10,376 total samples, including 58 TCGA datasets (N = 3556, none overlapping with the training data) and 41 additional non-TCGA datasets (N = 6820).This included validation in primary tumors (N = 8886 from 97 datasets) and in metastatic tumors (N = 1490 from 17 datasets).The final training and validation cohorts included 22 cancer site of origin classes and 8 cancer lineage classes (Fig. 1b, c).Since many different RNA-seq platforms were used across datasets, each sample was independently normalized 25 with no required batch correction, allowing for a more clinically useful per patient normalization strategy.All training samples had gene expression data, mutation calls, and copy number alteration calls, which allowed for a comparison of each molecular feature in model building. Accurate predictions of cancer site of origin and lineage The first step of our workflow was to train separate models to predict for cancer site of origin and cancer lineage.We first evaluated the importance of different molecular features (i.e. gene expression, mutation, and copy number) and impact of the total number of molecular features in model performance (Fig. 
2a).We assessed these two questions in our training data by using a 5-fold cross validation (CV) re-sampling schema (detailed in the methods).With regards to molecular feature type, we found that mutation status alone, copy number alone, or the combination of the two performed worse in CV than any combination that included gene expression.Since gene expression seemed to perform just as well alone as adding DNA alterations, we moved forward with a model using only gene expression.With regards to the number of features, CV performance increased initially as the number of features was increased, but plateaued for site of origin at around 500 features (including a binary sex variable) and lineage at around 200 features (only genes), which were used for the final models (detailed in methods).There was some overlap of genes between the two models (68 genes), but overall, the majority of genes in both models were unique and contributed to a final model framework that required only 632 features. The performance of these two models (comprising ATLAS) were then assessed in the independent validation cohort (Fig. 2b).Overall accuracy was 91.4% for cancer site of origin and 97.1% for cancer lineage (N = 10,376).However, there was a large difference in accuracy for site of origin between primary tumors (92.1%, comparable to prior studies; N = 8886) vs. metastatic tumors (86.8%;Fig. 2c; N = 1490).This difference is unsurprising given that the models were trained on primary tumors, in addition to tumor evolution and de-differentiation that occurs with progression to metastatic disease.Interestingly, the difference in performance for cancer lineage was less (97.4% in primary tumors vs. 95.7% in metastatic samples).While overall accuracy was high, gastro-intestinal (GI) and gynecologic (GYN) tumors tended to have worse classification accuracy (Fig. 2d).GI tumors were commonly mis-classified as other GI tumors, with hepato-pancreato-biliary (HPB) tumors (N = 1128) mis-classified as gastroesophageal in 15% of cases, and colorectal tumors (N = 337) misclassified as gastroesophageal tumors in 13% of cases (Fig. 2e).For GYN tumors, 10% of ovarian tumors (N = 498) were mis-classified as uterine tumors (Fig. 2f).To understand the impact of the binary sex variable on accuracy for the cancer site of origin model, all validation samples were run through the model with sex imputed as missing, resulting in a drop of accuracy from 91.4% to 90.7%.The benefit of including sex was primarily driven by improved accuracy in a subset of breast cancer (N = 45), ovarian cancer (N = 19), and cervical cancer (N = 7) samples.Median Shapley values 26 for each class prediction were obtained from the training data to identify the features that had the largest influence determining the site of origin and lineage classes, providing the top 10 features for each class in Supplementary Data 1.To confirm that features were specific to the tumor and not just normal tissue, we also report the top 10 features for each class among correctly predicted metastatic samples.This analysis confirmed that the sex variable was only a top feature in breast, ovarian and cervical cancer, and further identified many well-validated and novel markers that can be used to differentiate tumor types. Overall, the strength of the model prediction correlated well with the accuracy (Fig. 
2g). For both site of origin and lineage, if the classifier prediction was ≥0.99 (encompassing 58.5% of the validation samples for cancer site, 75.1% of the validation samples for cancer lineage), this correlated with a 98-99% accuracy, even in metastatic samples. The correlation between model confidence and accuracy is important in interpreting the predictions and differentiating between high-confidence cases versus more equivocal ones. Finally, samples with low tumor purity (<50%, calculated by ESTIMATE 27) have worse accuracy compared to those with high purity (Fig. 2h) for both primary samples (N = 931 low purity samples; 10.5% of primary samples) and metastatic samples (N = 75 low purity samples; 5.0% of metastatic samples).
Fig. 2 (legend, panels g and h): When the probability score was ≥ 0.99 (a majority of samples in all sub-groups shown), the models had very high accuracy (g; correct prediction red, wrong prediction blue). When samples were stratified by low tumor purity (ESTIMATE Tumor Purity < 0.5), accuracy of both models was found to be higher in samples with a high tumor purity (h). AUC is the area under the receiver operating characteristic (AUC-ROC) curve. HPB Cancer - Hepato-pancreato-biliary cancer.
Accurate distinction between adenocarcinoma versus SCC lineage across cancer sites
Cancer site of origin and lineage are often intertwined. For example, tumors of the breast are predominantly adenocarcinomas, whereas tumors of the head & neck are predominantly SCC. For some sites, tumors can arise from either an adenocarcinoma or SCC lineage, a difference that is important to identify as it can impact treatment decisions. In order to ensure that our lineage classifier was accurately distinguishing lineage (as opposed to indirectly measuring it by predicting site), we further examined the accuracy at predicting cancer lineage stratified by cancer site of origin, specifically focusing on adenocarcinoma vs. SCC. In our validation dataset, three tumor sites (gastroesophageal, lung, cervix) had relatively large numbers (≥10) of both adenocarcinoma and SCC tumors, including 134 adenocarcinoma and 24 SCC gastroesophageal cancers, 469 adenocarcinoma and 168 SCC lung cancers, and 17 adenocarcinoma and 61 SCC cervical cancers. Overall, the accuracy of our lineage classifier in distinguishing between adenocarcinoma and SCC was high, ranging from 89% to 100% across the three sites (Fig. 3a). When we looked at the difference between the adenocarcinoma and SCC lineage scores, we saw clearly separate distributions between adenocarcinomas and SCCs (Fig. 3b).
Sarcomatoid differentiation in mesothelioma
Mesothelioma of the lung is a pleural-based tumor that arises from the mesothelium, commonly due to exposure to asbestos. This tumor type is unique in having three distinct subtypes, epithelioid, sarcomatoid (more aggressive), and biphasic (a mix of the epithelioid and sarcomatoid). Thus, it serves as an excellent tumor type in which to study the distribution of the sarcoma lineage score. Given the small total number of lung mesothelioma samples available (N = 88), we decided to remove all
mesothelioma samples from TCGA and all other cohorts, excluding them from any of the training or validation thus far.Thus, the sarcoma lineage predictions would be made in a tumor type that the model was never trained on (termed Zero Shot Learning or ZSL 28 ).In these 88 lung mesothelioma samples, we first examined the distribution of the sarcoma lineage scores across subtypes, as well as comparing them to non-small cell lung cancer (NSCLC) tumor lineages (adenocarcinoma and SCC).The sarcoma lineage scores were higher in mesothelioma (N = 88, median = 0.0065) compared to the other NSCLC tumor types (N = 637, median = 0.000007; Wilcoxon rank-sum test P < 0.001; Fig. 4a).Within mesothelioma, the sarcomatoid and biphasic subtypes (N = 25) have higher sarcoma lineage scores (median = 0.042) compared to the epithelioid subtype (N = 63, median=0.003;Wilcoxon rank-sum test P < 0.001; Fig. 4a).The sarcoma lineage score had a high area under the receiver operating characteristic (AUC-ROC) curve for differentiating epithelioid samples from biphasic/sarcomatoid samples (AUC = 0.81; Fig. 4b).An optimal cut was identified and used to create high and low sarcoma lineage score groups, which were prognostic for survival (Fig. 4c, log-rank P = 0.049), with a median survival of 15.0 months and 23.9 months, respectively.The sarcoma lineage score results are remarkably consistent with the known phenotypic subtypes of mesothelioma, revealing an emergent property of our lineage models on which the model was not directly trained in an example of ZSL.Two mesothelioma samples had such high sarcoma lineage scores that they were classified as sarcoma by our model.In the original pathology data, one of these samples was reported as biphasic, and the other as epithelioid.We performed blinded re-review of the TCGA histological images by an institutional pathologist, who described both samples as biphasic with approximately 90% sarcomatoid differentiation (Fig. 4d).These cases illustrate the potential clinical utility of our molecular classifier.Divergence in initial pathologic review and strong molecular classifier results could suggest re-review or additional stains. De-differentiated lineage associated with neuroendocrine disease Tumors unfortunately do not remain static as they progress from primary to metastatic tumors and evolve under various selective pressures such as treatment.De-differentiation into more anaplastic tumors is a wellestablished phenomenon across cancer types 29 .Neuroendocrine differentiation is a specific example of this, associated with a more aggressive phenotype in prostate cancer 30,31 and lung cancer 32,33 .We characterized the degree of differentiation by focusing on the cancer lineage model predictions for this analysis, which would produce eight cancer lineage scores for each sample.Each sample will have a maximum cancer lineage score, which we collected and labeled as a "differentiation score".The rationale behind this categorization was that a weaker resemblance towards a particular lineage indicates a more de-differentiated and anaplastic tumor. 
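As a minimal sketch of the differentiation score just defined (the maximum of the eight lineage probabilities), the example below computes the score for a few hypothetical lineage-probability vectors and measures how well it separates neuroendocrine from non-neuroendocrine samples; all values and labels are illustrative assumptions rather than ATLAS outputs.

```python
# Minimal sketch of the differentiation score (max of the 8 lineage probabilities)
# and its use to flag neuroendocrine samples; all values below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

# Rows = samples, columns = 8 lineage probability scores from a lineage model
lineage_scores = np.array([
    [0.995, 0.001, 0.001, 0.001, 0.001, 0.000, 0.000, 0.001],  # well-differentiated
    [0.970, 0.010, 0.005, 0.005, 0.005, 0.002, 0.002, 0.001],
    [0.40, 0.20, 0.15, 0.10, 0.05, 0.05, 0.03, 0.02],          # de-differentiated
    [0.55, 0.20, 0.10, 0.05, 0.04, 0.03, 0.02, 0.01],
])
is_neuroendocrine = np.array([0, 0, 1, 1])   # hypothetical labels

differentiation_score = lineage_scores.max(axis=1)
# Lower differentiation scores should track neuroendocrine/de-differentiated tumors,
# so the score is negated for the AUC computation.
auc = roc_auc_score(is_neuroendocrine, -differentiation_score)
print("Differentiation scores:", differentiation_score)
print("AUC for neuroendocrine vs. other:", auc)
```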
We first evaluated the performance of this differentiation score in identifying malignant neuroendocrine tumors. Because TCGA does not include neuroendocrine samples, no neuroendocrine tumors were included in model training, or any of the validation results up to this point. Therefore, we identified an additional 198 neuroendocrine samples (neuroendocrine prostate cancer and small cell lung cancer) from 8 cohorts. The distribution of lineage scores for 8 selected highly differentiated tumors showed very confident predictions for a single cancer lineage (Fig. 5a). This is in contrast with selected neuroendocrine samples, which exhibited de-differentiation towards a more heterogenous distribution of lineage scores with a lower maximum score (Fig. 5b), supporting our rationale for the differentiation score. We noted a clear global decrease of the differentiation score in neuroendocrine tumors (N = 198, median = 0.868) compared to non-neuroendocrine lineages (N = 10,376, median = 0.999; Wilcoxon rank-sum test P < 0.001; Fig. 5c). The differentiation score produced a high ROC AUC (Fig. 5d) for differentiating non-small cell lung cancer (N = 606; NSCLC) from small cell lung cancer (N = 137; SCLC; AUC 0.963) and for differentiating metastatic prostate adenocarcinoma (N = 721) from neuroendocrine prostate cancer (N = 61; NEPC; AUC 0.834), representing another example of ZSL with an emergent property of the lineage model on which it was never directly trained.
De-differentiated lineage associated with worse survival across cancers
In addition to neuroendocrine differentiation, tumors can also de-differentiate into more anaplastic tumors that are thought to be more aggressive 29,34. Therefore, we hypothesized that de-differentiation, broadly measured by a lower differentiation score, would confer worse outcomes across cancer types. We examined all datasets with overall survival data, focusing on subgroups with sufficient samples in each survival outcome group (≥10) and enough variance in the differentiation score (≥0.001). Given that metastatic samples would be expected to have lower differentiation scores, we stratified samples into subgroups based on the cancer site of origin and primary versus metastatic site of biopsy. A reduction in differentiation score was associated with significantly worse survival (hazard ratio, HR) across eight subgroups, including primary melanoma.
Fig. 3 | Accurate distinction between adenocarcinoma versus SCC lineage across cancer sites. Focusing on all cancer sites that had at least 10 adenocarcinoma and squamous cell carcinoma (SCC) samples (gastroesophageal cancer: 134 adenocarcinomas, 24 SCCs; lung cancer: 469 adenocarcinomas, 168 SCCs; cervical cancer: 17 adenocarcinomas, 61 SCCs), the cancer lineage model maintained highly accurate predictions for all subtypes (a; darker shades of green represent higher accuracy). Each sample has both a squamous cell carcinoma probability score and an adenocarcinoma probability score. The difference between these two scores is plotted in b, showing that the vast majority of samples had scores corresponding strongly to the appropriate cancer lineage. For the presented boxplots, boxes show the interquartile range, encompassing the middle 50% of the data, the median is indicated by a line within the box, whiskers extend to 1.5× the interquartile range, and points beyond this are plotted as outliers.
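The survival comparisons summarized above (splitting samples at a score cut-point and comparing groups, as was done with the log-rank test for the mesothelioma sarcoma score) can be sketched as follows. The cut-point, survival times and event indicators are hypothetical, and the lifelines package is assumed here as the survival library; it was not necessarily the tool used in the study.

```python
# Sketch of splitting samples at a score cut-point and comparing overall survival
# with Kaplan-Meier curves and a log-rank test; all data below are hypothetical.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

score = np.array([0.99, 0.98, 0.95, 0.70, 0.60, 0.40, 0.97, 0.55])
months = np.array([30.0, 28.0, 24.0, 14.0, 12.0, 9.0, 26.0, 15.0])   # overall survival
event = np.array([0, 1, 1, 1, 1, 1, 0, 1])                           # 1 = death observed

cutoff = 0.9            # assumed cut-point (e.g. chosen by Youden index on an ROC curve)
high, low = score >= cutoff, score < cutoff

kmf = KaplanMeierFitter()
kmf.fit(months[high], event_observed=event[high], label="high differentiation score")
print("Median survival (high-score group):", kmf.median_survival_time_)

result = logrank_test(months[high], months[low],
                      event_observed_A=event[high], event_observed_B=event[low])
print("log-rank P =", result.p_value)
```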
Discussion
Herein, we developed ATLAS, a 22-class cancer site of origin classifier and 8-class cancer lineage classifier trained in 8249 tumor samples. RNA expression using ~600 genes appeared to distinguish site of origin and lineage better than DNA alterations, consistent with the literature 6,16,23. Interestingly, we show that DNA alterations can be used to build models that perform quite admirably, particularly when using both variant mutations and copy number alterations, but these alterations do not provide any additional information beyond what is captured by RNA expression. We show that the RNA expression classifiers achieve 91.4% accuracy for site of origin and 97.1% accuracy for lineage on a validation dataset of 10,376 tumor samples, the largest and most comprehensive validation of an expression-based classifier to our knowledge. This accuracy is particularly impressive given the wide range of RNA-seq techniques used across the validation data from TCGA and 41 other cohorts, indicating that our approach is truly platform-independent.

Histopathologic assessment continues to be the gold standard for diagnosing cancer site of origin and cancer lineage. However, NGS methods could be used to augment histopathology. In cases where it is challenging to determine the primary, an NGS method could help guide the immunohistochemical workup, resolve conflicting staining results, and provide additional information in otherwise unclassifiable cases. Beyond improving accuracy in cases where there is uncertainty, this method can also quantitate the degree of uncertainty.

No approach to classify tumor types is perfect, either histopathology or NGS-based, and variability will always be present [1][2][3]. In both cases, an assessment of the confidence of the classification is critical in the interpretation of results. In clinical practice, pathologists routinely indicate when diagnosis is uncertain, or should be interpreted with caveats, such as scant tissue, high levels of necrosis or treatment effect, or unclear staining patterns 35. A challenge of machine-learning NGS approaches is that the final prediction can seemingly come out of a black box (i.e. without comprehensible mechanistic detail), especially in more complex models 36. Therefore, it is critical that the model predictions themselves contain information on the strength of those predictions in order to provide context for interpretation. An inaccurate prediction is obviously not optimal, but a confidently inaccurate prediction is far worse. A major strength of our classifier is the correlation between accuracy and the prediction score itself (ranging from 0 to 1). The highest scoring and thus most confident predictions, representing the majority of predictions, achieve remarkable accuracies of 98-99%, even in metastatic samples. As the scores and confidence fall, the prediction accuracy also decreases, but this is a quantifiable and reportable result. A physician therefore is able to interpret a low-confidence score of 0.5 very differently than a high-confidence score of 0.99. Future work can explore how such an approach can be incorporated into diagnostic workflows and aid pathologists.
Another unique strength of our approach is the separation of site of origin and lineage into separate classifiers. While the two are certainly related, many sites can give rise to multiple tumor lineages. Both site and lineage ultimately contribute to the final tumor phenotype, and thus we felt it was critical to examine lineage separately. Our classifier accurately distinguishes between different lineages even within the same site (e.g. gastroesophageal, lung, cervix), and is capable of zero-shot learning, identifying sub-lineages in tumor types on which the model was never directly trained (e.g. mesothelioma, neuroendocrine prostate, and small cell lung cancer). Perhaps the most interesting emergent behavior of our model is the ability to identify more de-differentiated or anaplastic tumors, which have concomitantly worse survival across cancer sites of origin. Lineage differentiation is not fixed, and plasticity is a well described phenomenon across cancer types, especially for adenocarcinomas transitioning to aggressive neuroendocrine tumors in prostate and lung primaries 37. To our knowledge, this is the first pan-cancer signature of lineage de-differentiation and anaplasia that is also integrated into a tumor site of origin and lineage classifier. While the majority of pathology reports offer clear identification, a substantial 35% are reported by oncologists to contain ambiguous language 38. While less common, a still substantial 1-2% of cancers are cancers of unknown primary, which presents treatment challenges for clinicians 39. With RNA-seq of tumors becoming more integrated into standard clinical NGS assays, the platform-independent classifiers we describe herein could complement traditional pathologic assessment, especially in more challenging cases. The ability to globally quantify confidence levels in predicting cancer site of origin, lineage, and tumor de-differentiation is particularly useful, providing a more reproducible quantitative measure than traditional histopathology. These models can continue to be refined as new datasets become available, especially for rare tumor types not currently well represented. The results from such a tool could easily be added to existing clinical RNA-seq reports, complementing traditional histopathologic assessment in cancer research, clinical trial design, and ultimately clinical practice.
Methods
Data collection and organization
To develop the models included in this study we sought out a variety of large cancer databases for training and validation - the Cancer Genome Atlas Program (TCGA) 40,41, the Cancer Cell Line Encyclopedia (CCLE) 42, the International Cancer Genome Consortium (ICGC) 43, and cBioPortal 44,45. Given the standardized format of data located in cBioPortal, we downloaded the TCGA data and most validation datasets from there, while the CCLE, ICGC, and pan-cancer analysis of advanced and metastatic tumors (POG570) data 46 were downloaded from their respective organizational repositories. We focused only on samples that had RNA expression data available, utilizing DNA mutation and copy number data from the TCGA training cohort only to compare these molecular features against RNA expression in cross-validation.

The goal of our workflow was to predict both cancer site of origin and cancer lineage. Given the heterogeneity of the datasets and understanding that too many classes can result in poor predictions, we consolidated the model classes into 22 cancer site of origin classes and 8 cancer lineage classes (Fig. 1). Of note, the neuroepithelial class for the cancer lineage model represents paragangliomas/pheochromocytomas. Primary site (non-metastatic) samples from the TCGA Pan-Cancer Atlas 41 and samples from the CCLE 42 were used for model training. Any sample in the CCLE or validation cohort that did not match a cancer subtype in the TCGA Pan-Cancer Atlas dataset was removed (N = 531). Lung mesothelioma samples represented a small cohort of samples in the TCGA and likewise would be a useful cancer type to validate the cancer lineage model scores on, and so all lung mesothelioma samples were removed from the primary training and validation cohort. Neuroendocrine prostate cancer (NEPC) and small cell lung cancer (SCLC) samples were not present in the TCGA and so were not part of the validation cohort, but we did use these samples as part of a secondary analysis to evaluate de-differentiation. These secondary analyses on mesothelioma, NEPC and SCLC samples allowed for an evaluation of how the cancer lineage model performed on data that was not included in training.
To produce more accurate and generalizable models we utilized both patient samples (TCGA) and cell lines (CCLE) in the training set to help overcome some of the limitations of both datasets - patient samples from the TCGA will have some non-tumor related normal tissue present that can confound training, in contrast to cell lines which lack this normal tissue but unfortunately will also lack a tumor microenvironment. The training set included 1053 CCLE samples and 75% of the primary site samples from the TCGA Pan-Cancer Atlas (N = 7196). The validation dataset included the remaining 25% of the primary site TCGA Pan-Cancer Atlas samples, older TCGA samples that did not overlap with the Pan-Cancer Atlas, all metastatic TCGA samples, and novel samples downloaded from the ICGC, cBioPortal and POG570 (N = 10,376). Validation focused on adult malignancies and, in addition to 58 TCGA validation datasets 40,41, produced a cohort of 39 independent primary site datasets 30,43, and 15 independent metastatic datasets 30,43,46,[64][65][66][67][68][69][70][71][72]. To increase the number of metastatic samples for validation we also included samples from the West Coast Dream Team (WCDT) metastatic prostate cancer dataset that were reported as adenocarcinoma 73,74. The secondary analysis of neuroendocrine differentiation included SCLC and NEPC samples from 5 studies in cBioPortal 30,[69][70][71]75, the POG570 dataset 46, the WCDT 73,74, and an additional dataset of SCLC samples from Jiang et al. 76.

Sequencing data processing
The sequencing data utilized in this workflow included RNA expression, mutation status, and copy number alteration. The RNA expression training data focused on datasets that were not gene-normalized (not Z-score adjusted), and thus RNA expression validation datasets that only included such data were removed. There was a lot of heterogeneity in the per sample normalization schemes used on the expression data, including RSEM, FPKM, RPKM, TPM, CPM, and TMM, including some microarray datasets, with high model accuracy present across normalization schemes. To account for these differences we ran a second normalization on all samples, prior to training and validation, utilizing a per sample Yeo-Johnson transformation 25 that aims to create a normalized distribution for each sample. This step was sufficient for model training and validation and no further batch correction was required.

DNA data were evaluated only in training to compare to the accuracy of expression-based models. DNA mutation data was filtered to include only coding mutations and was turned into a binary classification (mutant/wildtype). Copy number alteration data was translated into a ternary call (copy number loss, no copy number change, and copy number gain).
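A minimal sketch of these pre-processing ideas is shown below. The per-sample Yeo-Johnson step uses the bestNormalize package as one possible implementation, and the copy-number cut-offs and object names for the DNA encodings are assumptions for illustration, not the exact published pipeline.

```r
library(bestNormalize)  # yeojohnson() estimates lambda and returns transformed values

# expr: numeric matrix with samples in rows and genes in columns (any upstream scheme)
yeo_johnson_per_sample <- function(expr) {
  t(apply(expr, 1, function(s) yeojohnson(s)$x.t))  # transform within each sample (row)
}

# Binary mutation call (mutant/wild type), restricted upstream to coding mutations
mut_binary <- as.integer(n_coding_mutations > 0)

# Ternary copy number call; the +/-0.3 log2-ratio thresholds are illustrative only
cna_ternary <- cut(cn_log2_ratio,
                   breaks = c(-Inf, -0.3, 0.3, Inf),
                   labels = c("loss", "neutral", "gain"))
```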
Model building
Data from the TCGA Pan-Cancer Atlas and CCLE were combined into a single group for model building. Sex was included in the cancer site of origin model, and no other clinical variables were included. We first filtered the feature set of both models to include 12,247 genes by removing those with missing expression values and those with the 10% lowest median expression. This gene set was then used to train 6 models - models based on RNA expression, DNA mutations, and copy number alterations that separately predicted the cancer site of origin and cancer lineage. We then ran our modeling workflow (XGBoost, described further below) and optimized the number of trees hyper-parameter based on a five-fold cross validation (CV) re-sampling schema. Hyperparameter optimization based on the CV resamples identified the best six models, which we then proceeded to evaluate with a model variable importance function 77 to rank genes in order of most to least important for the model (producing a different rank for the 6 models). This rank list was then utilized to determine how many features would be included in the final models (across a range of 5-2000 features, which include the binary sex variable for the cancer site of origin model). This produced the results in Fig. 2a, which allowed us to identify RNA-seq expression as sufficient for model building and likewise evaluate the minimum number of features required to create an optimal model. We first selected the expression-based cancer of origin and cancer lineage models that produced the best accuracy (1000 features for both models), evaluated the resampling 95% confidence interval of that accuracy, and then followed the curve in Fig. 2a to identify the first feature count to fall within that 95% confidence interval (500 features for the cancer of origin model and 200 features for the cancer lineage model). This step was essential to prevent overfitting to the training set and to allow for a more efficient modeling procedure, as a model with more features would take longer to run (for model training, imputing missing values on validation, and making predictions on validation).

Model validation
The locked cancer site of origin and cancer lineage models were then evaluated for their performance on the validation dataset. Some samples in the validation cohort had missing values, and so we performed a k-nearest neighbors' imputation (k = 5) so there would be no missing values when a sample was fed into the model. While the model used (XGBoost) can handle missing values, we observed a validation accuracy of 86.7% with no imputation, compared to our reported accuracy of 92.5% with imputation. Each sample prediction produced 22 probability scores for the cancer site of origin and 8 probability scores for the cancer lineage, with each score corresponding to a class, the scores adding up to one within each model, and a class call produced based on the highest probability score for that sample. The class prediction was utilized to evaluate performance of the models on validation. The probability scores were utilized to evaluate confidence in a prediction and define a differentiation score that was equal to the maximum cancer lineage score for a sample.
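The following tidymodels sketch mirrors the general shape of these steps (kNN imputation, a Yeo-Johnson step, XGBoost with the number of trees tuned by five-fold CV, and importance-based gene ranking). Recipe steps, the tuning grid and column names are assumptions; in particular, recipe steps operate per feature rather than per sample, so this is a simplification of the published workflow rather than the authors' exact code.

```r
library(tidymodels)
library(vip)

# train_df: training samples with a `lineage` outcome column and gene/sex predictors
rec <- recipe(lineage ~ ., data = train_df) %>%
  step_impute_knn(all_numeric_predictors(), neighbors = 5) %>%
  step_YeoJohnson(all_numeric_predictors()) %>%
  step_normalize(all_numeric_predictors())

spec <- boost_tree(trees = tune()) %>%       # only the number of trees is tuned
  set_engine("xgboost") %>%
  set_mode("classification")

wf    <- workflow() %>% add_recipe(rec) %>% add_model(spec)
folds <- vfold_cv(train_df, v = 5, strata = lineage)

tuned <- tune_grid(wf, resamples = folds,
                   grid = tibble(trees = c(250, 500, 1000)))  # illustrative grid

final_fit <- wf %>%
  finalize_workflow(select_best(tuned, metric = "accuracy")) %>%
  fit(data = train_df)

# Rank genes by importance to choose a reduced feature set
vip(extract_fit_engine(final_fit), num_features = 20)
```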
Statistics and reproducibility
All data collection and analysis were performed on our lab Linux server, which included 120 CPU cores, 2 TB of RAM and a single NVIDIA Tesla T4 15GB GPU. All workflow was completed in R (version 4.3.2) and utilized the cBioPortalData package for downloading cBioPortal data 78, and the tidyverse 79, tidymodels 80, vip 77, survminer 81, and fmsb 82 packages. The model procedure utilized the extreme gradient boosting (XGBoost) machine learning model 83, which brings together the concepts of decision trees, ensemble learning and gradient boosting into one unified, efficient and highly accurate framework. We utilized XGBoost for all our modeling workflow as it tended to produce similar/improved accuracy on the CV training resamples compared to a random forest model and was able to run significantly faster and utilize our server's GPU. The only hyperparameter in the XGBoost model that we tuned was the number of trees, which we optimized with a five-fold CV scheme.

For our modeling workflow we utilized the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy to evaluate the model. Sensitivity and specificity for multi-class classification utilized a one-versus-all, macro-averaging scheme 80. The AUC that is reported for all multi-class problems represents the Hand-Till method for multiclass classification problems 84. For the secondary analysis evaluating the continuous cancer lineage sarcoma probability scores in mesothelioma, we developed the binary classes based on the ROC curve of the continuous score and found the split with the minimum distance from the ROC curve to the point where specificity and sensitivity are both one. This cut was determined only on the best split to separate epithelioid lung mesothelioma samples from biphasic/sarcomatoid samples (Fig. 5). Given that this split was not based on optimizing a split in survival, there was no data leakage in creating these prognostic groups. Prognostic significance was evaluated based on overall survival utilizing the Kaplan-Meier estimator and log-rank p-values for the mesothelioma sarcoma groups and the Cox regression hazard ratios with 95% confidence intervals for the differentiation score forest plot. All survival data was censored at 5 years to allow for similar comparisons.

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Lineage XGBoost model parameters
The available workflow can take as input a table of samples, where the rows are individual samples and the columns are individual model features (genes and binary sex). The input can be expression from a microarray or RNA-seq with any normalization schema, just no per gene normalization across a whole cohort. The model treats each sample independently, first performing a Yeo-Johnson transformation of the gene expression across a sample, conversion of the binary sex variable into a dummy variable, k-nearest neighbor imputation of missing values, and finally a center and scaling of each variable based on parameters determined during model training. A prediction is then made using the locked XGBoost model, providing both the class prediction and class probabilities. Per sample computation time depends on the number of missing values in a sample, with per sample run time across the validation cohort ranging from 1 to 22 s (approximately 7 h for all 10,376 samples).
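As a sketch of the cut-point and survival analyses described in this section: the pROC "closest.topleft" criterion corresponds to minimising the distance from the ROC curve to the point where sensitivity and specificity are both one, and survival comparisons use the standard log-rank and Cox tools. Object and column names below are assumptions for illustration.

```r
library(pROC)
library(survival)

# meso: mesothelioma samples with a binary subtype label and the sarcoma lineage score
roc_meso <- roc(response  = meso$subtype,        # "epithelioid" vs "biphasic_sarcomatoid"
                predictor = meso$sarcoma_score,
                levels    = c("epithelioid", "biphasic_sarcomatoid"))

best <- coords(roc_meso, x = "best", best.method = "closest.topleft",
               ret = c("threshold", "sensitivity", "specificity"))

meso$score_group <- ifelse(meso$sarcoma_score >= best$threshold, "high", "low")

# Log-rank test for the binary sarcoma score groups (overall survival, censored at 5 years)
survdiff(Surv(os_months, os_event) ~ score_group, data = meso)

# Cox model of the continuous differentiation score within one site/biopsy subgroup
coxph(Surv(os_months, os_event) ~ differentiation_score, data = subgroup_df)
```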
Fig. 4 | Sarcomatoid differentiation in mesothelioma. The sarcoma cancer lineage score was evaluated further in mesothelioma samples to determine the model's ability to identify subtypes that were not present in model training. The sarcoma score was higher in pleural mesothelioma samples (sarcomatoid type [N = 2; no boxplot shown], biphasic type [N = 23], and epithelioid type [N = 63]) compared to non-small cell lung cancer samples (Wilcoxon rank-sum test P < 0.001; adenocarcinoma [N = 469] and squamous cell carcinoma [N = 168]), and also was higher in mesothelioma biphasic/sarcomatoid subtypes compared to epithelioid subtypes (Wilcoxon rank-sum test P < 0.001; a). The continuous sarcoma score was effective in differentiating epithelioid pleural mesothelioma samples from biphasic/sarcomatoid mesothelioma samples (AUC = 0.81, b). To create binary sarcoma score groups, an optimal cut-point was identified in the lung mesothelioma ROC curve (red X in b) that minimized the distance to the point where sensitivity and specificity were both one. These binary sarcoma score groups were prognostic for lung mesothelioma samples (log-rank P = 0.043; c - low sarcoma score blue [N = 48], high sarcoma score red [N = 39]; dotted red line represents a null AUC of 0.5). Our in-house pathologist reviewed the two pathologic specimens that had the highest sarcoma scores to compare our molecular classification score against pathologic review (intermediate magnification, measuring bar represents 100 µm; d). For the presented boxplots, boxes show the interquartile range, encompassing the middle 50% of the data, the median is indicated by a line within the box, whiskers extend to 1.5× the interquartile range, and points beyond this are plotted as outliers. AUC is the area under the receiver operating characteristic (AUC-ROC) curve.

Fig. 5 | De-differentiated lineage associated with neuroendocrine disease. For every sample the cancer lineage model produced 8 prediction scores to correspond to the 8 lineage subtypes. The results of selected samples from the validation cohort were then plotted on a radar plot to evaluate the heterogeneity of prediction scores. Most samples had very strong predictions for a single lineage subtype (a).
Neuroendocrine samples, including neuroendocrine prostate cancer (NEPC) and small cell lung cancer (SCLC), had more heterogeneous predictions (b), noting that the max probability was lower in these samples compared to non-neuroendocrine samples. This max prediction probability (Differentiation Score) was compared across all samples and noted that neuroendocrine samples had lower scores when compared to all other samples (Wilcoxon rank-sum test P < 0.001; c - neuroendocrine red, non-neuroendocrine blue; Neuroendocrine [N = 198], Sarcoma [N = 147], Adenocarcinoma [N = 7256], Neuroepithelial Cancer [N = 40], Germ Cell Tumor [N = 54], Melanoma [N = 501], Glioma [N = 718], Lymphoid/Myeloid Neoplasm [N = 1054], and Squamous Cell Carcinoma [N = 606]). This continuous differentiation score was then evaluated for its ability to differentiate metastatic prostate adenocarcinoma samples (PRAD) from NEPC samples (AUC = 0.833) and differentiate non-small cell lung cancer samples (NSCLC) from SCLC samples (AUC = 0.963; d - metastatic prostate cancer green [N = 782], primary lung cancer orange [N = 743]; dotted red line represents a null AUC of 0.5). For the presented boxplots, boxes show the interquartile range, encompassing the middle 50% of the data, the median is indicated by a line within the box, whiskers extend to 1.5× the interquartile range, and points beyond this are plotted as outliers. AUC is the area under the receiver operating characteristic (AUC-ROC) curve. WCM Weill Cornell Medicine, SMC Samsung Medical Center, FHCRC Fred Hutchinson Cancer Research Center, ECDT East Coast Dream Team, UCologne University of Cologne, WCDT West Coast Dream Team.

Fig. 6 | De-differentiated lineage associated with worse survival across cancers. Samples with survival data in the validation cohort were stratified based on their cancer site of origin and biopsy site (primary versus metastatic; primary sarcoma [N = 100], primary ovarian cancer [N = 356], primary breast cancer [N = 2182], primary bladder urothelial carcinoma [N = 108], metastatic melanoma [N = 399], primary lung cancer [N = 611], primary glioma [N = 615], primary hepato-pancreato-biliary cancer [N = 424], metastatic breast cancer [N = 121], metastatic ovarian cancer [N = 25], primary uterine cancer [N = 170], primary adrenal gland cancer [N = 26], and primary melanoma [N = 88]). Eight of the subgroups evaluated had significantly improved survival with increasing differentiation score, while all other subgroups trended in that direction. The hazard ratio on the x-axis has been log10 adjusted. The dotted vertical red line represents a hazard ratio of 1. The black dots represent the hazard ratios and error bars represent the 95% confidence interval for the hazard ratio.
Fig. 1 | Modeling workflow and data overview. The modeling workflow (a) depicts the primary workflow for model building - data partitioning (training versus validation), training data feature selection (determine best sequencing data and model features to build an effective model), data pre-processing (such as normalizing expression data and imputing missing values), and model selection. Once an optimal model is selected using only the training data, a validation dataset is used to validate the locked model. The training data included the Cancer Genome Atlas (TCGA) and Cancer Cell Line Encyclopedia (CCLE) cell line samples. Validation was completed on over 10,000 patient samples, including over 1400 metastatic samples (b - TCGA orange, CCLE green, non-TCGA purple). Two models were built - a cancer site of origin model with 22 classes and a cancer lineage model with 8 classes (c), with validation samples for all classes.

Fig. 2 | Accurate predictions of cancer site of origin and lineage. Cancer site of origin and cancer lineage models were trained and performance was evaluated on five-fold cross-validation resamples, noting top performance with gene expression (a - gene expression/mutations/copy number alterations grey, gene expression/copy number alterations orange, gene expression/mutations yellow, gene expression purple, mutations/copy number alterations green, copy number alterations blue, mutations blue, with order of legend matching position of curves), with no improvement when combined with mutation and copy number calls. The finalized models included 500 features and 200 features for the cancer site of origin and cancer lineage models, respectively. Model validation accuracy on 10,376 samples was 91.4% for the cancer site of origin model and 97.1% for the cancer lineage model (b - accuracy blue, sensitivity purple, specificity green, AUC red). Performance for these models was worse on metastatic samples, but still very high accuracy at 86.8% and 95.7%, respectively (c - darker shades of green represent higher accuracy). Model accuracy by predicted class is shown (d), noting worse performance in gastrointestinal (GI) sites and gynecologic sites (e-f). All model prediction classes had a corresponding probability score, with the maximum score corresponding to the predicted class.
2024-03-15T06:18:45.782Z
2024-03-13T00:00:00.000
{ "year": 2024, "sha1": "b67385d3d9b7de6b3544695b66b3043e452139ea", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42003-024-05981-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82c6c01e3fadbc5f7df6fffe6a6ad3f48360a8a1", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
244868604
pes2o/s2orc
v3-fos-license
Global Health Security Preparedness and Response: An Analysis of the Relationship between Joint External Evaluation Scores and COVID-19 Response Performance Objectives The COVID-19 pandemic has highlighted the importance and complexity of a country’s ability to effectively respond. The Joint External Evaluation (JEE) assessment was launched in 2016 to assess a country’s ability to prevent, detect and respond to public health emergencies. We examined whether JEE indicators could be used to predict a country’s COVID-19 response performance to tailor a country’s support more effectively. Design From April to August 2020, we conducted interviews with Centers for Disease Control and Prevention country offices that requested COVID-19 support and previously completed the JEE (version 1.0). We used an assessment tool, the ‘Emergency Response Capacity Tool’ (ERCT), to assess COVID-19 response performance. We analysed 28 ERCT indicators aligned with eight JEE indicators to assess concordance and discordance using strict agreement and weighted kappa statistics. Generalised estimating equation (GEE) models were used to generate predicted probabilities for ERCT scores using JEE scores as the independent model variable. Results Twenty-three countries met inclusion criteria. Of the 163 indicators analysed, 42.3% of JEE and ERCT scores were in agreement (p value=0.02). The JEE indicator with the highest agreement (62%) was ‘Emergency Operations Center (EOC) operating procedures and plans’, while the lowest (16%) was ‘capacity to activate emergency operations’. Findings were consistent with weighted kappa statistics. In the GEE model, EOC operating procedures and plans had the highest predicted probability (0.86), while indicators concerning response strategy and coordination had the lowest (≤0.5). Conclusions Overall, there was low agreement between JEE scores and COVID-19 response performance, with JEE scores often trending higher. JEE indicators concerning coordination and operations were least predictive of COVID-19 response performance, underscoring the importance of not inferring country response readiness from JEE scores alone. More in-depth country-specific investigations are likely needed to accurately estimate response capacity and tailor countries’ global health security activities. INTRODUCTION Since the emergence of SARS-COV-2, the virus that causes COVID-19, in December 2019, more than 216 million cases and approximately 4.5 million deaths have been reported globally as of August 2021. 1 2 The COVID-19 pandemic highlights the importance of a country's emergency response capacity to effectively control a novel public health threat. 3 4 The pandemic has prompted local and regional lockdowns, varying levels of quarantine and social distancing measures and the redistribution of health resources Strengths and limitations of this study ► To our knowledge, this is the first study to examine the Joint External Evaluation (JEE) in a systematic and methodical approach among multiple countries using an aligned scoring paradigm with another assessment. ► This is also the first study to introduce a novel assessment tool specific to measuring a country's COVID-19 emergency response capacity. ► A limitation of our study is the alignment of two scoring systems (one with a five-point range and the other a three-point range), which impacted the accuracy of the newly aligned scoring system. 
► Another limitation of this study is bias from the Emergency Response Capacity Tool (ERCT) assessment because data were collected only from the perspective of Centers for Disease Control and Prevention country office staff. ► Finally, the time gap between completion of the JEE assessment (2016-2018) and ERCT assessment (2020) and the alignment of the two for this specific study does not account for the socioeconomic and geopolitical events that may have occurred in between the time frames that may have affected response capacity. from routine public health programmes to COVID-19 response efforts. 5 Understanding a country's level of preparedness can help support appropriate recommendations on resource allocation, establishment of policies and legislation, response planning and standard operating procedure development and personnel deployment. In collaboration with the World Health Organization (WHO) and United Nations' member states, the Global Health Security Agenda (GHSA) establishes a number of core capacities for preparing for and responding to global public health emergencies. 6 In coordination with GHSA, the Joint External Evaluation (JEE) for health security was launched by WHO in 2016 as a voluntary, multisectoral, peer-to-peer evaluation. Using 49 indicators on a five-point scale, the JEE assesses a country's ability to prevent, detect and respond to public health emergencies across 19 technical areas. 7 8 The JEE assessment helps countries identify critical gaps within their public health systems by technical area, in order to prioritise actions to strengthen preparedness and response capacity. 9 High JEE scores reflect intermediate to high capacity in responding to a public health emergency, and low JEE scores reflect low capacity in responding and weak or poor systems and processes. 9 10 From April to August 2020, the United States Centers for Disease Control and Prevention (CDC) received requests from over 30 countries around the world for COVID-19 response capacity support. As countries responded and planned for ongoing SARS-COV-2 response activities, we questioned whether we could use existing assessments such as the JEE to inform critical areas that needed strengthening during the response. As JEE indicators are broad, often encompassing an amalgamation of multiple more detailed but critical components for emergency response capacity, to tailor specific technical support and interventions during COVID-19, CDC pursued the development of a new tool. 7 Aligned to JEE indicators and scoring, CDC's 'Emergency Response Capacity Tool (ERCT)' was developed for a systematic approach to assess and prioritise gaps in a country's response capacity through examination of the country's COVID-19 operational performance. 11 As we used the tool, we wanted to assess whether we could have used the JEE, which is often conducted in peacetime, to predict how a country would respond to a public health event like COVID-19. To better understand this, we examined COVID-19 response performance in relation to specific JEE indicators to assess whether the JEE could be used to predict a country's COVID-19 response capacity. We hypothesised that countries scoring lower in certain JEE indicators would continue to have challenges and deficits in responding to COVID-19, while countries with higher JEE scores would have responded more effectively to the current COVID-19 pandemic.
METHODS
From April to August 2020, we used the ERCT to collect information on COVID-19 response performance in countries hosting a CDC country office meeting the following criteria: (1) requested CDC support for responding to the COVID-19 pandemic and (2) completed the JEE (version 1.0) between 2016 and 2018. 7 The ERCT addresses and scores competencies in four technical areas: (1) public health systems integration; (2) multidisciplinary rapid response teams; (3) emergency operations centres (EOCs)/incident management system; and (4) risk communications and community engagement operations. The four competencies included a total of 28 indicators aimed at assessing a country's emergency response systems, strategic planning, standard operating procedures and workforce capacity in responding to the COVID-19 pandemic. 12-14 The ERCT scoring scale was 1-3. A score of '1' indicated a country has no competency proficiency; '2' indicated limited competency or proficiency; and '3' indicated full competency or proficiency. We requested CDC Country Office staff to complete the ERCT, scoring the country's response performance according to the 28 indicators. During follow-up phone interviews, we reviewed provided scores with the CDC country office staff to ensure the indicator was interpreted correctly and the score accurately reflected the country's response performance. Countries' anonymity is maintained to protect disclosure of countries' challenges and gaps in responding to the COVID-19 pandemic. We obtained JEE scores from the JEE version 1.0 reports on the WHO's website for the 23 countries included in this analysis. 15 Country-specific scores for 49 JEE indicators were downloaded and merged into a single Microsoft Excel 2016 spreadsheet. 16 The ERCT indicators were more specific and detailed, with multiple ERCT indicators contributing to one JEE indicator. Four of the 49 JEE version 1.0 indicators aligned directly with the ERCT indicators, a 'one-to-one' alignment. For the remaining indicators, we calculated the mean ERCT score across the various detailed indicators that aligned to a single JEE indicator, a 'grouped mean' alignment (table 1). Because the JEE score ranges from '1' (indicating that implementation has not occurred) to '5' (indicating that implementation has occurred, is tested, reviewed and exercised and that the country has a sustainable level of capability for the indicator) and the ERCT used scores of 1-3, we modified the scales to match for this analysis. 15 A JEE score of 1 was matched to an ERCT score of 1, a JEE score of 2 and 3 was matched to an ERCT score of 2 and a JEE score of 4 and 5 was matched to an ERCT score of 3. To ensure accuracy in transforming the JEE indicator to the three-point ERCT scale, two authors independently examined the JEE and ERCT scoring criteria as well as interview qualitative data notes collected from the follow-up phone interviews. If a discrepancy was noted, then a third author would review the scoring and provide an adjudicated score. The final database included the transformed JEE scores and the corresponding ERCT scores for each country. 16 17 We conducted an agreement analysis to assess consistencies in JEE and ERCT indicator scores across the 23 countries. We initially calculated a strict agreement (transformed JEE score=ERCT score) analysis for all available indicators. Strict agreement is calculated as the percent of scores that were the same for the transformed JEE score and ERCT score of all possible scores. We additionally calculated weighted kappa statistics.
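A minimal R sketch of the score alignment and agreement calculations described above is given below. The data layout (one row per country-indicator pair with columns jee and erct in a data frame called scores) is an assumption for illustration, not the study's actual dataset.

```r
library(irr)   # kappa2() gives (weighted) kappa for two paired ratings

# Collapse the 5-point JEE scale onto the 3-point ERCT scale:
# JEE 1 -> 1, JEE 2-3 -> 2, JEE 4-5 -> 3
jee_to_erct <- function(jee) ifelse(jee == 1, 1L, ifelse(jee <= 3, 2L, 3L))

scores$jee3 <- jee_to_erct(scores$jee)

# Strict agreement: share of observations with identical transformed scores
strict_agreement <- mean(scores$jee3 == scores$erct)

# Weighted kappa on the paired 3-point scores (linear "equal" weights)
kappa2(scores[, c("jee3", "erct")], weight = "equal")
```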
The weighted kappa statistic accounts for random variability and closeness of agreement between ERCT and transformed JEE scores. A weighted kappa value above 0.2 generally reflects fair agreement; higher values suggest stronger agreement. 18 We calculated strict agreement and weighted kappa statistics for all indicator scores combined and then stratified by each JEE indicator, indicator score matching category (i.e., one to one or grouped mean) and the year the JEE was conducted. We assumed that capacity should have remained the same or increased from the date of completing the JEE (2016-2018) and the date of implementing the ERCT in 2020. To control for this assumption, we created an additional binary variable, 'J≤E', defined as whether the JEE score≤ERCT score (yes/no). We used outcomes from generalised estimating equation (GEE) models to generate predictive probabilities of each JEE indicator score on the COVID-19 response capacity performance (ERCT scores), adjusting for possible correlations of country-specific scores across several indicators (ie, indicator scores are likely to be similar in a given country). 19 We ran the GEE model including the 'J≤E' variable against each of the variables listed previously. We then transformed the estimated GEE model coefficients into predicted probabilities. This gave us predicted probabilities of concordance between JEE and ERCT scores for each JEE indicator. For the GEE model, we assigned the JEE indicator 'EOC operating procedures and plans (R.2.2)' as the reference indicator because R.2.2 was the only indicator with evidence of initial agreement. As this research looked at a country's overall performance during COVID-19 through the perspective of CDC country office staff, patients or the public were not involved in designing, conducting, reporting or the dissemination plans of our research. 20

RESULTS
The final project database included 163 observations from the eight JEE indicators, which were successfully aligned to 28 ERCT indicators (figure 1). Of the 163 observations, agreement was highest (n=36) for JEE and ERCT indicator scores of '2' (table 2). The most common discordant combination (n=28) was a JEE score of '2' paired with an ERCT score of '1'. Across the 163 observations included in this analysis, agreement between the JEE and ERCT scores was 42.3%, with a weighted kappa value of 0.134 (p-value=0.02). In our stratified analysis, JEE indicator scores were generally higher than the corresponding ERCT scores (table 3). One-to-one matching had a concordance of 43.7% (p-value=0.01) and a higher transformed JEE score of 44.7%, with a weighted kappa below 0.2 in comparison with the grouped mean. There was no statistically significant variance in agreement among the years that the JEE was completed, despite the slightly higher agreement (44.1%) but low weighted kappa statistic (0.14) in 2016. In the GEE model, JEE indicators were the predictor, and J≤E was the outcome (figure 2). From the GEE model, the highest predicted probability of agreement was for 'EOC operating procedures and plans (R.2.2)', with a predicted probability of 0.86. The lowest predicted probabilities of agreement were for 'internal and partner communication and coordination (R.5.2)' and 'emergency operations programme (R.2.3)', with a predicted probability of 0.5.

DISCUSSION
Capacity to respond to the COVID-19 pandemic is multifactorial and complex, varying by context, existing resources, priority areas, and historical challenges.
We developed and implemented the ERCT to assess several competencies related to response performance. Findings from the ERCT were often discordant with scores generated from previously conducted JEEs, where the transformed JEE scores (35.6%) were often higher than the ERCT scores (22.1%). With the 2-4 years from when the JEE was conducted, we expected there to be a similar or increase in capacity between the time of the completion of the JEE and when the ERCT assessment was conducted. However, the overall low agreement (42.3%) between the two assessments, JEE and ERCT, could have resulted from several factors. These could include: the data collection method of both assessments, the timeframe in which both were conducted, and the lack of congruency between the assessments. For the ERCT, CDC country office staff completed the assessment tool and rated the country's response performance, whereas the JEE is completed by a JEE team in-country composed of multisectoral external and internal subject matter experts. When examining the timeframe of both assessments, the ERCT was conducted during an active response between April to August of 2020, while the JEE was conducted at a specific point in time between 2016 and 2018, prior to the start of the COVID-19 pandemic. Furthermore, JEE indicators are quite broad with multiple emergency response operational factors included under one JEE indicator, which may have affected the specificity of the JEE scoring. Open access Additionally, the ERCT scores assessed response performance specifically to COVID-19, whereas the JEE is not specific to one particular emergency and is not conducted during an active public health emergency event. Regardless of the underlying factors, this trend may indicate that a more detailed competency analysis, such as the ERCT, may be required for these particular indicators to provide a more accurate assessment of a country's ability during a response, specifically in the context of COVID-19. At the individual indicator level, the high agreement between specific indicators was notably in the capacity related to EOC operating procedures and plans, which showed the highest strict agreement and predicted probability between JEE and ERCT scores. These indicators are tangible and discrete (e.g., EOC plans exist or do not exist, EOC activation occurs or does not occur) and thus may lend themselves to be more easily assessed and measured prior to a large-scale response. Conversely, those indicators related to strategic planning (e.g., legal authority, policies, communication and partner coordination) were more discordant and generally received lower ERCT scores than transformed JEE scores. The identified trends and questions raised in this investigation highlight the importance of future studies to continue investigating this concordance to inform countries on how best to plan for global health security activities and prioritise their emergency response capacity development and implementation. This initial investigation of the role of JEE indicators in predicting the ability to respond effectively to COVID-19 included several limitations. First, because of the higher specificity of the ERCT indicators, the JEE and ERCT scorings and indicators needed to be adjusted and aligned, respectively, which could have contributed to the low agreement between the scores. 
The alignment of two scoring systems (one with a five-point range and the other a three-point range) may impact the accuracy of the scoring system for the JEE and ERCT adjustments to account for a proper depiction of indicators. Furthermore, the more detailed ERCT indicators required the mean ERCT score to be taken across various indicators to align to a single JEE indicator. Second, the data collection for the ERCT assessment was from the perspective of CDC country office staff only. This potentially creates an external view bias, as well as a limited perspective compared with JEEs, which are scored based on multiple Open access diverse subject matter experts and the government's own assessment. Third, there was a gap in time between the JEE assessment (2016-2018) and ERCT assessment (2020). Although we tried to control for this with the development of the 'J≤E' variable, there may have been changes in capacity in that time frame (e.g., socioeconomic and geopolitical events) that we could not account for in this investigation. Finally, this analysis included only 23 countries selected through convenience sampling (i.e., countries requesting CDC assistance, and thus, these trends may not be representative of all countries). CONCLUSION This analysis offers a novel opportunity to examine COVID-19 predicted response capacity across several countries. Although limited in sample size to make conclusive statements, this analysis included geographically and economically diverse countries, which may indicate applicability beyond the countries sampled for this investigation. Despite the number of limitations highlighted in this study, especially due to operational research studies being difficult to translate capacity building efforts to transformed response operations, this is the first study to examine the JEE in a systematic and methodical approach among multiple countries using an aligned scoring paradigm with another assessment. The trend of ERCT scores being lower than JEE scores underscores the need for a country's vigilance when inferring their strategic response readiness from JEE scores alone and in allocating resources for global health security initiatives. This trend, along with concordance variability among JEE indicators, warrants further investigation to assess response capacity and its relationship to response performance to better understand preparedness and capacity measures translated to broader public health outputs and outcomes. Additionally, this investigation may indicate the need to re-examine some of the JEE indicator's specificity and accuracy in assessing a country's capacity, especially concerning strategic response planning. As countries around the globe undergo the JEE process and use it to determine and prioritise their global health security activities, we believe understanding the relevance of the results during an active and specific public health event is of utmost importance from the country-level perspective and to the larger global health response community supporting the JEE initiative. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. Disclaimer The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the US Centers for Disease Control and Prevention. Competing interests None declared. Patient consent for publication Not applicable. 
Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available on reasonable request.
2021-12-04T06:16:38.154Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "0c19ff0b8485807d555d75090b56ff7809aa191d", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/11/12/e050052.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "45b67ff963f194d0af0df1298bd72cf10e9f2796", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
40295752
pes2o/s2orc
v3-fos-license
Recommendations for the diagnosis of human papilloma virus (HPV) high and low risk in the prevention and treatment of diseases of the oral cavity, pharynx and larynx. Guide of experts PTORL and KIDL

a b s t r a c t
The role of human papilloma viruses (HPV) in malignant and nonmalignant ENT diseases and the corresponding epidemiological burden has been widely described. The international head and neck oncology community has discussed growing evidence that oral HPV infection contributes to the risk of oro-pharyngeal carcinoma (OPC) and recommended HPV testing as a part of the work-up for patients with OPC. The Polish Society of ENT Head Neck Surgery and the National Chamber of Laboratory Diagnosticians have worked together to define the minimum requirements for assigning a diagnosis of HPV-related conditions and a testing strategy that includes HPV-specific tests in our country. This paper briefly frames the literature information concerning low risk (LR) and high risk (HR) HPV, reviews the epidemiology, and gives general guidance on the most appropriate biomarkers for clinical assessment of HPV. The definition of HPV-related cancer is presented. The article aims to highlight some of the major issues for the clinician dealing with patients with HPV-related morbidities and to introduce the diagnostic algorithm in Poland.

Introduction
The role of human papilloma viruses (HPV) in the pathogenesis of diseases of the oral cavity, throat, larynx and maxillo-ethmoid complex
So far, 200 various types of HPV (human papillomavirus) have been identified. This group can be divided into the following classes: alpha (α), beta (β), gamma (γ), mu (μ) and nu (ν). Some of the viral genotypes mentioned above are sexually transmitted. Due to their varied oncogenic potential the following types can be distinguished: high risk (HR) HPV, i.e. HPV 16, HPV 18, HPV 31, HPV 33 and HPV 45, associated with more than 80% of cervical cancers, and low risk (LR) HPV, i.e. HPV 6 and HPV 11, associated with mild intraepithelial low-grade lesions or laryngeal papillomas. Viruses from the beta group infect the skin, whereas the remaining types (gamma, mu and nu groups) are responsible for the formation of papillomas that usually are not subject to neoplastic transformation [1]. Infections with HR HPV are responsible for 600,000 cases of cancer of the cervix, vulva, vagina, anus, oral cavity and oropharynx diagnosed annually worldwide. Moreover, they are causative factors for condylomas and recurrent papillomas of the respiratory system.
The incidence of cervical cancer has been declining for many decades, but the incidence of anal cancer as well as head and neck cancers has been a growing problem. Consequently, an interdisciplinary research programme Eurogin 2011 [2] on the prevention and treatment of HPV-dependent diseases has been established in order to discuss the problems of morbidity, registers and diagnostic and therapeutic recommendations. The role of HPV in the development of HNSCC Within recent decades epidemiological and molecular studies have indicated that human papilloma viruses play a role in the development of some subsets of HNSCC. The contribution of HPV in the cancerogenesis of cancers of the oral cavity and oropharynx was for the first time suggested by zur Hausen in 1999 [4] and Syrjanen in 2005 [5], then confirmed by research groups of Erdmann [6], Nair [7] and Gillison [8] based on the following observations: HPV tropism towards epithelial cells, changes in the genome of human keratinocytes in in vitro studies, widely documented oncogenic potential of HR HPV subtypes in the development of cervical cancer and morphological similarities between the epithelium of the oral cavity and throat, and the epithelium of the genitourinary tract. HPV DNA in precancers and invasive carcinoma of the oral cavity is observed three and five times more frequently, respectively, when compared to healthy mucous membrane [5,9]. A HR HPV infection is observed in 25% of oral cavity cancers, and HPV 16 is observed in 70% of HPV-positive cancer tissue specimens; in the case of oropharyngeal cancers these values are higher, namely 35% and 90%, respectively [8,10,11]. HPV 16 increases the risk for HNSCC four times. Different primary locations have different significance with regard to predilection for developing and prevalence of HR HPV in HNSCC: it significantly more often occurs in the oropharynx in comparison with the oral cavity and larynx [5,[12][13][14][15][16][17]. The presence of HR HPV in cancer tissue has become the basis to distinguish a separate subset of HNSCC: HPV-associated cancers. HPV-associated cancers have different biological parameters which are associated with a profile of gene expression, frequency of the TP53 mutation and the p16 expression. They are poorly differentiated in a histological examination [18]. The mean age of patients with HPV-associated cancers is significantly lower and the contribution of addictions that are considered to be classical risk factors, namely smoking and alcohol abuse is significantly lower or negative; and sexual habits are different: a higher number of sexual partners and oral sex. An association between a tumour and HPV 16 is a documented prognostic factor: locoregionally advanced pharyngeal cancers have a 60% lower risk of mortality and a 30% better 5-year survival rate [19][20][21]. A difference in 5-year overall survival rates is a resultant of many factors: younger age, better general condition, lower burden, a higher response rate to radio and chemotherapy, and a lower risk of second primary cancer. Smoking more than 10 cigarettes daily reduces favourable chances of survival for patients with HPV-associated HNSCC [17,[22][23][24][25]. Despite more aggressive metastases to the cervical lymphatic system the prognosis is still better than for HPVindependent cancers [26]. The role of LR HPV is still controversial. HPV 6 and 11 types are observed in a low percentage of HNSCC, but they are not considered to be ''completely mild'' in the region of the head and neck [27]. 
It has been proven that mild papillomas with HPV 6 and 11 aetiology are subject to malignant transformation following irradiation. This observation is confirmed by an aggressive course observed for papillary cancers of the urinary tract. LR HPV is observed in 25% of HNSCC and this group has poorer outcomes of treatment including radiochemotherapy. An LR HPV coinfection has to be studied further, and such studies may in the future be a reason for changing a treatment strategy (radiation therapy replaced by surgery).

The incidence of HPV-associated HNSCC
In Europe the incidence of HPV-associated HNSCC ranges from 20% in the Netherlands to 41% in Switzerland, 55% in Germany and 62% in France. In the USA the rates are higher, 64% to 72%. Discrepancies in these results stem from the following: (1) differences in the ethnicity and geographical regions, (2) different sensitivity of diagnostic methods used (varied materials: biopsy, scrapings, smears, brush smears, biopsy specimens), (3) different methods to store and process material (fresh, frozen, formalin-fixed paraffin-embedded specimens), (4) methods to detect HPV [28,29]. Since the 1980s the incidence of HNSCC has gradually decreased, in particular in the USA, which is related to the limitation of classic risk factors such as tobacco smoking and alcohol abuse. Along with this tendency it is possible to observe varied numbers of cases depending on the association between HNSCC and HPV. The acute peak of the incidence of HPV-associated HNSCC is observed worldwide; in Europe this peak is higher than in the USA [30]. A significant increase in the number of cases has been observed for primary locations with a high rate of HPV-associated cancers: cancers of the tonsils and base of the tongue, with a parallel increase in HPV infections within the oral cavity and the throat [13,[31][32][33][34][35]. At the same time, in HPV-associated locations there is an increased incidence of cancers in lower-age groups (45-54) in comparison with older cohorts. The increase in the number of cases is the highest among men aged 40-64 years. It can be partially explained by the frequency of HR HPV infections of the genital organs: in the healthy population 29-65% among men and 25% among women. The incidence of oral cavity infections is 10% and 3.6%, respectively. Moreover, there are immunological differences between sexes, and there is a higher number of sexual partners among men [36][37][38].

Risk factors for HPV-associated HNSCC
The reasons accounting for the increase in the number of HPV-associated cancer cases are not clear; however, they seem to be associated with the fact that sexual habits have changed. An HPV infection of the oral cavity is associated with a 4- to 12-times increased risk of HNSCC development. The following have been proven: (1) a direct relation between a viral infection of the genital organs and the presence of an HPV infection in the oral cavity, (2) a lowered age of sexual initiation, (3) a high number of sexual partners, (4) lack of condom use, (5) oral sex. Open mouth kissing is also a source and a favouring factor of viral transmission. Higher rates of tonsil cancers have been observed in partners of women with cervical cancer. Synchronous cancers of the oral cavity have been confirmed in partners infected with the same HPV subtype [25,[39][40][41]. There have been no long-term studies on HPV transmission among sexual partners.
A higher risk of an HPV infection has been confirmed in people with post-transplantation immunosuppression and in HIV-infected individuals.

Precancerous conditions of the oral cavity

The presence of HPV DNA in precancerous lesions of the oral cavity and throat is estimated over a wide range, 0-85%, with HPV 16 and 18 genotypes predominating [42]. Miller 2012 and White 2012 [43,44] identified HPV in 43% of fresh or frozen specimens, and in 12% of paraffin-embedded specimens, from lesions that were clinically assessed as leukoplakia but showed no dysplasia on pathological examination. Verrucous leukoplakia is much more strongly related to HPV; it is a rare and aggressive form of proliferative leukoplakia with a high (90%) risk of malignant transformation, is treatment-resistant and develops mainly in elderly women. Its aetiology is not associated with cigarette smoking or a fungal infection, and HPV 16 was confirmed in 25.5-80% of cases. On the other hand, in lichen planus the contribution of a viral infection has not been unambiguously confirmed, with the exception of cases with a clinical presentation including ulcerations, which have been linked to the HPV 18 genotype [42,45].

Oropharyngeal cancers (OPCs)

The number of OPC cases worldwide is 137,000, and the number of deaths reaches 96,000 annually; 61,500 cancers originate from anatomical areas where cancerogenesis has been documented to be causally linked with HPV (palatal tonsils, base of the tongue). The HPV distribution in patients with OPCs has been estimated over a wide range of 5-70% [9]. The latest meta-analysis [30] indicated that up to 2000 the share of HPV infections was 41%, up to 2004 it was 72%, and after 2004 it increased to 96%. The presence of HPV in OPCs is higher in the USA (60%) than in Europe (40%, EUROCARE project) and other parts of the world (33%). Increasing trends in the number of OPC cases and an increased incidence of HPV infections have been confirmed by all researchers, regardless of the studied population and applied methodology. In the USA the number of HPV-associated cases has increased by 225% and the number of those not associated with HPV has decreased by 50% [46]. Within the last two decades HPV has replaced cigarette smoking and alcohol abuse as the main causative factor of cancerogenesis. HPV virulence as well as the long-term consequences of this infection have been emphasised: the total risk of OPC development in patients with an alcohol problem is 5.5, in those with a smoking addiction 19.5, and in the case of synergistic effects of both addictions 56.5, whereas it is 230.0 in the case of an HPV infection [47]. The majority of HPV-associated OPCs are associated only with HPV 16 (84-87% of palatal tonsil cancers). The remaining types 18, 33 and 35 are responsible for a small number of cases. Approximately 3% of cancers can be caused by an infection with LR HPV types 6 and 11. The effects of an HPV infection on the prognosis in OPC have been documented. Risk reduction for HPV-associated cancers was 15%, and the risk of death was lower by 38% [48]. The ECOG (Eastern Cooperative Oncology Group) phase II protocol and the TROG 02.02 and phase III TAX trials have unanimously indicated that HPV in this location is associated with a significantly better prognosis [49]. With regard to HPV-associated OPCs with stage III/IV disease, ECOG has completed recruitment of patients for phase II trials on deintensification of radiochemotherapy.
Reports on the outcomes of TORS surgery for HPV-associated and HPV-independent cancers are not yet conclusive.

Oral cavity cancers

The role of HPV in the cancerogenesis of oral cavity cancers remains controversial. The meta-analysis of Syrjanen [50] indicated a quadruple increase in the risk of oral cavity cancers in patients with an HPV 16 infection; however, the risk level estimated with this method turned out to be identical to that estimated for malignant transformation of precancerous conditions. The analysis by Ribeiro [51] did not confirm a statistically significant relationship between an HPV infection and oral cavity cancers. The most common type of HPV identified in oral cavity cancers is HPV 16; only reports from South Africa [52] have indicated a role for HPV 18 alone, whereas the coincidence of both subtypes is common [53]. Types that are detected more rarely include HPV 8, HPV 31, HPV 38 and HPV 66 [54]. LR HPV, which is also rarely detected, is a bystander rather than a driver of cancerogenesis. HPV DNA is observed in 25% of oral cavity cancers. HR HPV E6/E7 RNA is present in 85% of HPV-associated cancers and in 33.7% of all cancers of this location. When HPV is set against classic risk factors, cigarette smoking proves more significant (the risk of cancer in smokers and alcohol consumers is significantly higher than in seropositive patients) [16]. There is also no firm evidence to support a better prognosis in HPV-associated cancers [55]; only a tendency without statistical significance has been reported [53]. This primary location has to be studied further in order to define accurately the role of HPV as a potential causative and prognostic factor.

Laryngeal cancers

HPV DNA is detected in approx. 9.6% of normal larynges. The HPV rate observed in cancers of this organ is slightly higher (up to 23%). HPV 16 is the most frequently detected viral type in laryngeal cancers; however, the greatest diversity of types among HNSCC was observed in this location: HPV 18, 26, 31, 33, 39, 36, 45, 51, 52, 58 and 59. No demographic differences were observed. HPV transcriptional activity was reported in only a few cases, although the number of viral copies in tumours was significantly higher when compared to non-malignant lesions. This supports the theory of a "driving" role of HPV in this anatomical location [59]. No effect of HPV on the prognosis in laryngeal cancers was observed [60]. This virus is thought to be one of the promoters of cancerogenesis; however, it is not an independent, isolated causative factor.

Tumours of the maxillo-ethmoid complex

Data regarding asymptomatic carriers of HPV in the nasal cavity are scarce, and in polyps this virus is present in nearly 3% of cases [61]. An aetiological relationship between HPV and inverted papillomas (IP) has been documented; viral DNA is detected in approx. 25% of tumours. The HPV rate increases significantly in IP with high-grade dysplasia and IP with carcinoma. In the aetiology of benign IPs the role of LR HPV has been confirmed, and it has been suggested that HR HPV is responsible for transformation into malignant cancers and their rapid progression [62]. Reports on the role of HPV in tumours of the maxillo-ethmoid complex are rare, are based on small, heterogeneous research groups, and take into account different tumour tissues. HPV DNA is observed in approx. 21-30% of cancers, and HPV 16 is the most frequently detected type.
Final conclusions regarding the effects of HPV on the clinical course and prognosis cannot be made due to insufficient experience collected so far [63]. Salivary gland tumours Literature reports on the role of HPV in the development of salivary gland tumours are scarce, and based on small groups (up to 20 patients); in addition, the study results are inconsistent. It has been detected that HR HPV is a cancerogenesis promoting factor in mucous-epithelial cancers the incidence of which has been on the increase within the last two decades [64]. The presence of HPV DNA has also been observed in Warthin tumours, polymorphic adenomas, lobular cancer [65], and the highest rate was observed in cysticglandular cancers [66]. The significance of HPV in the development of salivary gland tumours has not yet been proven finally; nonetheless, recent reports are sceptical [67]. The screening programme and prevention of HPV-associated cancers Current screening programmes are aimed to detect lesions early, and consequently to limit the incidence and mortality due to cancers. The countries that have introduced cytologi-cal screening have managed to significantly limit the incidence and mortality rates in women due to cervical cancer. An infection with oncogenic types of human papillomavirus is a factor necessary for the initiation and progression of dysplastic lesions in the cervix. It has been demonstrated that cytology screening tests combined with HPV diagnostics are more sensitive with regard to detection of cervical intraepithelial neoplasia (CIN) when compared to cytology screening tests alone. When a precancerous condition is detected appropriately early it is possible to introduce effective, cheap and fast treatment in order to achieve treatment success of almost 100%. As a response to a sudden and unexplained increase in the number of HPV-associated HNSCC cases in 2011 there was an attempt to design first screening tests similar to cytology screening tests for cervical cancer. It was proven [68] that in exfoliative cytology specimens of the oral cavity an HPV infection was observed significantly more rarely in the control group when compared to patients with cancer. Moreover, there is clinical evidence that HPV DNA obtained in a brush smear indicates a higher risk of cancer or a local relapse after the treatment has been completed [69]. In a screening study [70] two groups of patients were compared: those reporting due to complaints in the throat and HIV-infected patients in whom the risk of tonsil cancer is two to six times higher than in a general population; however, no relationship between an HPV 16 infection and atypical manifestation of squamous epithelial cells or signs of dysplasia was observed, consequently, the screening study was assessed negatively. Currently, there are no simple and reliable tools to attempt screening tests for an HPV infection of the oral cavity and oropharynx. There have been no commonly approved screening tests for precancerous conditions of the oral cavity [39]. In the United Kingdom, as a result of the introduction of a routine HPV test using a PCR method as part of a screening programme the prices of tests decreased, and currently, their costs are a fraction of all costs associated with the whole oncological programme; nonetheless, there is no evidence to support the cost-efficacy of such proceedings [71]. The significance of vaccination with a quadrivalent HPV vaccine has been well-known as part of prophylaxis for cervical cancer in women. 
However, there are no data on how to prevent transmission and expression of oncogenic HPV in the oral cavity, although there are reasons to believe that vaccination in this field might be effective [72]. In the USA vaccination against HPV 16 and 18 in boys has been considered; however, based on an initial analysis it cannot be justified when the whole population of women is subject to vaccination, and the cost-efficacy is not sufficient [73]. The problem of vaccination as a prophylactic measure for OPCs is still an open question and multicentre trials are necessary.

Recurrent respiratory papillomatosis (RRP)

RRP in children is the most common non-malignant disease of the respiratory tract in this age group. It is thought that each interaction of the human papillomavirus with a susceptible cell may result in infection [74,75]. When diagnostic capabilities are considered as a criterion of classification, the following three basic types of infection can be distinguished [75,76]: clinical (apparent), with macroscopically visible flat or exophytic, sometimes balloting papillomatous formations; subclinical, visible only during a histological examination; and latent (dormant), with no visible macroscopic or microscopic lesions, where the appropriate diagnosis can be made only based on the results of molecular biology tests. The transition between different infection types is multidirectional and conditioned by the following: the susceptibility of epithelial cells to HPV-associated proliferation, impaired functioning of cell-mediated immune responses, tissue damage, the action of steroid hormones, and the coexistence of infections caused by other viruses [5,77].

RRP epidemiology

In the years 1970-1990 the incidence rate was 0.6-4.3 new cases annually per 100,000 children, and it was lower in Europe when compared to other continents [78]. Based on data from the Centers for Disease Control and Prevention and the Recurrent Respiratory Papillomatosis Task Force, the incidence rate in Europe decreased to 0.24 in 2009 [78]. In the same period the average incidence rate in the USA was 0.73 and ranged from 0.12 to 2.13 depending on the state [79]. Discrepancies in the results are conditioned by many factors: geographical variation, different estimates of morbidity rates in different regions of the world, and the diffuse "distribution" of specific viral types in different populations. In Thailand the same rate is estimated at 2.8 and remains 12 times higher than in European countries, whereas in Africa it is 4. In the USA nearly 1,000 new cases are diagnosed per 305 million people, and in the 62-million population of the United Kingdom 103 patients are under the supervision of all ENT centres [79].

Risk factors

The confirmed presence of pointed or flat condylomas in the genital tract of a mother during a spontaneous labour increases the risk of RRP even 231 times when compared to cases without clinically apparent signs of infection [80]. However, the probability of RRP in a child born under such conditions is less than 1% [81]. A theory describing a vertical route of foetal infection during the passage through an infected birth canal [82,83] is commonly accepted; the estimated risk of HPV transmission from a mother to a child during the passage is 1 in 80 to 1 in 400 births [84]. It has been demonstrated that children with RRP were only exceptionally born by caesarean section [5].
Nonetheless, the role of a caesarean section as a method to prevent HPV transmission appears to be limited. The situation is different for infections with highly oncogenic viruses. If an HPV 16 infection was diagnosed in the mother's cervix, the presence of the same viral type was observed in smears of the upper respiratory tract in as many as 50% of newborns. Clinical studies also indicate an extrasexual route of transmission, confirmed in newborns, infants and small children whose mothers were not infected with HPV. Moreover, the presence of papillomaviruses in the amniotic fluid, umbilical blood and syncytiotrophoblast cells in pregnant women with an active infection in the cervix suggests the possibility of an intrauterine infection. A consequence of a perinatal infection can be RRP that develops after a period of several months or even years during which the virus lies dormant in macroscopically normal tissues [82,84].

Clinical manifestation of RRP

In the course of RRP the number of cases has two peaks: between the 2nd and 5th years of life, which is typical of the clinical form in children, and single cases in the period preceding puberty [85-87]. When the disease develops in the first or second year of life [86], and in particular before the sixth month of life, the prognosis is especially unfavourable, and in this last group the mortality rate reaches even 100% [88,89]. Early disease development is correlated with HPV 11 and is associated with frequent relapses and the need to repeat endoscopic procedures to restore the patency of the respiratory tract at least four times a year. The risk of lesions spreading into the trachea, bronchi and pulmonary parenchyma is increased. When the disease develops later, the clinical course is milder [77,86,89,90]. In older patients HPV 6 is detected more often. In such cases the lesions are usually less numerous and relatively rarely cause critical disturbances associated with respiratory tract obstruction [86].

Identification of HR HPV DNA as a diagnostic tool in HNSCC cases

The significance of the patient's assessment prior to treatment selection and introduction

The assessment of a patient with HNSCC with a primary location in the oropharynx includes molecular diagnostic tests for HPV as well as a classic histological examination. As a result it is possible: (1) to obtain a complete diagnosis, (2) to select HPV-associated cancers, (3) to inform the patient of the nature of the disease and the prognosis, (4) to schedule further treatment [21,91-93]. With regard to current knowledge (ASCO 2012), HPV-associated cancers are subject to a different management model; due to a significantly better response to radiochemotherapy and a better prognosis their treatment has been deintensified, namely radiation doses have been reduced. As a result of this strategy it is possible to avoid complications or reduce side effects and to achieve cure rates as high as previously [17,94,95]. An accurate diagnosis and classification into two different therapeutic pathways are of basic significance in order to compare treatment outcomes that have been achieved so far in different arms of phase II clinical trials, and they are also a starting point for designing phase III trials; this is a potential step towards individualised treatment [71,97]. Treatment "deintensification" and the efficacy of targeted treatment are still under debate [97,98].
A histopathological assessment

In order to prepare a biologically adequate classification based on morphological parameters and molecular markers, the role of the pathologist, and the increasing role of the laboratory diagnostician (who performs some assays that are combined with the medical diagnosis at a later stage), are more and more important. The regulation of diagnostic minima for a complete diagnosis of HPV-associated HNSCC is an urgent task that has not yet been accomplished. Due to the special role of HPV 16 it is recommended to introduce tests to confirm an infection, in particular in the case of OPCs. The determination of the HPV status for HPV-associated cancers can be performed using one test or a combination of several tests. In accordance with the guidelines of the College of American Pathologists (Tab. I and II), the histopathological examination has been supplemented with a requirement to add the p16 INK4A assay, and the rules on how to assess the presence of HPV have been changed. The changes are quoted according to fragments of the protocol on how to prepare diagnostic material.

An assessment of tumour tissue with molecular biology techniques

The general principle used to identify and genotype HPV is based on the detection of hybrids formed by specific DNA or RNA probes with complementary DNA fragments from a tested tissue sample. Differences in the specificity and sensitivity of individual methods are associated with the method of material preparation (direct detection or the need to replicate viral DNA) and with the methods used to detect the hybrids obtained. The test sensitivity is the lowest number of viral DNA copies in a cell that can be detected using a given technique. The PCR (polymerase chain reaction) method is based on the replication of a studied DNA fragment. It makes it possible to detect one copy of the virus per ten cells. Moreover, it allows for simultaneous determination of many HPV types. The price paid for high sensitivity is susceptibility to contamination of a tested sample with "foreign" DNA from outside the sample. GP5+/GP6+ primers are the most commonly used for the detection of HR HPV DNA in a PCR reaction. Moreover, commercially available HPV genotyping tests and sequencing methods are also used. This test can be performed using fresh or frozen tissue specimens as well as paraffin-embedded specimens and smears. Fresh or frozen tissue specimens are the best. When selecting material for tests it is necessary to agree this with the laboratory.

(Table I: Guidelines for completing the protocol; histopathology material for tests.)

Southern blot or dot blot methods have high sensitivity (from 0.1 to 1 DNA copy per cell) and specificity. They are based on the hybridisation of viral DNA with a complementary probe and make it possible to identify the whole viral genome. On the other hand, they require a large amount of material for the analysis, and are complex and time-consuming. A separate group of tests based on the PCR technique includes the analysis of viral transcripts of the E6/E7 genes using RT-PCR (reverse transcriptase PCR). Reverse transcriptase PCR is one of the most sensitive methods of detecting low numbers of mRNA copies in a specimen. Thanks to RT-PCR it is possible to determine the level of viral expression. The test for the viral oncogenes E6 and E7 is considered a "gold standard" by some authors, as these oncogenes interact with cellular proteins such as p53 and pRB.
The viral E7 protein joins with pRB resulting in the release of the E2F factor from the pRB:E2F complex and the expression of proteins that are necessary for DNA replication, whereas the viral E6 protein interacts with the p53 protein. It has been demonstrated that the overexpression of viral E6 and E7 genes is necessary for the cells to start malignant transformation. Viral transcripts can be tested using fresh or frozen tissue specimens. Due to a very poor quality of RNA from paraffin-embedded specimens this material is not recommended for a routine use. In situ hybridisation (ISH) is a method allowing for the analysis of viral transcripts directly in the studied tissue, and it does not require RNA isolation or amplification. The use of RNA probes for HPV E6/E7 makes it possible to analyse integration and transcriptional viral activity in the studied material. Thanks to the analysis of viral transcripts (mRNA HPV) using PCR methods or their direct visualisation in tissues using ISH it is possible to distinguish a chronic infection from a transitory one, and it can be a valuable supplement to diagnostic procedures. Fluorescence in situ hybridisation (FISH) is another method to detect HPV DNA. This method is complex and requires a fluorescence microscope. An additional obstacle is an unassessable image being a result of e.g. a large number of signals of integration between HPV and the host genome. This test is characterised by high specificity but relatively low sensitivity -85-88%. Paraffin-embedded specimens can be used for this test. An immunohistochemical assay for the p16 INK4A expression is a method of a direct analysis of malignant transformation associated with HPV in a cell. It is thought that the p16 INK4A overexpression is a result of binding of the pRB protein by the viral E7 protein and an interaction between the viral E6 protein and the p53 protein as a result of a negative feedback. Many authors claim that this test has very high sensitivity 90-100%, but its specificity is low, 80%, what may translate into a large share (20%) of falsely positive results. Summing up, in patients with OPCs it is absolutely necessary to assess HR HPV DNA. This test allows for viral identification, but it does not make it possible to assess its transcriptional activity. In HR HPV DNA (+) patients viral genotyping with special consideration for HPV 16 and 18 should be considered. The p16 INK4A expression test combined with the HPV DNA test increases the sensitivity of the assay. The HPV mRNA test may be a valuable supplement, but it is not absolutely required. It makes it possible to detect an integrated form of the virus. Among many methods to analyse HPV in a tumour tissue the following are used the most frequently: immunohistochemical (IH) detection of p16 INK4A , in situ hybridisation (ISH) and PCR. In the case of lymph nodes (diagnostic material collected with various techniques) immunohistochemistry is recommended in order to assess HPV 16 and/or p16 INK4A . An attempt to establish a diagnostic algorithm The authors attempting to establish a diagnostic algorithm think that the diagnosis of HPV-associated HNSCC has to be based on two parameters combined: tumour features (HPV DNA or p16) and the HPV E6/7 test [99,100]. A combined method of detecting HPV DNA using a PCR and GP5+/6+ starters, and an immunohistochemical analysis of the p16 INK4A expression has sensitivity of 96-97% and specificity of 94-98%. 
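The sensitivity and specificity quoted above only translate into predictive values once the prevalence of HPV-associated disease in the tested group is taken into account. The sketch below shows this standard calculation; the mid-range sensitivity and specificity follow the figures quoted in the text, while the prevalence values are purely illustrative assumptions rather than data from the cited studies.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence              # true positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Combined HPV DNA + p16 INK4A testing, mid-range values from the text.
sens, spec = 0.965, 0.96
for prev in (0.6, 0.4, 0.1):  # assumed prevalence of HPV-associated disease in the tested group
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

As the loop over assumed prevalences illustrates, the same test yields a noticeably lower positive predictive value in a low-prevalence population, which is one reason combined testing matters most where HPV-associated disease is common.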
A strong relationship between the classification of a tumour as HPV-associated and the prognosis (assessed based on disease-specific survival rates) is confirmed only when both parameters are used together. Moreover, patients who are seronegative for E6/E7 antibodies have a significantly higher risk of death despite p16 INK4A overexpression, as this parameter can be modulated by other causative factors that are independent of HPV [101]. In accordance with the 2011 guidelines of the National Comprehensive Cancer Network for head and neck cancers, the single test recommended for OPCs in order to determine the prognosis is p16 INK4A IHC; its value has been proven, and it is widely available, inexpensive, easy to interpret and has clear recommendations regarding the cut-off value [93]. In order to confirm the value of this method two clinical studies, RTOG-1016 and RTOG-0920, have been started.

An assessment of a brush smear

This test should be considered if tissue material is not available. The assessment of an HPV infection should be performed with the HR HPV DNA test. Due to the very low amount of cellular material that can be obtained from a smear, and the lack of commercially available tests validated using material from ENT smears, this test might not be available.

A suggested management regimen in patients with oropharyngeal cancers (Fig. 1)

Quality standards in diagnostic procedures for HPV, an assessment of their quality and diagnostic value, and laboratory interpretation and authorisation of laboratory results

Collecting and transporting material for HPV DNA tests

A laboratory performing an HPV DNA test provides a test order form and detailed instructions on how to collect, store and transport samples.

Smear collection

A smear should be collected with a brush or a nylon swab from suspected sites, and then suspended in a transport medium that has been validated by the laboratory. It is recommended to secure the material in a buffer suitable for liquid-based cytology and an HPV test. The amount of the transport medium should not exceed 1.5-5 ml. Cells collected with this method are well preserved, and as a result they can be transported at room temperature (15-30 °C) without an adverse effect on the test results. Closed test tubes with cells suspended in a liquid transport medium should be stored in accordance with the manufacturer's guidelines. The collection and transport system always has to be validated against the diagnostic test used in order to avoid preanalytical errors and to provide reliable results. Test tubes with smears have to include the following patient data: full name, PESEL (personal id. no.) and a collection date, or a barcode.

Tissue material

Tissue fragments for the HPV DNA test with a PCR method can be frozen, or suspended in a test tube with a medium for liquid-based cytology and the HPV test and then stored in a fridge (2-8 °C), which provides additional protection. With regard to the transport of frozen material, thawing and refreezing have to be avoided. Material is transported in closed test tubes or containers, in a closed collective package labelled "infectious material".

Test methods

A laboratory uses test methods that correspond to current medical knowledge and have been appropriately validated by this laboratory.
Falsely negative results Falsely negative results can be obtained when there is a too low number of cells collected for a test, or a sample was stored inappropriately or sample transport conditions were inappropriate, or there were analytical errors. In HPV DNA molecular diagnostics it is necessary to use tests with an internal cellular control (i.e. amplification and detection of the b-globine gene) that provides information whether the following steps were performed correctly: smear collection, DNA extraction and amplification processes, and moreover, it protects against issuing a falsely negative result. Falsely positive results High sensitivity of molecular methods to detect HPV DNA is associated with a risk of falsely positive results due to contamination of a tested sample by low amounts of DNA from other patients. Therefore it is recommended to use tests with enzymatic protection against issuing a falsely positive result due to contamination (i.e. UNG and dUTP). Presentation and issuing results of HPV DNA tests The report form presenting results of the HPV DNA test is in accordance with the ordinance of the Minister of Health and has been authorised by a laboratory diagnostician with a title of a specialist in laboratory medical genetics or medical microbiology, with at least 2-year experience in molecular diagnostics. The report form regarding laboratory tests can be transferred in an electronical form what complies with legal requirements. Requirements regarding a laboratory A laboratory has to meet quality standards stated in the ordinance of the Minister Health on quality standards for medical diagnostic and microbiological laboratories, in particular: it prepares, implements and uses procedures to accept, register and attach laboratory labels on study material and makes the procedures available to ordering parties that confirm they have familiarised themselves with these procedures, it performs an internal quality control of tests and participates in an external quality control in accordance with the ordinance of the Minister of Health. NOTE! Laboratory tests performed according to recommendations and principles stated above should always be correlated with a routinely performed histopathological test that is assessed by a medical professional specialising in pathomorphology. When diagnostic procedures are limited only to HPV diagnostics it is not possible to distinguish whether a lesion is mild (and is only proliferative) or whether it is in situ carcinoma or invasive cancer. LR HPV DNA identification in RRP Studies that have so far been conducted in populations of children are not systematised, and are only of an exploratory nature; moreover, they are conducted based on heterogeneous protocols and using different techniques [105]. Factors affecting large discrepancies in results obtained are associated with the sensitivity of test methods used to identify LR HPV DNA [77]. In the table below there are specific features of selected molecular biology techniques commonly used to identify genetic material of the human papillomavirus in tissue specimens (Tab. III). Summary State of knowledge and directions of studies in the field of HR HPV 1. Meta-analyses presented in the literature unambiguously prove the role of HPV in the development of HNSCC, in particular oropharyngeal cancers. 2. Incidence trends unambiguously indicate an increasing number of HPV-associated head and neck cancers, first of all located in the oropharynx. 3. 
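To make the role of the internal cellular control concrete, the sketch below outlines one possible interpretation logic for a qualitative HPV DNA PCR that includes a β-globin control. It is an illustrative simplification, not a validated or manufacturer-specified algorithm, and real assays add further safeguards (for example, the UNG/dUTP contamination protection mentioned above) upstream of this step.

```python
def interpret_hpv_pcr(hpv_target_detected: bool, beta_globin_detected: bool) -> str:
    """Illustrative result interpretation for a qualitative HPV DNA PCR with an internal cellular control."""
    if hpv_target_detected:
        # Target amplification is informative regardless of the control signal.
        return "HPV DNA detected"
    if beta_globin_detected:
        # Adequate cellular material; extraction and amplification worked.
        return "HPV DNA not detected"
    # Neither target nor control amplified: too few cells, degradation or inhibition.
    return "Invalid - repeat sampling (risk of a falsely negative report)"

# Example: a sample with no amplification at all should never be reported as negative.
print(interpret_hpv_pcr(hpv_target_detected=False, beta_globin_detected=False))
```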
The presence of HPV as a biological feature of a cancer is a documented favourable prognostic factor for pharyngeal cancers; however, the role of HPV remains unclear with regard to other primary locations of HNSCC. 4. The method to detect HPV in patients with HNSCC has been prepared and standardised. Routine testing of cancers of the oral cavity and oropharynx for HPV status is recommended as a standard procedure. 5. So far, risk groups have not been defined and no screening tests have been undertaken that would allow early diagnosis of precancerous conditions of the oral cavity and oropharynx in combination with an HPV infection. EUROGIN 2011 [2], a road map presenting the prophylaxis and treatment of HPV-associated diseases, points to the following future directions of studies: 1. The assessment of an HPV infection in HNSCC (located outside the oropharynx). 2. A molecular assessment of the improved results of treatment with radiochemotherapy in the case of HPV-associated HNSCC. 3. An assessment of the population incidence rate of HPV infection of the oral cavity and of HPV distribution in different parts of this anatomical location. 4. A study of the natural history of an HPV infection of the oral cavity. 5. The efficacy of HPV vaccines in the prophylaxis of HPV 16 infections of the oral cavity. 6. The potential use of HPV tests in a screening programme for infections of the oral cavity. 7. A precise characterisation of HPV-positive precancerous conditions. Items not covered by the EUROGIN 2011 programme: 1. Using the level of antibodies against HPV to monitor patients treated for HPV-associated HNSCC; initial reports indicate that levels of E6 and E7 expression which are reduced after treatment in comparison with the baseline might be associated with remission, whereas elevated levels suggest a persistent cancer or a relapse [48,106,107]. 2. The assessment of LR HPV in papillomatous laryngeal cancers; an LR HPV coinfection may be a reason to change the selected treatment (radiation therapy replaced by surgery). Summing up, despite the lack of a definitively established algorithm and fully validated test methods, it is necessary to determine the presence of HPV in cases where the value of such diagnostic procedures has been proven, namely in oropharyngeal cancers and in selected cases of recurrent respiratory papillomatosis.
Researching social innovation: is the tail wagging the dog? Background Social Innovation in health initiatives have the potential to address unmet community health needs. For sustainable change to occur, we need to understand how and why a given intervention is effective. Bringing together communities, innovators, researchers, and policy makers is a powerful way to address this knowledge gap but differing priorities and epistemological backgrounds can make collaboration challenging. Main text To overcome these barriers, stakeholders will need to design policies and work in ways that provide an enabling environment for innovative products and services. Inherently about people, the incorporation of community engagement approaches is necessary for both the development of social innovations and accompanying research methodologies. Whilst the 'appropriate' level of participation is linked to intended outcomes, researchers have a role to play in better understanding how to harness the power of community engagement and to ensure that community perspectives form part of the evidence base that informs policy and practice. Conclusions To effectively operate at the intersection between policy, social innovation, and research, all collaborators need to enter the process with the mindset of learners, rather than experts. Methods – quantitative and qualitative – must be selected according to research questions. The fields of implementation research, community-based participatory research, and realist research, amongst others, have much to offer. So do other sectors, notably education and business. In all this, researchers must assume the mantel of responsibility for research and not transfer the onus to communities under the guise of participation. By leveraging the expertise and knowledge of different ecosystem actors, we can design responsive health systems that integrate innovative approaches in ways that are greater than the sum of their parts. Background Communities and social innovators develop and drive solutions to challenges, empowered by the desire for change. In health, as in many other sectors, innovations have been spurred in response to problems which have been ignored or inadequately addressed by formal systems. Social innovations in health have been further promoted through financial incentives offered through initiatives such as the Grand Challenges [1] or by those offered through global health research funding bodies. However, a limited understanding of the components of an innovation that underpin its success (or failure) can limit the ability to learn from its implementation, or to replicate or scale it up to other communities and populations. This presents a missed opportunity for health systems that could draw lessons for active engagement with communities in designing and implementing health services and interventions that are sustained in communities. Collaborations between health researchers and policy makers, social innovators, and communities have the potential to address this knowledge deficit. Along these lines, a recent forum brought together social innovators, researchers, and other key stakeholders to nurture collaboration and establish a research agenda. The forum provided the opportunity for rich, in-depth discussion and engagement and largely, achieved the planned objectives [2]. However, a number of fault lines were also revealed. These primarily stemmed from the differing priorities and perceived notions of success of the range of stakeholders. 
Social innovators and communities were engrossed in the day-to-day activities required to meet health needs, implement programmes, and create the change they want to see and live. For researchers, adherence to methodological rigour was paramount to ensure that any evidence generated could fit into scientific paradigms for reproducibility. From the perspective of health policy makers, social innovations provided an opportunity to devolve responsibility to service providers that were more acceptable to the community, but also raised the challenge of sustainable financing. Attempts to instrumentalise the utility of social innovation for the various stakeholders risked losing the very values that underpinned and motivated social innovation. In this paper, we explore the features of social innovation that contribute to its success and unpack the challenges of undertaking research within the context of community-driven social innovation. We posit the notion that research needs to be in service to the community and to social innovation, and therefore requires innovation in approaches and design in order to balance rigour against the realities of working with, and responding to, community-driven demand. We also present suggestions for practices that researchers and social innovators could adopt to build and maintain mutually beneficially collaborations. Conceptualising social innovation in health Phills et al. [3] describe social innovation as "a novel solution to a social problem that is more effective, efficient, sustainable, or just than existing solutions and for which the value created accrues primarily to society as a whole rather than private individuals". The arenas of healthcare, education, and environmental sustainability have been ripe areas for innovation [4]. Whilst the creation of locally designed solutions is not a new phenomenon, technological advancements and increasing globalisation have facilitated major leaps in the scale and impact of solutions. Social innovations are wide ranging, encompassing products, services, behavioural practices, and models or policies. Many are a combination of these. Innovations do not have to be new inventions, or new to the world, but their deployment should be novel either to the beneficiary group or in the way in which they are applied. For example, Riders for Health [5], uses a very familiar producta motorcycleto address major challenges in last-mile health delivery, particularly for rural communities. Working with ministries of health, Riders for Health teaches community health workers how to ride motorbikes, as well as basic maintenance and repair skills. Riders also provide transportation services for both medical necessities and people. Now operational in eight countries, each country team operates as an independent organisation with the flexibility to adapt to local contexts and needs. ICHAPP, a Brazilian Indigenous Community Health Agent Professionalisation Programme [5], aims to improve health care services in remote indigenous communities by blending traditional practices and cultural beliefs with biomedical approaches. As well as recognising communities' contextual realities, the accompanying training and certification programme for indigenous community health agents has enabled them to receive a salary for their work. In both these cases, innovations were developed with and adapted to the needs of beneficiary communities, illustrating the social orientation of social innovation in 'both ends and means' [6]. 
Community engagement and social innovation Community engagement has been conceptualised in a number of different ways from Arnstein's seminal Ladder of Participation [7] to Reed's more recent Wheel of Participation [8]. They all seek to address three common features of engagement: 1) Why? What are the motivations for engagement, be they pragmatic (better outcomes), normative (an expectation that stakeholders/ publics should participate in major decisions that affect them), or to enhance trust in decision-making processes?; 2) How is the engagement being carried out? Methods are often presented along a continuum of engagement from communicating information, consulting for feedback, to collaboration and co-production; and, 3) Who is the initiating party? Is the engagement being driven from the bottom-up, or is top-down? Historically, the underlying rationale for adopting community engagement approaches has been inherently value-laden. Arnstein's 'ladder of citizen participation' [7] presents the degree of co-operation and participation between different actors with two rungs of nonparticipation at the base, followed by three rungs of tokenism, and finally three levels of citizen power. The visual of a ladder combined with Arnstein's focus on the potential power (im)balances between different stakeholders implies a scale of 'bad' to 'good' engagement. This perspective is rooted in the deployment of community engagement in situations where power dynamics are unequal and community mobilisation serves as a form of activism. Social innovation, by its very nature, is accommodating of bottom-up endeavours. However, it would be naïve to imply that power imbalances do not occur. They may be driven by factors including unequal availability of resources, access, and expertise. When the type or number of stakeholders engaged in social innovation expands, especially with the introduction of experts, e.g. researchers, or influential stakeholders, power dynamics are likely to come into play. However, there are many ways in which communities and other stakeholders can engage, and methods to deploy, that aim to flatten power hierarchies. These range from individual reflexive practices in which participants consider their own biases and privileges, to facilitation techniques that provide space for debate and accommodate different modes of participation. Depending on the nature of the project and the needs, wants, and other commitments of communities, 'shallower' levels of engagement, such as consultation' may be appropriate or even desired. In response to a values-based approach, and in an attempt to mainstream community engagement, practitioners have increasingly presented utilitarian arguments for the benefits of engagement, including increased initiative effectiveness and sustainability, and economic benefits [9]. This tension continues to be visible in the interactions between social innovators and researchers. An argument grounded in rights or values is sufficient for social innovation because ultimately, it's about people. The challenge for health researchers is that these aren't things that are typically measured. Thus, there is a need to balance the intrinsic value and instrumental benefits of community engagement approaches to produce positive health outcomes while managing the risk of engagement becoming just a tick-box exercise. 
Research challenges at the intersection of social innovation and health Research in the context of social innovation in health is trying to find solutions and approaches that meet community needs and make a unique contribution to the development of resilient health systems. However, what 'success' looks like to the different stakeholders involved can vary. Operationally relevant results, i.e. data that guide intervention iteration, increase efficiencies, and maximise impact, are useful for innovation implementers. For researchers, measures of health outcomes are key while for policy makers, such research is most useful when it addresses not only how and why an intervention works, but how to implement it sustainably. This can present challenges for researchers used to methods that rely on an intervention having fixed and/or controlled variables, such as the 'gold standard' of medical research, randomised controlled trials. The growth of the field of implementation research (IR) has started to address these gaps. Implementation research seeks to understand what, why, and how interventions work in a given context. With its origins in several different research fields, IR has an inherently mixed methods approach to study design as well as being sufficiently flexible to account for and adapt to changes in the intervention being implemented [10]. The focus on users, whether communities, policymakers etc., is another strength of IR, moving research beyond a focus on the production of knowledge. It means that a collaborative approach to research design is integral to any project and requires that community and broader stakeholder engagement approaches are treated as being on a par with research methodologies in terms of importance. This can present challenges, from a lack of knowledgethe approaches are not a standard part of the curriculum for health researchersto a lack of methodological trust [11]. Fields of research practice that are grounded in community and participatory approaches can provide insights into ways of working. For example, realist research approaches [12] seek to understand how the outcomes of complex interventions and programmes are impacted by context. This requires a greater engagement by researchers in understanding the issues from a grassroots perspective. Similarly, methods from participatory action research [13] and community-based participatory research are a valuable additions to a researcher's toolkit [14]. In each of these approaches, co-design is essential. Beyond the sciences, a lot can be learnt from other sectors including design research, business, and education. There is much to gain by venturing beyond the boundaries of individual epistemological backgrounds. In many cases, "we have learned to create the small exceptions that can change the lives of hundreds. But we have not learned how to make the exceptions to the rule to change the lives of millions" [15]. Adoption of innovations by ministries of health and incorporation into health systems is a powerful approach for providing long term sustainability, and potential to scale, for innovations that improve lives. For implementation uptake to be successful an adequate understanding of what's working is needed. This includes identifying the critical components, the human and financial resources required, and economic costings that provide information on return on investment (ROI) or service delivery savings [16]. 
This data is most useful when it also addresses how a given programme fits into a broader policy portfolio or aligns to political priorities. Lacking this suite of information hinders uptake and delays or denies communities access to effective services. Bringing it all together Chipatala Cha Pa Foni (CCPF): Health Centre by Phone is a mHealth programme in Malawi that provides health advice over the phone. The initial idea was the combination of two winning submissions to an innovation competition funded by the Bill and Melinda Gates Foundation. Launched in 2011 by Non-Governmental Organisation (NGO) VillageReach, the concept was further developed in collaboration with traditional leaders, community health workers, and district health staff from the Malawian Ministry of Health. A data-based approach was embedded from the outset, with the implementing organisation working in partnership with a research organisation, Invest in Knowledge, who conducted the project evaluation [17]. This combination of innovators, target communities, policymakers, and researchers has resulted in the refinement of the idea, an expanded focus from maternal and child health to general health advice, service increase from one to 28 districts, and transition of the service from the NGO to the national government [5]. A collaboration with health researchers from the outset of an innovation is the exception rather than the rule. That doesn't mean that a research component cannot subsequently be integrated, however in doing so, researchers must retain the mantel of responsibility for research and not transfer the onus to communities and innovators under the guise of participation. When exploring perceived barriers to collaborating with researchers, social innovators mentioned that many researchers expected them to take on the data collection elements of research which they lacked the capacity (time, resource, and skills) for. Whilst collaborations can present good opportunities for skills transfer and capacity development within communities, they also provide an opportunity for Master's and PhD students to collect data while gaining experience of health innovation in communities. This could be a way to balance the tension between adhering to 'standards of evidence' and not extinguishing innovation or the enthusiasm of community-generated projects. Conclusions Partnerships between social innovators, communities, policy makers and researchers can leverage the experience and expertise of each to gain and advance vital knowledge. To be effective brokers at this intersection, researchers need to be willing to enter the process as learners, not just experts. Too often we think about participatory models of engagements from a single perspective: the expert researcher coming in and engaging communities with their work. By adopting a more holistic approach, valuing each stakeholder as a holder of expertise as well as a recipient of new information, and emphasising the co-creation of knowledge and convergence of goals, we improve the chances of long-term behaviour change. The ultimate goal of achieving good health and wellbeing [18] unifies communities, innovators, researchers and policymakers. Combined with a global push towards 'people-centred healthcare' [19], the flourishing number of social innovations in the delivery of health services driven directly by communities, grassroots organisations, and social innovators should not be surprising. 
To achieve meaningful progress, all stakeholders are going to have to come together with no one element dominating. By leveraging the expertise and knowledge of different ecosystem actors, we can design responsive health systems that integrate innovative approaches and take us closer to achieving 'health for all'.
Therapeutic Effects of Berberine on Liver Fibrosis are associated With Lipid Metabolism and Intestinal Flora Liver cirrhosis is a form of liver fibrosis resulting from chronic hepatitis caused by various liver diseases, such as viral hepatitis, alcoholic liver damage, nonalcoholic steatohepatitis, autoimmune liver disease, and by parasitic diseases such as schistosomiasis. Liver fibrosis is the common pathological base and precursors of cirrhosis. Inflammation and disorders of lipid metabolism are key drivers in liver fibrosis. Studies have determined that parts of the arachidonic acid pathway, such as its metabolic enzymes and biologically active products, are hallmarks of inflammation, and that aberrant peroxisome proliferator-activated receptor gamma (PPARγ)-mediated regulation causes disorders of lipid metabolism. However, despite the ongoing research focus on delineating the mechanisms of liver fibrosis that underpin various chronic liver diseases, effective clinical treatments have yet to be developed. Berberine (BBR) is an isoquinoline alkaloid with multiple biological activities, such as anti-inflammatory, anti-bacterial, anti-cancer, and anti-hyperlipidemic activities. Many studies have also found that BBR acts via multiple pathways to alleviate liver fibrosis. Furthermore, the absorption of BBR is increased by nitroreductase-containing intestinal flora, and is strengthened via crosstalk with bile acid metabolism. This improves the oral bioavailability of BBR, thereby enhancing its clinical utility. The production of butyrate by intestinal anaerobic bacteria is dramatically increased by BBR, thereby amplifying butyrate-mediated alleviation of liver fibrosis. In this review, we discuss the effects of BBR on liver fibrosis and lipid metabolism, particularly the metabolism of arachidonic acid, and highlight the potential mechanisms by which BBR relieves liver fibrosis through lipid metabolism related and intestinal flora related pathways. We hope that this review will provide insights on the BBR-based treatment of liver cirrhosis and related research in this area, and we encourage further studies that increase the ability of BBR to enhance liver health. INTRODUCTION Liver cirrhosis is a major global disease burden and leads to increased morbidity. (de Marco et al., 1999). The major causes of liver cirrhosis are viral hepatitis, alcoholic liver diseases and nonalcoholic fatty liver diseases, and some parasitic diseases. (Pinzani et al., 2011;Tsochatzis et al., 2014;Itaba et al., 2019). Liver fibrosis is the pathological hallmark and precursor of cirrhosis, and it is dependent on the activation of hepatic stellate cells (HSCs). (Tsuchida and Friedman 2017;Kisseleva and Brenner 2021). Recently, it has been determined that the balance between liver tissue regeneration and fibrosis, and their relationship with disorders of the liver microenvironment, play an important role in the pathology of liver fibrosis. (Poisson et al., 2017;Faillaci et al., 2018). Aberrant inflammatory processes in the liver and primary metabolic pathways in hepatocytes, together with intestinal flora, shape the liver microenvironment, with lipid metabolism playing a crucial role in this regard. Liver cirrhosis is thus a complicated, multi-phase, multi-pathway disease, whose pathogenesis remains to be fully characterized. MECHANISMS OF HEPATIC STELLATE CELLS ACTIVATION AND LIVER FIBROSIS Liver fibrosis is a wound healing process that is triggered by chronic liver damage. 
A central event of fibrogenesis is the transdifferentiation of quiescent HSCs to a myofibroblastic phenotype. (Friedman 2008;Wilson et al., 2015). Vitamin A-rich, lipidstoring HSCs show increased proliferative activity and fibrotic potential when activated by various types of liver stimuli. (Iredale 2007;Ramachandran et al., 2012). These activated HSCs can also release inflammatory signals to maintain myofibroblast activity and recruit adjacent normal cells for further activation and extracellular matrix (ECM) deposition, resulting in liver metabolism dysfunction and intrahepatic reconstruction. (Tsuchida and Friedman 2017;Henderson et al., 2020;Ginès et al., 2021;Kisseleva and Brenner 2021). HSC activation is controlled by multiple mechanisms, such as Hedgehog signalling, nuclear factor kappa B (NF-κB) signalling and mitogen-activated protein kinase (MAPK) activity. (Hellerbrand et al., 1998;Choi et al., 2009;S et al., 2009;Kim et al., 2017). Abnormal endoplasmic reticulum (ER) stress, oxidative stress, autophagy and ferroptosis, accompanied by inflammasome-associated signals, are common features of fibrogenesis. (Koo et al., 2016;Kim et al., 2017;Yang et al., 2021;Yi et al., 2021;Zhang et al., 2021). However, the pathogenesis of HSC activation and liver fibrosis remains unclear. A growing body of evidence has revealed that HSC activation and fibrogenesis are associated with the versatility of liver metabolism and the tightly controlled homeostasis of intestinal flora. The dysregulation of lipid metabolism often presents as an imbalance between lipid synthesis, uptake and oxidation, which in turn causes liver inflammation and fibrosis. (Moustafa et al., 2012;Ristić-Medić et al., 2013;Arain et al., 2017). Acetyl-CoA carboxylase (ACC) inhibition has been shown to perturb fatty acid β-oxidation and de novo lipogenesis to reduce the sources of energy for HSC activation. (Hernández-Gea and Friedman 2012;Trivedi et al., 2021). Interestingly, peroxisome proliferatoractivated receptor-gamma (PPAR-γ) and sterol regulatory element binding protein-1c (SREBP-1c), which are markers of quiescent HSCs, have been shown to modulate the adipogenic programme and thereby regulate HSC activation. (Eberlé et al., 2004;Tsukamoto 2005;Tsukamoto et al., 2006;Shao et al., 2016). The activation of farnesoid X receptors (FXRs) can suppress HSC activation and liver fibrosis via the reduction of triglycerides. (Maloney et al., 2000;Fiorucci et al., 2004). Surprisingly, it has also been suggested that intestinal flora can serve as independent regulators of liver metabolism, thereby influencing the progression, prognosis and regression of liver fibrosis. (Wei et al., 2013). This insight has been conceptualised as the gutliver axis and has been a focus of recent studies on fibrogenesis. (Albillos et al., 2020;Bajaj and Khoruts 2020). Chen et al. showed that compared to healthy patients, cirrhosis patients have higher proportions of pathogenic Enterobacteriaceae and Streptococcaceae and lower proportions of beneficial Lachnospiraceae. (Chen Y. et al., 2011). In addition, the experimental antibiotic-mediated reduction of intestinal flora has been shown to decrease the abundance of microorganisms in the liver microenvironment, thereby alleviating liver fibrosis. (Seki et al., 2007). Conversely, germ-free mice show more severe ECM deposition and liver fibrosis than normal mice. (Mazagova et al., 2015;Henderson et al., 2020). 
These results suggest that some intestinal flora are hepatoprotective and others are harmful, and that the dysbiosis of intestinal flora is a key driver of HSC activation and liver fibrosis. Thus, lipid metabolism and intestinal flora may warrant exploration as targets for drugs for the treatment of liver fibrosis. Despite mounting histological evidence suggesting that liver fibrosis is reversible, (Ni et al., 2014;Mogler et al., 2015) no methods can completely halt the pathological process. Current drugs applied in liver fibrosis treatment primarily target inflammation and oxidative stress, with limited effects. (Czaja 2014). Therefore, there is an urgent need to develop and validate effective therapies that specifically target liver fibrosis. Data show that berberine (BBR) functions via multiple networks to protect the liver, resist fibrosis and improve metabolism. (Zhang BJ. et al., 2008;Sun et al., 2009;Kumar et al., 2015;Zhang et al., 2016). In this review, we examine the underlying lipid metabolism-related and intestinal flora-related mechanisms of the biological activity of BBR and its therapeutic potential against liver fibrosis. SOURCES, BIOAVAILABILITY AND PHARMACOKINETIC CHARACTERISTICS OF BERBERINE AND ITS DERIVATIVES BBR, or 2,3-methylenedioxy-9,10-dimethoxyprotoberberine chloride, is a quaternary ammonium salt with a molar mass of 336.36122 g/mol. (Caliceti et al., 2016). It is one of a group of benzylisoquinoline alkaloids found in plants of the genus Berberis, such as B. vulgaris (barberry), Phellodendron amurense (Amur cork tree), and Coptis chinensis (Chinese goldthread), and the latter two species are used in Chinese herbal medicines. (Yin et al., 2008). BBR has a plethora of biological activities, such as antibacterial, anti-inflammatory, (Kumar et al., 2015) anticancer, (Pandey et al., 2008) antidiabetic, (Zhang Y. et al., 2008) and hypolipidemic (Kong et al., 2004a) activities. However, BBR self-aggregates, does not effectively permeate into tissues, is subject to efflux and hepatobiliary re-excretion, (Feng et al., 2015) and undergoes first-pass processing in the small intestine. Thus, BBR is poorly absorbed in the body, and has an absolute bioavailability of 0.36%. (Liu et al., 2010a). BBR is metabolized in the liver by oxidative demethylation, which is performed by the cytochrome P450 enzyme system (mainly by CYP2D6, CYP1A2 and CYP3A4), to yield four major phase I metabolites (demethyleneberberine, berberrubine, jatrorrhizine, and thalifendine); these are subsequently glucuronidated via UDP-glucuronosyltransferase (UGT) to their corresponding phase II metabolites. (Guo et al., 2016;Liu et al., 2016). These BBR metabolites act on the same targets as BBR (e.g., AMPK and the low-density lipoprotein receptor (LDLR)) but with a lower potency. (Li et al., 2011a). Ultimately, BBR and its derivatives are excreted primarily by hepatobiliary and renal pathways. Thus, there is a need for effective strategies to improve the oral bioavailability of BBR to enable its effective use in clinical settings. ANTI-FIBROSIS EFFECTS OF BERBERINE IN THE LIVER Several studies have demonstrated the efficacy of BBR against fibrotic diseases in vivo, including pulmonary fibrosis, (Chitra et al., 2015) myocardial fibrosis, (Zhang et al., 2014) renal fibrosis, and adipose tissue fibrosis, (Xu X.
et al., 2021) and multifaceted causal relationships illustrate the efficacy of BBR against liver fibrosis. (Bansod et al., 2021). As a multifunctional drug used in traditional Chinese medicine, berberine (BBR) can be used to treat various liver diseases. (Yang et al., 2021). The latest study from our team shows that BBR is a potential anti-liver fibrosis agent. In fibrotic mouse models, we found that BBR alleviates liver fibrosis by inducing ferrous-ion redox reactions to activate reactive oxygen species (ROS)-mediated ferroptosis in hepatic stellate cells, which suggests a possible strategy for the treatment of liver fibrosis. (Yi et al., 2021). Similar effects of BBR in carbon tetrachloride (CCl4)-induced liver fibrosis models were also demonstrated by another team recently. (Bansod et al., 2021). The activity of BBR against these multifactorial chronic diseases may be attributable to its multitargeted mode of action. (Zhang et al., 2011). Inflammation and oxidative stress are key drivers of liver fibrosis, and it has been clearly demonstrated that BBR has anti-inflammatory and anti-oxidative activities. (Zhou et al., 2008;Jeong et al., 2009;Shang et al., 2010). It is therefore unsurprising that the activity of BBR against liver fibrosis has been explored in many studies during recent years. (Zhang BJ. et al., 2008;Sun et al., 2009;Zhang et al., 2016). As shown in Figure 1, various mechanisms of action of BBR have been widely explored, such as its regulation of HSC activation, oxidative stress, inflammation, lipid metabolism, AMPK and ER stress, and NF-κB- and PPARγ-related signalling pathways. FIGURE 1 | Therapeutic effects of berberine (BBR) on liver cirrhosis are associated with lipid metabolism and intestinal flora. BBR is converted to dihydroberberine (dhBBR) and other metabolites by the action of nitroreductase or specific intestinal microorganisms. dhBBR, other metabolites and unmetabolised pro-BBR in turn act on intestinal flora (such as anaerobes) to regulate the microorganism ecosystem and concentrations of intestinal metabolites, such as short-chain fatty acids. Unmetabolised pro-BBR, BBR derivatives and intestinal metabolites enter the liver through the portal vein, and thereafter relieve liver fibrosis by modulating lipid metabolism and regulating hepatic signalling. The potential mechanisms by which BBR reduces fibrosis include the regulation of oxidative stress, ER stress, AMPK, NF-κB and PPARγ signalling and the modulation of immune and inflammatory responses through the production of lipid mediators. Direct Effects of Berberine on Liver Fibrosis The fundamental feature of liver fibrosis is the abnormal activation of HSCs, and BBR has been shown to be a potential treatment for thioacetamide (TAA)-, CCl4-, ethanol- and high-cholesterol-induced liver fibrosis models; in these contexts, it likely acts by suppressing HSC activation and downregulating alpha-smooth muscle actin (α-SMA) and transforming growth factor-β1 (TGF-β1) levels. (Sun et al., 2009;Domitrovic et al., 2013;Eissa et al., 2018;Bansod et al., 2021). Previous studies have indicated that the direct beneficial effects of BBR involving modulation of the expression of multiple genes involved in HSC activation, cholangiocyte proliferation and liver fibrosis are linked to the downregulation of two important ribonucleotide molecules that promote liver fibrosis progression: microRNA34a and long noncoding RNA H19.
Another commonly reported mechanism is the induction of HSC cycle arrest in G1 phase, which inhibits HSC activation and prevents liver fibrosis. In addition, BBR has been revealed to have direct antifibrotic activity in bile duct ligation-induced liver fibrosis, due to its suppression of HSC activation, and (partly) due to its inhibition of the AMPK signalling pathway. However, other studies have found that BBR exerts hepatoprotective effects and prevents liver fibrosis by activating the AMPK signalling pathway. (Wang et al., 2016;Bansod et al., 2021). BBR was also shown to activate the AMP-activated protein kinase (AMPK) pathway and inhibit macrophage polarisation and TGF-β1/Smad3 signalling, thereby alleviating tissue fibrosis. (Xu X. et al., 2021). ER stress may be another target of BBR treatment, and it has indeed been confirmed that a reduction in ER stress was the most logical explanation for the fact that BBR hinders the progression of hepatic steatosis to fibrosis. Moreover, BBR was shown to directly relieve liver injury-induced hepatic metabolic disorders by decreasing ER stress in hepatocytes (Yang et al., 2021), and a reduction of oxidative ER stress mediated by the inhibition of Akt/FoxO1 signalling has been associated with BBR treatment of liver fibrosis. (Bansod et al., 2021). In other work, Zhang et al. reported that BBR prevents hepatic fibrosis by regulating the antioxidant system and lipid peroxidation in multiple hepatotoxic factor-induced fibrosis models, which was reflected by improved liver function, an increased antioxidant index and a decrease in fibrosis markers. (Zhang BJ. et al., 2008;Bansod et al., 2021). BBR-mediated normalization of liver function, suppression of inflammation, amelioration of ECM deposition and prevention of fibrosis correlate with NF-κB and PPARγ regulation. (Cao H. et al., 2018). Many anticancer agents, such as methotrexate, (Sadeghian et al., 2018) doxorubicin and cyclophosphamide, (Germoush and Mahmoud 2014) are hepatotoxic (and thus cause hepatitis, steatohepatitis, liver cell necrosis, liver fibrosis or cirrhosis), and it is imperative to identify ways to limit this hepatotoxicity. It is therefore encouraging that anticancer drug-induced liver histopathological changes, including fibrosis, are significantly decreased by BBR treatment in animal studies. (Germoush and Mahmoud, 2014). Orally administered BBR displayed therapeutic effects in cirrhotic patients in a 1982 Japanese clinical study, with these effects being due to BBR inhibiting intestinal bacterial tyrosine decarboxylase. (Watanabe et al., 1982). Moreover, some randomized, placebo-controlled trials have found that BBR has positive effects in hyperlipidemic patients with virus hepatitis-related cirrhosis. (Riccioni et al., 2018). However, there are few clinical reports proving that BBR can alleviate cirrhosis, and properly designed clinical trials must be performed to determine this. To this end, our group is currently performing a randomized controlled trial to assess whether BBR can trigger the regression of hepatitis B-related fibrosis (ChiCTR1900023426), and our preliminary results are encouraging. Effects of Berberine Metabolites on Liver Fibrosis As the absolute bioavailability of BBR is extremely low (<1%) and more than half of the pro-BBR is not absorbed by the intestine, BBR is converted by intestinal flora into absorbable metabolites such as dihydroberberine (dhBBR), oxyberberine (OBB), canadine and other compounds. (Enriz et al., 2006;Liu et al., 2010a;Chen W.
et al., 2011;Feng et al., 2015). Two of these metabolic products of BBR, dhBBR and OBB, exhibit superior anti-inflammatory and anti-oxidant functions compared to pro-BBR as they modulate intestinal flora and inhibit TLR4-MyD88-NF-κB and MAPK signalling, resulting in the reduction of levels of the pro-inflammatory cytokines tumour necrosis factor (TNF)-α, interleukin (IL)-17, interferon (IFN)-γ, IL-1β and IL-6 and immunoglobulin IgA. (Tan et al., 2019;Li et al., 2020). Previous studies have revealed that dhBBR reduces inflammation via an NOD-like receptor pyrin domain-containing 3 (NLRP3) inflammasome-related mechanism, which likely reduces the release of caspase-1, apoptosis-associated speck-like protein (ASC) and IL-1β to inhibit pyroptotic cell death, which is a form of programmed cell death that occurs in liver fibrosis. (Xu et al., 2021a;de Carvalho Ribeiro and Szabo, 2021). dhBBR may even have better anti-sclerotic effects than BBR. (Chen et al., 2014). It has been reported that OBB treatment enhances superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GSH-Px) activities and decreases reactive oxygen species (ROS), malondialdehyde (MDA) and myeloperoxidase (MPO) concentrations to reduce oxidative stress. Liver function recovery mediated by OBB might therefore hinder the progression of liver diseases and promote liver regeneration. OBB has also been shown to ameliorate the pathological deterioration of adipocytes and hepatocytes via the AMPK pathway and stimulate energy expenditure to control lipid homeostasis at smaller dosages than BBR. Moreover, OBB was demonstrated to inhibit macrophage migration and trigger a phenotypic conversion of M1 macrophages to M2 macrophages to relieve the inflammatory burden of the liver. Surprisingly, the BBR derivatives dhBBR, canadine, stylopine and coptisine were reported to inhibit TGF-β1-induced collagen secretion under in vitro fibrotic conditions and possess anti-inflammatory, anti-fibrotic, wound-healing-promoting and cytoprotective activities. (Pietra et al., 2015). In summary, the metabolites of BBR appear to have similar effects to those of the BBR prodrug in terms of anti-inflammatory, anti-oxidant and anti-fibrosis activities. Moreover, the former appear safer and more efficacious than BBR itself. Thus, BBR and its derivatives must be examined in future research on liver fibrosis. BERBERINE ALLEVIATES LIVER FIBROSIS BY MODIFYING LIPID METABOLISM Pharmacological and clinical evidence has clearly demonstrated the efficacy of BBR in the treatment of metabolic diseases, including non-alcoholic fatty liver disease and hyperlipidaemia. These effects are partly based on the regulation of various metabolic processes, such as the inhibition of lipogenesis and adipose tissue fibrosis and the mechanical reduction of hepatic steatosis. (Xu X. et al., 2021). The liver is a major site of lipid metabolism and there is a potential link between liver fibrosis and disorders of lipid metabolism. Moreover, the dysregulation of arachidonic acid metabolic pathways is partly responsible for disorders of the liver microenvironment, which lead to liver fibrosis or cirrhosis with various etiologies. (Hayashi et al., 2001;Song et al., 2008;Isomoto 2009;Ishihara et al., 2012;Ristić-Medić et al., 2013;Arain et al., 2017). Rather than only being a consequence of liver cirrhosis, dysfunctional lipid metabolism forms a vicious cycle with cirrhosis. (Moustafa et al., 2012;Gaggini et al., 2019).
Changes in the lipid profiles of patients with chronic liver disease may indicate a progression towards fibrosis, and certain lipid profiles represent different stages of liver fibrosis. (Gaggini et al., 2019). Yang et al. recently reported that BBR improves lipid metabolic disorders in tunicamycin-induced liver injury. (Yang et al., 2021). BBR could also significantly reduce hepatic lipid accumulation by modulating fatty acid synthesis and metabolism to prevent the progression of non-alcoholic steatohepatitis and liver fibrosis. Several latent mechanisms, such as oxidative stress (Zhang BJ. et al., 2008) and ER stress (Yang et al., 2021) mechanisms, have been extensively explored for their roles in the ability of BBR and its metabolites to treat hepatic injury, progressive fibrosis and cirrhosis, but few details are known on the mechanism by which BBR regulates lipid metabolism in liver fibrosis patients. The administration of BBR has been found to simultaneously mitigate the expression of genes related to lipogenesis, inflammation and fibrosis; this represents another possible mechanism underpinning the effect of BBR against liver fibrosis, and is likely related to hypolipidemic mechanisms. Non-alcoholic fatty liver diseases (NAFLDs) are chronic progressive diseases, and approximately one third of NAFLD cases progress from hepatitis to non-alcoholic steatohepatitis (NASH) to liver fibrosis or cirrhosis. (Cicero et al., 2018). A meta-analysis of the efficacy of BBR in NAFLDs found that BBR delayed or repressed the fibrotic process in the development of NAFLDs by improving lipid parameters and alleviating hepatic steatosis. Moreover, clinical findings have indicated that BBR improves liver function in hyperlipidemic patients with alcoholic liver cirrhosis or hepatitis C cirrhosis by creating a positive feedback loop with serum cholesterol, triglyceride and low-density lipoprotein cholesterol (LDL-C), without causing any adverse effects. (Zhao et al., 2008a). Regulation of Triacylglycerol Metabolism by Berberine Elevated concentrations of triglycerides are a key contributor to lipid profile disorders and metabolic syndrome, and the ability of BBR to decrease hepatic and blood concentrations of triglycerides has been convincingly proven in both animal experiments and human studies. (Hu et al., 2012;Li et al., 2018). As such, BBR is used in many countries as a lipid-lowering drug for hyperlipidemia treatment. (Affuso et al., 2010). Animal experiments demonstrated that the pre-administration of BBR can reduce triglyceride accumulation in the liver caused by tunicamycin administration, thus treating liver injury. In particular, compared with the control group, the BBR group showed downregulated expression of the lipid metabolism-related gene stearoyl-coenzyme A desaturase 1 (SCD1). (Yang et al., 2021). The triglyceride-reducing efficacy of BBR is attributable to its decreasing triglyceride biosynthesis and increasing triglyceride oxidation. AMPK is essential for the control of lipid metabolism in terms of lipogenesis or lipid degradation, due to its effects on transcription factors and metabolic enzymes. (Fryer and Carling 2005). BBR also decreases the deposition of lipids in the liver by regulating AMPK, which balances fatty acid biosynthesis and oxidation. (Boudaba et al., 2018). Zhu et al.
discovered that BBR can activate the AMPK-SREBP-1c pathway, which results in downregulation of the expression of hepatic stearoyl-CoA desaturase 1 and other triglyceride (TG)-biosynthesis-related genes, and in the attenuation of hepatic TG deposition, which alleviates NAFLD. Animal studies show that BBR can improve high-fat diet-induced increases in serum triglyceride concentrations, thereby ameliorating hepatic steatosis and fibrosis via the SIRT3/AMPK/ACC pathway. (Yu-pei et al., 2019). Moreover, high-fat diet-induced hepatic steatosis is significantly inhibited by BBR treatment, as reflected by the upregulated expression of proteins implicated in fatty acid oxidation, including ACC and carnitine palmitoyltransferase IA. (Zhang et al., 2019b). BBR can also regulate the LDLR pathway, by which BBR downregulates fatty acid biosynthesis genes and upregulates fatty acid oxidation genes in adipocytes. (Lee et al., 2006;Lee et al., 2007). Xu et al. also found that BBR could improve systemic lipid homeostasis by promoting fatty acid β-oxidation; specifically, it causes the deacetylation of long-chain acyl-CoA dehydrogenase via SIRT3 activation. A meta-analysis of clinical trials was consistent with the evidence reviewed above, as it found that BBR lowered blood TG concentrations in a dose-dependent manner. (Dong et al., 2013). Therefore, the regulation of triacylglycerol metabolism may be a critical part of the mechanism of action of BBR (Figure 2). Regulation of Cholesterol Metabolism by Berberine The ability of BBR to decrease cholesterol concentrations was first described in human, animal and cell tests in 2004 (Kong et al., 2004a), and BBR was subsequently found to have the same effect in diabetes mellitus (Zhang Y. et al., 2008) and cirrhosis (Zhao et al., 2008a) patients. It was also found that the phase I BBR metabolite berberrubine may decrease cholesterol concentrations by targeting LDLR expression. (Cao S. et al., 2018). The utility of BBR as a cholesterol-lowering drug has been consistently validated in clinical research, and it is widely used as a drug in Asian populations. Clinical trials and diverse disease models have been designed to confirm the beneficial effects of BBR on the regulation of cholesterol homeostasis. Abnormal cholesterol homeostasis was reversed to varying degrees after BBR intervention, and this was extensively probed in human studies and in hyperlipidemic and diabetic animal models. (Wang and Zidichouski 2018). Numerous randomized controlled trials have demonstrated that BBR supplementation improves blood profiles of total cholesterol, LDL-C, and high-density lipoprotein cholesterol (HDL-C), although some studies have not observed an effect on HDL-C. (Kong et al., 2004a;Derosa et al., 2013;Wei et al., 2016). Moreover, the ability of statins to reduce cholesterol concentrations is significantly enhanced by BBR. (Moss and Ramji 2016). The mechanisms by which BBR regulates cholesterol concentrations are related to anti-inflammatory processes or the post-transcriptional upregulation of LDLR expression. (Kong et al., 2004a;Pirillo and Catapano 2015). Proprotein convertase subtilisin/kexin type 9, which controls LDLR degradation, is inhibited by BBR, and is thus linked to the ability of BBR to decrease cholesterol concentrations. (Barbagallo et al., 2015).
In addition, the adenosine triphosphate-binding cassette transporter A1 gene, which is involved in cholesterol efflux, is upregulated by BBR administration. (Lee et al., 2010). Similarly, BBR accelerates cholesterol excretion by inhibiting adipocyte enhancer-binding protein 1 or augmenting the cholesterol-binding receptor, which accounts for its hepatoprotective properties. (Zarei et al., 2015) (Figure 2). Effects of Berberine on the Arachidonic Acid Pathway Arachidonic acid is an essential unsaturated fatty acid that is stored under physiological conditions in cell membranes in the form of phospholipids. It is released from the phospholipid pool under various stimuli with the aid of phospholipase A2 (PLA2), and is subsequently converted into a wide variety of biologically active metabolites that induce the inflammatory cascade, such as 15-deoxy-Δ12,14-prostaglandin J2 (15d-PGJ2), thromboxane B2 and prostaglandin E2 (PGE2). (Funk 2001). Cyclooxygenase (COX), lipoxygenase (LOX) and cytochrome P450 (CYP450) are key enzymes in the metabolism of arachidonic acid. (Funk 2001;Calder 2009). Some metabolic enzymes involved in the arachidonic acid pathway can be inhibited by BBR, and this has been illustrated in various pathological processes. Specifically, BBR can affect the activity of metabolic enzymes such as PLA2 (Huang et al., 2002;Li et al., 2015;Yarla et al., 2016a;Zhao et al., 2017), arachidonate 5-lipoxygenase (5-LOX) and COX-2 (Guo et al., 2008;Feng et al., 2012;Li et al., 2012;Wang and Zhang 2018), and in turn affects the production of downstream metabolites such as PGE2 and 5-hydroxyeicosatetraenoic acid to regulate disease course. (Huang et al., 2002;Zeng et al., 2011). A MetaboAnalyst system analysis showed that the arachidonic acid pathway affects the biological activity of BBR in cancer interventions. For example, the anti-hepatocellular carcinoma effects of BBR involve inhibition of cytosolic PLA2 and COX-2. Extensive studies by various research groups have proven that BBR inhibits COX-2 expression and thereby decreases the production of PGE2. (Kuo et al., 2005; Singh et al.). BBR is also a major element of a traditional Chinese medicine recipe, and in this form has been found to inhibit the expression of COX-2 and 5-LOX, and decrease the production of the inflammatory metabolites PGE2 and leukotriene B4, thereby ameliorating the effects of the inflammatory cascade. Furthermore, studies of metabolic enzymes (particularly COX-2) have suggested that BBR alleviates liver fibrosis in an arachidonic acid pathway-dependent manner. Domitrović et al. discovered that BBR provides protection against CCl4-induced liver injury in a concentration-dependent manner, which is partly attributable to a reduced COX-2-related inflammatory response. (Domitrović et al., 2011). Similarly, the suppression of COX-2-driven inflammatory responses is also involved in the protective effects of BBR against drug-induced hepatotoxicity. (Germoush and Mahmoud 2014). Liver fibrosis is preceded by inflammation and can ultimately lead to liver cancer, and both of the latter have been widely reported to benefit from BBR treatment, due to its inhibition of the arachidonic acid pathway, (Domitrović et al., 2011;Germoush and Mahmoud 2014;Li et al., 2015;Yarla et al., 2016a;Zhang F. et al., 2019) which contains calcium-independent PLA2 and COX-2.
These findings regarding the pathological development of chronic liver diseases and the biological function of BBR suggest that BBR and its derivatives could be used to treat liver fibrosis via arachidonic acid-related mechanisms (Figure 2). Peroxisome Proliferator-Activated Receptor Gamma as a Potential Target of Berberine It is well accepted that PPARγ is a key molecule involved in the pathogenesis of liver fibrosis. (Zardi et al., 2013). PPARγ is encoded by the gene PPARG, and agonists of PPARG (e.g., IFC305 and pioglitazone) obstruct liver fibrosis by inhibiting HSC activation and regulating the expression of adipogenic- and fibrogenic-related genes. (Yuan et al., 2004;Perez-Carreon et al., 2010). PPARγ is also a crucial transcriptional regulator of genes involved in lipid metabolism, liver fibrosis, fat metabolism and adipocyte differentiation for adipose tissue development and functional maintenance. (Hazra et al., 2004;Ueki et al., 2004). Interestingly, BBR regulates lipid metabolism by precisely controlling the transcription and translation of nuclear receptors. (Zhou et al., 2008). PPARs have one of two different types of effects on fatty acid metabolic processes, depending on their subtype. On the one hand, BBR inhibits TG production by acting with PPARα to enhance the expression of the gene coding for the fatty-acid oxidation enzyme carnitine palmitoyltransferase IA. (Zhou et al., 2008;Yu et al., 2016). On the other hand, PPARγ supports de novo fatty acid synthesis and fatty acid uptake. (Zhou et al., 2008). Zhou et al. showed that reduced PPARγ expression may be associated with the downregulation of adipogenic genes in the presence of BBR. (Zhou et al., 2008). High-throughput screening assays have also suggested that natural extracts of BBR contain potential agonists of all PPAR subtypes (Xia et al., 2013;Tu et al., 2016;Yu et al., 2016) and that these can regulate the progression of liver diseases by acting as ligands. Interestingly, arachidonic acid metabolic products have also been reported to be PPARγ ligands and transcriptional activators (Xu et al., 1999;Choi and Bothwell 2012) that inhibit the activation of inflammatory signals, thereby modulating hepatic fibrosis via PPARγ regulation (Figure 2). The anti-fibrosis effect of PPARγ agonists (such as 15d-PGJ2) has been observed in scarring models and has manifested as decreases in TGF-β-induced extracellular matrix production by HSCs. These findings imply that BBR acts on liver fibrosis via arachidonic acid pathway-mediated PPARγ activation. This is supported by research showing that arachidonic acid-derived 15d-PGJ2 attenuates fibrotic diseases by activating PPARγ, and that this effect is potentiated by co-administration of 15d-PGJ2 and BBR. (Guan et al., 2018). Thus, it appears that PPARγ is a key target of BBR. In conclusion, studies have confirmed the relationship between BBR, lipid metabolism pathways and subsequent signalling cascades, especially the arachidonic acid pathway. BBR may therefore relieve fibrosis by regulating PPARγ and restoring lipid homeostasis via modulation of arachidonic acid metabolism. More comprehensive studies on the effects of BBR on PPARγ, enzymes and downstream metabolites in the arachidonic acid pathway are needed to further elucidate appropriate clinical applications. CONTRIBUTIONS OF INTESTINAL FLORA TO THE BIOLOGICAL FUNCTION OF BERBERINE The regulation of intestinal flora by BBR application is a novel treatment strategy.
BBR improves intestinal flora dysregulation and restores the gut barrier, effectively reducing plasma lipid concentrations and lipolysis. (Xu X. et al., 2021). BBR can also significantly reduce the levels of the opportunistic pathogens and increase the levels of probiotics. (Xu X. et al., 2021). With respect to the contributions of intestinal flora to the biological function of BBR in the treatment of liver diseases, Yang et al. showed that BBR alleviates tunicamycin-induced liver injury by regulating intestinal flora in mice, which it achieves by modulating the ratios of Prevotellaceae and Erysipelotrichaceae in the intestine. (Yang et al., 2021). Intestinal Flora Improve the Efficiency of Berberine Absorption Although the oral bioavailability of BBR is limited, intestinal flora promotes the absorption and enhance the efficacy of BBR. The BBR metabolites generated by intestinal flora are considered to be crucial to the biological activity of BBR; in particular, dihydroberberine (dhBBR), which has less biological activity than BBR but approximately five times the intestinal absorption rate of BBR. (Feng et al., 2015). Thus, the conversion of BBR to dhBBR, which is catalysed by nitroreductase, is the rate-limiting step that controls the amount of BBR entering the blood. Nitroreductase is present in many intestinal bacteria, such as Staphylococcus aureus, Enterococcus faecium, Lactobacillus casei and L. acidophilus. (Feng et al., 2015). The role of intestinal nitroreductase in potentiating BBR bioavailability is supported by the fact that BBR has greater efficacy in individuals with higher nitroreductase activity. (Wang et al., 2017b). Moreover, BBR increases the populations of probiotics containing nitroreductase, such as Clostridia. (Lemmon et al., 1997;Cui et al., 2018). After entering intestinal tissues, dhBBR is immediately reoxidised to BBR. (Feng et al., 2015). These findings indicate that intestinal flora derived nitroreductase may be a biomarker of the therapeutic efficacy of BBR ( Figure 3). Crosstalk Between Bile Acid and Intestinal Flora BBR also restores bile acid homeostasis by targeting multiple pathways that markedly inhibit inflammation, thereby alleviating non-alcoholic steatohepatitis and liver fibrosis. . Bile acids serve as key regulators of lipid and glucose homeostasis, energy consumption and inflammation. (Yuan and Bambha 2015). Additionally, bile acids play critical roles in the homeostasis of intestinal flora, which may in turn regulate the size and composition of the bile acid pool that maintains normal bile acid excretion and hepatoenteral circulation. (Hofmann 1999;Ridlon et al., 2006;Rajilic-Stojanovic 2013). However, abnormal biliary secretion results in the destruction of microfloral structure, which adversely effects the abundance of bacteria responsible for bile acid catabolism, resulting in the improper excretion and reabsorption of conjugated bile acid. (Hedenborg et al., 1991). Nuclear receptor FXR and cell-surface receptor Takeda G protein-coupled receptor 5 (TGR5) can alter bile acidmediated metabolism by binding to bile acids. (Pathak et al., 2018). Thus, BBR continues to be pursued as a potential agonist of FXR and TGR5 binding of bile acids, as this may offer ways to increase the abundance of bacteria that promote the decomposition of conjugated bile acids and regulate bile acid signalling. 
Furthermore, BBR significantly increases the abundance of intestinal Firmicutes, especially Clostridium scindens, which primarily maintain metabolism and the hepatoenteral circulation of bile acids. (Gu et al., 2015). Studies have revealed that the lipid-modification function of BBR is possibly achieved via the modulation of bile acid metabolism (Meng et al., 2018), given that BBR regulates intestinal flora. (Gu et al., 2015). Thus, crosstalk between bile acid metabolism and intestinal flora might affect the absorption efficiency of BBR, which could be exploited in treatments for cirrhosis (Figure 3). It is currently thought that microflora function as a virtual "endocrine organ" (Clarke et al., 2014) that generates a wide variety of products to regulate host metabolism through homologous receptors. Short-chain fatty acids (SCFAs), particularly butyrate, acetate and propionate, which are the final products of the fermentation of indigestible carbohydrates by anaerobic microbes, exert profound effects on intestinal function and host energy metabolism. (Nicholson et al., 2012). The regulation of lipid profiles by BBR is realized not only via its direct effects on the blood concentrations of lipids, but also via its promoting the generation of SCFAs (mainly butyrate) to indirectly affect the blood concentrations of lipids. (Wang et al., 2017a). Zhang et al. proved this by demonstrating that concentrations of SCFAs in the intestine were increased by BBR treatment, which improved resistance to metabolic diseases. It has also been reported that BBR treatment leads to increases in the abundance of intestinal flora that secrete SCFAs and maintain host health, particularly Clostridia. (Gu et al., 2015;Byndloss et al., 2017;Cui et al., 2018). Effects of Butyrate on Lipid Metabolism 4-Phenylbutyric acid (PBA), a bioactive butyrate derivative with a long half-life, decreases ER stress and downregulates the transcription of numerous SREBP1-dependent lipogenic genes, which eventually leads to the inhibition of fatty acid biosynthesis. (Ren et al., 2013). However, butyrate also enhances fatty acid oxidation by activating peroxisome proliferator-activated receptor-γ coactivator 1-α, peroxisomal biogenesis factor 11α, PPARα and PPARα-mediated fibroblast growth factor 21. (Weng et al., 2015;He and Moreau 2019). Moreover, butyrate-mediated ACC1 phosphorylation and inactivation not only inhibit fatty acid synthesis but also promote fatty acid oxidation by relieving malonyl-CoA-induced carnitine palmitoyltransferase IA suppression. (McGarry et al., 1977;Hillgartner et al., 1995;Heimann et al., 2015).
Additionally, AMPK-dependent phosphorylation of SREBP, (Li et al., 2011b) enhancement of the expression of adipose triglyceride lipase and phosphorylation of hormone-sensitive lipase (Jia et al., 2017) are pathways by which butyrate can alleviate hepatic steatosis and lipid deposition by inhibiting lipogenesis and promoting lipolysis. In particular, butyrate treatment inhibits arachidonic acid metabolism and thus suppresses inflammation, whereas reductions in butyrate concentrations aggravate NASH via an arachidonic acidinduced exaggerated inflammatory reaction. (Zhuang et al., 2017;Ye et al., 2018). Moreover, the administration of butyrate alters the expression of metabolic enzymes (e.g., COX and LOX) and thus affects the biosynthesis of arachidonic acid metabolites (e.g., PGE2). (Ardaillou et al., 1985;Kamitani et al., 1998). Butyrate has also been reported to improve impaired liver function and alleviate the progression of fibrosis, which has a protective effect in NASH via arachidonic acid metabolism regulation. (Ye et al., 2018). In contrast, another study found that SCFAs adversely affect lipid metabolism: Yu et al. showed that SCFAs, including butyrate, exacerbate lipid accumulation in 3T3-L1 cells (a type of adipocyte) by promoting the expression of lipogenic genes and proteins. (Yu et al., 2018). Overall, butyrate appears to decrease inflammation and improve lipid metabolism in the liver (Figure 3), but further studies are needed to fully characterize its mode of action. Effects of Butyrate on Inflammatory/Immune Reactions Research has shown that butyrate acts as a histone deacetylase inhibitor or acts on signalling receptors to suppress inflammation and thus postpone the development of liver diseases. (Le Poul et al., 2003;Donohoe et al., 2012;Gill et al., 2018). Butyrate decreases inflammation and alleviates further liver fibrosis by promoting production of the anti-inflammatory cytokines interleukin 4 (IL-4) and IL-10, and by inhibiting the expression of the genes coding for the inflammatory molecules transforming growth factor β 1, interleukin 1α (IL-1α), IL-17α, tumour necrosis factor α and F4/80. (Ye et al., 2018). Butyrate also suppresses the phosphorylation of MAPKs, the activation of NF-κB and the expression of downstream inflammatory signalling, thereby inhibiting inflammatory responses. (Ohira et al., 2013). Yukihiro et al. studied the important reciprocal interaction between immunity and inflammation, and revealed that microbiota-derived butyrate regulates transcription of the forkhead box protein P3 gene, which is positively correlated with concentrations of SCFAs and numbers of regulatory T cells. This resulted in the inhibition of inflammatory responses and ameliorated the development of colitis in T-cellabnormal mice. (Furusawa et al., 2013). Overall, the above findings indicate that excessive inflammation and immune dysregulation are largely responsible for disorders in the liver microenvironment that lead to liver fibrosis. Furthermore, the positive effects of butyrate on inflammatory and immune responses provide a reliable theoretical basis for the effects of BBR in liver cirrhosis therapy (Figure 3). Effects of Butyrate on Liver Fibrosis Researchers are increasingly exploring the ability of intestinal bacteria derived butyrate to alleviate liver fibrosis. 
For example, it has been found that the progression of fibrosis in methionine choline deficient diet induced NASH mice is substantially slowed by butyrate treatment, evidenced by a significant downregulation of the early fibrosis markers transforming growth factor-β1, smooth muscle α−actin and α-actin 2. (Ye et al., 2018). Butyrate's effects on intestinal flora, lipid metabolism and inflammation have been proposed to underlie its effects in these mice. (Ye et al., 2018). Additionally, butyrate hinders the progression of NASH to fibrosis by regulating arachidonic acid metabolism. (Ye et al., 2018). These results indicated that butyrate may decrease liver fibrosis (Figure 3), but the mechanism of this remains to be fully delineated. A balanced liver microenvironment is the basis for maintaining normal physiological functions, and an imbalanced liver microenvironment results in metabolic abnormalities, inflammatory activation and immune system perturbation. Butyrate produced by intestinal bacteria is absorbed through the intestinal mucosa, and then primarily distributed to the liver via portal veins, where it improves the liver microenvironment via mechanisms related to PPARγ activation. (Byndloss et al., 2017;Ye et al., 2018). Lipid metabolism and its interactions with inflammation and immunity may therefore account for the effects of butyrate treatment, and also create a link between BBR and cirrhosis. Thus suggests the possibility of a BBR-intestinal flora-butyrate-lipid metabolism-liver fibrosis interactive network. CONCLUSION, PERSPECTIVES AND FUTURE DIRECTIONS BBR is a natural product with many useful biological effects and few adverse effects. Its effects on inflammatory and metabolic disturbances are particularly impressive. BBR has been confirmed to decrease liver fibrosis via multiple biochemical mechanisms, such as by regulating oxidative stress, ER stress, and the activity of AMPK, NF-κB and PPARγ (as shown in Figure 1). However, the complex mechanisms of action of BBR are not yet fully understood. Early studies on BBR highlighted its favorable effects on lipid profiles and interactions with inflammatory immune responses. We conclude from this review that BBR may exert its effects via the regulation of enzymes involved in arachidonic acid metabolism and downstream inflammatory pathways. Nevertheless, this has yet to be confirmed in cirrhosis models and further studies are warranted. The poor oral bioavailability of BBR is a major hindrance to its clinical application. Fortunately, nitroreductase-containing intestinal flora or specific intestinal microorganisms can transform BBR into dhBBR, OBB, canadine and other derivatives, which are much more soluble and have better efficacy than BBR. These derivatives also have superior antiinflammatory, anti-oxidant and anti-fibrosis functions, and bile acid metabolism has been shown to increase their formation via crosstalk with intestinal flora. BBR increases the production of butyrate by anaerobic bacteria, and the resulting higher concentrations of butyrate in circulation lead to improvements in host metabolism, decreases in inflammation, enhanced immunity and decreased liver fibrosis. The mechanism by which BBR promotes the metabolites of intestinal flora to further improve liver fibrosis by regulating the liver microenvironment remains largely elusive. 
Beyond association studies, future research should develop a deeper understanding of the roles of the intestinal flora, arachidonic acid pathways and downstream targets (e.g., PPARγ) in liver fibrosis. Large-scale and multi-centre clinical trials are also required to verify the biological functions of BBR in cirrhosis. In addition, the safety, optimal dose and drug interactions of BBR must be taken into account. The bioavailability of BBR needs to be further improved by pharmaceutical techniques or medicinal chemistry approaches and by determining the precise mechanism of drug-host interactions. This review summarizes current knowledge of the role of BBR in liver fibrosis in terms of its effects on lipid metabolism and intestinal flora. It is hoped that it will encourage future studies on BBR and lead to the development of novel strategies for the use of BBR in cirrhosis treatment, given the positive effects of BBR on liver fibrosis. Ultimately, this may yield personalized BBR-based approaches to treat liver fibrosis that are tailored to a patient's unique intestinal microbiota profile. AUTHOR CONTRIBUTIONS XL wrote the manuscript. LW and ST provided critical revisions. XL created the graphics. XW, BW and ZC supervised the entire manuscript. All authors approved the final version of the manuscript for submission.
2022-03-02T14:35:01.747Z
2022-03-02T00:00:00.000
{ "year": 2022, "sha1": "63197cc6a75ad77b4a1082cb3662d5ce884a694f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2022.814871/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "63197cc6a75ad77b4a1082cb3662d5ce884a694f", "s2fieldsofstudy": [ "Medicine", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
237539933
pes2o/s2orc
v3-fos-license
Studying the psychology of coping negative emotions during COVID-19: a quantitative analysis from India The outbreak of the COVID-19 virus adversely affected the material and mental well-being of the infected individuals and their families. The poor health system, combined with a lack of fear of infection among people, has created significant negative health effects. The present research considers the notable models of coping with negative emotions, including the '3Cs' and the 'direct action and palliation' approach. With the observation method's help, a detailed perspective was found on people's coping processes, categorized as psychological, control, coherence, and connectedness coping. Using ANOVA and t-tests, a significant augmentation in people's negative emotions was found since the beginning of the pandemic. Using the GMM regression technique, 'avoidance', 'proactive preparedness', 'emotional resilience', 'entertainment', and 'spiritualism' were found to be highly significant techniques in curbing negative emotions during the COVID-19 pandemic. Meanwhile, the LOGIT regression found cumulative negative emotions and emotions about negative career outlooks to be the most significant factors in bringing negative emotions back to normalcy. The study suggests that policymakers design a national-level strategy to strengthen mental health systems to boost mental well-being. Introduction The coronavirus disease emerged in Wuhan city of China in December 2019. As of August 5, 2021, this virus had infected 201,817,159 people and caused 4,283,757 deaths globally (Worldmeter 2021;Fareed et al. 2020). The exponential rise in novel coronavirus cases in the world developed into a pandemic (Dennison Himmelfarb and Baptiste 2020). It was declared a pandemic by the World Health Organization on March 11, 2020, considering its contagious behaviour (Saey 2020;Shahzad et al. 2020a, b). India is among the worst-hit countries. People are affected financially and emotionally by this virus (Yan et al. 2021). Therefore, the present study aims to observe the negative emotional states during the COVID-19 pandemic, how such emotions change with time during the pandemic, the kind of coping mechanisms people adopt to deal with negative emotions, and which of these coping strategies prove to be most successful in alleviating the negative emotions. This virus spreads rapidly; it transmits directly or indirectly when people are in close contact with an infected person or exposed to the sneezes or coughs of an infected person (Shahzad et al. 2021;Sarwar et al. 2021). The everyday lives of people have substantially changed and have been replaced by isolation and loneliness (Shakoor et al. 2020;Shahzad et al. 2020b). Social activities like school, cinema halls, and work have been suspended because of the threat of the spread of this virus. The absence of social interactions has led to overwhelming stress, depression, a state of panic, anxiety, mental instability, and reluctance to work both at individual and community levels (Brooks et al. 2020;Iqbal et al. 2020). A pandemic like COVID-19 is without parallel in modern times, and no magic formula has been established to resolve the socio-psychological trauma it induces.
Societies as a whole and individuals, in general, have various tactics to deal with this condition. Research done so far on coping mechanisms for the distress caused by natural disasters and pandemics has produced a few theories, such as the '3Cs' model of Reich (2006) and Lazarus's (1985) 'direct action' and 'palliation' approach, each of which entails different mechanisms of coping. The present research discusses general population shifts in stress-related emotions since the pandemic's initial spread. Further, the efficacy of individuals' coping mechanisms in combating negative feelings has been analysed. The prominent stress-related emotions, such as stress, worry, hopelessness, bleak economic, career and social outlooks, and a general feeling that things will never be the same again, are prevalent among the masses due to the pandemic. With the constant spread of COVID-19 and an uncertain timeline for treatment, people have, against all odds, steadily returned to their 'new' normal, reflecting the dynamic psychology of the human mind, which evolves and adapts to an adverse circumstance and continues to see it as 'normal'. It would therefore be interesting to observe the trend of negative psychological patterns since the beginning of the COVID-19 pandemic. The pandemic coping strategies can be classified into various subgroups. As per the '3Cs' model of Reich (2006), preparedness strategies, entertainment interventions, and scenario avoidance may be categorized as 'control' approaches. Whereas meditative techniques, spiritualism and participation in constructive practices can be classified as 'coherence' approaches, social support can be classified as a 'connectedness' approach. However, be it the '3Cs' model or Lazarus's (1985) 'direct action' and 'palliation' approach, neither of them recognizes the inherent capacity of the human mind to cope with a complicated and stressful situation. Such mental ability to cope varies from person to person, where a person can be considered 'mentally strong' if he has a better ability to endure a highly stressful situation. We incorporate two such core mental abilities to resolve negative states, namely 'emotional resilience', described as 'the process of, capacity for, or outcome of successful adaptation despite challenging or threatening circumstances' (Masten et al. 1991), and 'optimism', defined as 'a set of beliefs that leads people to approach the world actively' (Peterson and Bossio 1991). The research aims to study the role and significance of coping strategies in abating negative emotions during these times. The framework of the objectives of the study is presented in Fig. 1. The study has taken seven factors of negative emotions, i.e. stress, worry, hopelessness, non-normality, bleak economic outlooks, bleak career outlooks, and bleak social outlooks, and four coping categories, i.e. psychological coping, control coping, coherence coping, and connectedness coping. Coping mechanisms affect people's negative emotions. Therefore, the present study analyses the following: firstly, the behavioural effect of the pandemic on people's negative emotions; secondly, how adverse behavioural changes affect negative emotions; thirdly, the impact of coping strategies on negative emotions (Kar et al. 2021); and lastly, whether controlling adverse behaviour during the pandemic with coping strategies can bring the behaviour back to normalcy (Shamblaw et al. 2021).
To achieve our objectives, the following hypotheses were framed: H 1 : There is a behavioural effect during COVID-19 pandemic in dependent variables (negative emotions). H 2 : There are adverse behavioural changes during COVID-19 pandemic in dependent variables (negative emotions). H 3 : Independent variables (coping strategies) affect dependent variables (negative emotions). H 4 : Control of adverse behaviour during the pandemic with independent variables (coping strategies) can bring normalcy in the behaviour. The article is divided into six sections. The 'Introduction' section introduces the topic, the 'Review of literature' section presents the review of relevant literature, the 'Database and methodology' section outlines the database and research methodology, the 'Results and findings' section is the results and findings, the 'Discussion' section highlights the discussion, and the 'Concluding remarks' section concludes the article. Review of literature This section of the study is further divided into two sub-sections. In the 'Psychology of negative emotions' sub-section, the studies related to the psychology of negative emotions are discussed, and in the 'Psychology of coping' sub-section, the studies related to the psychology of coping are discussed. Psychology of negative emotions Several stress emotions were recognized by Lazarus (1985), such as anger, hopelessness, fear, depression, guilt, fear, threat, anxiety, denial, sense of loss, and like. Lazarus (1985), in 'appraisal and reappraisal theory', has termed 'appraisal' as the process of appraising the environment and responding to the emotions. However, this method is not static. As the environment and cognitive states are evolving, 'reappraisal' is an integral part of explaining the individual's emotional states as an outcome of the environment. Constant evaluations focusing on input from the environmental and cognitive assessments result in fluctuating emotions where anger can substitute anxiety and like. The theory forms the base of our study, as an individual carries several negative emotions to varying degrees as a consequence of an everchanging stressful environment. However, The Print (Misra 2020) reported an acute shortage of COVID-19 initiatives and personnel to resolve the mental well-being problems. Numerous authorities and analysts have responded to or suggested 'new reality' or 'normal'. This fear of their life ever returning to normal is classified as 'non-normality' in our study. Rishi et al. (2021) focus on the psychological impacts of COVID-19 in India. The study has used both qualitative and quantitative data for the research of 261 respondents from 17 states in two phases. In the first phase, the study found that during the first 3 weeks of lockdown of the COVID-19, there were significant effects on psychology (mainly pandemic anxiety and social isolation) of the respondents. On the positive side, physical health, fitness, self-care, family connect, learning of the new skill sets, and self-growth provided new hopes to cope up, whereas negative emotions, such as fear, anxiety, frustration, and irritability for others, were the hurdles. In phase two, i.e. the sixth week of lockdown, there was an increase in negative emotions like increasing anxiety and frustration. Psychology of coping Lazarus (1985) accounts that 'coping' results from negative feelings to regulate certain emotions. The author distinguishes coping with 'direct action' and 'palliation'. 
Direct coping involves modifying the interaction with one's surroundings and taking direct actions such as planning for, preventing, or attacking the situation/environment. Palliation, on the other hand, focuses on abating, moderating, and tolerating, i.e. 'seeking comfort' with regard to the distressing reactions arising out of negative emotions. Palliation approaches can involve detaching oneself from distressing thinking by regulated processes like yoga or meditation. More recently, other forms of coping behaviours have been observed by researchers studying natural disasters. Reich (2006) came up with the '3Cs' model to account for 'control, coherence, and connectedness' as forms of coping. Control, to a great extent, resonates with Lazarus's (1985) 'direct action' strategies of coping. The second C-'coherence' is the 'logical' approach towards making sense of the situation. The third C-'connectedness' addresses the innate human need for social support. Millar et al. (2021) studied the impact of the COVID-19 pandemic and lockdown restrictions in India on the psychological, social, and behavioural changes of the public. The study used cross-sectional data of 234 respondents collected after the completion of the first week of lockdown. The PLS-SEM model was used to find the association between health anxiety, coping mechanisms, locus of control, and age. The study found that younger people faced more health-related anxiety and were more engaged with social media. The study also concluded that mindfulness-based strategies can decrease health anxiety by increasing the patience level experienced during the COVID-19 pandemic. Shamblaw et al. (2021) examined the relationship of 14 coping strategies with quality of life and symptoms of anxiety and depression during the COVID pandemic. Anxiety and depression were found to significantly mediate the relationship between quality of life and coping. The authors further found positive reframing to be the most effective coping strategy. In a study concerning the impact of COVID-19 on European police officers and the coping resources to deal with the same, Frenkel et al. (2021) found that 'preparing for a pandemic requires three primary paths: (1) enacting unambiguous laws and increasing public compliance through media communication, (2) being logistically prepared, and (3) improving stress regulation skills in police training'. Yıldırım et al. (2021) found COVID-19 coping to be a mediator of the relation between general health and COVID-19 anxiety. Kar et al. (2021) found that individuals who practiced avoidance forms of coping or were unsure regarding their coping strategies had greater depression or anxiety and stress. Data collection and specification This study has used a simple random sampling technique for data collection. The primary data were collected through Google Forms in the month of July 2020. Usable data for the study comprised 581 respondents out of 950 in Delhi; the rest of the questionnaires were incomplete. The data are highly diverse, spanning different ages, incomes, education levels, and genders. The study initially selects eight independent variables that represent various coping strategies. Later, one of the variables, preparedness, was split into two parts and was treated as comprising two distinct variables measuring the proactive and preventive dimensions of preparedness.
The reasoning and explanation for the same are mentioned in the section ahead. 'Optimism' which is a part of psychological coping, was measured using two items, OT1, 'I am sure we will find a cure of COVID very soon', and OT2, 'I am highly optimistic that the current situation will change soon, which were developed upon taking cues from the Life Orientation Test-Revised (LOT-R) inventory (Scheier et al. 1994). 'Emotional resilience', which is another aspect of psychological coping, was measured using two items, ER1, 'In general, I think I can control my emotions well', and ER2, 'I can stay calm in tough circumstances' adopted from 'Adolescent Resilience Scale' initially developed by Atsushi et al. (2002). 'Preparedness' which is part of control coping varies from one situation to another. Cues from the Coping Inventory's task-oriented coping dimension for Stressful Situations (CISS) (Strelau et al. 2020) were taken, and two items measuring preparedness uniquely for the COVID-19 situation were developed. The items were PR1, 'I regularly take immunity boosting supplements and medicines such as Kadha or Giloy, Ashwagandha, Vitamin C, etc.', and PR2, 'I strictly adhere to preventive measures such as social distancing, wearing face masks when outside, and washing hands regularly'. Control coping consists of 'entertainment' and 'avoidance' dimensions in our study. However, some entertainment aspects overlap avoidance because entertainment is sometimes sought to avoid a difficult situation. Cues from avoidanceoriented coping from CISS were taken, and entertainment and avoidance were segregated into two separate dimensions. Entertainment was hence measured using two items ET1, 'I have been binge-watching movies/Netflix/Amazon Prime/ YouTube etc to keep my mind off COVID', and ET2, 'I have excessively increased my social networking usage through Whatsapp/Instagram/Facebook/TikTok etc to keep my mind off COVID'. Avoidance was measured using single-item AVD: 'I have been avoiding the thoughts and news related to COVID'. Taking the cues from CISS, the items were developed uniquely for the study based on our observations. As part of coherence coping, 'meditation and spiritualism' was measured using two items explicitly modified for the COVID-19 situation based on the Spiritual Transcendence Scale (STS) (Piedmont and Leach, 2002). The two items were SP1, 'I have been performing meditation techniques (prayer/ yoga/pranayama/relaxing music etc) regularly to fend off COVID related negative feelings', and SP2, 'Spiritualism has been an important factor for me to fight COVID related anxiety'. 'Positive involvement' is another aspect of coherence coping and was measured using a single-item PI, 'I have involved myself in a hobby/pursuing a passion or learning a new skill or reading books or exercising enthusiastically during the pandemic to fend off COVID related negative feelings', which was based on the idea taken from a task-based item of CISS-'Use the situation to prove my ability'. As part of connectedness coping, 'social support' was measured using two items, SS1, 'I have received adequate emotional support from my family or friends or peers during COVID pandemic', and SS2 'My friends or family or peers have always made sure that I feel better during these tough times' adapted from Multi-Dimensional Social Support Scale (Winefield et al. 2010). Seven prominent negative emotions developing among people in general due to the current COVID-19 pandemic were observed. 
Currently, two scales have been published measuring negative emotions as a result of COVID-19. However, both the scales, COVID Stress Scale (CSS) by Taylor et al. (2020) and Fear of COVID-19 Scale by Ahorsu et al. (2020), measure mostly the worry and stress caused due to COVID-19. Our study measured the 'stress' and 'worry' dimensions through questions based on CSS. However, several other negative emotions among people were observed as a result of the current situation such as 'hopelessness' (regarding the future), 'non-normality' (that things would never be back to normal again), financial insecurity as measured by 'economic outlooks', career insecurity as measured by 'career outlooks', and social insecurity regarding the future as measured by 'social outlooks'. All the dimensions of negative emotions arising out of the COVID-19 situation in our study were measured using single item measuring the negative emotion at the beginning of the pandemic and the modification of the same item to measure the negative emotion currently, in the middle of the pandemic. In the present study, 'stress' was measured using singleitem ST1, 'When COVID started spreading a few months back, how worried were you about catching the virus' measuring the stress emotion at the beginning of the pandemic, and ST2, 'How worried are you now about catching the virus' measuring the current stress emotion. Similarly, 'worry' was measured using single-item WR1: 'How often did you get worrying thoughts about the virus, when COVID started spreading' measuring the worry emotion at the beginning of the pandemic and 'How often do you get worrying thoughts about the virus now' measuring the current levels of worry. 'Hopelessness' was measured using HP1, 'When COVID started spreading, how hopeless did your future appear to you', measuring such emotion at the beginning of the pandemic, and HP2, 'How hopeless does your future appear to you now due to COVID' measuring the current emotion. We adapted these items from the Beck Hopelessness Scale (BHS; Beck et al. 1974). 'Non-normality' was measured using NR1, 'When virus started spreading, how strongly did you believe that things would never be back to normal again' for the start of period, and NR2, 'How strongly do you believe now that things would never be back to normal again' for the current period. 'Bleak economic outlooks' was measured using EC1, 'How much financially insecure did you feel regarding your future due to COVID, when it started spreading' for the start of the period, and EC2, 'Now, how much financially insecure regarding the future do you feel yourself to be due to COVID' for the current period. 'Bleak career outlooks' was measured using CR1, 'How much did you worry for your career outlooks due to COVID, when it started spreading' for the start of the period, and CR2, 'How much do you worry for your career outlooks now due to COVID' for the current period. Finally, 'bleak social outlooks' was measured using SO1, 'When COVID started spreading, how strongly did you think that your "social life" would never be back to normal again in future' for the start of period, and SO2, 'How strongly do you NOW think that your 'social life' would never be back to normal again'. Methodology To test the various objectives, the study has applied ANOVA, GMM, and logit regression models (Tables 1, 2, and 3). Before the application of the technique, few data adjustments were made that are discussed below. 
Adjusting independent and dependent variables for objectivity As an ordinal scale has been used for measuring both dependent and independent variables, the chances of subjectivity are very high in these observations. To bring more objectivity, an adjustment for every type of variable has been used. This adjustment factor is as follows: i represents the observation of the i th respondent, and k represents the observation of respondents in respect of the k th variable. In the present study, the value of k is 15 since there are nine independent variables and seven dependent variables, and i is 1162 (i.e. 581 × 2) since there are 581 respondents and data for two periods were collected. For independent variables, observations were not classified by period; therefore, for any given independent variable, there is the same value for any given respondent in both periods. However, for dependent variables, the observations were classified by period, i.e. the beginning of the COVID-19 pandemic and the current period (or during the COVID-19 pandemic); therefore, any given dependent variable can take different values for any given respondent in the two periods. 'Total', as the aggregate of the different dependent variables, was also calculated and used as one more dependent variable. ANOVA for multiple factors For testing our first objective and related first hypothesis, i.e. to test whether there are changes in behaviour during the pandemic in terms of dependent variables, ANOVA has been used to test for multiple factors. ANOVA helps in testing the effect of factors on observations. Two factors are present in this study: the first factor is about periods, i.e. in the beginning and during the pandemic, and the second factor pertains to the different dependent variables. Through ANOVA, it can be known whether variations in observations are due to: (1) the period effect P (i.e. periods in the beginning and during the pandemic), (2) the variable effect V (i.e. the different dependent variables), and (3) the interactive effect P × V (the combined impact of P and V on observations). Model in general form: Variances in Observations (i.e. dependent variables) = Variances due to Period Effect + Variances due to Variable Effect + Variances due to Period and Variable (interactive) Effect + Error term (i.e. ɛ). Model in the form of estimated equation: σ²Y = σ²P + σ²V + σ²(P×V) + σ²ɛ, where σ²Y is the variance in observations of dependent variables; σ²P is the variance in observations due to the period effect (i.e. periods before and during the COVID-19 pandemic); σ²V is the variance in observations due to the different types of dependent variables, i.e. stress, worry, hopelessness, non-normality, economic, career, social, and total; σ²(P×V) is the variance in observations due to the period and variable effects (interactive effect); and σ²ɛ is the variance in observations due to random error. In this study, we were specifically interested in whether observations are affected in the two different periods, i.e. in the beginning and during the pandemic. We were also interested in whether observations are affected due to the types of dependent variables. T-test with unequal variances A t-test with unequal variances was used for the second objective and related hypothesis, i.e. to test whether there are adverse behavioural changes during the pandemic in dependent variables. However, this was a one-tailed test since we were interested in adverse behavioural changes during the pandemic regarding dependent variables. Secondly, a t-test with unequal variances was used because respondents tend to behave differently in different periods.
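As a minimal, hypothetical illustration of the two-factor ANOVA described above (period P, dependent-variable type V, and their interaction P × V), the following Python sketch uses statsmodels on made-up long-format scores; the column names, emotion labels, and values are illustrative assumptions, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Long-format toy data: one row per respondent x period x emotion (adjusted Likert scores).
df = pd.DataFrame({
    "score":   [4.1, 4.4, 5.0, 5.3, 3.2, 3.0, 4.8, 4.5, 2.9, 3.1, 3.7, 4.0],
    "period":  ["start"] * 6 + ["during"] * 6,                       # factor P
    "emotion": ["stress", "stress", "worry", "worry", "career", "career"] * 2,  # factor V
})

# Fit score ~ P + V + P x V and partition variance into the four components of the model.
model = ols("score ~ C(period) * C(emotion)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)   # rows: period, emotion, interaction, residual (error)
print(anova_table)
```

Significant p-values for the period, variable, or interaction rows would correspond to the period, variable, and interactive effects discussed above.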
The authors conducted a t-test with unequal variances using MS Excel for each of our eight dependent variables separately for 581 observations and hypothesized a mean difference of 0. Regression through GMM Regression analysis was used for the third objective and related hypothesis, i.e. to examine the relationship between independent and dependent variables. However, instead of the ordinary least squares (OLS) technique, the generalized method of moments (GMM) was used since the latter is based on relatively fewer assumptions and is therefore more unbiased, efficient, and consistent than OLS. This technique was earlier used by Ullah et al. (2018). Models in general form Thereafter, to conduct regression analysis for the above estimated equations, the natural logarithm of both sides was taken, so that the estimated equations in deterministic form became log-linear (ln) equations. As this is a social science approach, a relatively liberal approach was adopted to judge the significance of a parameter. The analysis of parameters obtained through regression is based on the assertion that a p-value of β̂ for an independent variable lying in the range 0.00-0.10 classifies it as a 'good' estimator, and a p-value lying in the range 0.10-0.15 classifies it as a 'moderate' estimator of the dependent variable. p-values greater than 0.15 have been considered insignificant. Logistic (LOGIT) regression LOGIT regression was used for our fourth objective and related hypothesis, i.e. to test whether controlling adverse behaviour with independent variables can bring normalcy in the behaviour. LOGIT regression can help locate the factors that will help the behaviour in dependent variables converge to normalcy during the pandemic. LOGIT regression estimates the behaviour differences between two dichotomous situations (Chatterjee and Chattopadhyay 2019) (i.e. the dependent variable of the LOGIT regression, which in our study is the two different periods) due to changes in the independent variables of the LOGIT regression, which in our study are the estimated values of stress, worry, hopelessness, non-normality, economic, career, social, and total. Thus LOGIT regression determines the convergence from the beginning of the pandemic to the current (or during the pandemic) period due to changes in the estimated values of stress, worry, hopelessness, non-normality, economic, career, social, and total. Model of LOGIT regression in general form: Maximize the probability (P) of observations falling into either 0 (period at the beginning of the pandemic) or 1 (current period/during the pandemic) = F (independent variables). Model of LOGIT regression in estimated form, where P = dependent variable (in our study, P = 1 if the period is during the pandemic and P = 0 if it is the period at the beginning of the pandemic). If we solve this equation further, then in general form a change in the LOGIT implies a change in ln[P/(1−P)] per unit change in an independent variable of the LOGIT regression. We are particularly interested in those independent variables of the LOGIT regression whose odds ratio is >1, since these variables would cause adverse changes in behaviour during the COVID-19 pandemic, and as such, these variables need to be controlled.
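A hedged sketch of the one-tailed Welch (unequal-variance) t-test and the LOGIT odds-ratio logic described above is given below; the simulated scores, predictor names, and sample layout are illustrative assumptions only, not the study's data or exact estimated equation.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# One negative emotion (e.g. stress) measured at the start of and during the pandemic.
stress_start  = rng.normal(4.0, 1.0, 581)
stress_during = rng.normal(4.5, 1.2, 581)

# Welch t-test (unequal variances); halve the two-sided p-value for the one-tailed test
# of whether the "during" mean is higher than the "start" mean.
t_stat, p_two_sided = stats.ttest_ind(stress_during, stress_start, equal_var=False)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"Welch t = {t_stat:.2f}, one-tailed p = {p_one_sided:.4f}")

# LOGIT: period (0 = beginning, 1 = during pandemic) regressed on estimated emotion values.
X = pd.DataFrame({
    "career": rng.normal(4.0, 1.0, 1162),   # hypothetical estimated values
    "total":  rng.normal(4.2, 1.0, 1162),
})
y = np.repeat([0, 1], 581)                  # 581 observations per period
logit_res = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(logit_res.params)      # odds ratio > 1 flags variables to be controlled
print(odds_ratios)
```

Exponentiating the LOGIT coefficients gives the odds ratios; as stated above, predictors with odds ratios above 1 would be the candidates for control.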
Estimated values of variables instead of actual values of variables in LOGIT regression As we were interested in locating those variables causing adverse changes in behaviour during the COVID-19 pandemic but were also interested in controlling these variables to prevent adverse changes in behaviour during the pandemic, the estimated values of variables were used instead of the actual values of variables in the LOGIT regression. The LOGIT results contained some insignificant factors; hence, through stepwise regression, the estimated equation was further improved to obtain a revised estimated equation. Results and findings Before the application of the model, tests of data reliability and validity are required. Tests of reliability and validity for independent variables Independent variables were measured using two items or a single item. In such a case, establishing composite reliability using the traditional approach of Cronbach's α is not possible. However, according to Yong and Pearce (2013), a variable with two indicators can be considered reliable when the indicators are highly correlated but relatively uncorrelated with other variables' indicators. In our study, all the log-transformed indicators fulfil this criterion (r>0.5, N=581, P<0.0005), except for the indicators belonging to 'preparedness' (r=0.178, N=581, P<0.0005), indicating that 'preparedness' is formative, where the indicators (questions) belonging to this variable measure distinct, rather than similar, dimensions of preparedness. This proposition can be further established using the rotated component matrix results of factor analysis presented in Table 4. The results of Table 4 indicate five factors, each comprising two indicators, as had been proposed, except for the variable 'preparedness', whose indicators fail to load as a factor, again indicating the formative nature of this variable. 'Positive involvement' and 'avoidance' are measured using a single item. Based on these observations, it was proposed that each of the indicators of the variable 'preparedness' represented different psychometric properties, where PR1 could be considered a 'direct action' form of coping and PR2 could be classified as a 'palliation' form of coping as per Lazarus's (1985) theory. Hence, for the analysis, each of these indicators would be treated as a separate variable, where 'PR1' would be called 'proactive (Pro) preparedness' and 'PR2' would be referred to as 'preventive (Pre) preparedness'. Based on factor analysis, composite reliability, using the formula (∑λ)² / [(∑λ)² + ∑ε], and average variance extracted (AVE), measuring convergent validity, using the formula ∑λ² / n, were calculated for each variable, where λ represents the factor loadings and ε is the error variance, i.e. 1 − λ². These results are presented in Table 5. Discriminant validity was measured using the Fornell-Larcker criterion (Fornell and Larcker 1981), the results of which are presented in Table 6. As the composite reliability values are greater than 0.6 and the AVE values are greater than 0.5, the adequacy of composite reliability and convergent validity for our variables is established. The bold values are the square root of the AVE of the corresponding variables. The entries in bold are compared with the values in the corresponding rows and columns. A higher bold value indicates validity and significance as per the Fornell-Larcker criterion.
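For readers who wish to reproduce these checks, the short sketch below computes composite reliability, AVE, and the square root of AVE from standardized factor loadings, assuming the standard formulas cited above; the constructs, loadings, and the inter-construct correlation are made-up illustrations, not values from Tables 5 and 6.

```python
import numpy as np

def composite_reliability(loadings):
    """(sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2                      # error variance per indicator (1 - lambda^2)
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def ave(loadings):
    """Average variance extracted: sum(lambda^2) / n."""
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()

constructs = {"optimism": [0.82, 0.79], "emotional_resilience": [0.85, 0.81]}  # assumed loadings
for name, lam in constructs.items():
    cr, v = composite_reliability(lam), ave(lam)
    print(f"{name}: CR={cr:.2f} (>0.6 ok), AVE={v:.2f} (>0.5 ok), sqrt(AVE)={np.sqrt(v):.2f}")

# Fornell-Larcker check: discriminant validity holds when each construct's sqrt(AVE)
# exceeds its correlation with every other construct.
r_between = 0.41   # assumed inter-construct correlation
print(np.sqrt(ave(constructs["optimism"])) > r_between)
```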
Table 6 shows that the Fornell-Larker criterion analysis' diagonal values representing the respective AVE values' square root are greater than their correlation with any other variable. Hence discriminant validity is established. Tests of reliability and validity for dependent variables To test the dependent variable's internal consistency reliability consisting of different dimensions of negative emotions, Cronbach's α was used separately for two different time periods represented by the dependent variable (i.e. the beginning of pandemic and during the period). These results are presented in Table 7. As the value of Cronbach's α is greater than 0.7 in both cases, it can be concluded that internal consistency reliability is established for our dependent variable (Cortina 1993). To test the convergent validity of different negative emotions representing a unidirectional construct, their correlation was tested. Again, this was done for both periods, i.e. the beginning of the pandemic and during the pandemic. It was observed that a significant positive correlation (p<0.01) exists among all the negative emotions comprising different dimensions of the dependent variable. This proves convergent validity, but only to an extent, as the correlation among various parameters was not high enough (>0.7). However, this is acceptable in this study as it was never intended to treat all the negative emotions as a single dependent variable for our analysis, rather to treat them separately to assess their relationship with independent variables. Hence, it can be concluded that different dimensions of negative emotions do possess convergent validity, but only to a moderate extent represent each factor's uniqueness, supporting the view to treat them as separate dependent variables for the analysis. ANOVA test through SPSS was conducted, and the results of the test are as follows: Table 8 shows that the p-value of P, V, and P x V is very significant, which implies changes in behaviour during the pandemic in dependent variables. Thus first hypothesis can be accepted that there is a behavioural effect during the pandemic in dependent variables. The results of our t-test with unequal variances are shown in Table 9. From the above results, it is evident that all dependent variables, except 'worry', have higher values during the pandemic than at the beginning of the pandemic, with a significant p-value in almost all cases. Thus, the second hypothesis cannot be rejected, i.e. there are adverse behavioural changes during the pandemic in dependent variables. Results based on the estimated equations as proposed above derived using the GMM technique for each of the dependent variables are mentioned in Table 10. The results of GMM regression reflect a two-fold interpretation. Firstly, a simple log of the independent variable [ln(independent variable)] reflects the effectiveness of that particular variable in explaining the respective dependent variable's variance in general terms. However, [ln(e D )ln(independent variable)] reflects the effectiveness of the particular variable in explaining the variance of the respective dependent variable as we approach the second period, i.e. during the COVID-19 pandemic. 
For example, while considering the impact of 'avoidance (AVD)' on 'stress', it can be observed that in general, avoiding the stressful situation has a positive relation with 'stress' emotion (coefficient=0.113, p<0.05); however, during COVID-19 pandemic, avoiding the situation has a significantly negative relation with 'stress' emotion (coefficient=−0.144, p<0.05). The results of the GMM regression have been summarized in Table 11. LOGIT regression was conducted for the estimated equation with the help of EViews, and results for the same are shown in Table 12. In the above results, there is high R and R 2 ; thus, this model is very explanatory. Secondly, the constant is highly insignificant since the p-value of the constant is 0.4654, and all factors turn out to be significant, implying that there is no overspecification and no under-specification in the estimated model of LOGIT regression (Table 13). Summary of LOGIT results The estimated value of 'career' turns out to be the most substantial factor for converging behaviour in dependent variables to normality during the pandemic. The estimated value of 'total' also became another decisive factor for converging behaviour in dependent variables to normalcy during a pandemic. Estimated values of worry, hopelessness, social, and economic outlooks have high p-values, but they are not relevant in our study since their odd ratio is less than 1. Based on overall analysis, the fourth hypothesis can be accepted, i.e. control of adverse behaviour during the pandemic with independent variables can bring normalcy in the behaviour. Discussion The research is incomparable in certain respects. Even though theoretical studies have been conducted in the past, reflecting possible coping strategies that may effectively alleviate the negative emotions (Lazarus 1985), which negative emotions do coping mechanisms tend to relieve have never been investigated quantitatively. Biswas (2011) conducted a study exploring various coping strategies post-26/11 terror attacks in India, where the author found a wide spectrum of strategies such as detachment and wishful things, produced out of terror perception. This is one of the first studies to recognize a broad spectrum of possible coping strategies and a wide range of negative emotions during pandemic times. The interaction among various coping strategies and negative emotions was studied independently and together. Furthermore, another remarkable feature that makes this research stand out is that we examined the connection between coping strategies and negative emotions in general terms and precisely as we reach the pandemic period deeper. The t-test results revealed that negative emotions increased upon further dive in the pandemic phase. This finding attributes that limited solution or robust treatment for COVID-19 symptoms has been discovered in the world. This has created mass hysteria and panic among the population with little reason to subside until promising cure findings. However, one of the negative emotions, 'worry', i.e. worrying thoughts, was observed not to be significantly elevated over this period. Perhaps the answer lies in the 'avoidance model of worry' (AMW), which theorizes that 'Worry is reinforced as a coping strategy because most worries never actually occur, leaving the worrier with a feeling of having controlled the feared situation successfully, without the unpleasant sensations associated with exposure' (Behar et al. 2009). 
However, this does not mean that worry reduces over time; it implies that worry does not increase overtime, as suggested by our findings. Some critical observations could be derived from the GMM regression result summary as provided in Table 10. In terms of the significance of coping strategies in general terms when handling negative emotions, entertainment was found to have a strong positive relationship with all the negative emotions considered in our research. This indicates that entertainment usually results in negative emotions rather than elimination. Our research finds out that entertainment media minimize stress when used as a means of immediate dissociation from the stressful condition or when breaking it (Prestin and Nabi 2020); however, if entertainment media is used to prevent the stressful situation and to recover from it, it overwhelms our senses and itself starts behaving as stressors rather than stress reliever (Davenport 2015). As can be observed, entertainment was used as an 'escaping strategy' rather than a 'relieving strategy'; hence, it led to further negative emotions in general and COVID-19 times. The most critical element in minimizing negative emotions was 'avoidance' as a coping mechanism during the pandemic. This result is consistent with the '3Cs' model of Reich (2006), where avoidance as a 'control' approach supports coping mechanisms. However, in contrast, avoidance is a significant factor that is positively associated with negative emotions in general. This variation's interpretation parallels the Hofer et al. (1974) results where it was observed that the parents who denied their child's death from leukaemia displayed higher levels of stress when their child died. This suggests that while denial and avoidance can play a role as a short-term coping mechanism, it will usually prove counterproductive in the long run. Proactive preparedness measures are positively associated with negative emotions during the pandemic; however, they were negatively associated with them in general. When we make substantial personal attempts to execute those measures, they give us a sense of control over the situation, which will be a defining factor in minimizing negative emotions in general if those positive measures were to succeed. However, despite all such efforts, the pandemic condition stays constant, and specific measures are reduced in our minds as burdens and reminders of the current pandemic that we cannot control, no matter how much proactive efforts we put in. We think this is why proactive preparedness turned out to be a significant factor leading to the COVID-19 pandemic, rather than reducing negative emotions. Preventive preparedness interventions were positively associated with several negative emotions in general, such as worry, a feeling of negative career outlooks, and negative emotions as a whole. In general, preventative measures are only the rules that we must obey or are compelled to follow due to legal or social compulsion. These interventions are not the 'extra or proactive' initiatives to change or help us feel better. In reality, following rules can appear as hassle over time, creating negative emotions. Optimism and social support were effective coping mechanisms in general that were inversely associated with the negative emotions of non-normality and grim social and economic outlooks. These results are in line with other studies such as those by Polizzi et al. (2020) and Fredrickson et al. (2003). 
During COVID-19, spiritualism was found to be a significant coping mechanism to reduce negative future social outlooks. Several other studies have also established that spiritualism can help cope with fear, anxiety, and trauma (Mathijsen 2012). According to Lazarus (1985), palliative coping strategies such as meditation and yoga can also be very effective in dealing with the debilitating stress when one has no choice to deal with the condition by tasking direct action towards it. One surprising finding of our study was that spiritualism was linked to the emotion of grim perceived social outlooks in general. From the basic understanding, it is observed that an ardent spiritualist would not proactively pursue social companionship since the foundation of spiritualism is going 'inward' rather than 'outward'. However, there is a dearth of research to support such observations. Finally, 'emotional resilience' was negatively associated/ adversely correlated with the pandemic's negative emotions such as hopelessness and negative future career outlooks. Emotional resilience is an effective psychological coping strategy inherently present in the human mind to deal with life's ups and downs. The importance of emotional reliance was highlighted in the literature (Polizzi et al. 2020). Based on these interpretations, Fig. 2 and 3 explain the significance of prominent coping mechanisms to curb negative emotions. Table 10 and Fig. 2 and 3 represent the frequency of coping strategies towards negative emotions in aggregate. Variables denoted by the bars coloured in green reflect the positive association of the particular independent variable with the negative emotions. The variables denoted by the bars coloured in blue reflect the negative association of the particular independent variable with the negative emotions, meaning that the particular coping mechanism helps in curbing the negative emotions. It is clear from the figure that the scenario has been changed during COVID-19 period. The effect of variable 'avoidance' had changed from positive in general to negative during pandemic period. 'Proactive preparedness' has also changed its effect from positive in general to negative during the pandemic. Through these observations, it can be inferred that during the COVID-19 pandemic, ignoring the 'firehouse of propaganda' (Paul and Matthews 2017) using news or conversations, one can avoid the 'illusory truth effect' (Hasher et al. 1977), and this proves to be the most effective coping strategy. However, this strategy will work only temporarily since it has been proven to cause negative emotions rather than curb them over time. Emotional resilience was also a useful tool to address the negative emotions during the pandemic (Khafi et al. 2014;Morgan and Southwick 2014). Some strategies are known to strengthen emotional resilience, such as learning to respond positively, being mindful and viewing situations from an open mind, and accepting and non-judgemental manner, reflecting on the present moment, embracing, rather than running away from adversity, exercising the mind by performing enduring tasks and challenging ourselves. Albert Einstein suggested that 'One should not pursue goals that are easily achieved. One must develop an instinct for what one can just barely achieve through one's greatest efforts' (Swaminathan 2013). Finally, spiritualism was also found to be a significant factor while countering negative emotions during the pandemic. 
Throughout the years, the benefits of regular meditation, pranayama, yoga, etc. have been well known in Indian culture (Nagraj 2012). Optimism and social support were significant factors in general to reduce negative emotions (Jain et al. 2019). However, these were not considered relevant during COVID-19 times. These findings reflect that optimism and social support act differently in general and in an adverse situation. We believe that while optimism can usually reduce negative feelings (Dougall et al. 2001), any reinforcement factor might be needed to promote optimism. Unless we see the feeling of optimism get materialized into the reality that we have not (Doğan et al. 2020). In general, social support is an outstanding tool for coping (Holahan et al. 1997). However, in our research, the negative emotions during the pandemic were not substantially reduced. We theorize that we would choose an escape strategy during the pandemic, as is in the 'fight or flight' response, where one might choose flight if fighting (or seeking support) does not seem to improve the situation. Our findings confirm this hypothesis that the most important coping mechanism was 'avoidance'. LOGIT regression results suggested that the feelings of poor job outlooks and negative emotions were the strongest reasons for converging the negative emotions to normalcy during pandemic Sharma and Mahendru 2020). These findings offer further insight into how negative emotions can be regulated during a pandemic. To pacify negative emotions for bringing them to normality; the best approach will be to control the negative feelings regarding career insecurity and control the negative emotions as a whole. Using regression analysis, we established 'emotional resilience' during a pandemic and 'social support' to be the most effective measure to regulate negative career outlook feelings, whereas 'entertainment' in general is positively associated with negative career outlook feelings. Finally, through regression analysis, we observed that 'avoidance' during the pandemic was the most effective method to manage aggregate negative emotions. In contrast, proactive 'preparedness' approaches and 'entertainment' during pandemic, preventive 'preparedness' approaches, 'avoidance', and 'entertainment' are generally positively associated with the aggregate negative emotions in aggregate. The pandemic crisis remains far from over, and no permanent remedy to the epidemic seems to be in reach (OECD Policy Responses to Coronavirus (COVID-19), 2020). New COVID cases have not recovered yet, which has profoundly impacted our economy and social structure. Further research will be required to determine the efficacy of various group coping strategies at different pandemic intervals. Concluding remarks COVID-19 pandemic severely impacted the well-being of individuals in financial, physical, and psychological terms. The present study aimed to observe the negative emotional states during the COVID-19 pandemic, and how such emotions change with time during the pandemic, the kind of coping mechanisms people adopt to deal with negative emotions, and which of these coping strategies prove to be most successful in alleviating the negative emotions. The study was based on model given by Reich (2006) which is based on '3Cs' and 'direct intervention and palliation strategy'. The model helped us determine the factors that can control negative emotions during the COVID-19 pandemic in an effective manner. 
An extant review of literature helped us determine the dependent and independent variables for the study. We took the stress, worry, hopelessness, normality, economic outlook, career outlook, and social life as indicators of emotions as dependent variables. Meanwhile, optimism, preparedness, emotional resilience, spiritualism, positive involvement, entertainment, avoidance, and social support were identified as independent variables. We tested whether there were changes in behaviour during COVID-19 pandemic in terms of dependent variables. We also tested whether there were adverse changes in behaviour during COVID-19 pandemic in terms of dependent variables. In addition, we examined the relationship between independent and dependent variables. Finally, we tested whether controlling adverse behaviour with the help of independent variables can bring normalcy to the behaviour. To achieve our objectives, we employed regression analysis through GMM and LOGIT regression. Regression analysis through GMM pointed out avoidance, optimism, preparedness, emotional resilience, and social support as effective factors to control behaviour, and these are ranked as per the order of their occurrence, i.e. avoidance as rank 1, optimism as rank 2, and so on. Results from running LOGIT regression revealed that career outlook and negative emotions as a whole are key for returning to normal behaviour. In addition, career outlook was found to be related to financial well-being, and controlling the emotions in respect of the career outlook can result in the normalcy of behaviour. The importance of financial well-being in underdeveloped countries was also duly recognized by Mahendru et al. (2020). Through LOGIT regression, we also found emotional resilience, social support, and avoidance to be the effective factors to control behaviour, and these were ranked as per the order of their occurrence, i.e. emotional resilience as rank 1, social support as rank 2, and avoidance as rank 3. However, contrary to GMM regression results, preparedness and optimism were not found to be significant employing LOGIT regression. Our findings revealed that emotional resilience, social support, and avoidance are significant in returning to normalcy. Out of these, emotional resilience was identified as the most important factor for controlling negative emotions and returning to normalcy. Moreover, emotional resilience was found to be least significant in pre-COVID times but most significant during the pandemic. Social support was found to be a crucial factor to control adverse changes in emotions which in turn can help bringing the behaviour to normalcy during the pandemic. In addition, avoiding thoughts and news related to the pandemic was found to be the most effective factor to control negative emotions and would also help in bringing the behaviour to normalcy during the pandemic. It is often seen that misinfodemics often lead to negative emotions, and therefore avoiding such misinfodemics leads to control of negative emotions. also point out the importance of avoidance of misinfodemics during COVID-19 pandemic. Our analysis did not indicate preparedness as an effective factor in controlling negative emotions. This confirms that though preparedness may prevent the risk of COVID-19 infection, it is not important in bringing the behaviour back to normalcy. Optimist was found to be an important (rank 2) factor in controlling emotions employing GMM regression but was found to be insignificant employing LOGIT regression. 
This implies that though optimism is significant for controlling emotions in general, it does not help to bring the behaviour to normalcy during the pandemic. Researchers and academicians may benefit from the novel methodology employed in this study. Employing both GMM and LOGIT regression simultaneously, we could limit the numbers of independent variables to 3 most important variables, i.e. emotional resilience, social support, and avoidance. Our study has paved the way towards a novel quantified approach in psychological studies which can present robust and reliable mechanisms to analyse psychological dimensions. Future researchers may replicate our approach to extend this research direction in controlling negative emotions and perceptions regarding COVID-19 vaccination. Governments may also take the results of the study into account while devising policies related to well-being of individuals during the pandemic. The policies towards this goal may stress emotional resilience, social support, and avoidance which were found to be effective in bringing the behaviour to normalcy.
Cough and Asthma Cough is the most common complaint for which patients seek medical attention. Cough variant asthma (CVA) is a form of asthma that presents solely with cough. CVA is one of the most common causes of chronic cough. More importantly, 30 to 40% of adult patients with CVA, unless adequately treated, may progress to classic asthma. CVA shares a number of pathophysiological features with classic asthma such as atopy, airway hyper-responsiveness, eosinophilic airway inflammation and various features of airway remodeling. Inhaled corticosteroids remain the most important form of treatment of CVA as they improve cough and reduce the risk of progression to classic asthma, most likely through their prevention of airway remodeling and chronic airflow obstruction. INTRODUCTION Cough is a very common complaint for which patients seek medical attention [1,2]. A number of guidelines have defined chronic cough as cough lasting for 8 weeks or longer [1,[3][4][5]. As such, chronic cough can lead to impaired quality of life [6]. Asthma is a disease in which the airways narrow excessively in response to various stimuli in the presence of airway hyper-responsiveness (AHR) and eosinophilic airway inflammation. In "classic asthma" (CA) variable airflow obstruction typically leads to symptoms such as wheeze, dyspnea and cough. In 'variant asthma', originally described by Glauser in 1972 [7] and subsequently re-named 'cough variant asthma' (CVA) by Corrao et al. [8], cough can be the sole presenting symptom. CVA remains one of the commonest causes of chronic cough worldwide [2,9]. More importantly, in CA cough may be associated with worse prognosis [10][11][12]. This review article will discuss various subtypes of asthma and associated eosinophilic airway disorders such as CVA, non-asthmatic eosinophilic bronchitis (NAEB; originally termed eosinophilic bronchitis without asthma), and atopic cough [13][14][15][16][17][18][19]. COUGH AND ASTHMA Cough is a major symptom of asthma. Cough in asthma can be classified into three categories: CVA, cough-predominant asthma, and cough that persists despite standard therapy with inhaled corticosteroids and bronchodilators [19,20]. CVA is a subtype of asthma that usually presents solely with cough without any other symptoms such as dyspnea or wheezing [8]. In cough-predominant asthma cough is the most predominant symptom but other symptoms are also present such as dyspnea and/or wheeze [19][20][21]. These symptoms can be elicited on careful clinical history and examination. The third subtype is defined as cough that persists despite the control of other symptoms such as wheeze and breathlessness with standard treatment such as inhaled corticosteroids (ICS) and beta-agonists. There are two subtypes in this category. In the first subtype cough is responsive to anti-mediator drugs such as leukotriene receptor antagonists, histamine H1 receptor antagonists and thromboxane synthesis inhibitors or receptor antagonists. The inflammatory mediators blocked by these agents are likely involved in the development of cough [22,23]. Cough in this subtype is considered a manifestation of asthma, which is refractory to ICS and bronchodilators. The other subtype is cough due to concomitant conditions such as gastroesophageal reflux disease (GERD).
Co-existence of GERD with asthma or CVA is fairly common, and cough may subside with anti-reflux medications [24]. Such phenomena may be explained by "cough-reflux selfperpetuating positive feedback cycle" leading to vicious cycle of cough and reflux [25]. As subjective measures of cough (cough scores and visual analogue scale) and cough reflex sensitivity are poor surrogates for objective cough frequency and cough-related quality of life assessment may be more appropriate when assessing cough [26]. MUCUS HYPER-SECRETION IN ASTHMA AND CHRONIC COUGH Cough in asthma is typically dry or minimally productive, but it may also be associated with hypersecretion of mucus. Mucus hyper-secretion in asthma may be potentially related with steeper decline of pulmonary function [27] and fatal disease [28]. Measurement of secreted mucin in sputum has been reported in asthma [29], but not in chronic cough, which may involve goblet cell hyperplasia of bronchial epithelium with variable sputum production [30,31]. We conducted a cross-sectional study to examine mucin levels of induced sputum supernatant in 49 patients with CA, 53 with chronic cough (39 with CVA, 9 with sinobronchial syndrome, 5 with GERD) and 11 healthy subjects [32]. Sinobronchial syndrome (SBS) was defined as chronic sinusitis complicated by neutrophilic inflammation of the lower airways [2]. An ELISA method was used [32] to detect total levels of various types of airway mucin such as MUC5AC and MUC5B [29]. Sputum symptom was semiquantified by using a questionnaire. Sputum production was more prevalent in patients with CA, CVA, or SBS than in those with GERD and the controls. Whilst all SBS patients complained of frequent sputum production, none of the GERD patients reported sputum production, resulting in statistical differences for these two groups when compared with other disease groups ( Fig. 1) [32]. Notably, 13 of 39 CVA patients reported frequent sputum production. This may not be a true reflection of the clinical features of CVA, because only patients that succeeded with sputum induction were enrolled, possible resulting in selection bias [33]. However, it is important to notice that a subset of patients with CVA presents with productive cough. Sputum mucin levels were higher in CA and SBS than in the controls. They were also higher in CA than in CVA and GERD, but not different among the latter groups and the controls (Fig. 2) [32]. When the four disease groups were combined, patients with frequent sputum production had greater mucin levels than those with occasional or no sputum production, or controls ( Fig. 3) [32]. These results indicated that the difference in mucin levels among subjects reflected the degree of mucus hyper-secretion. Interestingly, patients with CA showed negative correlations of mucin levels with respiratory resistance indices on impulse oscillation [34] and with airway sensitivity to methacholine [35], possibly indicating protective effects of airway-secreted mucin in asthma [32]. COUGH VARIANT ASTHMA In CVA cough is the sole presenting symptom. CVA is characterized by AHR. It responds to bronchodilators such as beta-agonists and theophyllines [8,19,20]. In Japan, CVA remains the most common cause of chronic cough followed by SBS, GERD and AC [2,19,20,[36][37][38]. The 'health insurance for all' in Japan allow patients to visit a specialist without prior referral from a general practitioner. The patients in fact prefer to be assessed by a specialist [2]. 
In the majority of patients with CVA cough can be controlled with inhaled corticosteroids. General practitioners in Japan are less likely to prescribe inhaled corticosteroids than those in Western countries [39]. This may partially explain the high prevalence of CVA in Japan. ATOPIC FEATURES Seasonal variation of symptom is very common in CVA, which may implicate an involvement of atopy. We compared 74 CVA patients with 115 CA patients with wheezing with regard to total and specific IgE levels of 7 common aeroallergens [40]. The two groups of asthmatics were sensitized to one or more allergens at a similar prevalence (60% vs 67%). However, patients with CA had higher total IgE, larger numbers of sensitized allergens, and higher rates of sensitization to a number of allergens than did patients with CVA. The results revealed no specific antigen of CVA with higher sensitization rate than CA [40]. A literature review indicates that the prevalence of atopy in CVA, as defined by the presence of at least one positive serum specific IgE or skin test response to common aeroallergens, ranges from 40 to 80% [16,17,32,[40][41][42][43][44]. Fig. (2). Induced sputum levels of muciun in patients with asthma and chronic cough (ref. [32]). PHYSIOLOGICAL FEATURES Pulmonary function tests of CVA patients show normal to near normal results of peak expiratory flow (PEF) or FEV 1 , but when compared with healthy subjects or patients with post-infectious cough, these values may be slightly but significantly lower [41]. Mild diurnal change of PEF or its fluctuation in parallel with coughing may be observed [45], but to a lesser degree than that seen in CA. Reversibility of FEV 1 with beta-agonist is smaller in CVA than in CA, because baseline FEV 1 values are normal or nearly normal in the majority of CVA patients. In addition to these facts, CVA is the only cause of chronic cough that is responsive to bronchodilators [5,46]. It is thus suggested that coughing of CVA may be due to bronchoconstriction, but the detailed causal relationship involved in cough and bronchoconstriction remains unknown. A recent animal experiment has suggested that cough due to bronchoconstriction is mediated via rapidly adapting receptors, but not C fibers [47]. AHR of CVA patients has been considered similar to or milder than that of CA patients. In adult CVA patients, airway sensitivity (threshold dose of methacholine to increase respiratory resistance) and airway reactivity (slope of methacholine -respiratory resistance dose response curve) of a tidal breathing method [35] were both smaller than in CA patients [48]. In children, only airway reactivity was smaller in CVA patients than in CA patients [49], possibly explaining the absence of wheezing in CVA. As a whole, the physiological abnormalities of CVA are more modest than those of CA. However, this does not mean that CVA is a milder form of asthma, because CVA patients are often more difficult to manage than CA patients who predominately present with wheeze. Cough receptor sensitivity, most commonly assessed by inhalation of capsaicin, may or may not be heightened in CVA as compared with healthy controls, and may decrease with treatment with leukotriene receptor antagonists [50] while remain unchanged with ICS treatment [51]. This might be associated with excellent antitussive effect of the former class of drugs [52], but the details of its mechanism remain unknown. 
PATHOLOGICAL FEATURES In patients with CVA, eosinophils are increased in the sputum [51,53], bronchoalveolar lavage (BAL) fluid and bronchial mucosal tissue [41]. The magnitude of this increase correlates with the severity of disease as defined by symptom and treatment required to achieve control [41]. For biopsy specimens of central airway mucosa and BAL fluid recovered from peripheral airways and lung parenchyma, the degree of eosinophilia is similar between CA and CVA, indicating a similarity in the site of inflammation [41]. One study showed an increase of neutrophils as well as eosinophils in the airway mucosa of CVA patients (Fig. 4) [30], and such concomitant increase of both cells may lead to more severe disease characterized by refractoriness to treatment with ICS [54]. Mast cells, an important source of tussive as well as fibrogenic mediators, was increased in the airway mucosa of non-asthmatic chronic cough patients but not CVA patients (Fig. 4) [30]. Similar to asthma, in CVA structural changes such as sub-epithelial thickening, goblet cell hyperplasia and vascular proliferation have been demonstrated on mucosal biopsies [30,55,56]. These changes may be secondary to airway inflammation. Pathophysiological significance of these changes has been indicated in asthma, and early antiinflammatory treatment is also recommended in CVA. However, they may also be a consequence of long-term mechanical stimulation by coughing [30,56,57], because most of these changes are also present in subjects with nonasthmatic chronic cough [30,56]. Increased inflammatory mediators (e.g. histamine, prostaglandins D 2 and E 2 and leukotrienes C4, D4 and E4) [23], increased expression of capsaicin receptor TRPV-1 [58], and decreased pH of airway lining fluid that may activate TRPV-1 [59] may play a role in the development of cough. One study showed a similar pattern of sputum markers (cellular and humoral) between CA and CVA [60]. A computed tomography (CT) study has Fig. (4). Number of eosinophils, neutrophiils and mast cells in the submucosa of bronchial biopsy specimens obtained from patients with CVA (n=14), those with non-asthmatic chronic cough (n=33; 6 with postnasal drip/rhinitis, 5 with GERD, 3 with bronchiectasis, and 19 with idiopathic disease) and 15 healthy subjects [30]. revealed airway wall thickening, a feature of CA [61], in patients with CVA (Fig. 5a, b) [42]. This may reflect the net effect of airway remodeling features discussed above. However, airway wall thickening is also present in patients with non-asthmatic chronic cough although to a lesser degree (Fig. 5b) [42], which is consistent with the biopsy studies [30,56]. DIAGNOSIS CVA is characterized by AHR and responsiveness to bronchodilators, but the presence of AHR is only consistent with, but not diagnostic of, CVA [46]. Improvement of cough with bronchodilators such as beta-agonists is the essential diagnostic feature of CVA, as demonstrated by the double-blind controlled study by Irwin et al. [46]. Based on these features, the Japanese Respiratory Society cough guideline considers responsiveness to bronchodilators as the key diagnostic feature of CVA [5]. Sputum eosinophilia suggests a diagnosis of CVA [51,53], as well as AC or NAEB. However, only 30% of CVA patients fulfill the criteria of sputum eosinophilia as defined by >3% of leukocytes [14] in our experience (unpublished data), and its absence does not exclude the diagnosis of CVA. Exhaled NO levels may also be elevated and useful in the diagnosis of CVA [62]. 
TREATMENT AND PROGNOSIS After the diagnosis is established, treatment of CVA is essentially the same as in CA [5]. Bronchodilators (shortacting inhaled 2 agonists or theophyllines) may be used especially in patients with intermittent cough. However, as in CA eosinophilic airway inflammation and remodeling are present in CVA, ICS are the first line treatment in CVA especially in those patients who have persistent cough [30,41,55,56]. There are no data currently available regarding the choice of ICS, its dose or duration that should be used for the treatment of CVA [3,52,63]. If ICS mono-therapy is insufficient, other agents can be added such as long-acting 2 agonists, slow-release theophylline, or leukotriene receptor antagonists [5]. Effectiveness of mono-therapy with leukotriene receptor antagonists has been reported possibly through its anti-inflammatory effects [50,64]. Occasionally, (a) (b) Fig. (5). Airway wall thickening in patients with CVA (n=27) and non-asthmatic chronic cough (n=26; 8 with SBS, 3 with GERD, 3 with post-infective cough, 11 with unexplained cough, and 1 with combined SBS and GERD), as indicate by helical CT scans of right apical upper lobe bronchus (a). Airway wall thickness (wall area/body surface area) as quantified by automatic analysis shows that airway walls of patients are thickened as compared with healthy subjects (n=15), to a greater degree in patients with CVA (b) [42]. for acute exacerbations of CVA, a short-course of oral corticosteroids may be required. A subset of patients with CVA can develop wheeze and progresses to CA. If ICS are not used in CVA, the progression rate to CA has been reported between 30 to 40% [17,43]. Factors that may predict the development of CA include AHR [17] and exaggerated maximal airway response to methacholine, sputum eosinophilia [65], and sensitization to allergens [40]. Early ICS treatment may reduce the risk of progression to CA [17,43]. Avoidance of relevant allergens might also be important [40]. As in CA cough in CVA often re-occurs if treatment is discontinued [43]. Annual changes of FEV 1 have been reported to be similar among patients with CVA (-29 ml/year by average), those with AC (-21 ml/year) and healthy subjects (-28 ml/yr). Values for CVA patients ranged from approximately -90 ml to +30 ml [18]. In our 3-year follow-up [19], annual changes of FEV 1 were -2 ± 36 ml in 7 patients with mild CVA (coughing episodes interfering with usual activities or sleep 3 times or less yearly), and -62 ± 35 ml in 4 patients with difficult-to-control disease (4 or more such episodes yearly)(p=0.014). The difficult-to-control group was treated with higher doses of ICS than the mild group, reflecting their severity [19]. Although this is a small study, these results may be consistent with recent studies in CA that showed a positive relationship between the number of asthma exacerbations and progressive loss of FEV 1 [66]. DISORDERS RELATED TO CVA Atopic cough as proposed by Fujimura et al. [16-18, 44, 51] presents with bronchodilator-resistant dry cough associated with an atopic constitution. It is characterized by eosinophilic tracheobronchitis and cough hypersensitivity. However, there is absence of AHR and variable airflow obstruction. AC usually responds to ICS treatment. These features are shared by NAEB [13][14][15]53]. However, AC lacks BAL eosinophilia [16]. Unlike CVA [17,41,43,44] and NAEB [19,67,68], AC rarely progresses to CA with wheezing [17]. 
Histamine H1 antagonists are effective in AC [69], but their efficacy in NAEB is unknown. The involvement of airway remodeling and an accelerated decline of lung function, which has been shown in CVA [19,30,42,55] and NAEB [15,70,71], is unknown for AC. NAEB thus overlaps significantly with AC, but might also include milder cases of CVA with very modest AHR. The clinical and pathological features of eosinophilic airway disorders including CA are summarized in Table 1. The confusion, or lack of consensus, surrounding these related entities may affect the etiology of chronic cough reported from various countries [2,19].
CONCLUSIONS
CVA is one of the most common causes of chronic cough and should therefore be considered in patients presenting with persistent cough. Inhaled corticosteroids play a very important role in CVA: they not only control cough but may also prevent the development of wheeze, airway remodeling and chronic airflow obstruction.
Antimicrobial Resistance among Enterobacteriaceae Found in Chicken and Cow Droppings and Their Public Health Importance
Introduction: The recent surge in the number of antimicrobial-resistant cases from hospitals and communities has created a need to study the points and sources of exposure to certain bacteria and determine their susceptibility to commonly used antibiotics. This study aimed at identifying and screening for drug-resistant Enterobacteriaceae isolated from chicken droppings and cow dungs in Onitsha, Anambra state, in the South-Eastern part of Nigeria. Methods: This is a cross-sectional descriptive study which included 50 chicken dropping and 50 cow dung samples collected from five poultry houses and cow ranches respectively using sterile swab sticks. The samples were transported to the laboratory and processed following standard microbiological protocols. Isolates in the samples were recovered using MacConkey Agar, Eosin Methylene Blue Agar and Salmonella-Shigella Agar following standard microbiological procedures and then identified/characterized biochemically using commercial API 20E identification kits following the standard manufacturer's protocol. Isolates were subjected to antibiotic susceptibility testing on Mueller-Hinton Agar using the Kirby-Bauer double-disc diffusion technique. The multiple antibiotics resistance index was determined as well. Results: Isolates with reduced susceptibility genes, including P. penneri, P. mirabilis and C. braakii, were recovered from chicken droppings, whereas 13/43 (30.2%) Enterobacteriaceae including K. pneumoniae, S. enteritica, S. odorifera, E. coli, K. intermediate, P. stuartii, E. aerogenes, P. penneri, P. mirabilis and C. braakii were recovered from cow dungs. Two (12.5%) different isolates demonstrated metallo-beta-lactamase and cephalosporinase (AmpC) production. The isolates were susceptible to six of the antibiotics tested, the exceptions being Augmentin and Nitrofurantoin, to which resistance was 100% and 85% respectively, while Ceftriaxone and Ofloxacin had the best antibacterial activity against the isolates from both sites. Conclusion: The bacteria of public health importance isolated from these sites and their antibiogram profile have shown the need for proper monitoring and management of animal wastes in order to mitigate the threat to human health in the spirit of One Health as well as contribute to the fight against antibiotic resistance.
Introduction
The Enterobacteriaceae, a family of aerobic, Gram-negative rods that naturally inhabit the intestinal tract of humans and animals, have been implicated in many human diseases. This family of bacteria has recently been experiencing a rise in the incidence of resistance to antibiotics, as reported in many countries of the world, thus posing a growing threat to healthcare delivery [1] [2]. They are of particular clinical importance as causes of nosocomial and community-acquired bacterial infections. With the continued economic activities in poultry practice and cattle ranching, and the increased exposure of both crop fields and humans to antibiotic-resistant bacteria present in chicken and cow excreta, human contact with enterobacterial infections is inevitable and constitutes a threat to public health [3]. Moreover, previous studies have attributed the emergence and spread of resistant bacteria to the irrational use of antibiotics in the practice of animal husbandry [3] [4].
Resistance of the Enterobacteriaceae to commonly used antibiotics in the last decade has reached alarming proportions, causing increased public health concern [5] [6]. The mechanisms of resistance to such antibiotics usually involve efflux pumps, enzymatic modification of the antibiotic, selective pressure and antibiotic inactivation [4] [5] [6] [7]. Commonly used antibiotics are becoming less useful owing to resistance, and most of the antibiotics considered as last resort are also becoming ineffective for the same reason [8]. Furthermore, carbapenems are a β-lactam group of drugs currently used as antibiotics of last resort for treating infection because of the problem of multidrug resistance, especially among Gram-negative rods [2] [9] [10]. This resistance has largely been attributed to the production or acquisition of carbapenemases among the Enterobacteriaceae family [11]. Originally, organisms belonging to the Enterobacteriaceae family were susceptible to carbapenems, but this is no longer true due to the emergence of carbapenem-resistant Enterobacteriaceae in recent years, posing a serious health concern [10]. The Centers for Disease Control and Prevention (CDC) in 2013 reported that carbapenem-resistant Enterobacteriaceae (CRE), which emerged within the past two decades, among other multidrug-resistant organisms, have remained the major cause of untreatable and hard-to-treat infections among hospitalized patients, and are considered an urgent threat to human health [2] [10]. Early detection of CRE in human and animal hosts is highly recommended for controlling both the infections they cause and their spread [2]. Consequently, contamination of food and food-producing animals with MDR bacteria harboring MBLs and AmpC enzymes could be a source of antibiotic resistance [12]. Over the last few decades, several extended-spectrum β-lactamase (ESBL)- and AmpC-producing Enterobacteriaceae (EPE) have emerged in both human and animal health management globally, with animals being touted as the transmission link of ESBLs/AmpCs for humans [10] [11] [13]. In addition, Ejikeugwu et al. reported AmpC-producing Enterobacteriaceae, as well as MDR and MBL production, amongst Klebsiella spp. isolated from cow anal swabs in studies carried out in Nigeria [12] [14]. Although some studies have been carried out at poultry and animal houses in Nigeria, there is a paucity of data on antimicrobial resistance arising from poultry droppings and cow dungs, especially in South-Eastern Nigeria, creating the research gap addressed by this study. In this study, we proposed the hypothesis that antibiotic resistance in Enterobacteriaceae recovered from both poultry droppings and cow dungs could be attributed to intrinsic genetic factors possessed by these organisms that enhance the production of MBL and AmpC enzymes. Hence, this study was aimed at identifying and screening for drug-resistant Enterobacteriaceae isolated from chicken droppings and cow dungs in Onitsha, Anambra state, in the South-Eastern part of Nigeria.
Study Setting
This is a cross-sectional descriptive study conducted from August 2020 to April 2021 at the Nkwelle suburb, Onitsha North Local Government Area, Onitsha. Onitsha is a metropolitan city located near the River Niger in Anambra state, South-Eastern Nigeria. It lies at coordinates 6°10'N 6°47'E, covers a landmass of 36.12 km², and has a population of 561,066 [15].
The study included five different poultry houses and cow ranches with capacities of 500 birds and 200 cows respectively. Simple random sampling was used to collect swabs of 50 chicken droppings and 50 cow dungs respectively. Information obtained from the farm attendants showed that, although proper hygiene is
Collection and Transportation of Fecal Samples
Four heaped dessert-spoonfuls of fresh, warm dung pats were collected from the ground from adult cattle at 5 different areas where adult cows congregate. They were mixed, and a sterile swab stick was then used to rub through the mixture. Fifty (50) chicken dropping and cow dung swab samples each were collected from the poultry houses and cow ranches in Nkwelle, Onitsha using sterile swab sticks, early in the morning before the attendants came for their routine cleaning and tending of the chickens and cows. Samples were collected by rubbing a wet sterile swab stick on chicken and cow excreta dropped at different locations of the same poultry house and ranch, at five different places. Each day, the collected samples were transported aseptically and processed within 2 hours of collection. The samples were processed at the laboratory of the Department of Pharmaceutical Microbiology and Biotechnology, Faculty of Pharmaceutical Sciences, Agulu, Nnamdi Azikiwe University, Awka.
Culture and Purification of Fecal Samples
The chicken dropping and cow dung swab samples were each cultured in 5 ml of double-strength nutrient broth (CM0003, Oxoid, UK) and incubated overnight at 30˚C. A loopful of the specimen was transferred aseptically onto MacConkey agar (MAC) plates, Eosin Methylene Blue (EMB) agar, Cetrimide selective agar plates, and Salmonella-Shigella agar for the selective isolation of Klebsiella species, Escherichia coli, P. aeruginosa and Salmonella-Shigella species respectively, and incubated at 30˚C for 18-24 hours for phenotypic characterization. Phenotypically, Escherichia coli produces colonies with a metallic green sheen on EMB agar and lactose-fermenting colonies on MAC; Klebsiella species produce small, circular, elevated and mucoid colonies on MAC and non-metallic-sheen mucoid colonies on EMB agar; while P. aeruginosa isolates produce greenish pigmentation on Cetrimide selective agar [16].
Identification and Confirmation of Bacterial Isolates Using BioMerieux API 20E Kit
After Gram staining and microscopic examination, the various isolates obtained were characterized biochemically using the BioMerieux API 20E kit to further identify the organisms to species level. The pure cultures were characterized using the Analytical Profile Index (API) 20E test strip, which comprises 20 micro-tubes seeded with dehydrated substrates for enzymes produced by the Enterobacteriaceae family. To allow for some moisture during incubation, distilled water (5 ml) was spread onto the honeycomb wells of the tray to ensure a humid environment in the incubation box before the strips were placed in the tray. Discrete colonies collected from 20-hour overnight culture plates were inoculated into 5 ml of API 20E suspension medium and emulsified to ensure a homogeneous bacterial inoculum. Following the manufacturer's specifications, the preparations were carefully distributed into the tubes of the strips, ensuring that no air bubbles were left. The tube and cupule were filled with the suspension for the GEL, VP and CIT tests, while only the tube was filled for the remaining test segments.
To create an anaerobic environment for the ADH, LDC, ODC, H2S and URE tests, the tubes were overlaid with mineral oil and the setup was incubated for 24 hours at 37˚C. After the incubation period, one drop each of TDA and James reagents was added to the TDA and IND tubes respectively, and one drop each of VP1 and VP2 was added to the VP tube. The results were read and recorded after 10 minutes.
Antimicrobial Susceptibility Testing
Antimicrobial susceptibility testing was carried out on all the identified bacterial isolates.
Screening for Extended-Spectrum β-Lactamase Production Using ROSCO Kits
For the phenotypic detection of the various β-lactamases present in the strains, double-tablet synergy testing using subjective observation of synergy and the combination tablet method was used [13]. Attention was given to the differences in the zones of inhibition. The susceptibility tests were performed following the M2-A6 disc diffusion method on Mueller-Hinton agar plates as recommended by the National Committee for Clinical Laboratory Standards [17]. A standardized (0.5 McFarland) inoculum was swabbed onto the Mueller-Hinton agar plate using sterile swabs, and the discs were aseptically placed on the inoculated plates and pressed firmly onto the agar for complete contact, while ensuring sufficient space between individual discs to allow for proper measurement of inhibition zones. The test isolates were tested against the selected study antibiotics.
Screening for AmpC β-Lactamase Production
To test for AmpC β-lactamase production, the test organisms were screened for presumptive AmpC production by testing their susceptibility to cefoxitin (30 μg) using Kirby-Bauer disk diffusion. Following the CLSI standard, isolates with an inhibition zone diameter (IZD) of ≤18 mm were suspected to produce the AmpC enzyme.
Screening for Metallo-β-Lactamase (MBL) Production
The test isolates were phenotypically screened for the production of MBL. Their susceptibility to imipenem (IPM), meropenem (MEM), and ertapenem (ETP) was also determined according to the CLSI criteria. Test isolates with an IZD of ≤23 mm were suspected to harbor the MBL enzyme.
Determination of Multiple Antibiotic Resistance Index (MARI)
The MARI was calculated using the formula:
MARI = (number of antibiotics to which the isolate was resistant) / (total number of antibiotics to which the isolate was subjected)
The incidence of multidrug-resistant isolates was calculated from the formula [15]:
Incidence of multidrug-resistant isolates (%) = (number of isolates with MARI ≥ 0.3 / total number of isolates) × 100
Data Analysis
All the data collected were summarized and tabulated using Microsoft Excel 2016. The results were calculated as percentages and presented in tables, while the Multiple Antibiotic Resistance index (MAR index) was calculated for each isolate and tabulated in the Results section.
Results
A total of 43 (69.4%) Enterobacteriaceae were recovered out of the 62 (100%) Gram-negative bacteria (GNB) isolates obtained in this study. Table 1 shows the frequency of Gram-negative bacteria recovered from the samples collected, from 40 (80%) chicken droppings and 22 (44%) cow dungs. K. pneumoniae and E. coli had the highest frequency in chicken droppings (15%), while K. pneumoniae and E. cloacae had the highest frequency in cow dung (18.1%). Lactose-fermenting Enterobacteriaceae produced pink colonies on MAC, whereas non-lactose fermenters produced pale colonies on MAC; all isolates were negative for the oxidase test.
Characterization of the isolates biochemically using the API 20E kit identified the isolates accordingly (Table 1). Table 2 and Table 3 show the antibiogram profiles of the isolates against the tested antibiotics together with their respective MARI. All the isolates from chicken droppings, except Citrobacter braakii, were resistant to Augmentin, while the isolates Serratia odorifera and Enterobacter cloacae were resistant to all the antibiotics tested (Table 2). Ceftriaxone and Ofloxacin had the best antibacterial activity against the isolates from both sites. All the isolates from cow dungs were resistant to Augmentin, while Shewanella putrefaciens was resistant to all the antibiotics tested (Table 2). Table 2 and Table 3 show the multidrug-resistance profiles of the isolates, with most of the isolates showing resistance to three or more antibiotics (MARI above 0.2) and hence considered multi-antibiotic-resistant strains. Table 4 shows the results of the screening tests for AmpC production among the 16 MDR isolates. From this test, Escherichia coli, Proteus penneri, Pantoea spp. and Shewanella putrefaciens were positive for AmpC production. The results of the screening tests for metallo-β-lactamase (MBL) production (Table 5) among the 16 multidrug-resistant isolates revealed that Serratia odorifera and Enterobacter cloacae were positive for MBL production.
Discussion
Antimicrobial resistance driven by selective pressure continues to rise. The evaluation of the antimicrobial resistance profiles of the recovered isolates is therefore paramount, as antibiotic-resistant bacteria in animal excreta are an emergent concern. The resistance profiles of all Enterobacteriaceae isolates were evaluated by exposure to eight antibiotics of four different classes for the phenotypic characterization of the isolates. These were chosen to represent the main antibacterial classes used in human medicine and livestock production in Nigeria. The resistance of all isolates from chicken droppings, except Citrobacter braakii, to Augmentin, as well as the resistance of Serratia odorifera and Enterobacter cloacae to all the antibiotics tested, can be attributed to the use of extended-spectrum cephalosporins such as cefoxitin and cefotaxime in livestock. The results of this research buttress the findings of [11], who reported that the presence of carbapenemase-producing Enterobacteriaceae in animals is becoming worrisome. These organisms are all of public health importance, in line with the report [2] that carbapenem-resistant Enterobacteriaceae are organisms of medical importance, with the report from Southern China [18] of high levels of carbapenem-resistant Acinetobacter spp. from clinical infections, and with fecal survey samples from Portugal [3]. The antibiogram profiles of the bacterial isolates from both sites revealed that some of the bacterial isolates were highly resistant to commonly used agents, although some of the isolates were sensitive to Ofloxacin and moderately sensitive to Ceftazidime. Most of the isolates were resistant to three or more antibiotic classes and hence are considered multi-antibiotic-resistant strains. Among isolates from chicken droppings, only one isolate, Providencia stuartii, gave a MARI of <0.20 (0.13), with the others having 0.20 and above, while Ewingella americana and Providencia stuartii both had a MARI of 0.13 among organisms isolated from cow dung.
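To make the MARI and multidrug-resistance incidence calculations defined in the Methods concrete, here is a minimal Python sketch of the arithmetic. The isolate counts and the helper names are invented for illustration and are not data from this study.

```python
# Minimal sketch of the MARI and MDR-incidence arithmetic described in the Methods.
# The resistance counts below are invented for illustration only.

def mari(n_resistant, n_tested):
    """Multiple antibiotic resistance index = resistant antibiotics / antibiotics tested."""
    return n_resistant / n_tested

n_antibiotics = 8                       # panel size used in this illustration
resistant_counts = [8, 7, 3, 1, 5, 2]   # resistant antibiotics per hypothetical isolate

mari_values = [mari(r, n_antibiotics) for r in resistant_counts]
print([round(m, 2) for m in mari_values])          # [1.0, 0.88, 0.38, 0.12, 0.62, 0.25]

# Incidence of MDR isolates = (isolates with MARI >= 0.3 / total isolates) x 100
mdr = sum(1 for m in mari_values if m >= 0.3)
print(round(100.0 * mdr / len(mari_values), 1))    # 66.7
```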
The multiple antibiotic resistance index (MARI) is a protocol used to describe the spread of bacterial resistance and resistance genes in any bacterial population [19] [20]. Generally, a MARI above 0.20 means that bacterial isolates originating from such an environment have been exposed to indiscriminate use of several antibiotics in the past [19] [21]. The multidrug resistance to antibiotics of different classes observed in this study may be due to the increasing administration of quinolones to treat avian infections [3]. The unnecessary use of antibiotics for growth enhancement and disease prevention in farm animals has imposed selective pressures that induce more resistance among bacteria in the community. Of the sixteen isolates examined phenotypically for the production of metallo-β-lactamase (MBL), two (12.5%) were positive for the production of this enzyme. The production of MBL in this study is similar to that in a previous study conducted by Ejikeugwu et al. [12], in which MBL was detected in Klebsiella species isolated from cow anal swabs. Only two AmpC-producing Enterobacteriaceae were detected when the isolates were phenotypically screened for the enzyme. This is similar to an earlier report by Ejikeugwu et al. [14], in which AmpC enzymes were significantly detected in E. coli and Klebsiella species isolated from cow anal swabs from an abattoir in Abakaliki, Nigeria. These results illustrate that the AmpC- and metallo-β-lactamase (MBL)-producing species isolated in this study are multidrug resistant. They also produce AmpC and metallo-β-lactamase (MBL) enzymes, which allow them to resist the 2nd- and 3rd-generation cephalosporins that are clinically used to manage and treat serious bacterial infections. Furthermore, all the isolates screened phenotypically for ESBL production were negative; this is in contrast to the review by Madec et al. [11], in which a study of 699 S. enterica isolates from 1152 retail chickens in Shanghai, China reported a 24.6% rate of ESBL producers. This study is relevant and provides a springboard for addressing the increasing prevalence of antibiotic resistance in non-nosocomial environments such as abattoirs. Moreover, it lends support to concerns about the possible abuse and irrational use of antibiotics in animal husbandry and for other non-clinical purposes. Fecal excrement of chickens and cows in constant contact with the humans who rear them poses a high risk of cross-contamination, and further affects the antimicrobial resistance status and, ultimately, the public health standards of nations.
Study Limitations
The molecular characterization of the isolates to further confirm their identity was not conducted at the time of this writing due to limited funding. Also, the genes responsible for drug resistance could not be identified because of the same financial constraints.
Conclusions
The bacteria of public health importance isolated from these sites and their antibiogram profile have shown the need for proper monitoring and management of animal wastes in order to mitigate the threat to human health in the spirit of One Health as well as contribute to the fight against antibiotic resistance.
What is known about this topic?
World over:
• Resistance of the Enterobacteriaceae to commonly used antibiotics in the last decade has been of an alarming proportion, causing increased public health concerns.
• The last few years have witnessed the proliferation of several extended-spectrum β-lactamase (ESBL)- and AmpC-producing Enterobacteriaceae (EPE) in both human and animal health management globally.
• Animals have been touted as the transmission link of ESBLs/AmpCs for humans, demanding an urgent response.
What this study adds
This study has shown that:
• Resistant strains are present among Enterobacteriaceae found in chicken and cow droppings.
• There is a need for proper monitoring and management of animal wastes in order to mitigate the threat to human health in the spirit of One Health.
• There is further evidence that animals could serve as the transmission link of ESBLs/AmpCs to humans.
Acknowledgements
This study involved the Department of Pharmaceutical Microbiology and Biotechnology, Faculty of Pharmaceutical Sciences, Nnamdi Azikiwe University, Agulu, Anambra state, Nigeria, and the Physician Assistantship/Public Health Department of Central University, Accra, Ghana. We would like to acknowledge all members of our team and the staff of the laboratory for their continuous commitment to our research efforts.
Conflicts of Interest
None to declare.
Funding
The study did not receive external funding; instead, it was self-funded.
Comparing resident operative volumes for routine general surgery cases at academic, urban community, and rural training sites
Background: Surgical training traditionally took place at academic centres, but changed to incorporate community and rural hospitals. As little data exist comparing resident case volumes between these locations, the objective of this study was to determine variations in these volumes for routine general surgery procedures. Methods: We analyzed senior resident case logs from 2009 to 2019 from a general surgery residency program. We classified training centres as academic, community, and rural. Cases included appendectomy, cholecystectomy, hernia repair, bowel resection, adhesiolysis, and stoma formation or reversal. We matched procedures to blocks based on date of case and compared groups using a Poisson mixed-effects model and 95% confidence intervals (CIs). Results: We included 85 residents and 28 532 cases. Postgraduate year (PGY) 3 residents at academic sites performed 10.9 (95% CI 10.1–11.6) cases per block, which was fewer than 14.7 (95% CI 13.6–15.9) at community and 15.3 (95% CI 14.2–16.5) at rural sites. Fourth-year residents (PGY4) showed a greater difference, with academic residents performing 8.7 (95% CI 8.0–9.3) cases per block compared with 23.7 (95% CI 22.1–25.4) in the community and 25.6 (95% CI 23.6–27.9) at rural sites. This difference continued in PGY5, with academic residents performing 8.3 (95% CI 7.3–9.3) cases per block, compared with 18.9 (95% CI 16.8–21.0) in the community and 14.5 (95% CI 7.0–21.9) at rural sites. Conclusion: Senior residents performed fewer routine cases at academic sites than in community and rural centres. Programs can use these data to optimize scheduling for struggling residents who require exposure to routine cases, and help residents complete the requirements of a Competence by Design curriculum.
General surgery training has long followed a template in which residents spend most of their time at large academic centres in order to experience a wide breadth of surgical cases and provide the necessary clinical service for quaternary surgical services to run effectively. 1 Over time, community and rural resident training sites have become part of training paradigms to provide residents with real-world experience and can provide a fresh environment for training. These sites are high volume in the common, routine general surgery procedures that all general surgeons are required to perform, but perhaps lack overall complexity. 2 Additionally, these experiences often do not have the same service requirements, which would allow for a more focused, technical educational experience.
In Canada, general surgery residency training programs tend to be varied with regard to the location of training and the case mix. At the University of British Columbia (UBC), the residency program has evolved and is integrated within the province to create a distributed training model. Residents now spend large portions of their time as senior trainees away from academic centres. As the UBC training program is the only one in the province, and given the very large geographic area it covers, trainees have opportunities to visit many different locations, including those in urban community sites and in rural areas.
Comparison of general surgery resident case volumes in academic versus community versus rural sites is lacking in the medical education literature. Some evidence exists that community programs have high operative volumes with ample time for junior residents to operate. 3 The relatively unique structure of the UBC training program provides an opportunity to investigate whether there is a difference in case volume for residents among these 3 types of sites. Although complex procedures performed at academic centres are important for resident education, routine procedures, such as hernia repairs, cholecystectomies, and bowel resections, are expected to be performed by any general surgeon and are fundamental to master before completion of training. These are also very appropriate cases for senior residents. If there are differences between various training sites, general surgery training programs could use these findings to tailor education to resident and population needs and goals.
Methods
We carried out a retrospective review of resident case logs through the T-Res logging system used at UBC for cases performed by residents from July 1, 2009, to June 30, 2019. We included cases deemed to be routine procedures (Box 1). We derived these from the provincial privileging dictionary based on procedures that any surgeon in the province should be proficient in, that fit into a category in our logging system, and that are routinely performed by senior residents. We included both minimally invasive surgery and open cases. We included cases for senior residents only, defined as residents in postgraduate year (PGY) 3 to 5, who were listed as the primary operator. We excluded junior residents as they do not rotate away from academic centres. We excluded residents who logged fewer than 100 cases in a year and excluded cases with data-logging errors. Finally, we also excluded out-of-province elective cases. We based site classification as academic, urban community, or rural on long-standing designations by the local health authorities and the residency program (Appendix 1, Fig. A.1, available at www.canjsurg.ca/lookup/doi/10.1503/cjs.005323/tab-related-content). Although no rigid criteria were in place for these designations, academic sites were generally quaternary centres with a substantial number of research faculty and ready access to ancillary services such as complex critical care and interventional radiology. Community sites were usually tertiary hospitals with access to most ancillary services and minimal research faculty. Rural sites usually comprised primary and secondary hospitals with basic operative capabilities, but lacked many ancillary services. Resident rotations followed a standard rotation matrix, although mild variations in this were possible at the request of residents based on career objectives (Appendix 1, Fig.
The primary outcome for this study was case volume per PGY per site per block. Given the variations in how long residents spend at certain sites, standardizing cases per block allows direct comparisons between resident volumes and increases the applicability of the results.
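As a rough illustration of how this primary outcome can be derived from a case log, the short Python sketch below counts cases per resident, PGY, site type, and block. The column names, block-numbering rule, and example records are assumptions made for illustration only; they are not the actual T-Res schema or study data.

```python
# Illustrative sketch only: derive routine-case counts per block from a case log.
# Column names and the simple 28-day block rule are assumptions, not the T-Res schema.
import pandas as pd

log = pd.DataFrame({
    "resident_id": ["R1", "R1", "R1", "R2", "R2"],
    "pgy":         ["PGY4", "PGY4", "PGY4", "PGY3", "PGY3"],
    "site_type":   ["community", "community", "academic", "rural", "rural"],
    "case_date":   pd.to_datetime(["2018-07-05", "2018-07-20", "2018-11-02",
                                   "2018-07-09", "2018-08-14"]),
})

# Assume 13 four-week blocks per academic year starting July 1 (a simplification).
academic_year_start = pd.Timestamp("2018-07-01")
log["block"] = (log["case_date"] - academic_year_start).dt.days // 28 + 1

cases_per_block = (
    log.groupby(["resident_id", "pgy", "site_type", "block"])
       .size()
       .rename("cases")
       .reset_index()
)
print(cases_per_block)
```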
We calculated summary statistics and used a mixed-effect Poisson regression model to estimate the mean number of procedures performed by PGY and practice setting. The mixed-effect Poisson regression model included resident ID as a random effect and both PGY and practice setting as interacting fixed effects. We estimated the mean rate of procedures per block and associated 95% confidence intervals (CIs) for each combination of PGY and practice setting. We considered estimates with nonoverlapping 95% CIs to be statistically significant. Academic years 2009-2011 had 12 blocks per year instead of the 13 used from 2011 onward, so we repeated the modelling for sensitivity analysis using procedures standardized to 13 blocks instead of 12. For these sensitivity analyses, we multiplied the number of procedures in each block in academic years 2009-2011 by a factor of 12/13 and rounded down to the nearest integer. We performed all statistical tests and modelling using R version 4.2.2. This study was approved by the UBC Research Ethics Board.
Results
Demographic information is summarized in Table 1. During the study period, 85 unique general surgery residents had appropriate case logs. Of these, 73 had records during PGY3, 74 during PGY4, and 62 during PGY5. In total, 27 unique training sites were involved in training, comprising 5 academic centres, 13 community centres, and 9 rural centres. Residents logged 48 772 cases overall. Of these, 18 503 (37.9%) cases were performed at academic centres, 22 492 (46.1%) in community centres, and 7777 (16.0%) in rural centres. Of these cases, 28 532 were routine general surgery cases (58.5% of total cases). Residents performed 9744 of these cases (34.1%) in academic centres, 14 001 (49.1%) in community centres, and 4787 (16.7%) in rural centres.
The average number of blocks spent during the calendar year in academic sites was 5.53 for PGY3, 4.16 in PGY4, and 4.98 during PGY5. For community sites, 3.02 blocks were spent in PGY3, 4.41 in PGY4, and 4.62 in PGY5. Finally, for rural sites, 3.30 blocks were spent in PGY3, 2.91 in PGY4, and 1.86 in PGY5. Overall case volumes per year and site are presented in Table 2. For cases per year per block per site (Table 3), community PGY3 residents performed more routine cases than those at academic sites, with 14.8 (13.7-16.0) versus 10.8 (10.0-11.6) cases per block. We saw the same trend when comparing rural and academic sites, with 15.3 (14.2-16.5) versus 10.8 (10.0-11.6) cases per block. In the PGY4 year, community residents performed more cases than those at academic sites, with 23.8 (22.2-25.5) versus 8.7 (8.0-9.3) cases per block. Again we noted this trend in a comparison of rural and academic sites, with 25.7 (23.6-27.9) versus 8.7 (8.0-9.3) cases per block. Finally, with respect to the PGY5 year, community residents again performed more cases than those at academic sites, with 20.1 (18.7-21.6) versus 8.7 (8.0-9.3) cases per block. We saw the same trend when comparing rural and academic sites, with 15.1 (12.9-17.7) versus 8.7 (8.0-9.3) cases per block. Of note, case volumes per block between community and rural centres each year did not show any statistically significant difference. Overall relationships between sites and years are summarized in Figure 1.
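The study's estimates were produced with a mixed-effect Poisson model in R; as a hedged, simplified illustration of the same idea, the Python sketch below fits a fixed-effects-only Poisson regression with a PGY-by-setting interaction to invented counts. The per-resident random intercept described in the Methods is deliberately omitted here, and all data and column names are hypothetical.

```python
# Simplified, illustrative analogue of the Poisson model described in the Methods.
# This sketch omits the resident-level random intercept and uses invented data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cases":   [11, 9, 15, 14, 16, 8, 24, 26, 9, 19, 14, 7],   # cases per block (invented)
    "pgy":     ["PGY3"] * 6 + ["PGY4"] * 3 + ["PGY5"] * 3,
    "setting": ["academic", "academic", "community", "community", "rural", "academic",
                "community", "rural", "academic", "community", "rural", "academic"],
})

# Poisson regression with interacting fixed effects for PGY and practice setting
model = smf.glm("cases ~ C(pgy) * C(setting)", data=df,
                family=sm.families.Poisson()).fit()

# Estimated mean cases per block for each PGY/setting combination
grid = df[["pgy", "setting"]].drop_duplicates().reset_index(drop=True)
grid["mean_cases_per_block"] = model.predict(grid)
print(grid)
```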
Discussion
General surgery training requires residents to be exposed to and competent in several different procedures. Case volumes are an important factor in documenting and achieving proficiency. 4 To our knowledge, this study is the first to compare 3 different types of resident training sites in a single program to see whether there are differences in case volumes for routine procedures.
Results from this study show that, in training, differences exist in overall routine case volumes per training site that are significant in PGY4 and PGY5. There is no statistically significant difference for residents in PGY3. We also looked at routine case volume per block per site and found that residents were performing about 1.5-3 times the number of routine cases at nonacademic sites as at academic sites, depending on the year. When we compared the community and rural volumes, however, these results showed there was no difference in case volumes. That said, it is important to note that the PGY5 rural case volumes are extremely low, which may limit results. This is likely a result of time off for exam preparation and required rotations at academic centres within the PGY5 year. Only residents who are interested in rural practice tend to request these rotations at this point in their training. We did not see a significant difference in overall case volumes for PGY3 residents. Residents in the UBC program rarely have subspeciality rotations at academic centres in their third year, meaning their highest exposure to routine procedures at this site should occur during this time, which offers the best direct comparison between groups. However, per block, PGY3 residents had about 1.5 times the number of cases away from academia. Although in 1 month this difference may not be clinically important (11 v. 15 cases), these numbers can add up, as residents are usually assigned to a rotation for several blocks. Further, in the third year of training, resident operative skill acquisition is exponential, and even small case volumes have a meaningful impact, as this is when residents begin to perform operations more routinely. This effect is significantly compounded in PGY4, for example, where residents perform almost 3 times the number of cases away from academia. 6,7 For example, a study from rural Tennessee reported residents performing about 35 cases per block, and all cases were essentially routine procedures. 5 Another study from North Carolina reported about 10 cases per block for these cases when residents were at academic centres; however, they cited about 12 routine cases in rural sites, which differs from our results. 8 The reason for this difference is not clear, but may reflect new training sites or local patient volumes.
These differences seen in our study's volumes are also interesting, given that the total number of cases performed at each site excludes endoscopy, which in many programs, including the UBC program, is focused away from the academic centres. 2 Endoscopy is a big component of practice for general surgeons away from academic centres and is an essential skill for general surgery overall. 9 Hao and colleagues logged 68% of their cases during rural rotations as endoscopy. 8
In the UBC program, the curriculum has a dedicated scoping rotation, usually at a community site, while nonacademic general surgery rotations are focused on operative exposure. Some endoscopy is still performed, but volume varies substantially between sites. If endoscopy had been included in this study, it is likely that the nonacademic case numbers would have been much higher.
Time and workload may be factors that contribute to the decreased number of cases at academic sites. Senior residents at academic sites work concurrently with fellows and junior residents, who will take some of the case volume. Additionally, if more cases are performed by junior residents or some tasks are completed by medical students, this can increase the overall time for cases, leading to less opportunity for the senior resident to complete cases in a given day. 10 Community and rural sites in this program have only a single resident working with the attending surgeons, and in many cases, attendings or experienced surgical assistants may act as the assistant. Next, the case composition at academic centres is influenced by subspeciality services that perform nonroutine cases, which may not be applicable for this study because they are not appropriate for a senior resident to be the primary surgeon. Finally, cases at academic sites can also be more complex from both a technical standpoint (abdominal wall reconstruction v. small incisional hernia repair) and a patient standpoint, requiring more anesthesia support. From the technical standpoint, this leads to increased surgical time, as well as additional time for lines or neuraxial blocks, which will contribute to less overall daily volume. However, one must consider the complex interplay between volume, education, and complexity. Interestingly, the literature has not examined case complexity so far, and the topic is not classified in this study, but its effect is important in the development of expertise within residency and is backed by educational theories. 11,12 Cases with less complexity allow residents to learn the basics and limit cognitive overload, which is the goal in the PGY3 year. A high volume of simple cases can help with this. As a resident progresses, however, complexity should be sought, to improve educational value.
Given these findings, there is some evidence that residents are performing more routine cases as the primary operator away from academic training sites. This has implications for resident placement for rotations. For example, if a resident is struggling with skill development, an individualized educational plan might be required outside of the standard rotation matrix. Programs may consider sending this resident to a higher-volume site to improve and gain more exposure to routine cases before returning. This has further impact in the era of competency-based medical education, as program directors should have a keen sense of where to send residents, depending on where they are in their clinical development. 13 Our study may also be useful for the scheduling of an average resident. Although complex cases should be the focus of senior years, repeated exposure to routine cases is also essential, to maintain proficiency and develop mastery. 14 Given the discrepancy of routine cases between sites, it may be beneficial for programs to spread out nonacademic rotations to avoid the concept of blocked practice and instead promote distributed practice, which is better for long-term skill retention and potentially more reflective of general surgery practice. 15
Finally, these data provide a quality-control metric for this program to help decide whether improvements to operative volume are required, and provide a platform for other programs with multiple training sites to consider evaluation of their operative volumes to optimize learning.
Although this study was carried out in a Canadian program, its results could be used broadly. Many programs now follow a structure of academic hubs with associated training locations in smaller centres. As long as training streams are not strictly academic versus nonacademic, case mixes should be similar, and the volumes seen here should be replicable, especially because we controlled for the number of blocks.
Limitations
Limitations of the study include its retrospective nature and the fact that all cases are self-logged by residents, which may not always be accurate. 16,17 That said, self-logging is a required component for promotion during training, and therefore residents are generally careful in logging practices. Additionally, case classification is limited by the general description of cases. For example, the T-Res system lists a case category for ventral hernia repair, but this may be a simple hernia repair or a component separation, which is not a routine procedure in this study. Finally, definitions for community and rural general surgery centres are not clearly defined within this program or in the literature, and the studies that exist in this area are focused on rural surgical programs.
Conclusion
The results of this study show that residents perform routine general surgery procedures in higher numbers away from academic sites during senior residency. Future work in this area will look to understand overall case volumes of residents in terms of composition, location, and timing, to see how resident training can be further optimized. There is also opportunity for qualitative work to understand residents' experiences at academic sites versus nonacademic sites for these routine cases and whether that may contribute to the differences seen here. Finally, studies should look at whether the difference in case volume truly affects the operative ability of residents or the quality of education.
Contributors: Subin Punnen and Ahmer Karimuddin designed the study. All authors acquired the data, which Shayda Taheri, Leo Chen, and Tracy Scott analyzed. Subin Punnen and Ahmer Karimuddin wrote the manuscript, which Shayda Taheri, Leo Chen, and Tracy Scott revised critically for important intellectual content. All authors gave final approval of the version to be published and agreed to be accountable for all aspects of the work.
Content licence: This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) licence, which permits use, distribution and reproduction in any medium, provided that the original publication is properly cited, the use is noncommercial (i.e., research or educational use), and no modifications or adaptations are made. See: https://creativecommons.org/licenses/by-nc-nd/4.0/
Fig. 1. Forest plot of relationships between case volumes at different sites per year. PGY = postgraduate year.
Box 1 footnotes: *Small and large bowel resections including rectal procedures such as abdominoperineal resection and low anterior resection. †Included several categories, such as inguinal, incisional, umbilical, and epigastric.
Table 2. Cases per postgraduate year per site overall. CI = confidence interval; PGY = postgraduate year.
Table 3. Comparison of case volumes per postgraduate year per site per block. CI = confidence interval; PGY = postgraduate year.
Current Knowledge and Attitudes Concerning Cost-Effectiveness in Glaucoma Pharmacotherapy: A Glaucoma Specialists Focus Group Study Background Rising healthcare costs motivate continued cost-reduction efforts. To help lower costs associated with open-angle glaucoma (OAG), a prevalent, progressive disease with substantial direct and indirect costs, clinicians need to understand the cost-effectiveness of intraocular pressure (IOP)-lowering pharmacotherapies. There is little published information on clinicians’ knowledge and attitudes about cost-effectiveness in glaucoma treatment. Purpose This pilot focus group study aimed to explore clinician attitudes and perspectives around the costs and cost drivers of glaucoma therapy; the implementation of cost-effectiveness decisions; the clinical utility of cost-effectiveness studies; and the cost-effectiveness of available treatments. Methods Six US glaucoma specialists participated in two separate teleconferencing sessions (three participants each), managed by an independent, skilled moderator (also a glaucoma specialist) using a discussion guide. Participants reviewed recent publications (n=25) on health economics outcomes research in glaucoma prior to the sessions. Results Participants demonstrated a clear understanding of the economic burden of glaucoma therapy and identified medications, diagnostics, office visits, and treatment changes as key cost drivers. They considered cost-effectiveness an appropriate component of treatment decision-making but identified the need for additional data to inform these decisions. Participants indicated that there were only a few recent studies on health economics outcomes in glaucoma which evaluate parameters important to patient care, such as quality of life and medication adherence, and that longitudinal data were scant. In addition to efficacy, participants felt patient adherence and side-effect profile should be included in economic evaluations of glaucoma pharmacotherapy. Recently approved medications were evaluated in this context. Conclusion Clinicians deem treatment decisions based on cost-effectiveness data as clinically appropriate. Newer IOP-lowering therapies with potentially greater efficacy and favorable side-effect and adherence profiles may help optimize cost-effectiveness. Future studies should include: clinicians’ perspectives; lack of commercial bias; analysis of long-term outcomes/costs; more comprehensive parameters; real-world (including quality-of-life) data; and a robust Markov model. Introduction Glaucoma, the leading cause of irreversible blindness globally, is increasing in prevalence due to rapid increase in the aging population. 1,2 An estimated 64.3 million people (aged 40 to 80 years) globally were affected by glaucoma in 2013, and that number is expected to reach 76 million by 2020. 2 Open-angle glaucoma (OAG) accounts for more than 70% of all glaucoma cases. 3 The number of Americans living with OAG-which is a chronic, progressive disease-was estimated to be 2.7 million in 2011 and projected to reach 7.3 million in 2050, growing at a rate of 28% per decade. 4 Glaucoma decreases health-related quality of life; the extent of the reduction is directly associated with the severity or stage of the disease. [5][6][7] Patients with glaucoma are faced with the difficult challenges of visual dysfunction in everyday life, such as reduced mobility and difficulty with reading. 
Among those with glaucoma, self-reported visual disability is associated with difficulty walking, falls, and depression. As the disease progresses, the psychological burden of vision loss increases. 8 Besides the affected individual, blindness and visual impairment from glaucoma also impact families, the healthcare system, and society in general, creating a substantial socioeconomic burden. 7 The annual medical cost of glaucoma and disorders of the optic nerve in the US was estimated at $6.1 billion in 2014 and projected to be as high as $12 billion by 2032 and $17.3 billion by 2050. 9 The true direct cost would be considerably higher if all patients with this heavily underdiagnosed disease were treated. 7,10 A retrospective cohort analysis of Medicare claims found that glaucoma patients with any degree of vision loss had 46.7% higher total costs compared with those without vision loss, with mean total annual medical costs increasing from $8157 for no vision loss to $18,670 for blindness. 11 A Markov model replicating health events over the remaining lifetime of a patient with newly diagnosed glaucoma, based on US Medicare claims data from 1999 to 2005, estimated that the average lifetime cost of care for people with primary OAG (POAG) was about $137 per patient per year, or $1688 greater than for those without glaucoma. 12 Using a large, nationally representative sample of Medicare beneficiaries, a recent study found that patients with glaucoma incurred an additional $2903 in annual total health care costs and $2599 in higher non-outpatient costs (total health care costs excluding outpatient payments) compared with those without. 8 The cost of glaucoma care in the US, then, is high and expected to become higher as the prevalence of the disease increases. In order to lower those costs, stakeholders, including clinicians, need to better understand the cost-effectiveness of IOP-lowering therapies. Cost-effectiveness data provide information about the costs of different interventions or treatment strategies relative to their performance, which can be helpful in identifying potential ways to reduce the economic burden of treatment. Over the past decade, research has begun to address cost-effectiveness in the treatment of OAG and ocular hypertension (OHT). [13][14][15][16] While awareness of costs is of increasing importance, little is known about whether and how clinicians treating glaucoma patients use cost-effectiveness in clinical decision-making. We convened a small focus group of glaucoma specialists to learn more about their knowledge and attitudes regarding cost-effectiveness in the treatment of patients with OAG or OHT. The focus group method's main advantage is its qualitative nature, which is complementary to that of quantitative research and allows in-depth exploration of thoughts, attitudes, and opinions via open-ended questions. It is commonly used to gain original insights and perspectives, uncover opinion trends, deepen understanding, and develop new hypotheses or ideas for further research.
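To make the Markov-model approach mentioned above more concrete, the following Python fragment sketches how a cohort model accumulates expected discounted costs as patients move between disease states. It is purely illustrative: the states, transition probabilities, annual costs, discount rate, and horizon are invented and are not parameters from the studies cited.

```python
import numpy as np

# Purely illustrative Markov cohort sketch of projecting glaucoma care costs.
# States, transition probabilities, and costs are invented for demonstration only.
states = ["mild", "moderate", "advanced", "blind"]
P = np.array([                      # annual transition probabilities (rows sum to 1)
    [0.90, 0.08, 0.015, 0.005],
    [0.00, 0.90, 0.080, 0.020],
    [0.00, 0.00, 0.930, 0.070],
    [0.00, 0.00, 0.000, 1.000],
])
annual_cost = np.array([800.0, 1200.0, 2500.0, 8000.0])   # hypothetical US$ per state-year

cohort = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts in the mild state
discount = 0.03
total_cost = 0.0
for year in range(20):                     # 20-year horizon
    total_cost += (cohort @ annual_cost) / (1 + discount) ** year
    cohort = cohort @ P                    # advance the cohort by one year

print(f"Expected discounted 20-year cost per patient: ${total_cost:,.0f}")
```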
The aim of this pilot focus group study was to: 1) explore clinician perspectives regarding the costs and cost drivers of glaucoma therapy; 2) explore clinician attitudes and experience regarding the implementation of cost-effective decisions when treating patients with OAG or ocular hypertension (OHT) and the clinical utility of cost-effectiveness studies; and 3) explore clinician views about the cost-effectiveness of available treatment strategies and modalities for OAG and OHT and identify potential opportunities to improve glaucoma pharmacotherapy and reduce costs.
Methods
A focus group was formed to include six academic glaucoma specialists recognized as leading experts in the field of glaucoma treatment. Two separate teleconference sessions, each with three participants and led by the same moderator (also a glaucoma specialist), were conducted. A discussion guide was created and distributed beforehand to the participants in order to facilitate and focus the discussion sessions. The discussion guide comprised a mixture of standardized/ranked and open-ended questions, which were grouped under the following general topics: the cost of care, cost considerations, current medical glaucoma therapies, and health economics and outcomes research (HEOR) (Figure 1). To help address the specific questions about HEOR in glaucoma, participants were asked to review the summaries of 25 recent publications on health economics of glaucoma therapies prior to the teleconferences (Table 1). (Table 1 selection note: narrative or systematic review articles on glaucoma economics were identified, with priority given to those published more recently and/or involving some of our potential participants as authors and those that took place in the US and Europe; selected references were reviewed by the focus group moderator to ensure that key publications were taken into account.) Sessions spanned 2 h to allow the moderator ample time to solicit responses from participants to standardized/ranked questions, and to allow for additional discussion around open-ended questions. Following the conclusion of the discussion sessions, responses of the participants to open-ended questions were summarized descriptively based on the teleconference transcripts. Where possible, ranked responses were tallied. Because the discussions were based on existing published literature and general clinical experiences and no research was performed on human or animal subjects, human cell lines, or human tissues, this study did not require ethics committee approval.
Costs and Cost Drivers in Glaucoma Therapy
Participants noted that medical costs, including the financial burden of glaucoma therapy, are rising. They asserted the need to reduce that cost so that it is possible for clinicians to continue to provide their patients the best care. When asked to define "cost," they noted that the economic implications of glaucoma extend well beyond the direct, short-term costs to the patient or health system. Some specifically stated that, apart from expenses for medical services including medications, office visits, diagnostic testing, and surgery, glaucoma also produces significant patient-based and societal costs in the form of productivity/income loss or expenses for assistance with daily living, ie, the long-term cost of vision loss. Participants acknowledged that there are multiple cost drivers in glaucoma care. The drivers identified as having significant cost impact included medications, diagnostics, office visits, and treatment change (either switching medications or adding another agent, or advancement to laser or incisional surgery). Changes in treatment, the group noted, increase cost by adding office visits and patient time.
Estimates of the contribution of medication cost to the overall cost of glaucoma care varied from 20% to 40% among the participants, and medication was perceived as a greater proportion of the cost relative to laser treatment and surgeries, especially among well-controlled patients. Meanwhile, more than one participant noted that treatment costs are directly related to disease stage and the number of different treatments required. For OAG patients who are diagnosed and treated early, the greatest part of expenditure will most likely be on medication, these participants stated. However, they noted, for those patients who have more advanced disease when diagnosed, whose pressure is poorly controlled, and who require more interventions (multiple medications, even multiple surgeries), the overall cost will almost certainly be higher and likely be led by costs of surgical care and productivity losses.
Cost Considerations in Glaucoma Therapy
Participants acknowledged that patients' access to prescription medications is a major concern, influenced primarily by price and health insurance status. They stated that out-of-pocket cost to patients is an important consideration in their practice; indeed, a significant reason why a generic prostaglandin analog (PGA) is the first-choice monotherapy for the majority of their glaucoma patients is insurance coverage. They further noted that efficacy, ocular and/or systemic side effects, dosing convenience, and patient adherence are the main factors that influence treatment choices, first or second line. Patient preferences also play a role, with many patients holding strong preferences among available treatment options. Patient perceptions or attitudes about generic substitutions, for example, can vary widely. The participants stated that many of their patients simply opt for the least costly alternative, while others place the highest value on clinical outcomes and are therefore willing to pay or tolerate more adverse effects for therapies with greater efficacy. Participants added that, in reality, clinicians are often unaware of medications' actual costs to patients. One noted that he prescribes mainly based on efficacy, at least in part because it has become very difficult in the past few years to decipher the costs of medications charged at individual pharmacies. Although the clinicians in this focus group generally do not view themselves as gatekeepers for the healthcare system, they were in agreement that cost is an important consideration in the management of glaucoma from the broader perspective of society. One of the participants specifically noted that, beyond a responsibility to patients, clinicians also have a responsibility to society. He pointed out that clinicians should keep in mind their obligation of being a good steward of societal dollars and healthcare resources when making treatment decisions.
Cost-Effectiveness of Current Medical Therapies
All participants agreed that PGAs, the most widely used first-line glaucoma medications, stand out as a cost-effective treatment among all available IOP-lowering medications.
The drug class was described as efficacious (reaching a target IOP reduction of 30% most of the time), long-lasting in efficacy (which translates into less frequent visits and thus cost savings), safe (least number of systemic adverse events), time-tested (on the market more than 20 years), dosed conveniently at once daily, and reasonably priced in an era of generics. However, some participants cautioned that generics are not all created equal-their experiences indicate that the variability in efficacy and tolerability is significant between different generic brands. Participants stated that first-line treatment with a PGA is efficacious in lowering IOP in the majority of patients with glaucoma and that only a small minority require an alternative therapy. However, they also noted that, from a longitudinal perspective, combination therapy is often necessary to achieve or maintain target IOP, and that medication switching due to reasons such as tachyphylaxis, side effects, and visual field progression is common despite treatment. One participant estimated that at least 80% of patients with moderate to advanced disease and possibly 20% of patients with early disease require adjunctive therapy. The general consensus among the clinicians was that newer agents with greater efficacy than current regimens are needed in order to better control the cost of glaucoma therapy. If most patients will require adjunctive therapy at some point-and if, as noted above, treatment changes increase the cost by adding office visits and patient time-then having better firstor second-line treatments should provide long-term cost savings. The approach to adjunctive therapy varied among participants. The majority reported that they typically choose to add a second drug when PGA monotherapy is insufficient. Their add-on choices usually include a topical carbonic anhydrase inhibitor (CAI) or a beta-blocker. Some noted that they tend to switch medication when there is an inadequate initial response and may consider laser surgery earlier in some cases to avoid polypharmacy. While specific adjunctive intervention varies, the general consensus was that an optimal second-line therapy is still lacking. Two participants suggested that an alternative to adding a second drug is switching the initial PGA (typically generic latanoprost) to the NO-donating PGA latanoprostene bunod (LBN) 0.024%-the latter is as well tolerated and safe as latanoprost but has the potential to provide additional pressure-lowering. 33 Latanoprostene bunod was approved by the FDA in late 2017 and represents the first new PGA in more than 5 years, as well as the first NOdonating PGA. 34 One participant mentioned that he is considering the Rho kinase (ROCK) inhibitor netarsudil 0.02% as a second-line choice, another recently approved therapy, although it must be used in combination with another IOP-lowering medication, such as a PGA, in order to provide additional reductions in IOP over the standard of care; in addition, concerns about relatively high hyperemia rates exist with netarsudil. 35 When asked what is needed in a new medication to make it cost-effective, participants responded that new drugs need to be significantly better than the current options in one or several ways: efficacy, tolerability, safety, duration of action, or any combination thereof. In addition, participants asserted that adherence is an important consideration in determining whether a glaucoma medication is cost-effective. 
As one of them pointed out, no therapy can be cost-effective if the patient is non-adherent. Thus, a new medication may initially cost more, but if patients take it as prescribed, the increase in adherence may justify the cost over the long run. Participants emphasized that glaucoma is a chronic disease associated with low medication adherence in general and noted that improvement in adherence is critical for better management of the disease. Indeed, when asked to rank the importance of adherence improvement in the management of glaucoma on a rank scale of 1 to 5 (with 1 being "not important" and 5 being "very important"), the responses were 4 or 5. One participant remarked that adherence is one of the greatest unmet needs in glaucoma pharmacotherapy. Furthermore, participants viewed adherence as a multifactorial issue and identified the following factors as the main barriers to adherence in glaucoma therapy: side effects, number of drops, costs, and patient understanding of the disease. The Utility of Cost-Effective Research Participants were unanimous in their view that, overall, current cost-effectiveness research offers little clinical utility for the treatment of glaucoma or OHT. The group noted that published cost-effectiveness studies in the field of glaucoma have largely been geared towards insurers, payers, and pharmacy benefit managers, rather than doctors. They felt that few of the studies looked at parameters that are important to patient care and clinical practice, such as quality of life and medication adherence; and that longitudinal data are scant, with a dearth of evidence to determine what the most cost-effective treatment algorithm is over a patient's lifetime. Responses to the ranking question "How much does HEOR research influence your thinking about IOPlowering treatment?" were 1 to 2 (on a rank scale of 1 to 5, with 1 being "no influence" and 5 being "enormous influence.") indicating that the influence of current costeffectiveness data on clinical decision-making is indeed minimal. Participants stated that the available data may be used to guide insurers and payers but would need to be more persuasive and better designed in order to guide clinicians. Several participants commented that they find being good stewards of resources for the health care system as a whole an important goal but difficult to achieve given the current knowledge base about cost-effectiveness and payer-based variability in drug pricing. According to participants, desirable elements of future economic studies in glaucoma pharmacotherapy include: a clinician's perspective; an independent approach (ie, without commercial bias); analysis of long-term treatment outcomes and costs; a more comprehensive set of parameters, including stage of disease, treatment switch or addition, adherence, side effects associated with various therapies; real-world data related to clinical practice, including quality-of-life data; and a robust Markov model that allows assessment of all the costs. Participants asserted that cost-effectiveness should be considered in the context of the patient's age and expected lifespan, and, if possible, it would be important to determine the incremental cost of every additional mm Hg of IOP reduction. Discussion The cost-effectiveness of care is becoming an increasingly important aspect of glaucoma therapy because of the growing patient population and associated cost increases. 
Some prior research has investigated the economic outcomes of various glaucoma treatments, but few if any past studies have sought to identify clinicians' views regarding cost-effectiveness and their attitudes and experience using cost-effective data in the treatment of glaucoma. Participants in the present study displayed a consistent understanding of the economic impact of glaucoma and the need to reduce treatment costs. Their perception that medication use contributes substantially to costs is consistent with previous reports that prescription medication costs drive financial burden at all stages of glaucoma and are equal to or greater than all other charges. [36][37][38][39][40] There is also evidence in support of participants' impression that diagnostics are a significant cost driver. In a recent study among Medicare beneficiaries, diagnostic testing accounts for about one-third of glaucoma-related costs (excluding medication cost). 41 There is abundant evidence from previous quantitative studies supporting participants' assertion that disease severity has a direct impact on the costs of glaucoma. According to a US study, annual direct medical costs for patients with early glaucoma, advanced glaucoma, and end-stage glaucoma averaged $623, $1915, and $2511, respectively. 36 European studies have reported similar findings. Resource utilization and direct medical costs increase as disease worsens, and medication costs ranged from 42% to 56% of direct costs at each disease stage. 40 In a German cross-sectional study examining treatment costs of OHT and POAG, average total annual direct costs per patient were €226 for OHT, €423 for early POAG, €493 for moderate POAG, and €809 for advanced POAG. 42 Among patients with early glaucoma, medication costs comprise most of the cost of care. 39,42 For those with advanced disease, indirect costs such as costs for home health care and rehabilitation become predominant. 43,44 The finding that many of the participants give considerable thought to cost-specifically fees charged to patients-in their prescribing decisions (in the context of ensuring efficacy) suggests that awareness of drug cost to patients is fairly high among prescribing clinicians. However, the results of the present study also suggest that some barriers exist to implementing costeffectiveness decisions in the treatment of glaucoma. As the group noted, cost information for medications is often not readily accessible. This is not surprising, given that multiple middlemen (insurers, manufacturers, and pharmacy benefit managers) are involved in establishing drug prices. Without knowing what a drug's actual price is at the pharmacy, it is difficult for clinicians to base decisions on costs. Additionally, clinical decision-making that aims to reduce cost requires the guidance of research showing the relative cost-effectiveness of therapeutics and treatment strategies, but such evidence is largely lacking in the literature. Based on their own experience and a review of select economic studies in the field, this group of glaucoma specialists was of the opinion that there is a shortage of solid, useful data on cost-effectiveness of glaucoma therapies in the present literature. 
Major questions-such as how cost-effective a particular medication is compared to other treatment modalities such as laser trabeculoplasty or surgery and which medication is most cost-effective in lowering IOP-still lack a definite answer, although some evidence exists suggesting that first-line PGA monotherapy provides greater value than laser trabeculoplasty assuming optimal medication adherence and is the more cost-effective treatment compared to other types of available glaucoma medications. 16,[45][46][47][48] This highlights the need for unbiased, well-designed economic studies to establish the relative cost-effectiveness and impact on quality of life of the treatment regimens for OAG or OHT and to identify opportunities for further savings. Since individual and societal economic burdens of glaucoma both increase with disease severity, early identification and effective treatment of patients may help reduce the overall costs. 7 In support of this concept, a French study modeling the lifetime treatment cost in glaucoma showed that initial treatment with the most effective drug would reduce medical and social costs. 49 Currently, PGAs are widely preferred as the first-line treatment for OAG or OHT. Highly effective in IOPlowering and available in generic forms, PGAs are considered to be an overall cost-effective treatment option. Even so, as this group's clinical experience indicates, many patients do not achieve adequate IOP-lowering with available agents and require further interventions, increasing the cost of the disease. While costeffectiveness information on the two latest additions to the treatment options for glaucoma-LBN and netarsudil -is currently lacking, there is clearly a need for more cost-effective IOP-lowering medications ( Table 2). In reality, prices for new medications are relatively high, but price alone does not determine whether or not a treatment is cost-effective. Any economic assessment of a new treatment must also take into account the other determinant of its cost-effectiveness: the clinical benefits it provides, which may translate to savings in other categories of care. As pointed out by participants of this study, a new medication can be cost-effective as long as it provides enough "added value" for which patients and the society are willing to pay. One medication that participants discussed in this context was LBN 0.024%, the NO-donating PGA approved in late 2017 for lowering IOP in patients with OAG or OHT. LBN acts through its two metabolites-latanoprost acid and an NO-releasing moiety (butanediol mononitrate)and lowers IOP by enhancing aqueous outflow through both the uveoscleral and trabecular meshwork pathways. 50 The new drug appears to have all the important therapeutic advantages of a first-line therapy: high IOP-lowering efficacy, once-daily dosing, negligible systemic side effects, and low rate of ocular hyperemia. In a pooled analysis of the pivotal clinical trials, it was more effective at lowering IOP than timolol 0.5% and safe and well tolerated. 51 Furthermore, LBN has been associated with an IOP reduction of 1 to 1.5 mm Hg greater than that of latanoprost 0.005% (Xalatan). 33 An incremental improvement in efficacy, such as that reported with LBN, could be fairly significant from the cost-effectiveness standpoint. As mentioned, more effective lowering of IOP and the resulting decrease in the risk of glaucoma progression itself could generate cost savings from reduced health care resource utilization. 
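To make the cost-offset logic concrete before turning to the published model, the short sketch below works through a deliberately simplified calculation. Every number in it (drug price, visit charge, progression-event cost, baseline progression risk, and the assumed risk reduction from an extra 1 mm Hg of IOP lowering) is a hypothetical placeholder rather than a value taken from the studies cited here; the point is only to show how a small efficacy gain is weighed against a higher drug price over a multi-year horizon.

```python
# Toy cost-offset sketch. Every number below is a hypothetical placeholder,
# not a figure from the cited literature.

def expected_annual_cost(progression_risk, drug_cost, visit_cost=150.0,
                         visits_per_year=3, progression_event_cost=4000.0):
    """Expected yearly cost per patient: drug + routine visits + the
    probability-weighted cost of a progression event (extra visits,
    field tests, added medication, possible surgery)."""
    routine = drug_cost + visit_cost * visits_per_year
    return routine + progression_risk * progression_event_cost

# Hypothetical baseline: generic PGA, 5% annual progression risk.
baseline = expected_annual_cost(progression_risk=0.05, drug_cost=300.0)

# Hypothetical comparator: a costlier drug giving ~1 mm Hg more IOP lowering,
# assumed (for illustration only) to cut progression risk to 3.5% per year.
comparator = expected_annual_cost(progression_risk=0.035, drug_cost=600.0)

horizon_years = 7  # a time frame often used in cost-offset models
print(f"Baseline cost over {horizon_years} y:   {baseline * horizon_years:,.0f}")
print(f"Comparator cost over {horizon_years} y: {comparator * horizon_years:,.0f}")
print(f"Incremental cost of comparator: {(comparator - baseline) * horizon_years:,.0f}")
```

Whether a costlier but more effective agent is actually offset in practice depends on real progression risks, event costs, and adherence, which is precisely the longitudinal evidence the participants found lacking.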
In a study using a cost-offset model to analyze the clinical and economic outcomes of PGAs, an extra 1 mm Hg of IOP reduction accounted for fewer cases of progression and increased cost savings on office visits, visual field tests, additional glaucoma medications, and surgeries over a 7-year period. 52 Further, when a monotherapy combines greater efficacy with a once-daily regimen and a tolerable side effect profile, treatment persistence may improve, with less likelihood of medication addition or switch and potentially better adherence. Poor adherence to topical therapy is a well-established challenge in the management of glaucoma patients. 53,54 According to the Glaucoma Adherence and Persistency Study, only 10% of patients are continuously persistent with IOP-lowering medications throughout a year, and, among the slightly more than half of patients who restart after a gap in refilling the prescription, nearly 80% will have at least another gap. 53 One possible barrier to adherence is the use of adjunctive agents, which is required within a year for adequate IOP control in about one-third of patients starting glaucoma therapy and has been shown to contribute to higher management costs. [55][56][57]59 Side effects of medications may also adversely impact adherence to therapy. 56,60 Netarsudil 0.02%, another new topical therapy that is most recently available for reducing IOP in patients with glaucoma or OHT, is a Rho kinase (ROCK) inhibitor. Like LBN, netarsudil enhances trabecular outflow facility. 35 The drug is thought to also decrease aqueous production and episcleral venous pressure. Clinical trial data suggest that netarsudil is not as effective as the PGAs, and that more than half of patients experience conjunctival hyperemia. 35,61 As the most common side effect of topical ocular prostaglandins, hyperemia in glaucoma patients has been shown to be a major reason for medication changes and result in increased overall treatment costs. 59,62 Given that it is conveniently dosed once-daily and no associated systemic safety issues have been identified, however, netarsudil could be potentially a more cost-effective adjunctive option relative to the available alternatives. Limitations of the present study include the small sample size (a single focus group of only six participants) and the lack of participant diversity with regard to demographics and/or professional backgrounds. In particular, the study included no input from comprehensive ophthalmologists or optometrists, who also manage glaucoma patients in everyday practice. All the participants were glaucoma specialists, whose patients are more likely to have advanced disease and thus require special treatment considerations. Although the group discussions yielded meaningful data, the results may not be generalizable. The majority of this group of glaucoma specialists said that they discuss medication costs with their patients, for example, but research indicates that cost-related conversations between ophthalmologists and glaucoma patients are uncommon. In a recent study that analyzed 275 videorecorded glaucoma office visits at six different medical centers located in various geographic areas, only 87 visits involved a discussion of medication cost. 63 In summary, the present study provides new data on glaucoma specialists' knowledge and attitudes about costeffectiveness and cost-effectiveness research in glaucoma therapy. 
The results suggest that these clinicians support the incorporation of cost-effectiveness into treatment decisions for glaucoma patients and are willing to provide care proved to be cost-effective. A more robust evidence base is needed to derive clear practical guidelines for decisions based on cost-effectiveness. Newer IOP-lowering medications with the potential to provide clinically meaningful benefit, such as LBN 0.024% and netarsudil 0.02%, or a fixed-dose combination of netarsudil and latanoprost approved for marketing after this focus group convened, may be helpful in applying cost-effectiveness to the treatment of OAG or OHT. Acknowledgments This pilot focus study was conducted by all listed authors, with editorial assistance from Ethis Inc. Funding was provided by Bausch Health US, LLC. Disclosure Dr Robert N Weinreb reports grants and personal fees from Bausch Health US, LLC, personal fees from Aerie Pharmaceuticals, personal fees from Allergan, personal fees from Eyenovia, and personal fees from Novartis, outside the submitted work. Dr Robert M Feldman reports personal fees from Bausch Health US, LLC, personal fees from Alcon, and personal fees from Aerie, outside the submitted work. Dr. Jeffrey M Liebmann reports personal fees from Bausch Health US, LLC, personal fees from Aerie Pharmaceuticals, personal fees from Allergan, and personal fees from Novartis, outside the submitted work. The authors report no other conflicts of interest in this work.
2020-03-19T10:13:36.014Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "ca3972c5e84e90deafaccdf87958c69275516437", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=56609", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d703225591cc5900b90dc46c69245bc82124c27b", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
16999235
pes2o/s2orc
v3-fos-license
Atmospheric Neutrino Anomaly and Supersymmetric Inflation A detailed investigation of hybrid inflation and the subsequent reheating process is performed within a mu problem solving supersymmetric model based on a left-right symmetric gauge group. The process of baryogenesis via leptogenesis is especially studied. For mu and tau neutrino masses consistent with the small angle MSW resolution of the solar neutrino problem and the recent results of the SuperKamiokande experiment, we show that maximal mu-tau neutrino mixing can be achieved. The required value of the relevant coupling constant is, however, quite small (of the order 10^{-6}). The hybrid inflationary scenario [1] can be easily implemented [2][3][4] in the context of supersymmetric theories in a 'natural' way meaning that a) there is no need for tiny coupling constants, b) the superpotential used is the most general one allowed by gauge and R-symmetries, c) supersymmetry guarantees that radiative corrections do not invalidate inflation, but rather provide a slope along the inflationary trajectory which drives the inflaton towards the supersymmetric vacua, and d) supergravity corrections can be brought under control so as to leave inflation intact. A moderate extension of the minimal supersymmetric standard model (MSSM) based on a left-right symmetric gauge group provides [4] a suitable framework for hybrid inflation. The inflaton is associated with the breaking of SU(2) R and consists of a gauge singlet and a pair of SU(2) R doublets. The doublets can decay into right handed neutrinos, after inflation, reheating the universe and providing a mechanism [5] for baryogenesis through a primordial leptogenesis. The gauge singlet, however, has no direct coupling to light matter in the simplest case. Moreover, its coupling to the SU(2) R doublets turns out to be unable to ensure its efficient decay. This difficulty can be overcome by introducing [4,6] a direct superpotential coupling of the gauge singlet superfield to the electroweak higgs doublets. This way the gauge singlet scalar can decay into a pair of higgsinos. It has been shown [6] that, in the presence of gravity-mediated supersymmetry breaking, this gauge singlet acquires a vacuum expectation value (vev) and consequently generates, through its coupling to the ordinary higgs superfields, the µ term of MSSM. A coupling of the scalar components of the SU(2) R doublets to the electroweak higgses is automatically induced in this scheme, allowing them to decay into a pair of ordinary higgses in addition to their useful (for baryogenesis) decay to right handed neutrinos. In this paper, we attempt a detailed study of inflation in the above scheme. In particular, we solve the evolution equations of this system and estimate the reheating temperature. The process of baryogenesis via leptogenesis is also considered and its consequences on ν µ -ν τ mixing are analyzed. For masses of ν µ , ν τ which are consistent with the small angle MSW resolution of the solar neutrino problem and the recent results of the SuperKamiokande experiment [7], we examine whether maximal ν µ -ν τ mixing can be achieved. Let us first describe the main features of the G LR = SU(3) c × SU(2) R × SU(2) L × U(1) B−L symmetric model [6] which solves the µ problem. The SU(2) R × U(1) B−L group is broken by a pair of SU(2) R doublet chiral superfields l c ,l c which acquire a vev M >> m 3/2 ∼ (0.1 − 1) TeV, the gravitino mass. 
This breaking is achieved by means of a gauge singlet chiral superfield S which plays a crucial three-fold role: 1) it triggers SU(2) R breaking; 2) it generates the µ term of MSSM after gravity-mediated supersymmetry breaking; and 3) it leads to hybrid inflation [1]. Ignoring the matter fields of the model, the superpotential reads where the chiral superfield h = (h (1) , h (2) ) belongs to a bidoublet (2,2) The model has a built-in inflationary trajectory in the field space along which the F S term is constant [3,4]. This trajectory is parametrized by |S|, |S| > S c = M for λ > κ (see below). All other fields vanish on this trajectory. The F S term provides us with a constant tree level vacuum energy density V tree = κ 2 M 4 , which is responsible for inflation. Radiative corrections generate a logarithmic slope [3] along the inflationary trajectory that drives the inflaton toward the minimum. The one-loop contribution to this slope comes from the l c ,l c and h supermultiplets, which receive at tree level a nonsupersymmetric contribution to the masses of their scalar components from the F S term. For |S| ≤ S c = M, the l c ,l c components become tachyonic, compensate the F S term and the system evolves towards the 'correct' supersymmetric minimum at h = 0, l c =l c = M. (For κ > λ, h would have become tachyonic earlier and the system would have evolved towards the 'wrong' minimum at h = 0, l c =l c = 0.) Inflation can continue at least till |S| approaches the instability at |S| = S c provided that the slow roll conditions [3,8] are violated only 'infinitesimally' close to it. This is true for all values of the relevant parameters considered in this work. The cosmic microwave quadrupole anisotropy can be calculated [3] by standard methods and turns out to be where M P = 1.22 × 10 19 GeV is the Planck scale and with x = |S|/S c and S Q being the value of |S| when the present horizon scale crossed outside the inflationary horizon. The number of e-foldings experienced by the universe between the time the quadrupole scale exited the horizon and the end of inflation is The spectral index of density perturbations turns out to be very close to unity. After reaching the instability at |S| = S c , the system undergoes [9] a short complicated evolution during which inflation continues for another e-folding or so. The energy density of the system is reduced by a factor of about 2-3 during this period. The system then rapidly settles in a regular oscillatory phase about the supersymmetric vacuum. Parametric resonance can be ignored in this case [9]. The inflaton (oscillating system) consists of the two complex scalar fields S and θ = (δφ Here φ,φ are the neutral components of the superfields l c ,l c respectively. The scalar fields S and θ predominantly decay into ordinary higgsinos and higgses respectively with a common decay width Γ h = (1/16π)λ 2 m inf l , as one can easily deduce from the couplings in Eq.(1). Note, however, that θ can also decay to right handed neutrinos ν c through the non-renormalizable superpotential term (M ν c /2M 2 )φφν c ν c , allowed by the gauge and R-symmetries of the model [4]. Here, M ν c denotes the Majorana mass of the relevant ν c . The scalar θ decays preferably into the heaviest ν c with M ν c ≤ m inf l /2. The decay rate is given by where 0 ≤ α = 2M ν c /m inf l ≤ 1. The subsequent decay of these ν c 's gives rise to a primordial lepton number [5]. 
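As a quick numerical illustration of the decay width quoted above, the snippet below evaluates Gamma_h = lambda^2 m_infl / (16 pi) and the corresponding lifetime. The values of lambda and m_infl are placeholders chosen only for illustration, not the paper's fitted parameters, and the right-handed-neutrino channel is omitted because its phase-space factor in alpha = 2 M_nu_c / m_infl is not reproduced in the extracted text.

```python
import math

def gamma_h(lam, m_inflaton):
    """Common decay width of the S and theta oscillations into higgsinos/higgses,
    Gamma_h = lambda^2 * m_inflaton / (16 pi), as quoted in the text."""
    return lam ** 2 * m_inflaton / (16.0 * math.pi)

# Placeholder inputs, for illustration only (not values fitted in the paper).
lam = 1.0e-5        # coupling of the singlet/doublets to the higgs bidoublet
m_infl = 3.0e9      # GeV, illustrative inflaton mass scale

width = gamma_h(lam, m_infl)
lifetime_gev = 1.0 / width                 # lifetime in GeV^-1 (natural units)
lifetime_sec = lifetime_gev * 6.582e-25    # hbar = 6.582e-25 GeV s

print(f"Gamma_h  = {width:.3e} GeV")
print(f"lifetime = {lifetime_sec:.3e} s")
```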
The baryon asymmetry of the universe can then be obtained by partial conversion of this lepton asymmetry through sphaleron effects. The energy densities ρ S , ρ θ , and ρ r of the oscillating fields S, θ, and the 'new' radiation produced by their decay to higgsinos, higgses and ν c 's are controlled by the equations: where is the Hubble parameter and overdots denote derivatives with respect to cosmic time t. We have assumed that the potential energy density is, to a good approximation, quadratic in the fields S and θ and, thus, the oscillating inflaton system resembles the behavior of 'matter'. Note that the second equation in Eq.(6) can be replaced by where t 0 is the cosmic time at the onset of the oscillatory phase. The initial values are taken to be ρ S (t 0 ) = ρ θ (t 0 ) ≈ κ 2 M 4 /6, ρ r (t 0 ) = 0 and, for all practical purposes, we put t 0 = 0. The 'reheat' temperature T r is calculated from the equation where the effective number of massless degrees of freedom is g * =228.75 for MSSM. The lepton number density n L produced by the ν c 's satisfies the evolution equation: where ǫ is the lepton number produced per decaying right handed neutrino and the factor of 2 in the second term of the rhs comes from the fact that we get two ν c 's for each decaying scalar θ particle. Eq.(11) is easily integrated out to give where a(t) is the scale factor of the universe. The first equation in Eq. (6) gives Combining Eqs. (12) and (13) we get the asymptotic (t → ∞) lepton asymmetry where s(t) ∼ 2π 2 g * 45 30 is the asymptotic entropy density. For MSSM spectrum between 100 GeV and M, the observed baryon asymmetry n B /s is related [10] to n L /s by n B /s = −(28/79)(n L /s). It is important to ensure that the primordial lepton asymmetry is not erased by lepton number violating 2 → 2 scatterings at all temperatures between T r and 100 GeV. This requirement gives [10] m ντ < ∼ 10 eV which is readily satisfied in our case (see below). Assuming hierarchical light neutrino masses, we take m νµ ≈ 2.6×10 −3 eV which is the central value of the µ-neutrino mass coming from the small angle MSW resolution of the solar neutrino problem [11]. The τ -neutrino mass will be restricted by the atmospheric anomaly [7] in the range 3 × 10 −2 eV < ∼ m ντ < ∼ 11 × 10 −2 eV. Recent analysis [12] of the results of the CHOOZ experiment [13] shows that the oscillations of solar and atmospheric neutrinos decouple. We thus concentrate on the two heaviest families ignoring the first one. Under these circumstances, the lepton number generated per decaying ν c is [8,14] where g(r) = r ln(1 + r −2 ) , | h (1) | ≈ 174 GeV, c = cos θ, s = sin θ, and θ (0 ≤ θ ≤ π/2) and δ (−π/2 ≤ δ < π/2) are the rotation angle and phase which diagonalize the Majorana mass matrix of ν c 's, M R , with eigenvalues M 2 , M 3 (≥ 0) in the basis where the 'Dirac' mass matrix of the neutrinos, M D , is diagonal with eigenvalues m D 2 , m D 3 (≥ 0). Note that, for the range of parameters considered here, the scalar θ decays into the second heaviest right handed neutrino with mass M 2 (< M 3 ) and, thus, M ν c in Eq.(5) should be identified with M 2 . Moreover, M 3 turns out to be bigger than m inf l /2 as it should. We will denote the two positive eigenvalues of the light neutrino mass matrix by m 2 (=m νµ ), m 3 (=m ντ ) with m 2 ≤ m 3 . All the quantities here (masses, rotation angles and phases) are 'asymptotic' (defined at the grand unification scale M GU T ). 
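Because the displayed evolution equations did not survive text extraction here, the numerical sketch below assumes their standard form: each oscillating, matter-like component redshifts as a^-3 and drains into radiation at its total decay rate (Gamma_h for S; Gamma_h plus the nu^c channel for theta), radiation redshifts as a^-4 while being sourced by the decays, and H^2 = 8 pi (rho_S + rho_theta + rho_r) / (3 M_P^2). The initial conditions rho_S = rho_theta ~ kappa^2 M^4 / 6 and rho_r = 0 follow the text; the decay widths and model parameters are illustrative placeholders of the same order as in the previous sketch, and the reheat temperature is read off from rho_r = (pi^2/30) g_* T_r^4 once radiation comes to dominate.

```python
import numpy as np
from scipy.integrate import solve_ivp

M_P = 1.22e19        # GeV, Planck scale
G_STAR = 228.75      # MSSM relativistic degrees of freedom

# Illustrative placeholders (not the paper's fitted values).
kappa, M = 1.0e-6, 2.0e15     # superpotential coupling and SU(2)_R breaking scale (GeV)
GAMMA_S = 6.0e-3              # GeV, width of S (decays to higgsinos), ~Gamma_h above
GAMMA_THETA = 7.0e-3          # GeV, total width of theta (higgses + right-handed neutrinos)

def rhs(t, y):
    """Standard-form reheating equations for (rho_S, rho_theta, rho_r)."""
    rho_s, rho_th, rho_r = y
    hubble = np.sqrt(8.0 * np.pi * (rho_s + rho_th + rho_r) / 3.0) / M_P
    return [
        -3.0 * hubble * rho_s - GAMMA_S * rho_s,
        -3.0 * hubble * rho_th - GAMMA_THETA * rho_th,
        -4.0 * hubble * rho_r + GAMMA_S * rho_s + GAMMA_THETA * rho_th,
    ]

rho0 = kappa ** 2 * M ** 4 / 6.0      # each oscillating field starts near kappa^2 M^4 / 6
t_end = 200.0 / min(GAMMA_S, GAMMA_THETA)   # integrate well past the decay time
sol = solve_ivp(rhs, (0.0, t_end), [rho0, rho0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e10)

rho_osc = sol.y[0] + sol.y[1]
rho_rad = sol.y[2]
idx = np.argmax(rho_rad > rho_osc)    # first output point where radiation dominates
T_r = (30.0 * rho_rad[idx] / (np.pi ** 2 * G_STAR)) ** 0.25
print(f"illustrative reheat temperature: {T_r:.2e} GeV")
```

With these placeholder widths the estimate comes out of order 10^7 GeV, comfortably below the gravitino bound of about 10^9 GeV noted later in the text; a faithful reproduction would of course require the paper's actual equations and parameter values.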
The determinant and the trace invariance of the light neutrino mass matrix imply [14] two constraints on the (asymptotic) parameters which take the form: The µ-τ mixing angle θ 23 (=θ µτ ) lies [14] in the range where ϕ (0 ≤ ϕ ≤ π/2) is the rotation angle which diagonalizes the light neutrino mass The 'asymptotic' Dirac masses of ν µ , ν τ as well as θ D can be related to the quark sector parameters by assuming approximate SU(4) c symmetry. We obtain the asymptotic Renormalization effects must now be taken into account. To this end, we take MSSM spectrum and large tan β ≈ m t /m b . The latter follows from the fact that the MSSM higgs doublets form a SU(2) R doublet. It turns out [14] that, in this case, renormalization effects can be accounted for by simply substituting in the above formulae the following numerical values: m D 2 ≈ 0.23 GeV, m D 3 ≈ 116 GeV and sin θ D ≈ 0.03. Also, tan 2 2θ 23 increases by about 40% from M GU T to M Z . The allowed regions in the m ντ -κ plane for maximal ν µ -ν τ mixing (bounded by the solid lines) and sin 2 2θ µτ > ∼ 0.8 (bounded by the dotted lines) are shown in Fig.3. Notice that, for sin 2 2θ µτ > ∼ 0.8, κ ≈ (0.9 − 7.5) × 10 −6 which is rather small. (Fortunately, supersymmetry protects this coupling from radiative corrections.) The corresponding values of M and T r can be read from Fig.1. We find 1.3×10 15 GeV < ∼ M < ∼ 2.7×10 15 GeV and 10 7 GeV < ∼ T r < ∼ 3.2 × 10 8 GeV. We observe that M turns out to be somewhat smaller than the MSSM unification scale M GU T . (It is anticipated that G LR is embedded in a grand unified theory.) The reheat temperature, however, satisfies the gravitino constraint (T r < ∼ 10 9 GeV). It should be noted that, for the values of the parameters chosen here, the lightest supersymmetric particle (LSP) is [16] an almost pure bino with mass m LSP ≈ 0.43M 1/2 ≈ 200 GeV [18]. Its contribution to the mass of the universe turns out [18] to be Ω LSP h 2 ≈ 1. In summary, we have investigated hybrid inflation and the subsequent reheating process in the framework of a µ-problem solving supersymmetric model based on a left-right symmetric gauge group. The process of baryogenesis via leptogenesis is especially considered. For masses of ν µ , ν τ consistent with the small angle MSW resolution of the solar neutrino problem and the recent SuperKamiokande data, we showed that maximal ν µ -ν τ mixing can be achieved. The required value of the coupling constant κ is, however, quite small (∼ 10 −6 ). We would like to thank B. Ananthanarayan and C. Pallis for useful discussions. This work was supported by the research grant PENED/95 K.A.1795.
2014-10-01T00:00:00.000Z
1998-07-06T00:00:00.000
{ "year": 1998, "sha1": "0533d030b0c417f8832acf7f5fd7115360e30641", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/9807253", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0533d030b0c417f8832acf7f5fd7115360e30641", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259368538
pes2o/s2orc
v3-fos-license
Complete mitochondrial genome of Laeocathaica amdoana Möllendorff, 1899 and phylogenetic analysis of Camaenidae (Gastropoda: Stylommatophora: Helicoidea) Abstract The first complete mitochondrial genome of the dart sac-bearing camaenid Laeocathaica Möllendorff, 1899 was sequenced and analyzed in this study. The whole mitogenome of Laeocathaica amdoana Möllendorff, 1899 was 14,660 bp in length and its nucleotide composition showed a high AT content of 67.45%. It had 37 genes, including 13 protein-coding genes, two ribosomal RNA genes, and 22 transfer RNA genes. The phylogeny yielded by both Bayesian inference and the maximum-likelihood method suggested that Laeocathaica was closely related to the other dart sac-bearing camaenids with known complete mitochondrial genomes. These genetic data are expected to provide fundamental resources for further genetic studies on the camaenids. Introduction Laeocathaica amdoana Möllendorff, 1899 belongs to the genus Laeocathaica Möllendorff, 1899, a group of dart sac-bearing camaenids (Möllendorff 1899). L. amdoana is characterized by a typical Laeocathaica sinistral shell and, anatomically, by two tiny proximal accessory sacs, each with an opening leading to the dart sac chamber. This species is endemic to NW Sichuan and S Gansu, China (Wu et al. 2023). At present, the public nucleotide database has no information pertaining to the mitogenome of this genus. For the first time, we sequenced, annotated, and analyzed the complete mitochondrial genome of L. amdoana, which might provide new data for the reconstruction of the camaenid phylogeny. Materials The specimen (voucher no. HBUMM08456, Figure 1) was collected from Shichuanba, Wenxian County, Gansu Province (33.17534°N, 105.019362°E) and deposited in Hebei University (contact Min Wu: minwu1969@aliyun.com). The reference images were prepared using a Canon camera. Species identification was based on shell morphology (Figure 1(A)) and genital anatomy according to the diagnostic characteristics proposed by Wu et al. (2023). The elongated part between the atrium and the dart sac apparatus and the two tiny proximal accessory sacs (Figure 1(B)) clearly distinguish this species from those with similar shells in the genus Laeocathaica, i.e. L. distinguenda Möllendorff, 1899 (in which the part between the atrium and the dart sac is not elongated) and L. tropidorhaphe Möllendorff, 1899 (in which the proximal accessory sac is absent), and identify it as L. amdoana. DNA extraction, sequencing, and assembly We sequenced the mitogenome of L. amdoana using next-generation sequencing (NGS) techniques. Genomic DNA was extracted using the CTAB method. Raw data were generated on the Illumina NovaSeq 6000 platform. The read length and insert size were 2 × 150 bp and 400 bp, respectively. The software fastp v0.36 (Chen et al. 2018) was applied to filter the raw data. After quality control (QC), the clean reads were assembled with SPAdes v3.15 (Bankevich et al. 2012). Then, we used blastn (Altschul et al. 1997) to compare the scaffolds with existing sequences in the NT database (NCBI) to investigate sequence similarity. GapFiller v1.11 (Boetzer and Pirovano 2012) was used to fill gaps. The sequences were checked and corrected using PrInSeS-G (Massouras et al. 2010) before the complete circular mitogenome sequence was obtained. Annotation of mitogenome The limits of protein-coding genes (PCGs) were adjusted with the aid of genewise (Birney et al. 2004) and tblastn (Altschul et al.
1997) against the mitogenomic sequences of other assumed related species. Ribosomal RNA genes (rRNAs) were predicted using cmsearch (Nawrocki and Eddy 2013) based on the CM models constructed by mitos (Bernt et al. 2013). Transfer RNA genes (tRNAs) were identified with MiTFi (Li et al. 2013). After predicting the gene positions by softwares, we manually checked and adjusted the boundaries of every gene according to the criteria mentioned by both Fourdrilis et al. (2018) and Ghiselli et al. (2021). We used a circos plot ( Figure 2) to display the annotation results. Phylogenetic analysis Complete mitochondrial genomes of two species of Helicidae and 11 camaenids including that sequenced by this work, all known so far, were obtained from NCBI for reconstructing the phylogeny of Camaenidae. We use helicids Helix pomatia Linnaeus, 1758 and Cylindrus obtusus Draparnaud, 1805 as the outgroups, considering that phylogenetically Helicidae and Bradybaenidae (sensu Wade et al. 2001) (¼Bradybaeninae) are almost the nearest relatives based on 5.8S gene, the complete internal transcribed spacer 2 (ITS2) region, and approximately 840 nucleotides of the large subunit (28S) gene (Wade et al. 2001(Wade et al. , 2007. In addition, the mitochondrial genome of camaenid Euhadra herklotsi was sequenced in 1997, and was submitted to GenBank in nine separate parts (see Figure 3), instead of a complete circular genome, so we synthesized these nine records to obtain 37 genes (for details of the complete mitogenome that we organized, see supplementary material 1). The extraction and alignment of PCGs and rRNA genes were performed with PhyloSuite v1.2.3 (Xiang et al. 2023). Ambiguously aligned fragments of 13 PCG alignments were removed in batches using Gblocks 0.91b (Talavera and Castresana 2007) with default parameter settings, and gap sites of rRNA genes were removed with trimAl v1.2rev57 (Capella-Guti errez et al. 2009) using '-automated1' command. DAMBE v7.3.32 (Xia 2018) was employed to make the saturation test for every gene of PCGs and rRNAs. Unsaturated sequences were concatenated in the same order for subsequent analyses. Best-fit partition model (Edge-linked, Table 1) was selected under BIC criterion using ModelFinder (Kalyaanamoorthy et al. 2017). Bayesian inference phylogenies were inferred using MrBayes 3.2.7a (Ronquist et al. 2012) under partition model (two parallel runs, 530,000 generations), in which the initial 25% of the sampled data were discarded as burn-in. Maximum-likelihood phylogenies were inferred using IQ-TREE (Nguyen et al. 2015) under Edgelinked partition model for 5000 ultrafast (Minh et al. 2013) bootstraps. Mitogenomic characterization The average sequence coverage was 39.34Â. The complete mitochondrial genome of L. amdoana was circular and 14,660 bp in length. The nucleotide composition was 28.68%, 38.77%, 18.18%, and 14.37% for A, T, G, and C, respectively. This mitogenome contained 37 genes, including 13 PCGs, two rRNA genes, and 22 tRNA genes. Among them, 23 genes were transcribed along the forward direction and the rest along the reverse direction. A few genes were overlapped with their neighboring genes. The length of gene overlaps in the whole mitochondrial genome was 40 bp. All PCGs used ATK as the start codon except COX2 and NAD3 starting with TTG, and CYTB starting with GTG. In addition, all PCGs use TAR as stop codon, except COX2 ending with an abbreviated stop codon T, and ATP6 and ATP8 ending with TA. The length of tRNA genes ranged from 53 to 63 bp. 
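The base-composition figures reported above are easy to reproduce from the assembled sequence. The sketch below is a generic illustration rather than the pipeline actually used (which relied on the tools cited in the Methods); the input file name is a placeholder, and the script simply tallies A, T, G, and C over a single mitogenome FASTA record and reports the AT content.

```python
from collections import Counter

def read_single_fasta(path):
    """Return the concatenated sequence of the first record in a FASTA file."""
    seq_lines = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if seq_lines:          # stop after the first record
                    break
                continue
            seq_lines.append(line.upper())
    return "".join(seq_lines)

def base_composition(seq):
    """Percentages of A, T, G, C (over unambiguous bases) plus combined AT content."""
    counts = Counter(seq)
    total = sum(counts[b] for b in "ATGC")
    comp = {b: 100.0 * counts[b] / total for b in "ATGC"}
    comp["AT"] = comp["A"] + comp["T"]
    return comp

if __name__ == "__main__":
    mito = read_single_fasta("L_amdoana_mitogenome.fasta")  # placeholder file name
    comp = base_composition(mito)
    print(f"length = {len(mito)} bp")
    for key in ("A", "T", "G", "C", "AT"):
        print(f"{key}: {comp[key]:.2f}%")
```

Run on the deposited L. amdoana sequence, a check of this kind should recover the 14,660 bp length and roughly 67% AT content reported above.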
Intergenic regions that contained 952 bp in total ranged between 1 and 816 bp, accounting for 6.50% of the whole mitogenome. The longest intergenic region, also an AT-rich region, was located between trnW uca and trnY gua with a 67.89% AT content. Phylogenetic analysis The saturation tests indicated that ATP8, NAD2, NAD3, NAD4L, and NAD6 are saturated genes. The 1st and 2nd codons of the rest PCGs and two rRNA genes were therefore concatenated for phylogenetic analyses. The phylogenies reconstructed by Bayesian and ML methods were topologically identical (Figure 3). It indicated that L. amdoana was sister to the clade (Mastigeulota kiangsinensis, (Karaftohelix adamsi, Fruticicola koreana)). The clade of dart sac-bearing camaenids was divided into two groups based on the status of flagellum. The camaenids without dart sac apparatus are basal in the clade including all the examined camaenids. In Laeocathaica, the change of the gene order, in comparison to those in the other camaenid taxa, was observed (for details, see supplementary material 2). The gene order of Laeocathaica agrees with that of Mastigeulota on the same branch, and Euhadra and Dolicheulota on the sister branch. The present work confirms that the evolution of flagellum and dart sac apparatus might have played significant roles in the course of formation of camaenid diversity (Figure 3). By this work, we reported the mitogenome of L. amdoana for the first time, which is congruent with the typical mitochondrial genome structure in metazoan (Saccone et al. 1999). The systematic position of Laeocathaica within Camaenidae was first inferred based on complete mitogenomes, and these results would provide fundamental resources for further studies on the evolution of this genus and the phylogenetics of Camaenidae. Author contributions WS prepared figures, analyzed data, uploaded data to NCBI, and drafted the paper. MW designed this work, identified the species, drafted and approved the final draft. They both agree to be accountable for this work. Ethical approval This research did not involve ethical research because snails are exempt from ethical approval or permission. Disclosure statement No potential conflict of interest was reported by the authors, and the authors alone are responsible for the content and writing of the paper.
2023-07-09T05:09:19.650Z
2023-07-03T00:00:00.000
{ "year": 2023, "sha1": "2cdb4a8837c288fe87079d506cc946677bd03b69", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "2cdb4a8837c288fe87079d506cc946677bd03b69", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
199637193
pes2o/s2orc
v3-fos-license
Pseudoleucochloridium ainohelicis nom. nov. (Trematoda: Panopistidae), a Replacement for Glaphyrostomum soricis Found from Long-Clawed Shrews in Hokkaido, Japan, with New Data on its Intermediate Hosts Members of the genus Glaphyrostomum Braun, 1901 (Trematoda: Brachylaimidae) are parasites of birds. However, an exception occurs in Glaphyrostomum soricis Asakawa, Kamiya and Ohbayashi, 1988, which was described from the longclawed shrew, Sorex unguiculatus Dobson, 1890, in Hokkaido, Japan. A recent DNA barcode-based trematode survey of land snails clearly showed that Ainohelix editha (A. Adams, 1868), a bradybaenid snail indigenous to Hokkaido, serves as the first and second intermediate hosts for a species of the genus Pseudoleucochloridium Pojmańska, 1959 (Panopistidae). Its adult stage was furthermore confirmed from S. unguiculatus. A comparison of adult morphology between Pseudoleucochloridium sp. and G. soricis revealed that both should be considered the same species. However, Pseudoleucochloridium soricis comb. nov. cannot be applied because P. soricis (Sołtys, 1952) already exists as the type species of the genus. We, therefore, propose Pseudoleucochloridium ainohelicis nom. nov. as a replacement name for G. soricis. Introduction The class Digenea (Platyhelminthes: Trematoda) is a group of obligate parasites, which have a complex life cycle involving three hosts in typical species (Cribb et al. 2003). The digenean trematodes asexually reproduce mainly in molluscan first intermediate hosts, and the resulting cercariae metamorphose into metacercariae in second intermediate hosts. The metacercariae mature hermaphroditic adults in vertebrate definitive hosts. The discrimination of the larvae and adults from each of the hosts is essential in conducting taxonomic and ecological studies on digenean trematodes. Several species of trematodes belonging to the closely related families Brachylaimidae Joyeux andFoley, 1930 andPanopistidae Yamaguti, 1958 are endemic to Japan. Members of these families are unique in using land snails as the first and second intermediate hosts (Pojmańska 2002a). Each of the parasites uses a particular species or group of snails as the first intermediate host, but a wide range of snail species are involved as the second intermediate host Montoliu 1986, 1995;Gracenea and González-Moreno 2002). Thus, the parasites play the different roles of a strict specialist and a generalist in their larval stages. The Japanese species of the two families have been infrequently recorded from shrews (Asakawa et al. 1988), shrew-moles (Yamaguti 1952;Kifune et al. 1992), rodents (Kamiya and Machida 1977), and birds (Yamaguti 1935(Yamaguti , 1941, although their larval stages still remain undiscovered from land snails. A recent DNA barcode-based trematode survey of land snails in Hokkaido, the northernmost island of Japan, led us to the description of two new species belonging to the genus Brachylaima Dujardin, 1843 (Nakao et al. 2017(Nakao et al. , 2018. During the continuing survey of land snails, we noticed that Ainohelix editha (A. Adams, 1868), an indigenous land snail in the island, serves as the first and second intermediate hosts for an unidentified species of Pseudoleucochloridium Pojmańska, 1959 (Panopistidae). A subsequent survey of small mammals succeeded in finding its adult stage from the long-clawed shrew, Sorex unguiculatus Dobson, 1890. 
However, this species has already been described as Glaphyrostomum soricis Asakawa, Kamiya, and Ohbayashi, 1988 (Brachylaimidae) from the same definitive host. A nomenclatural revision is needed to correct the taxonomic position of G. soricis. A new combination name, Pseudoleucochloridium soricis comb. nov., cannot be applied because the same name, P. soricis (Sołtys, 1952), already exists as the type species of the genus. Accordingly, in this study we propose P. ainohelicis nom. nov. as a replacement name for G. soricis. The field survey of this parasite enabled us to describe the larvae (sporocyst, cercaria, and metacercaria) from land snails and to redescribe the adult from shrews. The natural transmission of the parasite in Hokkaido was also considered based on the prevalence data of land snails, together with a discussion on the validity of both the species and the related families. Materials and Methods Field survey. During July and October in both 2017 and 2018, a field survey was carried out at three wooded sites in Asahikawa, Hokkaido, namely Shunkodai (43.808°N, 142.355°E), Ubun (43.719°N, 142.351°E), and Tomisawa (43.746°N, 142.316°E). Land snails of Ainohelix editha, Ezohelix gainesi (Pilsbry, 1900), Discus pauper (Gould, 1859), and Succinea lauta Gould, 1859 were collected by hand picking from plant leaves and litter layers. Each snail was crushed between thick glass plates and then dissected in Dulbecco's phosphate-buffered saline (PBS) under a stereomicroscope. The heart, kidney, and hepatopancreas were broken apart with fine-tipped forceps to detect metacercariae and sporocysts including cercariae. The number of metacercariae per infected snail was counted to measure the intensity of infection. After microscopic observation of the living larvae, the remainder were kept in 70% ethanol or 10% neutral-buffered formalin for later analyses. At the Shunkodai and Tomisawa sites, shrews were collected dead through the use of pitfall and Sherman box traps. All the shrews were necropsied to detect adult trematodes from the internal organs, particularly the intestine. The adult worms recovered were also kept in ethanol or formalin. Morphological observation. A microscope with a digital camera (Axio Imager, Zeiss) was used to observe parasite morphology. Digital photographic data were processed with the accessory software (AxioVision) to measure object sizes. Sporocysts and cercariae mounted on glass slides with PBS were observed in a living condition. The vital dye neutral red was used at approximately 0.05% in PBS to stain the internal structure of the cercariae. Non-encysted metacercariae and gravid adults were flattened in 10% neutral-buffered formalin between a glass slide and a coverslip. Slight pressure was applied to the coverslip to arrange their posture. After removing the extra formalin, the slides were kept in a moisture box overnight. The resultant flattened worms were photographed for morphometric assessment. Line-drawing figures were made from digital photographs by using an interactive pen display (UGEE Co. Ltd.) and the software Clip Studio Paint (SELSYS, Inc.). As reported previously (Nakao et al. 2018), permanent specimens of the flattened worms were prepared after staining with Heidenhain's hematoxylin. The type specimen of adult G. soricis kept in the Meguro Parasitological Museum, Tokyo (no. MPM19500) was used for comparative observation with the newly obtained materials. DNA sequencing.
Polymerase chain reaction (PCR) and subsequent DNA sequencing were carried out as reported previously (Nakao et al. 2017). Templates of PCR were prepared from ethanol-preserved specimens without purifying DNA. A half body (metacercaria) or an approximately 1 mm 3 piece (sporocyst and adult) was lysed in 25 µl of 0.02N NaOH at 99°C for 30 min. One µl of the lysate was used as a template. The Tks Gflex™ DNA polymerase (TaKaRa) was used for PCR with the manufacturer-supplied reaction buffer. Nuclear 28S ribosomal DNA (rDNA) was amplified using the primer set digl2 and 1500R (Tkach et al. 2016), and mitochondrial cytochrome c oxidase subunit 1 (cox1) using the set JB3 and CO1-R trema (Miura et al. 2005). The latter gene is needed for DNA barcoding. The PCR was run for 40 cycles (98°C for 10 sec, 50°C for 20 sec, and 68°C for 90 sec) in a total volume of 25 µl including 0.25 μM of each primer. The PCR amplicons were sequenced using BigDye terminator cycle sequencing kit and ABI genetic analyzer 3500 (Applied Biosystems). Each of the PCR primers was used as a sequencing primer. Phylogenetic analyses. The nucleotide alignment datasets of 28S rDNA and cox1 were prepared by MAFFT (Katoh and Standley 2013). The comparative sequences of related taxa were retrieved from DDBJ/ENA/GenBank databases. A phylogenetic tree of 28S rDNA was made by maximum likelihood (ML) method under the best-fit nucleotide substitution model GTR+I. The integrated software MEGA7 (Kumar et al. 2016) was used for the model selection and the tree construction. The robustness of the trees was tested by bootstrapping with 500 replicates. Inter-and intraspecific values of pairwise divergence between cox1 barcode sequences were computed by MEGA7 under pdistance model. The cox1 sequences of the related sympatric species, Brachylaima ezohelicis Nakao, Waki and Sasaki, 2017 (database accession nos. LC198311-6) and B. asakawai Nakao, Sasaki and Waki, 2018 (LC349002-11), were used for the comparison. A cox1 haplotype network was illustrated by TCS (Clement et al. 2000), and population genetics indices were computed by DnaSP (Rozas et al. 2017). Results Sampling. During the survey period from 2017 to 2018, 657 land snails were examined for the trematode infections (Table 1). We demonstrated only one snail of Ainohelix editha from Shunkodai to be infected with the reticular sporocyst of Pseudoleucochloridium ainohelicis nom. nov. Branched tubes of the sporocyst containing cercariae occupied the hepatopancreas (Fig. 1). In contrast, the host range of the metacercaria included 4 snail species (A. editha, Ezohelix gainesi, Discus pauper, and Succinea lauta). All of the metacercariae detected were from the pericardial cavity. The metacercarial prevalence of A. editha was relatively high, ranging from 5.0 to 38.5%. The number of metacercariae per snail (the intensity of infection) was also high in A. editha, ranging from 1 to 16. Shrews were collected in October, 2018. Three individuals of Sorex unguiculatus (two from Shunkodai and one from Tomisawa) were available for the examination of parasites. No other species of shrews were collected. The adults of P. ainohelicis nom. nov. were de-tected from all three shrews. The number of the adults per shrew was very few, ranging from 1 to 2. A total of 5 adults were obtained, but only two were gravid. DNA barcoding. Twelve representatives of the larvae and adults from the three sites of Asahikawa (i.e., 1 sporocyst from A. editha, 9 metacercariae from all the snail species, and 2 adults from So. 
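The species assignments reported in the next section rest on pairwise divergences of aligned cox1 barcodes computed under the p-distance model (MEGA7, as described above). A minimal stand-in for that calculation is sketched below: the sequences are assumed to be pre-aligned and of equal length, the sequence names and fragments are placeholders far shorter than real barcodes, and positions containing gaps or ambiguity codes are simply skipped.

```python
from itertools import combinations

def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions with gaps or non-ACGT characters."""
    diffs = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in "ACGT" and b in "ACGT":
            compared += 1
            if a != b:
                diffs += 1
    return diffs / compared if compared else float("nan")

# Placeholder aligned cox1 fragments (real barcodes are several hundred bp long).
barcodes = {
    "P_ainohelicis_hap1": "ATGGCTTGATCAGGAATAGTAGGT",
    "P_ainohelicis_hap2": "ATGGCTTGATCAGGGATAGTAGGT",
    "B_ezohelicis_ref":   "ATGACTAGGTCTGGTTTAGTAGGA",
}

for (name1, s1), (name2, s2) in combinations(barcodes.items(), 2):
    print(f"{name1} vs {name2}: p-distance = {p_distance(s1, s2):.3f}")
```

Intraspecific distances of a fraction of a percent against interspecific distances well above ten percent, as reported below, are what make this kind of threshold-based assignment dependable here.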
unguiculatus) were subjected to a distance-based DNA barcoding of mitochondrial cox1. All of the samples were identified as P. ainohelicis nom. nov. The mean of the intraspecific divergence was 0.003. The related species of Brachylaima ezohelicis and B. asakawai are sympatrically distributed in the survey areas. When compared among the three species, the interspecific divergence were extremely high, ranging from 0.138 to 0.223. A parsimony network consisting of 6 cox1 haplotypes illustrated a slightly scattered pattern, showing the absence of a dominant haplotype (Fig. 1). The population genetics indices of the 12 cox1 sequences were as follows: haplotype diversity (0.818), nucleotide diversity (0.00362), Tajima's D (−1.41807), and Fu's FS (−0.411). Both the network and the indices suggest that no bottleneck events occurred in the recent past. Molecular phylogeny. A ML phylogenetic tree was constructed using the 28S rDNA data set including P. ainohelicis nom. nov. and members of related families. The resultant tree topology was not robust, perhaps due to the lack of essential taxa (Fig. 2). However, the phylogeny suggests a possibility that the families Brachylaimidae and Leucochloridiidae are paraphyletic. The isolates of P. ainohelicis nom. nov. were shown to be distinct from all related species for which DNA sequences are available. Descriptions. The specimens obtained in the field survey were used for the descriptions of larval and adult P. ainohelicis nom. nov. The numbers of the specimens used are 1 sporocyst (10 branches), 10 cercariae, 10 metacercariae, 2 adults, and 10 mature eggs. The eggs were obtained from the metraterm of one broken adult. All specimens were observed ventrally, excepting the sporocyst and the egg. All measurements, unless indicated otherwise, are in µm as the mean, with minimum-maximum range in parentheses. In the case of adults, only the minimum-maximum range was shown because of the low sample size. Distribution. To date, the distribution of P. ainohelicis nom. nov. is restricted to Hokkaido. In this study, all the developmental stages were found in Asahikawa, an inland area of Hokkaido. The parasite had been recorded as G. soricis from several areas of Hokkaido, namely Ebetsu, Kitami, Ozora, and Kushiro (Asakawa et al. 1988;Asakawa et al. 1992;Mitsuhashi et al. 2013;M. Asakawa, unpublished data). Hosts. The land snail indigenous to Hokkaido, A. editha, serves as the first and second intermediate hosts for P. ainohelicis nom. nov.. The sporocyst and metacercaria parasitize the hepatopancreas and pericardial cavity, respectively. Other land snails, E. gainesi, D. pauper, and Su. lauta, are involved as the second intermediate host. All of them are endemic species mainly in northern Japan. The long-clawed shrew, So. unguiculatus, serves as the definitive host. The adult parasitizes the lower part of the intestine. The geographic distribution of the shrew is restricted to Hokkaido, Sakhalin, and the adjacent minor part of the Eurasian Continent (Hutterer 2005). Etymology. The new specific name is given after the generic name of A. editha, an essential intermediate host in Hokkaido. Vouchers. The specimens of P. ainohelicis nom. nov. used in this study have been deposited in Meguro Parasitological Museum, Tokyo under the collection numbers MPM21491 (1 adult) and MPM21492 (3 metacercariae). The holotype is also kept in the same museum (Asakawa et al. 1988). Differential molecular markers. 
The parasite DNA sequences (28S rDNA and 6 haplotypes of cox1) are available for precise species identification. All of them have been deposited into DDBJ/ENA/GenBank databases under the accession numbers LC455740 (28S rDNA) and LC455741-6 (cox1). Discussion In this study, Glaphyrostomum soricis from shrews in Hokkaido has been renamed as Pseudoleucochloridium ainohelicis nom. nov. Both the genera Glaphyrostomum and Pseudoleucochloridium belong to the superfamily Brachylaimoidea, whose members use land snails as intermediate hosts (Pojmańska 2002a). Members of the genus Glaphyrostomum are parasites of birds (Pojmańska 2002b); G. soricis from shrews is the only exception to this pattern (Asakawa et al. 1988). The genera Glaphyrostomum and Pseudoleucochloridium share common morphological characteristics, but can be differentiated by the configuration of uterus and the position of genital pore (Pojmańska 2002b, c). In the type specimen of G. soricis, it was very difficult to locate the position of the genital pore, but the M-shaped configuration of the uterus was characteristic of Pseudoleucochloridium (Pojmańska 2002c). The present study enabled us to confirm the terminally-positioned genital pore through observation of living materials from shrews, demonstrating that G. soricis is a member of Pseudoleucochloridium. The size of suckers and the distribution of vitellarium in the type specimen of G. soricis are also consistent in identifying it as P. ainohelicis nom. nov. Before the erection of Pseudoleucochloridium, the corresponding species were assigned to Leucochloridium Carus, 1835 because of their morphological similarities. As a result of the revision (Pojmańska 1959), the former genus is properly used for the parasites of shrews and the latter genus for the parasites of birds. As far as we know, only five species of Pseudoleucochloridium have been found from shrews in Eurasia. These are P. ainohelicis nom. nov., P. pericardicum Montoliu, 1995, P. rotundus Bychovskaya-Pavlovskaya andKulakova, 1970, P. skrjabini (Shaldybin, 1953), and P. soricis. These species are morphologically quite similar to one another ( Table 2). The sizes of oral and ventral suckers, the outline of cecum, the shape of testes and ovary, and the distribution of vitellarium are important characters to discriminate the species. The specimens of P. ainohelicis nom. nov. from Hokkaido are most similar to those of P. pericardicum from the Pyrenees, but can be differentiated by having a larger ovary, a shorter vitellarium, and strongly undulating ceca. The intraspecific genetic diversity of P. ainohelicis nom. nov. estimated by the sequences of mitochondrial cox1 suggests that the parasite is originally indigenous to Hokkaido rather than being a recent immigrant. The endemism of the intermediate and definitive hosts in Hokkaido also supports the validity of P. ainohelicis nom. nov. As shown in Table 2, land snails acting as the first intermediate host are proven only in P. ainohelicis nom. nov. and P. pericardicum. Although the sporocyst of P. soricis was found from Pyrenean snails (Jourdane 1976), the causative species was synonymized to P. pericardicum (Mas-Coma and Montoliu 1995). Snails of the second intermediate host are further confirmed in P. ainohelicis nom. nov., P. pericardicum, and P. soricis. It is likely that the disk-like flattened body and large suckers of the metacercaria are an adaptation to parasitize the pericardial cavity of snails. 
The intensity of the metacercarial infection is relatively low (i.e., mostly 1 to 3 metacercariae per snail in P. ainohelicis nom. nov.), perhaps due to the limited space of the pericardial cavity or the high mortality of severely infected individuals. Most of the recognized species of Pseudoleucochloridium are from Europe, in accordance with the distribution of the Eurasian shrew, Sorex araneus Linnaeus, 1758. The discontinuous finding of P. ainohelicis nom. nov. in the Far East suggests that more congeners may be distributed widely in Eurasia, as a result of cospeciation events among trematodes, land snails, and shrews. In particular, a remarkable speciation occurs in land snails due to their low mobility and the resulting geographical isolation (Cook 2008;Rundell and Price 2009). The strict host specificity of land snails in developing the sporocyst might exert a selective pressure on the evolution of Pseudoleucochloridium. The prevalence data of P. ainohelicis nom. nov. in land snails has important implications for considering the transmission dynamics in natural settings. In the present sur-vey, a total of 144 snails of Ainohelix editha were examined ( Table 1). The sporocyst was found from only one snail (0.7%), whereas 27 snails (18.8%) harbored the metacercaria. The hepatopancreas of the sporocyst-infected snail was displaced by the proliferating reticular branches (Fig. 3A). It is, therefore, likely that the sporocyst prevalence becomes lower due to a high mortality from the dysfunction of hepatopancreas. Another possibility is that land snails rarely ingest the parasite eggs from shrews. In any case, the sporocyst-infected snail serves as a superspreader for other snails. Land snails generally aggregate with other individuals of the same species (Hylander et al. 2005;Sólymos et al. 2009). Therefore, cercariae from a sporocyst-infected snail can probably be transmitted to normal snails in a certain space where the snails aggregate. The high prevalence of metacercarial infections is maintained as a result of the snail-to-snail transmission. In this study, the metacercaria was also found from Ezohelix gainesi, Discus pauper, and Succinea lauta, but their prevalence was relatively lower than that of A. editha. The snail-to-snail transmission also occurs in individuals of other snail species sharing a common microhabitat with A. editha. All of the snails involved in the transmission of P. ainohelicis nom. nov. mainly inhabit the litter layer of woodlands. This soil surface seems to be a major environment in which snail-to-snail transmission occurs efficiently. Shrews living in the litter layer become infected when preying on the metacercaria-infected snails. According to a helminthological review of shrews in Hokkaido (Mitsuhashi et al. 2013), G. soricis (=P. ainohelicis nom. nov.) was found from So. unguiculatus, but not from So. caecutiens or So. gracillimus. This host preference is probably due to the food habit and/ or the infection susceptibility of the shrews. The present 28S rDNA-based phylogeny of the Brachylaimoidea is still immature because of the lack of essential taxa. However, as already reported by another study (Valadão et al. 2018), our result suggests that the present morphology-based classification of the corresponding families is unnatural, particularly for members of the Brachylaimidae. A large-scale taxon sampling from land snails and vertebrates is required to revise the paraphyly of the Brachylaimoidea. 
Based on the resultant molecular phylogeny, the erection of new families and the rearrangement of existing genera will probably be necessary.
The role of interaction in virtual embodiment: Effects of the virtual hand representation How do people appropriate their virtual hand representation when interacting in virtual environments? In order to answer this question, we conducted an experiment studying the sense of embodiment when interacting with three different virtual hand representations, each one providing a different degree of visual realism but keeping the same control mechanism. The main experimental task was a Pick-and-Place task in which participants had to grasp a virtual cube and place it to an indicated position while avoiding an obstacle (brick, barbed wire or fire). An additional task was considered in which participants had to perform a potentially dangerous operation towards their virtual hand: place their virtual hand close to a virtual spinning saw. Both qualitative measures and questionnaire data were gathered in order to assess the sense of agency and ownership towards each virtual hand. Results show that the sense of agency is stronger for less realistic virtual hands which also provide less mismatch between the participant's actions and the animation of the virtual hand. In contrast, the sense of ownership is increased for the human virtual hand which provides a direct mapping between the degrees of freedom of the real and virtual hand. INTRODUCTION The virtual representation of the user in immersive virtual environments, the avatar, has elicited a lot of attention both in virtual reality and psychological research communities [11].Does the user perceive the avatar as her/his own body?Is the avatar able to al-ter the user's self perception and/or behavior?Virtual reality is a powerful tool to answer these questions as the user's avatar can be altered in numerous ways in order to assess changes in the sense of embodiment and behavior [15].For example, controlling the level of realism of the avatar [23], its skin color [22] or even its shape [26].However, little is known about the role of interaction towards the sense of embodiment.Existing works mainly focused on the effects of visuo-tactile [30], visuo-motor stimuli [24] or morphological changes [38]. In this work, we explore the effects of different virtual hand representations on the sense of embodiment when actively interacting with the virtual environment (see Figure 1).We aim to grow the existing knowledge on how the representation of the user alters the perception of the virtual environment, the avatar and her/his self, in which performance and appreciation are not necessarily correlated [28].We designed an experiment in which participants performed a series of pick-and-place operations in which, sometimes, hazardous elements could threaten their virtual hand (see Figure 1).Three different visual representations of the virtual hand were considered, each providing different degrees of realism (shape and grasping animations).However, all of them shared the same underlying grasping control scheme, providing the same interaction capabilities.The study focused on two dimensions of the sense of embodiment: the sense of agency, i.e. the feeling of being in control of the avatar, and the sense of ownership, i.e. 
the feeling that the avatar is the source of experienced sensations.Two main research questions are addressed: Does the virtual representation of the hand alter the sense of agency?and, Does the virtual representation of the hand alter the sense of ownership?The obtained results show that the sense of agency is related to the virtual hand control and the task efficiency, while the sense of ownership is mainly related to the visual appearance of the virtual hand.These results can help the design of interactive systems focusing on virtual embodiment. The remainder of the paper is structured as follows.Section 2 provides a broad overview of related work on virtual grasping, embodiment and existing body ownership illusions in VR.Section 3 presents the methodological basis of the experiment and details the analysis of the experimental results.Then, Section 4 discusses the main findings and Section 5 provides the concluding remarks. Virtual Grasping Grasping is one of the most common interactions performed in everyday life.Still, the simulation of realistic grasping operations in virtual environments requires dedicated input devices and algorithmic approaches [4].In this work we are only focusing on egocentric interaction techniques not requiring physically-based simulations.Egocentric manipulation techniques, such as the virtual hand [27], translate the user's hand movements to a simplified virtual representation of the hand, in which objects are typically glued to the virtual hand upon contact.Virtual hand metaphors can be enhanced by providing an increased control of the virtual hand (e.g.finger motions [17]) and providing additional visual feedback [32].Regarding the hand control, finger displacement and orientation can be used as an heuristic to determine the fingers configuration to infer grasping operations [36,21].Furthermore, in order to provide additional feedback, the visual representation of the hand or of the interactive objects can be altered in order to express contacts, valid grasping status [21] or use explicit glyphs and illumination effects [32]. The Sense of Embodiment In this Section we will briefly detail the different elements considered to be the main actors to enable the sense of embodiment.For additional reading we will refer to the works of Kilteni et al. [15] and De Vignemont [8]. Kilteni et al. [15] defines the sense of embodiment (SoE) toward a body B as the sense that emerges when B's properties are processed as if they were the properties of one's own biological body.Embodiment is a complex phenomena which is achieved at different levels, as defined by Longo et al. [19] and further revisited by Kilteni et al. [15]: the phenomenology of embodiment includes the sense of self-location, the sense of agency, and the sense of body ownership.A similar decomposition was provided by De Vignemont [8] in which three dimensions are considered: Spatial, Motor and Affective.While spatial and motor dimensions are directly related to self-location and of agency respectively, the sense of ownership is linked to the affective dimension.A stronger sense of ownership will increase the physical and physiological responses towards hazardous situations that threaten the virtual body. 
Self-location Self-location can be defined as the space in which we perceive the self to be located.The body space provides a reference frame for our physical body and determines the space in which body sensations are registered [8].Several factors can alter the sense of self-location.A collocation between the virtual and the real body (first person perspective) will elicit a stronger sense of self-location that non-collocated perspectives (third person perspective) [31,25].In addition, synchronous visuo-proprioceptive correlations during passive or active movements increase the sense of self-location.The well known rubber hand illusion experiment [5] showed that self-location can be altered when synchronous visuo-proprioceptive correlations are applied between the rubber hand and the hidden real hand.Furthermore, correlated vestibular cues can also increase the sense of location [2]. Agency The sense of agency is elicited when oneself is the agent of one's own actions.When interacting with our body, we have accurate control of the motor activity and we are aware of our actions (e.g.proprioception).Agency is described as motor activity control, which encompasses the obedience of the concerned body part to one's will and the sensation of movement [2].In other words, agency is closely related to action awareness and action planning [8]. The sense of agency is present in the use of tools (effectors), for which the knowledge of sensorimotor control and the association between effectors leads to an expected outcome.The close relationship between intention and outcome is also considered as a component of agency [6].When controlling virtual limbs or fullbody avatars, the sense of agency has little impact on the effectors.For example, when controlling virtual avatars, the sense of agency appears even when the avatars are not realistic (such as point-line avatars [37]), when virtual avatars drastically deviate from the physical body [16] or with virtual limbs in implausible positions [35].Although motor recalibration is required when the effector (e.g.tool, virtual body) differs with respect to the real body, a degree of visuomotor adaptation is tolerated in forms of proprioceptive recalibration, motor learning or virtual space recalibration [7].Nevertheless, the perceptual-motor fidelity between individuals and their avatars must be ensured. Ownership The sense of ownership is described as the sense that one's own body is the source of sensations [35].Since Botvinik and Cohen's experiment with a rubber hand [5], known as the rubber hand illusion (RHI), it has been proved that a fake limb can elicit the sense of ownership.While the brain can believe that a fake limb belongs to the body [34,13], a basic morphological similarity or spatial configuration between the real and artificial body is still required [15]. Ownership can be observed when the fake body is threatened [8].For example, on the RHI, the fact of hitting the rubber hand created a strong physical response [5].Although one might argue that the response is related to the surprise effect, neuroscience studies proved that the reaction induced by the threatening of the rubber hand (which is to retract one's own hand in the vast majority of cases) is not only a pure reflex, but also that there is a cortical anxiety response to a perceived danger towards the body [1]. 
Body Ownership Illusions in VR Going one step further, the RHI has been revisited in VR, giving birth to the virtual arm illusion [30].Similar to the RHI, stimulations between the avatar and the real hand should be synchronous.However, studies performed in VR setups [24] show that the virtual arm illusion can be achieved without tactile stimuli.More precisely, Slater et al. [29] show that synchrony between visual and proprioceptive information along with motor activity is able to induce an illusion of ownership over a virtual arm.Additional studies have explored full-body ownership, some examples of illusions have used mannequins [23], virtual avatars [31,3] or even out-ofbody experiences [18]. Such experiments paved the way for further studies to explore how changes in the virtual avatar influence the sense of ownership.For instance, body space and limb plausibility have been investigated in [16], in which participants tolerated having a virtual arm longer than their real one.Another example is the work of Peck et al. [22], which showed that racial bias could be reduced by using an avatar of a black person.Additional studies have explored one's body weight perception by altering the complexity of the avatar [26], adding additional limbs to the avatar [9,33] or even exploring the effects of social anxiety responses to standing in front of an audience when having an invisible body [12]. Although there is evidence that the virtual representation of the user has an impact on the sense of ownership [12,22,26], few studies have explored how the virtual body and its interaction capabilities alter the user's behavior.Existing studies have focused on constrained interactions with the virtual environment, limited to pushing virtual buttons [10], touching landmarks [20] or drumming [14]. The following experiment goes one step further and evaluates the effects of the avatar representation in a realistic setup where believable interactions with the environment are provided. How do people appropriate the virtual representation of their hand when interacting with virtual environments?The following experiment aims to study the effect of the hand representation towards the sense of agency and ownership.The two research questions studied were (1) does the representation of the virtual hand influence the sense of agency?and (2) does the representation of the virtual hand influence the sense of ownership? In order to address both research questions, we designed an experiment in which the user had to perform a set of pick-and-place tasks.The main factor of the experiment was the representation of the virtual hand, from an iconic virtual hand to a fully animated one (see Figure 2).The task was designed so that participants have to actively interact with their virtual hand.Additionally, we designed a second task in which participants were asked to place their virtual hand in a predefined virtual location in which they could potentially put their virtual hand in danger.The sense of ownership and agency were assessed by analyzing the participant's behavior when interacting with the virtual environment and by gathering their subjective impressions.We first detail the virtual grasping technique used, followed by the experimental tasks and methods, and the analysis and discussion of the results. 
Virtual Hand Representation The design of the virtual hands took the degree of realism into consideration.As previously discussed, the degree of morphological similarity could have an impact on the sense of ownership, and the control is tightly coupled with the sense of agency.Three different virtual hand representations were considered, each one providing a different level of realism (see Figure 2).All virtual hands were collocated with the real hand. Abstract virtual hand.Low realism.The hand is represented by a uniformly shaded sphere which moves according to the real hand palm.Once the grasping operation is triggered, the shading color of the sphere changes from white to red.Most VR applications still use such a representation. Iconic virtual hand.Medium realism.The 3D model represents a simplified robotic hand.A two state animation is played (opened to closed, there is no continuous animation) when the grasping operation is triggered.The hand model is translated and rotated following the real hand palm position and orientation. Realistic virtual hand.High realism.This representation is a fully animated virtual hand (including the forearm) which follows the user's arm, hand and finger movements.The 3D model was directly obtained from the Leap Motion SDK. Grasping Control Although having different visual representations, all three virtual hand representations shared the same grasping control scheme.The prehension area was considered to be a half-sphere (with the same radius as the abstract virtual hand) driven by the palm of the real hand.In order to interact with an object, the prehension area had to collide with it.Once the prehension area collides with the object, users had to close/open their real hand in order to trigger the grasping/release operation. 
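A minimal sketch of how this open/close detection can be implemented is given below (Python pseudocode rather than the engine code actually used). The data layout and class name are our assumptions; the 290-degree grasp and 200-degree release thresholds are the values reported in the next paragraph, and in the actual technique the grasp only takes effect while the prehension half-sphere intersects an object.

```python
GRASP_THRESHOLD_DEG = 290.0    # summed flexion above which a grasp is triggered
RELEASE_THRESHOLD_DEG = 200.0  # summed flexion below which the object is released

class GraspDetector:
    """Hysteresis-based grasp/release detection from finger joint angles.

    `finger_angles` maps a finger name to its (metacarpal, proximal) flexion
    in degrees, e.g. as derived from a hand-tracking SDK; the thumb is ignored.
    """

    def __init__(self) -> None:
        self.grasping = False

    def update(self, finger_angles: dict) -> bool:
        thrust = sum(metacarpal + proximal
                     for finger, (metacarpal, proximal) in finger_angles.items()
                     if finger != "thumb")
        if not self.grasping and thrust >= GRASP_THRESHOLD_DEG:
            self.grasping = True    # hand closed enough: trigger the grasp
        elif self.grasping and thrust <= RELEASE_THRESHOLD_DEG:
            self.grasping = False   # hand opened enough: release the object
        return self.grasping

# One tracking frame: the four long fingers sum to roughly 320 degrees of flexion.
detector = GraspDetector()
frame = {"index": (40, 45), "middle": (42, 44), "ring": (38, 40),
         "pinky": (35, 36), "thumb": (10, 12)}
print(detector.update(frame))  # True: the 290-degree grasp threshold is exceeded
```

Using two separate thresholds (hysteresis) prevents the grasp state from flickering when the summed joint angles hover around a single cut-off.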
Hand tracking was provided by the Leap Motion (see Figure 4), which tracked the forearm, hand and fingers of the user's dominant hand. We chose the Leap Motion because it provides seamless finger tracking without gloves or markers, making the tracking less invasive. The tracking quality ensured a correct animation of the realistic virtual hand in most situations; however, when the palm was in a vertical position, finger tracking issues appeared due to inter-finger occlusions. To avoid such situations, finger animation was deactivated when the palm was in a vertical position. This situation rarely occurred during the experimental tasks, as they required the user to keep the palm in a horizontal position. For the abstract and iconic virtual hands, no tracking issues were detected.

The grasping intent detection method relies on the angles of both the metacarpal and proximal joints of all fingers except the thumb. The thumb was not considered because it made no difference to grasping quality and sometimes introduced noise. As shown in Figure 3, the contribution of the metacarpal joints is less pronounced than that of the proximal joints. However, this contribution was found to depend strongly on the user, since everyone's grasping habits differ. We therefore chose to consider both the metacarpal and the proximal joint angles. Grasping and release operations were triggered by thresholding the sum of the flexion angles over these joints. Experimentally, we determined that the optimal grasping threshold was 290 degrees and the release threshold was 200 degrees. Finally, to provide additional feedback, the grabbed object changed color when intersecting with the prehension area (green highlight) and when it was grabbed (red highlight).

Apparatus and Participants

Participants were immersed in the virtual environment using an Oculus Rift (DK2), which provided head tracking, while the participant's dominant hand was tracked using a Leap Motion. The physical setup (see Figure 4) was designed to ensure optimal tracking conditions for the Leap Motion: the device was placed upside down, and anti-reflective tape was applied to the shelf to limit infra-red interference. In addition, the interaction space was constrained by the frame of the shelf. Participants were asked to use only their dominant hand when interacting with the virtual scene, and to keep the other hand away from the field of view of the Leap Motion to avoid detection artifacts. The virtual environment used in the experiment resembled the physical setup, providing both a reference frame and passive haptic feedback when touching the bottom shelf. The application was developed in Unity and ran on a standard graphical workstation, which ensured a constant 75 Hz refresh rate.
Thirty-three male participants from inside and outside the lab took part in the experiment (aged from 21 to 44 years, M=29.75;SD=9.67).The population was restricted to male users as the realistic virtual hand was from a male avatar.Eighteen subjects did not have any previous experiment in virtual reality, seven had some previous experience and eight were familiar with VR.All participants except one were right-handed. Figure 4: Experimental Setup.The interaction space was constrained by the shelf, which ensured an optimal tracking space for the Leap Motion (top).In addition, the bottom shelf provided passive haptic feedback. Experimental Protocol At the beginning of the experiment, participants, after signing the consent form, were briefed about the equipment and the experimental tasks.The experiment was subdivided into three blocks (one for each virtual hand representation) in which two different tasks were done, hereinafter referred as pick-and-place and spinning saw tasks (described below).Once the experimenter set up the VR equipment, the participant was able to explore the virtual environment by moving his head and to test the current virtual hand representation.After finishing each block, the experimenter removed the head-mounted display and participants were asked to fill a subjective questionnaire related to the corresponding virtual hand representation.Participants could take all the time they needed in order to continue the experiment.Participants took on average 10 minutes to perform each block and 45 minutes to finish the experiment. Pick-and-Place Task The pick-and-place task required participants to move a virtual cube from its original position towards a predefined position on the other side of the shelf, indicated by a red circle (see Figure 1 Left).The task had two goals in mind: first, to create a link between the real and the virtual hand through a repetitive interaction task, and second, to analyze the participant's behavior while avoiding potentially dangerous obstacles (see Figure 5).The sense of agency can be elicited by the sole means of controlling the virtual hand, while the potential hazardous situations can provide insights about the sense of ownership. In order to avoid virtual objects popping in and out, a transition between each trial was introduced.Before each task, the user was asked to push a hand-shaped button in order to make a wooden curtain descend.When the curtain attained the shelf, the virtual elements were added to the virtual environment and the curtain automatically rose.This gave the feeling that the virtual scene elements were consistent, as they were not roughly appearing or disappearing.The system did not handle collisions between the virtual hand and the obstacles. Spinning Saw Task The spinning saw task was designed to assess the virtual hand control when actively performing a task that can potentially endanger the integrity of the virtual hand.The spinning saw task was performed just after the pick-and-place task, which ensured that participants became used to the virtual representation of their hand. 
Participants had to place their hand in a specific location on the table, just beside a spinning saw (see Figure 1 Right). Participants had no time limit to perform the task, and the only instruction provided was to place the virtual hand on the designated mark. This task aimed to study how a danger in the virtual environment is perceived, and whether participants would risk the integrity of their virtual hand. The participant's behavior provides insights about the sense of ownership.

Design and Hypotheses

The pick-and-place task followed a full factorial 3x4x2 design: virtual hand representation (3 levels: abstract, iconic and realistic), obstacle (4 levels: none, brick, wire and fire) and direction of the movement (2 levels: right-to-left and left-to-right). All variables were within-subjects. The experiment was divided into three blocks, one for each virtual hand representation. To minimize ordering effects, the virtual hand condition was counterbalanced (Latin-square design) while the obstacle and direction conditions were fully randomized. For each combination, participants did five repetitions, resulting in a total of 120 pick-and-place tasks. Ten additional training trials without any obstacles were included at the beginning of each block, ensuring that participants understood the task and the hand control. The main dependent variables were the task completion time (s) and the placement precision (cm), the latter being measured by computing the distance between the object center and the target location (XZ plane). Additionally, we measured the grasping thrust (sum of metacarpal and proximal joint angles) and the elevation at the midpoint of the trajectory. The spinning saw task had only one within-subjects independent variable, the virtual hand representation. This task was the last trial performed in each block (after the pick-and-place task). Participants performed the spinning saw task only once for each representation, to decrease the habituation effect. We measured the time (s) required to place the hand in the designated location. The task was considered finished when any part of the virtual hand touched the placement mark.

In addition to the quantitative measures, participants were asked to assess their subjective impressions after each block by means of subjective questionnaires (see Table 1). The questionnaire was mainly inspired by the works of Botvinick and Cohen [5] and Longo et al. [19]. Finally, we recorded the described trajectories in order to analyze changes in participants' behavior during both tasks. According to our experimental design, our main hypotheses were:

H1 Faster manipulation time for simplified virtual hands.
H2 Increased manipulation precision for simplified virtual hands.
H3 Increased placement time for the realistic virtual hand.
H4 Increased collision avoidance for the realistic virtual hand.
H5 Increased sense of agency for the realistic virtual hand.
H6 Increased sense of ownership for the realistic virtual hand.

Analysis

Parametric data were analyzed using factorial ANOVA. Tukey pairwise tests (α = 0.05) were performed when needed; only significant differences are discussed (p < 0.05). When ordering effects were observed, the order was included as a between-subjects factor.
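A sketch of this analysis pipeline is shown below, assuming the trial log is available as a flat table; the file name and column names are placeholders. It covers only the within-subjects repeated-measures ANOVA and a simplified Tukey post-hoc comparison, and omits the between-subjects order factor mentioned above.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical trial log: one row per pick-and-place trial with columns
# participant, hand, obstacle, direction and completion_time (seconds).
trials = pd.read_csv("pick_and_place_trials.csv")

# Average the five repetitions so each participant contributes exactly one
# value per factor combination, as a repeated-measures ANOVA requires.
cell_means = (trials
              .groupby(["participant", "hand", "obstacle", "direction"],
                       as_index=False)["completion_time"]
              .mean())

anova = AnovaRM(cell_means, depvar="completion_time", subject="participant",
                within=["hand", "obstacle", "direction"]).fit()
print(anova)

# Follow-up comparison of the obstacle conditions. Tukey HSD on the pooled
# cell means is a simplification of the within-subjects pairwise tests.
print(pairwise_tukeyhsd(cell_means["completion_time"],
                        cell_means["obstacle"], alpha=0.05))
```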
Pick-and-Place Task The four way ANOVA, virtual hand, obstacle, direction and order vs task completion time showed three relevant significant effects.First, there was a main effect of obstacle (F(3, 90)=54.13;p < 0.001; η 2 p =0.64).Post-hoc tests showed that users required significantly more time to perform the task during the fire condition (M=5.51s;SD=1.86s), followed by the brick (M=4.9;SD=1.53) and barbed wire (M=4.9s;SD=1.53s) conditions.The condition without obstacle resulted in the fastest task completion times (M=4.25s;SD=1.35s).Second, there was a two-way interaction effect between order and virtual hand (F(4, 60)=24.52;p < 0.001; η 2 p =0.61) (see Figure 6).Post-hoc tests showed that there was a significant increase in performance between the first and the third block for the realistic virtual hand.In contrast, between the second and the third block all three techniques achieved comparable performances.Third, there was a main effect of direction (F(1, 30)=395.12;p < 0.001; η 2 p =0.92), post-hoc tests showed higher task completion times for the left-to-right trials (M=5.42s;SD=1.64s) than the rightto-left trials (M=4.37s;SD=1.47s).This effect was mainly due to the fact that we considered the time before the grasping.Users therefore had a longer distance to cover before grasping the object for the left-to-right condition which consistently increased the task completion time. The four way ANOVA, virtual hand, obstacle, direction and order vs placement precision showed two relevant significant effects.First, there was a strong main effect of virtual hand representation (F(2, 60)=26.24;p < 0.001; η 2 p =0.46).Post-hoc tests showed that participants were significantly less accurate with the realistic hand (M=3.52cm;SD=1.34cm) compared to both the iconic (M=2.7cm;SD=1.17cm) and the abstract (M=2.70cm;SD=0.99cm) representations.Second, there was a moderate effect on the obstacle (F(3, 90)=6.30;p < 0.001; η 2 p =0.17).Post-hoc tests showed that the accuracy for the fire condition was the lowest (M=3.27cm;SD=0.94cm).Nevertheless, the differences among conditions are lower than 1cm. Regarding the mean grasping thrust, as no significant differences were found in terms of distance, order and block, data was pooled and only the virtual hand representation was considered as a factor.The one-way ANOVA virtual hand representation vs grasping thrust showed a significant effect on virtual hand representation (F(2, 64)=4.9;p < 0.05; η 2 p =0.13) Post-hoc tests showed that the grasp thrust for the realistic virtual hand (M=460.64;SD=52.04) was significantly lower than the grasp thrust for the abstract virtual hand (M=489.17;SD=55.27). 
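As a quick consistency check, the partial eta squared values quoted above can be recovered from each F statistic and its degrees of freedom using the standard conversion η²p = F·df_effect / (F·df_effect + df_error); small rounding differences are possible.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Convert an F statistic to partial eta squared."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Values reported for the pick-and-place task:
print(round(partial_eta_squared(54.13, 3, 90), 2))  # obstacle effect on time      -> 0.64
print(round(partial_eta_squared(6.30, 3, 90), 2))   # obstacle effect on precision -> 0.17
print(round(partial_eta_squared(4.9, 2, 64), 2))    # hand effect on grasp thrust  -> 0.13
```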
Finally, we explored the hand trajectory when performing the pick-and-place task.In order to characterize the trajectories, we considered the elevation at the midpoint of the trajectory as the main feature.We only found a significant effect on obstacle (F(3, 96)=55.84;p < 0.01; η 2 p =0.64).Post-hoc tests only showed that the mean elevation was lower only for the non-obstacle condition (M=10.23cm;SD=3.6cm) compared to the other conditions (M=17.5cm;SD=6.64cm).Nevertheless, we observed higher variability in the avoidance strategies when avoiding the fire.Figure 8 shows the mean trajectory for each participant.We can observe that different strategies were used to avoid the obstacle, such as avoiding the fire sideways.Another observation from the trajectories is that few participants did not avoid the obstacles, even for the solid ones.No correlation was found regarding their VR experience. Spinning Saw Task The two-way ANOVA virtual hand representation and order vs task completion time showed a main effect of virtual hand representation (F(2, 81)=4.22;p < 0.05; η 2 p =0.09) and of order (F(2, 81)=6.57;p < 0.005; η 2 p =0.14).Tukey post-hoc tests showed that participants took significantly more time to perform the task using the realistic virtual hand (M=3.21;SD=1.15) than the abstract virtual hand (M=2.46;SD=0.78).Moreover, participants took significantly more time to perform the task the first time (M=3.34;SD=1.06) in comparison with both the second (M=2.61;SD=0.77) and the third (M=2.58;SD=0.96) block.No interaction effect was found. We also analyzed the final position of the participants forearm.Figure 7 shows that there were three main strategies to perform the task, approaching from the top (white) from the side (green) or from the front (red).We observed a tendency of increased saw avoidance for the realistic hand condition (9 collisions) compared to the abstract (14) or the iconic (13) virtual hand.7: Final position and orientation of the forearm during the spinning saw task for the abstract (left) and realistic (right) conditions.The color determines whether the forearm intersected with the saw (red), the user approached from the top (white) or from the side (green). Figure 8: Mean trajectories when avoiding the brick (left) and the fire (right) obstacles for the realistic virtual hand.The fire increased the trajectory variability as participants showed different avoidance strategies. Questionnaires The data from the questionnaires was analyzed using the Friedman rank test and Wilcoxon pairwise test.The questionnaire is split into agency-(A1-A6) and ownership-related (O1-O8) questions.Before performing the analysis, ordering effects were tested, but no significant ordering effects were found.Table 1 provides the summary of the results. According to the agency results (A1-A6), we observe that participants perceived that the realistic virtual hand was more difficult to control (A1), as they expected the virtual hand to precisely follow their movements (A2).This is also supported by the fact that participants considered that performing the different tasks using the realistic virtual hand was slightly more difficult (A3 and A4).Nevertheless, all virtual hand representations elicited a strong sense of agency, which shows that participants had the feeling of controlling all three virtual representations of the virtual hand. 
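The non-parametric questionnaire analysis described above (a Friedman omnibus test per item, followed by pairwise Wilcoxon signed-rank tests) can be sketched as follows. The ratings are illustrative, not the study's data; with only five rows a small-sample warning may be emitted, and a multiple-comparison correction would normally be applied to the pairwise tests.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Illustrative 7-point Likert ratings for one questionnaire item: one row per
# participant, one column per hand condition (abstract, iconic, realistic).
ratings = np.array([
    [6, 5, 4],
    [7, 6, 5],
    [5, 6, 3],
    [6, 4, 4],
    [7, 6, 5],
])

stat, p = friedmanchisquare(ratings[:, 0], ratings[:, 1], ratings[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# If the omnibus test is significant, follow up with pairwise Wilcoxon
# signed-rank tests (corrected for multiple comparisons in practice).
if p < 0.05:
    pairs = [(0, 1, "abstract vs iconic"),
             (0, 2, "abstract vs realistic"),
             (1, 2, "iconic vs realistic")]
    for i, j, label in pairs:
        w, p_pair = wilcoxon(ratings[:, i], ratings[:, j])
        print(f"{label}: W = {w:.1f}, p = {p_pair:.3f}")
```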
Regarding the ownership-related questions (O1-O8), we observe higher ratings for the realistic virtual hand in terms of the virtual hand being part of their body (O1), the coupling between the real and the virtual hand (O3) and increased feeling of danger towards the virtual hand (O4) and their real hand (O5).The control question O2 showed no significant contradictory effects.There was no significant differences in participants' behavior to avoid obstacles (O6) although they considered that the virtual hand was able to go through virtual obstacles (O8).Overall, we can state that the sense of ownership was higher for the realistic virtual hand. Discussion The results from the pick-and-place task showed that simplified virtual hands provide faster and more accurate interactions supporting hypotheses [H1] and [H2].However, the observed ordering effects show that the differences are more visible during the first block of the experiment.As long as the experiment advances the differences among them decrease.This decrease in performance can be explained by the lack of feedback.The realistic virtual hand occluded the virtual object during grasping and release operations, and no haptic feedback was provided.This effect was stronger in the first block as participants were still not adapted to the grasping/release control.In addition, the sense of agency was higher for the abstract and the iconic virtual hands in A1 (I felt as if the virtual representation of the hand moved just like I wanted it to), A3 (Perceived task difficulty) and A4 (I felt like I was able to interact with the environment the way I wanted to).This decrease of the agency towards the realistic virtual hand can be correlated with its decreased performance.We can hypothesize that increased task completion times indicate a decreased motor control which can be perceived for the user as a decrease in the sense of agency.A second explanation is the limitation of the tracking system.Although it did not happened frequently, in some cases, the Leap Motion mistook the participant's right hand for the left one, providing strange virtual finger configurations.Interestingly, the reaction of participants when there was a difference between their physical hand and the virtual one was to shake their hand, as if they tried to "shuffle it back".This could be one of the reasons of the lower scores for question A1 and higher scores for question O3 (I felt that I was losing the control of my hand when the virtual hand was not responding properly).Either way, these results do not support [H5]. 
In contrast, we did not observe any significant difference among virtual hand representations in the collision avoidance strategy, which does not support hypothesis [H4].On the vast majority of cases, although the natural behavior was to avoid obstacles, the avoidance behavior did not depend on the virtual hand representation.This is also supported by the subjective questionnaires (O6a-O6c) in which participants reported that they kept a similar strategy for each virtual hand representation.Interestingly, participants tended to avoid the fire obstacle even in iconic and abstract conditions.Some participants even had difficulties passing above the fire, which can be interpreted as an extension of the volume of the danger zone of the fire outside of its visible area, being supported by alternative avoiding strategies like going around it (see Figure 8).On the other hand, two participants systematically went through all virtual obstacles.The task completion time during the spinning saw task showed that participants required significantly more time to finish the task using the realistic virtual hand, thus supporting [H3].Figure 7 shows that participants had the tendency to avoid the saw to be collocated with their forearm, with a stronger effect for the realistic virtual hand representation, which supports [H6].From these results, we can hypothesize that participants were more careful when performing the task due to the increased feeling of danger.This is supported by the results from the subjective questionnaires in which participants ranked the potential virtual and real danger higher for the realistic virtual hand (O4-O5).Furthermore, different participant profiles were observed.For instance, the body ownership component of a user was particularly strong as he hesitated to put his hand behind the circular saw for quite a long time with the realistic hand condition.In contrast, a few participants went through all obstacles, and even tried to put their virtual hand in the saw, which shows a lower ownership component. Limitations Bare hand interactions present tracking and interaction challenges and no perfect solution exists yet.Our choice was to use depth sensing technology to avoid the user having to wear bulky equipment and to provide an interaction as natural as possible.However, depth sensing technologies are prone to occlusions and noisy reconstructions which could have biased the perception of participants especially for the realistic condition.Nevertheless, although tracking artifacts sometimes appeared, the experimental tasks were designed to minimize such artifacts.Participants were able to perform the tasks efficiently and the measured levels of agency and ownership show a high degree of virtual embodiment. One potential source of bias, especially during the spinning saw task, could have been the fact that the realistic virtual hand provided visual feedback for the forearm.Participants could have just avoided interpenetration between their forearm and the spinning saw.Adding a virtual forearm was our choice as we considered that a realistic virtual hand floating in mid-air would feel strange.Still, we observed non-expected behaviors: some participants avoided the saw when no forearm was displayed and some participants did not avoid the saw when the forearm was displayed.This result shows that the role of the forearm could not only be related to the visual feedback, but also the proprioception of the real forearm.Additional studies are required to assess this question. 
GLOBAL DISCUSSION

From the results obtained in the experiment, we can now partially answer the two main research questions.

Does the virtual representation of the hand alter the sense of agency? Participants' subjective impressions showed that the sense of agency was influenced by the virtual hand representation, with the abstract and iconic representations providing the highest sense of agency. This was an unexpected result, as we hypothesized that the increased number of degrees of freedom of the virtual hand animation would elicit a stronger sense of agency. The decreased performance for the realistic virtual hand and the finger tracking limitations could have influenced the results. In both cases, there is a reduction of the perceived control when manipulating and interacting with the virtual hand. Although all virtual hand representations elicited a strong sense of agency, more accurate finger tracking and improved sensory feedback could result in a stronger sense of agency.

Does the virtual representation of the hand alter the sense of ownership? Both qualitative and quantitative data suggest that the realistic virtual hand representation elicits a stronger sense of ownership. Interestingly, although the sense of agency is also an important factor for the sense of ownership, we observe that the decreased sense of agency observed for the realistic virtual hand was not enough to alter the sense of ownership.

The presented experiment explored the effects of the virtual hand representation on the senses of agency and ownership. On the one hand, we have observed that the sense of agency does not seem to be related to the number of degrees of freedom the user can control, but rather to their efficiency in controlling them. Even the abstract virtual hand, which required only 3 degrees of freedom, provided a high sense of agency. This suggests that the design of the user's avatar has to take into account the quality of the tracking system in order to provide seamless control. Furthermore, as has been acknowledged in the literature, there is no specific need for a realistic avatar representation to increase the sense of agency. On the other hand, the sense of ownership depends on the visual representation of the virtual hand, and morphological resemblance is required to increase the sense of ownership. Interestingly, this result holds even though the sense of agency for the realistic virtual hand was lower, suggesting that the provided level of control was sufficient to ensure an ownership illusion. Nevertheless, additional studies are required to explore the link between agency and ownership in interactive VR scenarios.

Figure 2: Virtual hand representations. Abstract (left), iconic (center) and realistic virtual hands (right). Each virtual hand had its own visual feedback when the grasping operation is triggered (bottom). The abstract virtual hand changes color, the iconic virtual hand abruptly changes shape (there is no smooth animation) and the realistic virtual hand is animated from the user's finger motions.

Figure 3: Dashed lines show the mean flexion values for the metacarpal and proximal joints during two consecutive grasping operations. In order to control the grasping state, the sum of the angles of all metacarpal and proximal joints was used. The grasping and release thresholds were set at 290 and 200 degrees.
Figure 5: Virtual obstacles during the pick-and-place task. The brick is a tangible and non-threatening obstacle. The barbed wire is a tangible threatening object and the fire is intangible but threatening.

Figure 6: Boxplot of the task completion time considering the virtual hand representation and the ordering.

Figure 7: Final position and orientation of the forearm during the spinning saw task for the abstract (left) and realistic (right) conditions. The color indicates whether the forearm intersected with the saw (red), the user approached from the top (white) or from the side (green).

Table 1: Statistical summary of the questionnaire responses (7-point Likert scale). Friedman rank tests and Wilcoxon post-hoc tests were used. Means and standard deviations not sharing a subindex (1 or 2) indicate significant differences (α = 0.05). The mean and standard deviation over all virtual hand conditions is provided when no significant differences were found.
From benchmarking to best practices: Lessons from the laboratory quality improvement programme at the military teaching hospital in Cotonou, Benin Background In 2015, the Army Teaching Hospital–University Teaching Hospital (HIA-CHU [Hôpital D’instruction des Armées de Cotonou Centre Hospitalier et Universitaire]) laboratory in Benin launched a quality improvement programme in alignment with the World Health Organization Regional Office for Africa’s Stepwise Laboratory Improvement Process Towards Accreditation (SLIPTA). Among the sub-Saharan African laboratories that have used SLIPTA, few have been francophone countries, and fewer have belonged to a military health system. The purpose of this article was to outline the strategy, implementation, outcomes and military-specific challenges of the HIA-CHU laboratory quality improvement programme from 2015 to 2018. Intervention The strategy for the quality improvement programme included: external baseline SLIPTA evaluation, creation of work plan based on SLIPTA results, execution of improvement projects guided by work plan, assurance of accountability via regular meetings, training of personnel to improve personnel competencies, development of external stakeholder relationships for sustainability and external follow-up post-SLIPTA evaluation. Lessons learnt Over a period of 3 years, the HIA-CHU laboratory improved its SLIPTA score by 29% through a quality improvement process guided by work plan implementation, quality management system documentation, introduction of new proficiency testing and internal quality control programmes, and enhancement of personnel competencies in technical and quality management through training. Recommendations The programme has yielded achievements, but consistent improvement efforts are necessary to address programme challenges and ensure continual increases in SLIPTA scores. Despite successes, military-specific challenges such as the high mobility of personnel have hindered programme progress. The authors recommend that further implementation research data be shared from programmes using SLIPTA in under-represented settings such as military health systems. Regulations and the Maputo Declaration (2008), in which signatories pledged to address and strengthen laboratory and health services. 2 The International Standards Organization (ISO) 15198:2012 standard, 'Medical Laboratories-Particular requirements for quality and competence', serves as a standard for the laboratory quality management system (QMS) -a formalised system that outlines required structures and functions for medical laboratories. Progress made on the ISO 15189 standard can help a laboratory prepare for accreditation through a recognised agency; this allows the laboratory to demonstrate to clients, partners and staff that it has attained a high level of technical competence, thus instilling confidence in stakeholders on the accuracy and reliability of its results. In sub-Saharan Africa, the clinical laboratory remains a weak link in the healthcare chain. 2,3 Though progress has been made, most clinical laboratories in sub-Saharan Africa remain under-equipped, under-funded and far from attaining international norms and standards. Few laboratories are accredited to international quality standards, and most of these internationally accredited laboratories are based in South Africa. 
4 Quality assurance, which comprises QMS including the existence of a quality manual, use of internal quality controls (IQC) and participation in external quality assessment (EQA), is poorly implemented and often unavailable in many laboratories in sub-Saharan Africa. 5,6,7 In the absence of quality assurance, there is the risk of laboratory errors, which could adversely impact patient care. To alleviate these challenges and support awareness of the importance of achieving quality standards in medical laboratories in sub-Saharan Africa, the World Health Organization Regional Office for Africa developed in 2009 a phased laboratory quality management evaluation system called the Stepwise Laboratory Quality Improvement Process Toward Accreditation (SLIPTA). 8 The African Society of Laboratory Medicine serves as the secretariat of the World Health Organization Regional Office for Africa SLIPTA programme. The SLIPTA framework guides improvement of performance, measures and evaluates the progress of laboratories towards ISO 15189 international accreditation and awards a certificate of recognition from zero to five-star ratings. 8,9 The HIA-CHU in Cotonou, Benin, has a staff consisting of military and civilian personnel and includes a clinical pathologist, 12 medical laboratory scientists and support staff. The services provided include medical fitness evaluation, disease prevention, treatment and care, and teaching and research in medical, biological, pharmaceutical, paramedical, odontological and veterinary specialities. Laboratory services at HIA-CHU are versatile and the laboratory manages, on average, 20 000 patient files and 50 000 tests per year. In January 2015, the HIA-CHU laboratory service initiated, for the first time, a quality improvement (QI) programme in alignment with the SLIPTA framework. The QI programme goal was to improve laboratory services related to disease diagnosis and monitoring via the implementation of QMS using SLIPTA framework. The QI programme aimed to provide training for laboratory personnel on quality management concepts and practices, monitor laboratory quality and adherence to quality systems, conduct laboratory test method validation and provide proficiency testing panels for HIV, tuberculosis and other critical tests -successful implementation of the QI programme ensures that laboratory stakeholders and end-users have confidence in laboratory data which informs clinical decisions and optimises patient care. The HIA-CHU laboratory QI programme was launched through a collaboration with the US Department of Defense HIV/AIDS Prevention Program (DHAPP). As of 2015, the Benin Army Health Service already had a long-standing collaboration with DHAPP, which had been funding laboratory equipment and reagents, specifically to support HIV diagnosis and treatment monitoring. Over the years, both parties recognised that laboratory tests and equipment alone are not sufficient to demonstrate test result quality; tests and equipment must be accompanied by laboratory quality practices implemented by properly trained and motivated personnel. To address this recognised need, DHAPP began supporting the Benin Army Health Service in the execution of the HIA-CHU laboratory QI programme in 2015, with implementation assistance provided by the United States-based global health company Global Scientific Solutions for Health (GSSHealth). The purpose of this article is to report on the implementation of the QI programme at the HIA-CHU laboratory from 2015 to 2018. 
Baseline SLIPTA evaluation and work plan creation The laboratory QI programme began in January 2015 with initial programme planning among stakeholders (HIA-CHU leadership, DHAPP, GSSHealth) and the facilitation of a comprehensive baseline evaluation of the HIA-CHU laboratory by GSSHealth using the World Health Organization Regional Office for Africa SLIPTA checklist, version 2:2015. The baseline SLIPTA evaluation allowed the assessment of the 12 quality management sections: organisation and personnel, management reviews, process control and internal quality assessment and EQA, information management, corrective action, equipment, purchasing and inventory, documents and records, occurrence management and process improvement, internal audit, client management and customer service, and facilities and safety. Following the evaluation, leaders from HIA-CHU, DHAPP and GSSHealth convened for a collaborative session to review findings, identify priority QI areas and develop a tailored QI approach for each priority area. A tailored QI approach was developed to strengthen HIV testing processes -the focus of the funding agency DHAPP; it covered the identification of key quality indicators to track and improve upon over time and the development and execution of the QI work plan. The key indicators identified to measure laboratory improvement based on ease of collection and relevance to the work plan included: • SLIPTA: Percentage improvement in total and by section. • EQA: Percentage score in PT programme. • IQC: Accuracy of quality control results. • Personnel training: Percentage change in theoretical test scores for training workshops. • Laboratory documentation: Number of documents created and adequately implemented to standardise laboratory practices and formalise the commitment to the QMS. The laboratory-set targets were: continual improvement in the SLIPTA score through the establishment and implementation of improvement projects tracked by work plans, participation in EQA and PT and improvement of performance where the score is less than 100%, participation in IQC activities and improvement of processes in case of discrepancies between expected and obtained results, strengthening of laboratory quality management processes guided by well defined documentation, and the advancement of laboratory personnel competencies through training and mentorship workshops. The HIA-CHU laboratory supervisor facilitated work plan development in collaboration with partners (DHAPP and GSSHealth) and laboratory staff. The work plan included high-level QI goals, specific objectives for each goal and details of specific, measurable, achievable, realistic, and timely, or 'SMART', tasks and deliverables to meet set targets within defined timeframes (see Figure 1). The work plan was progressively implemented and updated through improvement project execution, progress monitoring, ongoing data collection and review, and continual alignment with the funder's priorities. Staff were engaged in the work plan implementation through their appointments to specific QI projects with oversight by the laboratory supervisor, and through continuous engagement in recurring collaborative work plan update meetings to promote accountability. 
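As a rough illustration of how the key indicators listed above can be tracked between monitoring periods, the sketch below compares a baseline and a follow-up snapshot. The field names are illustrative; only the SLIPTA scores (16% at baseline, 45% at follow-up), the 100% HIV proficiency-testing score and the approximately 50 implemented documents correspond to figures reported for this programme.

```python
from dataclasses import dataclass

@dataclass
class IndicatorSnapshot:
    """Key indicators tracked by the work plan (field names are illustrative)."""
    slipta_percent: float   # overall SLIPTA checklist score, %
    eqa_percent: float      # latest proficiency-testing score, %
    documents_in_use: int   # SOPs, manuals, forms and logs implemented

def progress(baseline: IndicatorSnapshot, follow_up: IndicatorSnapshot) -> dict:
    """Summarise improvement between two monitoring periods."""
    return {
        "slipta_gain_pct_points": follow_up.slipta_percent - baseline.slipta_percent,
        "eqa_score_pct": follow_up.eqa_percent,
        "new_documents": follow_up.documents_in_use - baseline.documents_in_use,
    }

# The baseline EQA value is a placeholder; the other figures are from this programme.
baseline = IndicatorSnapshot(slipta_percent=16, eqa_percent=80, documents_in_use=0)
follow_up = IndicatorSnapshot(slipta_percent=45, eqa_percent=100, documents_in_use=50)
print(progress(baseline, follow_up))  # SLIPTA gain of 29 percentage points
```

Expressing the SLIPTA gain in percentage points (29 here) avoids ambiguity with a relative percentage change.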
Quality improvement activities conducted in the context of the work plan included: nomination of new QMS personnel roles (quality manager, biosafety manager), targeted laboratory personnel training and mentorship, implementation of EQA and IQC, definition of essential QMS processes and development and use of associated documentation, and coordination of a follow-up SLIPTA evaluation to measure progress. Lessons learnt From the baseline SLIPTA evaluation, the identified priority issues, which correspond to low-scoring SLIPTA sections, include the following: the lack of a quality manual or safety manual, the lack of standard operating procedures for all processes and technical procedures, the lack of PT and quality control for tests, the lack of equipment maintenance and repair logs, the lack of temperature monitoring processes, and the lack of mistakes or error logs. The initiation and implementation of a QI programme at the HIA-CHU laboratory yielded numerous positive changes as perceived by HIA-CHU hospital leadership, laboratory management and staff, and partners (DHAPP and GSSHealth). Positive changes included the establishment of a laboratory quality management team, achievement of designated QI projects resulting in SLIPTA score improvements, the improvement of IQC and EQA test result accuracy, the development and implementation of over 50 standard operating procedures, and the strengthening of personnel competencies through targeted QMS training and mentorship. Throughout the years of the QI programme, support from funding partner DHAPP and implementing partner GSSHealth ensured continuous evaluation of processes via the internal and external evaluations and the update of work plans and associated QI projects. Staff training and work plan updating The engagement of laboratory staff was key to the establishment and implementation of the laboratory QI work plan at HIA-CHU. To encourage staff commitment to the QI approach and ensure staff in-depth orientation to quality management concepts and testing processes, the laboratory QI approach prioritised staff training and mentorship. Over 50 technicians participated in training workshops on QMS co-organised and co-facilitated by the Benin military health system and GSSHealth. All training sessions consisted of didactic and interactive sessions and tests were administered pre-and post-training to evaluate knowledge change among the trainees. The median change in test score from pre-training to post-training for all the trainees increased from 15% to 40% (Figure 2). Improvements in participants' theory test results demonstrated improved understanding of QMS and technical concepts and facilitated personnel participation in QI projects. In the first year of the programme, two quality workshops were held for laboratory staff with GSSHealth facilitators. The objectives of the workshops were to improve the competency of laboratory staff in the areas of quality assurance, QMS and HIV and tuberculosis testing. These topic areas were priorities for the laboratory director and DHAPP -the funder, whose focus is the prevention and control of HIV and HIV comorbidities. The first workshop took place in April 2015 with a focus on quality management concepts for equipment management, document writing and EQA. Staff received guidance and mentorship on HIV rapid testing. A second workshop took place in September 2015, with a focus to increase staff competency on laboratory supply chain and stock management. 
At the end of the first year, a follow-up SLIPTA evaluation was conducted. In 2016, the QI programme continued with an updated work plan, and laboratory staff participated in a national biosafety and biosecurity workshop hosted by the Ministry. In 2017, the laboratory quality team updated their quality work plan, worked with laboratory personnel on QI projects, implemented the use of quality controls for CD4 testing, and organised an internal SLIPTA evaluation. In April 2017, one laboratory staff member from HIA-CHU was sponsored to attend a multi-country, 5-day training workshop in Senegal. The workshop covered essential aspects of laboratory quality management, including document management and standard operating procedure writing; equipment management and maintenance; error occurrence management, prevention and corrective action; and quality indicators. Finally, a follow-up external SLIPTA evaluation was conducted in October 2017 to re-evaluate the laboratory system, measure progress since the previous external SLIPTA evaluation, and update the QI programme. As the QI programme moved into its third year in 2018, the laboratory again updated its work plan based on the previous evaluation, and revised and updated key documentation, including the laboratory quality manual and biosafety manual. In April 2018, a joint Ministry of Defense and Ministry of Health laboratory biosafety and quality management workshop was organised to increase staff competency on principles and ISO standards for biosafety and biosecurity. Through its ongoing commitment to training personnel in both technical and quality management topics, the laboratory has observed significant improvements in staff competencies and performance. For example, the SLIPTA sections in which the laboratory achieved the most significant quality improvements correspond to the quality management topics covered during training workshops. Data from the QI programme showed that post-training, personnel knowledge was improved and retained (Figure 2). Additionally, in 2015, staff undertook training on HIV rapid testing processes; thereafter, their HIV PT scores reached 100%.
Document creation and implementation
The laboratory document management system has been developed gradually over time and has included the creation and implementation of a quality manual, a biosafety manual, a sample collection manual and more than 50 standard operating procedures, technical instructions, forms and logs (e.g. temperature monitoring logs, corrective and preventive action forms). The laboratory supervisor and designated quality manager took the lead in developing documents, executing a document management process, and introducing new documents into circulation in the laboratory. Laboratory personnel were oriented to new documents and procedures during regular laboratory meetings, facilitating the understanding and proper use of new documents. The availability of procedures and other documents helped laboratory management ensure that personnel observed standardised processes across the pre-analytical, analytical and post-analytical phases, and simplified the training of new personnel. Furthermore, to ensure that only correct and updated documents were available in the laboratory, processes to control document revision, approval and version release were instituted by the laboratory supervisor and quality team.
External quality assessment and internal quality control results In September 2015, the HIA-CHU laboratory enrolled and participated in an annual PT programme for HIV serology with commercial EQA provider Thistle quality assurance, and later with the Benin Ministry of Health, to promote sustainability. During the four EQA events for HIV serology, nine technicians participated, and PT scores of 100% were obtained. No corrective action was needed to improve proficiency; laboratory personnel were nonetheless oriented on root-cause analysis, and corrective and preventive action in case of future instances of lower EQA scores. In 2018, the HIA-CHU laboratory participated in two commercial haematology EQA programmes with Human Quality Assessment Services. The laboratory results for the first haematology EQA test were in the acceptable range for all analytes except for monocytes or mid cells, due to a reporting error. As a corrective action, the laboratory established a system for the supervised recording of results, in which the laboratory section supervisor verifies the accuracy of test results entered manually by technicians, after which results are entered into a computer by a secretary and printed for final validation. This process has helped reduce transcription errors. The HIA-CHU also began an IQC programme in biochemistry for the first time, in which the laboratory used high, normal and low commercial controls on equipment daily. When control values are outside the acceptable range, the laboratory staff will conduct a rootcause investigation and analysis. Elements to be investigated include reagents (conservation, expiration, etc.), room temperature, quality of distilled water, quality of the control product, etc. If the issue persists after the above steps, the equipment is then recalibrated. All corrective actions and measures taken are documented and recorded. Improvement of SLIPTA score Overall improvement of the laboratory's quality system was demonstrated by a 29% increase in SLIPTA score, from 16% at baseline evaluation in 2015 to 45% at the follow-up evaluation in 2017 (see Figure 3). Since 2018, reliance on the Ministry of Health National HIV Reference Laboratory for the administration of an HIV serology PT programme has allowed the HIA-CHU laboratory and other military laboratories to ensure a more sustainable and affordable PT option. Several important factors can explain the successes achieved during the QI programme. The laboratory leadership and staff benefited from the strong support of the hospital management, thus underpinning laboratory staff motivation to sustain quality improvements. Also, the implementing partner provided support in the execution of work plans, drafting of procedures, funding of training visits and organisation of regular teleconferences to track progress. The implementing partner provided sound advice for the appropriation of the quality approach and the implementation of improvement steps. Recommendations Although SLIPTA is an adaptable structure and was used to guide the HIA-CHU laboratory's quality successes, its formal implementation in francophone African countries and military contexts has been limited. 8 Military health systems face unique challenges that impact their laboratory QI efforts; the challenges and associated responses of HIA-CHU laboratory may be relevant to other military laboratory programmes or to non-military laboratories that have limited funds to invest in QI efforts. 
Regardless of improvements made, QI is a never-ending process and the sustainability of gains is not guaranteed. 10 Despite the quality advancements made over the past several years, challenges remain, requiring corrective action to ensure the efficacy of the ongoing programme. 11,12,13 The issue of the high mobility of military personnel has made it difficult to fully and consistently integrate staff into quality laboratory operations and maintain the laboratory QMS. As a result of high staff turnover -a key facet of the military system -all members of the starting laboratory quality team are no longer in service at the HIA-CHU laboratory in Cotonou. Continuous staff turnover in part explains the slowdown in progress observed between the 2016 and 2017 follow-up evaluations. Staff mobility has also negatively impacted the timelines for the execution of planned activities such as management reviews, internal evaluations and inventory processes, for all of which personnel must be properly trained and oriented to ensure their execution per SLIPTA and ISO 15189 requirements.
Studies on utilization of different preheated straight vegetables oil in a CI engine Renewable energy befits as an attractive and sustainable solution to compensate for the imbalance in the supply and demand of fossil fuels. Continuous depletion of fossil fuels, price variations and detrimental effect of greenhouse gas emissions are the burning issues in the current scenario. Non-edible vegetable oil can be considered as a comparable alternative to replace conventional diesel fuel. There are some operational issues with straight vegetable oils (SVOs) such as their less calorific value, higher density and higher viscosity etc. which may be dealt with by adopting an appropriate treatment before combustion. Present study has been undertaken on the use of non-edible oils available in the vicinity of the Institute Research Lab. Oils derived from the seeds of Jatropha and Mahua have been preheated to reduce their high viscosity, surface tension and density for meeting the fuel requirements and to use on engines. Experimental and comparative study about the emission and performance parameters has been carried out on a 3.5 KW diesel engine fuelled with preheated straight vegetable oils and diesel fuel. It has been found that at full load the brake thermal efficiency (BTE) of preheated straight vegetable oil is lower by about 8–10% as compared to diesel. Brake specific fuel consumption (BSFC) has been found comparable to diesel with a variation of approximately 2-5%. Emission components hydrocarbon and carbon mono oxide have been found to be reduced significantly in case of preheated oils. The results support the use of non-edible straight vegetable oils directly on the engine in preheated form without converting them into biodiesel by transesterification process. This research motivates the rural areas of developing countries to become self-reliant in energy production as most of these areas are enriched with huge amount of vegetation which can be used to produce edible and non-edible vegetable oil. Introduction The escalating demand of fossil fuels in transportation sector, industrial sector, automobile sector, power sector etc. and the fast evolving mechanization has led to the search for alternative fuels which are globally acceptable technically and can be treated as an optimal solution to the problems of environmental degradation, energy security issues, employment and ultimately GDP of any country. R. Altin [1] elaborated that vegetable oils are not harmful for environment because of less content of HC, CO and NO x . Also, these oils have fairly good energy content and their properties are comparable to fossil fuels. These oils can be extracted from various plants and seeds which can be produced easily in variety of agricultural lands. Bernat Esteban et al [2] has done experimental research on vegetable oil and found that viscosity can be lower down by preheating at a temperature of about 140 o C. It also improves the spray pattern, atomization and cetane rating of the fuel. Jeyachandran et al [3] analyzed that the peak pressure level (with SVOs) in engine cylinder was considerable when experimentation has done with diesel due to the notable modification in value of oil viscosity and improved droplet size. P Psonune [4] found from experiments that emission percentages of CO, HC and CO 2 were dropped down due to preheating of fuel but at the same time NOx slightly increased. Nandkishore D. 
Rao [5] reported that vegetable oil emulsions with alcohol result in reduced viscosity, better combustion, and improved volatility. Vijay Sisarwal et al [6] found from experimental investigations that the brake thermal efficiency with blends of vegetable oil was better than that obtained from straight vegetable oils because of improved combustion features. The brake specific fuel consumption with blends was found to be lower as compared to that obtained with straight vegetable oils due to better atomization of the blends. M. Pugazhvadivu et al [7] found that engine performance was enhanced and carbon monoxide and hydrocarbon emissions decreased when a diesel engine was run with vegetable oil preheated to 130 °C. They concluded that SVO can be used as an alternative to diesel in extraordinary conditions. Gaurav Dwivedi et al [8] analyzed five edible and four non-edible SVOs on the basis of the structure of their fatty acids. The findings were based on the oxidation stability index and cold flow properties, and the SVOs were graded for biodiesel production. M.S. Reddy et al [9] conducted an experimental study on a non-firing engine equipped with an FIE simulator. The study was conducted to confirm the long-term compatibility and durability of bio-fuel blends. The major parts of the FIE were investigated for wear. The fuels used for the experimental study were blends of Karanja, Jatropha, biodiesel and baseline mineral diesel. Parameters such as dimensional loss, weight loss and surface texture deviation were estimated to analyze the compatibility of the FIE with the test fuels. Biodiesel blends showed a relatively reduced percentage of wear compared to mineral diesel; however, SVO blends showed no specific trend compared to baseline mineral diesel. A.T. Hoang et al [10] revealed from their experimental study that specific fuel consumption, carbon monoxide, hydrocarbon and smoke emissions were higher, but thermal efficiency and nitrogen oxide emissions were lower, when using heated coconut oil (HCO) in comparison with diesel fuel. The study supports the use of heated raw coconut oil (up to 100 °C) as a substitute fuel. Marym Dabi et al [16] performed an experimental study on a CI engine and showed that use of excessively preheated fuel may affect the injection system, combustion and emission characteristics of the engine; further, additional investment is required to sustain such higher preheating temperatures. Therefore, the major issue is to optimize the fuel preheating temperature before feeding the fuel to the engine. They found that the most practical technique for using vegetable oil in the engine was blending in different percentages. Narayan Lal Jain et al [17] conducted a study on preheated thumba oil and revealed that the preheated optimized thumba oil (B20) blend gives 1.27% higher thermal efficiency and 0.02 kg/kWh lower brake specific fuel consumption than the same unheated blend. Anh Tuan Hoang et al [18] experimentally investigated high-speed direct-injection diesel engine performance when operated for 300 hours with diesel fuel and preheated jatropha oil. The study focused on deposit formation on the injector tips, spray characteristics, output power, and emissions. On the basis of SEM and EDS analysis, the deposits formed on the injector tips with preheated jatropha oil were found to be much greater than with diesel fuel.
The deposits formed on the injector tip reduced the injector hole diameter and partly obstructed the injector holes, which increased the spray penetration length and reduced the cone angle. Based on the literature, it is derived that preheated straight vegetable oils have substantial potential for use in diesel engines. In the present experimental study, preheated Jatropha and Mahua SVOs have been used in a 4-stroke single-cylinder diesel engine. Performance and emission characteristics have been determined at different loading conditions while maintaining a fuel temperature range of 130 °C to 150 °C. The results have been compared with the performance and emission characteristics of the engine run on diesel fuel.
Materials
In the present work, non-edible oils (Mahua and Jatropha) were used for the experimental work because they are not directly associated with human consumption. Oils used for the tests were properly filtered and used directly in the test engine. An electric heater was incorporated for oil heating and an eddy current dynamometer was used for torque measurement.
Experimental Setup
Table-2 shows the typical specifications of the diesel engine used for the testing. A water-cooled diesel engine was used with some modifications in the fuel intake system. A four-stroke, constant-speed, single-cylinder, direct-injection diesel engine (Figure-1) was used for the experimental investigations. Thermocouples were used to measure the temperature of the heater cell and of the heated oil flowing at the fuel inlet passage. The fuel intake line was connected to the heater and a burette, which indicated the level of fuel in the fuel tank. Initially the fuel level was low due to the high viscosity, but as the temperature increased, the viscosity reduced and the fuel level in the burette increased.
Figure-1: Experimental engine setup.
The engine was run while maintaining a temperature range of 130 °C to 150 °C during the tests, controlled by switching the electric heater on to keep the temperature in the desired range.
Performance Parameters
The effects of using SVOs on the brake thermal efficiency (BTE) and the brake specific fuel consumption (BSFC) of the test engine are summarised as follows.
Brake thermal efficiency (Figure 2): The brake thermal efficiency (BTE) is an ideal parameter to identify engine performance. It was found that the BTE of diesel, preheated Jatropha oil and preheated Mahua oil was about 25.50%, 23.95% and 23.64% respectively at full load. The possible reason for the lower BTE with SVOs could be their low calorific value and low volatility, leading to an improper spray pattern and poor atomization. In the case of diesel, the smaller fuel particles might have led to better atomization of the fuel, thus providing improved combustion efficiency. In the case of SVOs, non-homogeneous fuel distribution could also have resulted in a poor spray pattern in the combustion chamber, thus causing improper combustion and thereby low efficiency.
Brake specific fuel consumption (Figure 3): The results indicate that with increasing load, brake specific fuel consumption (BSFC) reduced consistently, probably due to improved fuel combustion and minimum heat losses. At the full load condition of the engine, the BSFC for diesel, preheated Mahua and Jatropha oils was found to be 482, 472 and 461 g/kWh respectively. With heated vegetable oils, there was an improvement in BSFC, the possible reason being the reduction in viscosity and thereby improved volatility and atomization.
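For readers who want the arithmetic behind these two performance parameters, the short sketch below applies the standard BTE and BSFC definitions. The fuel flow rate and calorific value are illustrative assumptions for a 3.5 kW engine, not measurements reported in this study.

```python
# Sketch of the standard definitions behind the BTE and BSFC figures above.
# The fuel flow rate and calorific value are illustrative placeholders,
# not values measured in this study.

def bsfc_g_per_kwh(fuel_rate_kg_per_h, brake_power_kw):
    """Brake specific fuel consumption in g/kWh."""
    return 1000.0 * fuel_rate_kg_per_h / brake_power_kw

def bte_percent(brake_power_kw, fuel_rate_kg_per_h, calorific_value_mj_per_kg):
    """Brake thermal efficiency: brake power over fuel energy input rate."""
    fuel_power_kw = fuel_rate_kg_per_h * calorific_value_mj_per_kg * 1000.0 / 3600.0
    return 100.0 * brake_power_kw / fuel_power_kw

brake_power_kw = 3.5        # rated power of the test engine at full load
fuel_rate_kg_per_h = 1.2    # placeholder measured fuel flow
cv_mj_per_kg = 42.5         # assumed calorific value for diesel

print(f"BSFC = {bsfc_g_per_kwh(fuel_rate_kg_per_h, brake_power_kw):.0f} g/kWh")
print(f"BTE  = {bte_percent(brake_power_kw, fuel_rate_kg_per_h, cv_mj_per_kg):.1f} %")
```

The same two functions can be applied at each load step to produce curves of the kind shown in figures 2 and 3.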
Emission parameters
The main exhaust emissions of the engine are hydrocarbon, carbon monoxide and nitrogen oxide, which are discussed as follows.
Hydrocarbon emissions (Figure 4): During the experiments, it was found that the HC content in the exhaust was slightly lower when the engine was run on SVOs instead of diesel, possibly because at high temperatures the viscosity of the SVOs was reduced, resulting in improved vaporization and an improved air-fuel mixture, leading to better combustion.
Figure-4: HC emission versus load for different fuels.
Carbon monoxide emissions (Figure 5): The CO concentration in the exhaust was found to be higher in the case of diesel as compared to the preheated Mahua and Jatropha oils. This was possibly due to the oxygen content present in SVOs causing better and more complete combustion and thereby lower CO emissions.
Nitrogen oxide emissions (Figure 6): The results indicate that nitrogen oxide emissions increase rapidly with increasing load. This is possibly because the increased fuel quantity supplies more oxygenated fuel and promotes a higher combustion temperature in the combustion chamber, thereby increasing NOx production. It was observed during the tests that NOx formation was higher with preheated Mahua and Jatropha oils as compared to diesel. The increase in NOx emissions with preheating may be attributed to the increase in combustion gas temperature with the increase in fuel inlet temperature. Further, the increase in NOx emissions with preheated oils may be due to various reasons, such as improved fuel spray characteristics, better combustion due to the high oxygen content in vegetable oil, and a higher temperature in the cylinder as a result of preheating. Hence, to reduce NOx emissions, the temperature in the cylinder should be reduced.
Conclusion
In the present experimental work, the diesel engine was fuelled with SVOs and diesel to test the engine performance and emission characteristics. It was observed that the viscosity of both Jatropha and Mahua oils decreased remarkably with increasing temperature, and the viscosity became closer to that of mineral diesel above 95 °C. The brake thermal efficiency (BTE) was found to be maximum for diesel at full load and relatively lower (about 8-10% lower) for preheated Mahua and Jatropha oils respectively (figure 2). The reason for the lower BTE with SVOs might be their lower calorific value and low volatility, which lead to an improper spray pattern and poor atomization. The values of brake specific fuel consumption (BSFC) were found to be comparable, and at the full load condition a variation of approximately 2-5% was observed when the engine was run on preheated Jatropha and Mahua oils instead of diesel (figure 3). The emission levels were also tested, and it was found that hydrocarbon (HC) emissions (figure 4) and carbon monoxide (CO) emissions (figure 5) were reduced when the engine was run with preheated Mahua and Jatropha oils instead of diesel. It was also noted that nitrogen oxide (NOx) emissions increased when the engine was run on the preheated oils as compared to diesel (figure 6).
Reducing bias through directed acyclic graphs Background The objective of most biomedical research is to determine an unbiased estimate of effect for an exposure on an outcome, i.e. to make causal inferences about the exposure. Recent developments in epidemiology have shown that traditional methods of identifying confounding and adjusting for confounding may be inadequate. Discussion The traditional methods of adjusting for "potential confounders" may introduce conditional associations and bias rather than minimize it. Although previous published articles have discussed the role of the causal directed acyclic graph approach (DAGs) with respect to confounding, many clinical problems require complicated DAGs and therefore investigators may continue to use traditional practices because they do not have the tools necessary to properly use the DAG approach. The purpose of this manuscript is to demonstrate a simple 6-step approach to the use of DAGs, and also to explain why the method works from a conceptual point of view. Summary Using the simple 6-step DAG approach to confounding and selection bias discussed is likely to reduce the degree of bias for the effect estimate in the chosen statistical model. Background The objective of most biomedical research, whether experimental or observational, is to predict what will happen to an outcome if the treatment is applied to a group of individuals or if a harmful exposure is removed. In other words, the clinician/policy maker is interested in making causal inferences from the results of a study. The purpose of this manuscript is to demonstrate a simple 6-step algorithm for determining whether a proposed set of covariates would reduce possible sources of bias when assessing the total causal effect of a treatment on an outcome. There are many nuances to the definition of cause. For the purposes of this manuscript, we define it in counterfactual terms: "Had the exposure differed, the outcome would have differed", where exposure or outcome may be dichotomous (e.g. presence/absence of exposure; occurrence/disappearance of disease) or continuous (e.g. a different value of blood pressure whether blood pressure is exposure or outcome). Further refinements into sufficient, complementary and necessary causes [1] are important but do not alter the essence of the definition. Although the above causal definition is deterministic at the individual level, in almost all practical settings the outcome under the counterfactual condition is unknown. Therefore, researchers are limited to causal inference at the population level (e.g. comparing average risks) [2]. A straightforward explanation of the use of counterfactuals to define cause can be found in [2]. There are many features of a study that can lead to inappropriate causal inference. For the purposes of this discussion, we assume "ideal" processes for the study (i.e. large studies that minimize the risk of a chance unequal distribution of subjects with different prognoses, no information or selection or detection bias, complete follow-up and adherence, no measurement bias, etc). Under "ideal" conditions, inappropriate causal inferences (i.e. biases) are more likely to occur in observational studies compared to randomized trials because some subjects may be exposed to a treatment for a condition specifically because of personal factors that are related to prognosis (figure 1a). Under the conditions described above, most epidemiologists would consider this confounding bias. 
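As a purely illustrative aside (ours, not part of the original article), the short simulation below shows the mechanism just described: when a prognostic factor also determines who receives the exposure, the crude exposure-outcome contrast is biased, whereas adjusting for that common cause recovers the true effect. All variable names and numbers are invented for the example.

```python
# Toy simulation (not from the paper): a prognostic factor U makes exposure X
# more likely and independently worsens the outcome Y. The crude X-Y contrast
# is then biased; adjusting for the common cause U recovers the true effect.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.binomial(1, 0.5, n)                   # prognostic factor (common cause)
x = rng.binomial(1, 0.2 + 0.6 * u)            # exposure more likely when U = 1
y = 2.0 * x + 3.0 * u + rng.normal(0, 1, n)   # true causal effect of X on Y is 2.0

# Crude (unadjusted) estimate: simple difference in means.
crude = y[x == 1].mean() - y[x == 0].mean()

# Adjusted estimate: ordinary least squares of Y on X and U.
design = np.column_stack([np.ones(n), x, u])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

print(f"crude effect    ~ {crude:.2f}  (biased upward)")
print(f"adjusted effect ~ {beta[1]:.2f}  (close to the true value 2.0)")
```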
We recognize that there are several definitions of confounding bias, and Greenland and Morgenstern provide an excellent overview of the nuances among the different definitions [3]. The traditional approach to confounding is to 'adjust for it' by including certain covariates in a multiple regression model (or by stratification). One common practice is to consider a covariate to be a confounder (and "adjust" for it) if it is associated with the exposure, associated with the outcome, and changes the effect estimate when included in the model. According to standard textbooks, additional criteria also need to be applied: the covariate should not be affected by exposure and needs to be an independent cause of the outcome [4]. However, recent advances in epidemiology have proven that even these additional criteria are insufficient. In fact, the methods described above may introduce conditional associations (sometimes called selection bias [5,6], collider bias [6] and confounding bias [6,7]; this terminology may be confusing and we prefer the terminology suggested by the structural approach to bias as described later) and create bias where none existed, which is in direct contrast to the objective of eliminating an existing bias [2,8,9]. Some published examples include the effectiveness of HIV treatment [10], and why birth weight should not be included as a covariate when examining the causal effects of exposure during pregnancy on perinatal outcomes [11]. One method to help understand whether bias is potentially reduced or increased when conditioning on covariates is the graphical representation of causal effects between variables. In the causal directed acyclic graph (DAG) approach, an arrow connecting two variables indicates causation; variables with no direct causal association are left unconnected. Therefore the bi-directional arrows in figure 1a are replaced with unidirectional arrows (figure 1b). There are of course situations where each variable may cause the other: the functional disability created by chronic pain may cause depression, and depression may cause chronic pain through diminished pain thresholds. These more complex situations are simplified by understanding that time is a component in the above relationship. Therefore, there is a variable for depression at time 1, chronic pain at time 1, depression at time 2, and chronic pain at time 2; the same construct measured at different times represents distinct variables and must be treated as such. Although other articles have previously described the DAG approach to confounding [9,12,13], those articles demonstrate relatively simple DAGs. However, many clinical problems require complicated DAGs, and little has been published on how to assess whether a particular subset of covariates potentially reduces or increases bias in this context [6,9]. Therefore, although many investigators now understand the problem, they continue to use traditional practices because they do not have the tools necessary to choose the statistical model that is most likely to yield an unbiased effect estimate. The objective of this manuscript is to demonstrate a simple 6-step approach developed by Pearl [14] that helps determine when the traditional statistical approaches of regression/stratification on specific covariates are likely to reduce or increase bias, and to provide our explanation as to why the method works from a conceptual point of view.
Figure 1: The bi-directional arrows in A show the traditional representation of a confounder as being associated with the exposure (X) and outcome. Because confounders must cause (or be a marker for a cause of) both exposure and outcome (see text for rationale based on basic principles), directed acyclic graphs use only unidirectional arrows to show the direction of causation (B).
Although this manuscript is limited to the conceptual discussion necessary for clinical researchers to use DAGs, there are many different facets, and a complete theoretical development of these materials has been published elsewhere and has been summarised in one source [15]. Readers are also encouraged to learn more about counterfactual random variables, an important complement to the theory of DAGs [3,16].
The Pragmatic Solution: a Six-Step Process Towards Unbiased Estimates [14]
By applying the following simple 6-step process correctly, we will show how including only two covariates in a complicated causal diagram (figure 2a) is likely to reduce bias. As each step is described, we also explain its conceptual role in the process. Formal proofs of the underlying theorems have been summarised in one source [15]. In the subsequent section of the manuscript, we will add an additional covariate from the diagram into the model and show how including this additional variable is likely to increase bias rather than reduce it. It is important to note that this algorithm demonstrates whether bias would be minimized in a specific situation, but does not indicate all the situations in which bias is minimized. For example, if a confounder causes a second variable with a high probability (i.e. the second variable is a strong marker for the confounder), including the marker for the confounder should reduce bias [6]. However, in this situation, the algorithm we describe would still suggest that there is bias in the effect estimate. Therefore, the algorithm is used to "rule in" appropriate sets of covariates, and it is beyond the scope of this article to discuss the special cases where bias might be reduced even when the algorithm fails. Therefore, if the algorithmic conditions are not met, readers are encouraged to either choose another set of covariates, or seek further help in order to determine if their particular model is one of the cases where bias might still be reduced. Figure 2a is one possible causal diagram for the relationship between warming up prior to exercise and the outcome injury (we will show another possible causal diagram later in the manuscript). The question we want to answer is whether including a measure of neuromuscular fatigue (Z1) and tissue weakness (Z2) (in the design or analysis stage) would minimize bias in the estimate of the effect of warming up on injury if this is the true causal diagram. We will later discuss how to approach the more general problem when multiple causal diagrams are possible. As with any analytic approach to bias in an observational study (including the one below), we must make some assumptions regarding how variables are causally related to each other; we seek to determine whether our analytic approach would succeed under these assumptions. The algorithm we describe below only works if the DAGs are drawn so that they include all variables that cause two or more other variables shown in the DAG [17]. In other words, no common causes can be omitted from the DAG.
Finally, the DAG approach does not reduce or eliminate other sources of bias (e.g. measurement bias). Finally, at the end of the manuscript, we have provided a glossary of terms used so that readers unfamiliar with DAG terminology have an easy reference immediately available (genealogical terms are often used to describe relationships between variables). Step 1 (figure 2a): The covariates chosen to reduce bias [fatigue (Z 1 ) and tissue weakness (Z 2 ) in this case] should not be descendants of X (i.e. they should not be caused by warming up) This does not occur in this situation and one can proceed to Step 2 Step 1 ensures that the covariates chosen are possible confounders in the traditional sense of the word; if the covariates are descendants of X, then the statistical model adjusting for these variables may yield a biased estimate for the total causal effect of X on the outcome and a different set of covariates needs to be chosen. The step is required because confounding bias (as defined by the structural approach) can only occur if a covariate causes the exposure or is a marker for a cause of exposure (note that other biases are still possible and discussed in Step 4). Although more formal proofs exist [12], this can be deduced from the following standard criteria for a potential confounder: the covariate must be associated with the exposure and with the outcome, but cannot be affected (i.e. caused) by exposure [4,18] (for completeness, these standard criteria are in fact insufficient to define confounding [9], and more complicated scenarios such as time-dependent confounding [19] are not covered by the standard definitions). Because it is inappropriate to include a variable that lies along a causal pathway between the exposure of interest and the outcome, it is also inappropriate to include a marker for a variable that lies along a causal pathway. For example, if the marker is 100% correlated with the causal pathway variable, there is no mathematical difference in a statistical model between the marker and the causal pathway variable. Thus, if a covariate is associated with an exposure, and the exposure cannot cause or be a marker for a cause of the covariate, then the covariate must cause (or be a marker for a cause of) the exposure. By similar reasoning, one can deduce that confounding only occurs if the covariate also causes, or is a marker for a cause of the outcome. Returning to the DAG, if the covariate is a descendant of X, it means the exposure is a cause of the covariate. Step 2 (figure 2b): Delete all variables that satisfy all of the following: 1) non-ancestors (an ancestor is a variable that causes another variable either directly or indirectly) of X, 2) non-ancestors of the Outcome and 3) non-ancestors of the covariates that one is including to reduce bias (Z 1 and Z 2 in this example) In figure 2a, the only covariate that fulfills this criterion is previous injury (Z 3 ) and this is deleted in figure 2b. Note that the exposure, outcome and covariates should not be deleted Step 2 is essential because after completing the step, all variables left are either conditioned on, or have one of their descendants conditioned on. The importance of this result will become clear in Step 4. 
Step 3 (figure 3a): Delete all lines emanating from X. In this setting, warming-up causes a change in proprioception, and therefore we delete this arrow. In Step 3, deleting all lines emanating from X effectively simplifies the DAG because we have already said that X should not be a cause of the covariates in the model. We leave the variables in and eliminate the line because these variables may be responsible for bias through an indirect pathway. This will become clearer in Step 4, and in the example where we include a third covariate, which results in the introduction of bias. Step 4 (figure 3b): Connect any two parents (direct causes of a variable) sharing a common child (this step appears simple but it requires practice not to miss any). For example, team motivation and poor proprioception can both cause an individual to warm-up more than someone without these factors; these two variables are joined because they share a common effect. Step 4 is essential for the following reason. If two covariates both cause a third covariate, then adjustment for the third covariate (or an effect of the third covariate) creates a conditional association between the first two covariates (i.e. if one conditions on the child or a descendant of the child, there is a conditional association between the parents), and could introduce bias [20]. For example, both rain and sprinklers can cause a football field to be wet. If one knows the grass is wet, then knowing the sprinklers were off improves your assessment of the probability that it rained; rain and sprinklers become associated when the common effect of "field wetness" is known. Consider a second example from the health sciences: both a thrombus and a haemorrhage can cause a stroke. If we condition on the patient having symptoms of a stroke and learn that there was no haemorrhage, the probability that a thrombotic event occurred is increased. By connecting the two parents of a common child in the figure after Steps 1-3 are completed, we are explicitly stating that we understand that these variables are associated because we have either conditioned on the value of the child or one of the child's descendants (otherwise the variable would have been removed in Step 2). As we shall later see, it is this conditional association that can cause the introduction of bias when traditional rules of confounding adjustment are applied without reference to a DAG. In DAG terminology, the child is called a "collider" because two directed arrows collide at the covariate (node). Step 5 (figure 4a): Strip all arrowheads from lines. In Step 5, we strip all the arrowheads from the lines. This is because the arrowheads (causal direction) were only necessary to note the conditional associations created between two parents of a collider. Once this is done, we can simplify the diagram as we have now completed all the steps related to causation.
Figure 2 (a-b): Diagrammatic equivalent of the 6-step process to determine if one obtains an unbiased estimate of the exposure of interest (X) on the Outcome by including a particular subset of covariates (see text for details of the specific steps). In this example, we are interested in minimizing the bias when estimating the causal effect of warming up on the risk of injury.
In figure 2a, a possible causal diagram of variables that are associated with warming up (X) and injury (outcome) is shown. The main mediating variable is believed to be proprioception (balance and muscle-contraction coordination) during the game. Starting at the top of the figure, the coach affects the team motivation (including aggressiveness), which affects both the probability of previous injury and the player's compliance with warm-up exercises. A player's genetics affects their fitness level (along with the coach's fitness program) and whether there are any inherent connective tissue disorders (which leads to tissue weakness and injury). Both connective tissue disorders and fitness level affect neuromuscular fatigue, which independently affects proprioception during the game and the probability of injury. Finally, if the sport is a contact sport, the probability of previous injury is greater, as is the probability of minor bruises during the game that would affect proprioception. Although other causal models are also possible, we will use this one for illustrative purposes at this time. For this example, we have decided to include neuromuscular fatigue (Z1) and tissue weakness (Z2) in the statistical model. Step 1 is to ensure that these covariates are not descendants of (i.e. directly or indirectly caused by) warm-up exercises. Step 2 is illustrated in 2b. The open circle (previous injury, Z3) represents the only non-ancestor (an ancestor is a direct or indirect cause of another variable) of warm-up exercises (X), neuromuscular fatigue (Z1), tissue weakness (Z2) and injury (Outcome). It is therefore deleted from the causal diagram in figure 2b.
Figure 3 (a-b): In Step 3 (3a), all arrows emanating from X are deleted. In Step 4 (3b), one joins all parents of a common child. We have used dashed lines here for clarity.
Figure 4 (a-b): In Step 5 (4a), we strip all the arrowheads off all the lines. In Step 6 (4b), all lines touching the covariates neuromuscular fatigue (Z1) and tissue weakness (Z2) are deleted. Because the exposure of interest (warm-up exercises) is dissociated from the Outcome (injury) after Step 6, the statistical model that includes the covariates neuromuscular fatigue and tissue weakness minimizes the potential bias for the estimate of effect of warm-up exercises on the risk of injury.
Step 6 (figure 4b): Delete all lines between the covariates in the model and any other variables. All lines into and out of neuromuscular fatigue (Z1) and tissue weakness (Z2) are deleted. Step 6 is simply the graphical equivalent of standard regression techniques. When a covariate is included, the estimate of the effect represents the relationship between the exposure and the outcome independent of any causal pathway going through that covariate; including the covariate "blocks" all associations occurring through this pathway. Therefore, we can delete all lines between the covariates included in the model and any other covariates. Interpretation: If X is dissociated from the outcome after Step 6, then the statistical model chosen (i.e.
one that includes only the chosen covariates) will minimize the bias of the estimate of X on the chosen outcome If this causal model is correct, then a statistical model that includes a measure of tissue weakness and neuromuscular fatigue minimizes the bias in the estimate of the effect of warming up on the risk of injury We have now deleted all the direct causal pathways between the exposure of interest and the outcome, and between the covariates and the outcome, and explicitly noted the conditional associations created by including specific covariates with two different causes as explained in step 4. If there is no uninterrupted series of lines through nodes from X to the outcome after completing the six steps (figure 4b), then within this specific causal DAG, there is no non-causal structural association between X and the outcome. In other words, any measured association between the exposure and outcome that exists conditional on the covariates in the model minimizes the bias in the estimate of the causal relationship. When including covariates creates a conditional association and introduces bias In the last step of this process, we show that including a different subset of covariates in the statistical model can introduce a conditional association or bias (called "collider-stratification bias" or "selection bias" by different authors) (figure 5). In this example, we again include neuromuscular fatigue (Z 1 ) and tissue weakness (Z 2 ), and add the covariate previous injury (Z 3 ) to our statistical model. Note that previous injury is a marker for a direct cause of warming up (X) (team motivation/aggression). It is also a marker for contact sport (an indirect cause of the outcome). Therefore previous injury is associated with both the exposure and the outcome and many researchers would include it in the statistical model. Figure 5a-c show the result of including previous injury in the model graphically. The key to the process in this case lies in step 4. Because previous injury is now present in the model, its two parents are conditionally associated (because includ-ing Z 3 means the value of Z 3 is known) where they were not associated in the previous example. After step 6, warming up remains connected to the outcome and therefore the estimate of the effect of warming up on the injury would be biased. It is essential to understand that previous injury (Z 3 ) may be a very important predictor of the outcome, and techniques such as stepwise regression might strongly suggest that it be included in the model. Further, simply measuring univariate relationships and finding that Z 3 is related to both the exposure and the outcome would also suggest that it be included in the model. Finally, adding Z 3 to a model that included Z 1 and Z 2 would indeed change the effect estimate, and this is often used as a criterion to suggest that a specific covariate causes confounding bias. It is only through an understanding of the theoretical framework that one realises that including Z 3 in the model along with Z 1 and Z 2 will lead to a conditional association and a biased estimate of effect. Understanding the conditional associations naturally leads to what is sometimes known as the structural approach to bias [5,15]. Using this approach, epidemiologic biases can be categorized as either lack of conditioning on a common cause (known as confounding bias), or conditioning on a common effect of two parents (or a descendant of the common effect; known as selection bias). 
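The six steps lend themselves to a small graph routine, sketched below with the networkx library. The edge list encodes one reading of the Figure 2a structure as described in the figure caption; the node names, edges and helper function are our own assumptions, not code or an exact diagram supplied by the authors. Under this encoding, the covariate set {neuromuscular fatigue, tissue weakness} leaves warm-up dissociated from injury, while adding previous injury reconnects them, matching the conclusions in the text.

```python
# Sketch of the six-step check. The edge list below is our reading of the
# Figure 2a description in the text; names and structure are assumptions.
import networkx as nx

def effect_estimate_unbiased(dag, x, outcome, covariates):
    """Return True if X is dissociated from the outcome after Steps 1-6."""
    # Step 1: the covariates must not be descendants of X.
    if covariates & nx.descendants(dag, x):
        return False
    # Step 2: keep only ancestors of X, the outcome, or the covariates
    # (plus X, the outcome and the covariates themselves).
    keep = {x, outcome} | set(covariates)
    for node in list(keep):
        keep |= nx.ancestors(dag, node)
    g = dag.subgraph(keep).copy()
    # Step 3: delete all lines emanating from X.
    g.remove_edges_from(list(g.out_edges(x)))
    # Steps 4-5: connect parents that share a common child, then strip arrowheads.
    moral = g.to_undirected()
    for child in g.nodes:
        parents = list(g.predecessors(child))
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                moral.add_edge(p, q)
    # Step 6: delete every line touching a covariate in the model.
    for z in covariates:
        moral.remove_edges_from(list(moral.edges(z)))
    # Interpretation: no remaining path from X to the outcome => bias minimized.
    return not nx.has_path(moral, x, outcome)

edges = [
    ("Coach", "TeamMotivation"), ("Coach", "Fitness"),
    ("TeamMotivation", "PreviousInjury"), ("TeamMotivation", "WarmUp"),
    ("PreGameProprioception", "WarmUp"),
    ("Genetics", "Fitness"), ("Genetics", "ConnectiveTissueDisorder"),
    ("ConnectiveTissueDisorder", "TissueWeakness"), ("TissueWeakness", "Injury"),
    ("ConnectiveTissueDisorder", "Fatigue"), ("Fitness", "Fatigue"),
    ("Fatigue", "IntraGameProprioception"), ("Fatigue", "Injury"),
    ("ContactSport", "PreviousInjury"), ("ContactSport", "IntraGameProprioception"),
    ("WarmUp", "IntraGameProprioception"), ("IntraGameProprioception", "Injury"),
]
dag = nx.DiGraph(edges)

print(effect_estimate_unbiased(dag, "WarmUp", "Injury",
                               {"Fatigue", "TissueWeakness"}))                    # True
print(effect_estimate_unbiased(dag, "WarmUp", "Injury",
                               {"Fatigue", "TissueWeakness", "PreviousInjury"}))  # False
```

If one believed a different causal diagram (such as the Figure 6a variant discussed later), the same routine could be rerun with the amended edge list to see whether a proposed covariate set still passes.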
The typical selection bias described in observational studies is due to conditioning on a common effect (one conditions on willingness to participate), as are Berkson's bias (conditioning on admission to hospital), loss to follow-up or missing data (conditioning on presence of data; this occurs in both observational and randomized trials), some forms of Simpson's Paradox, etc. [5,12]. Indeed, we believe it is possible to represent all epidemiologic biases in DAGs; therefore, the restrictions we set out at the beginning of this article concerning an ideal study were used only as a pedagogical tool and are not necessary for this approach. Selecting a subset of covariates that minimizes the bias in the estimate of the effect requires trial and error and a sound foundation in the theoretical model. At the present time, there is no algorithm and the six-step process should be repeated until a subset of covariates is found such that X is dissociated from the outcome after the 6-step process is completed.
Additional Advantages
There are two other potential advantages to the DAG approach. First, only a subset of covariates that are associated with both exposure and outcome are necessary to yield an unbiased estimate of effect. Second, because one may require fewer covariates in the model, the statistical efficiency of the analysis is increased (i.e. there are more degrees of freedom if one uses fewer covariates).
Figure 5 (a-c): This example illustrates the effect of adding the covariate "previous injury" (Z3) to the statistical model used for the causal diagram in Figure 2a. Note that previous injury is associated with both warming up (through team motivation/aggression) and the outcome injury (through Contact Sport). After completing steps 1-4, one is left with figure 5b. Because previous injury (Z3) is included in the model, it has not been deleted from the causal diagram in Step 2, and one must join its ancestors (dotted line). Figure 5c represents the causal diagram after completing Steps 5-6. Because warm up is not dissociated from the outcome risk of injury in figure 5c, the statistical model that includes the covariates Z1, Z2, and Z3 will yield a biased estimate of warm up on the risk of injury.
Limitations to the 6-Step approach
The immediate question that always arises is how one can know the true underlying causal structure in order to draw the DAG (i.e. Step 0): if we knew it, we wouldn't have to study the disease. Although it can be a challenging exercise, the fact remains that understanding the causal structure is an essential step when one wants to know if including a covariate is likely to reduce or increase bias in the effect estimate. In other words, the DAG representing the true causal structure exists even if we do not know what it is, and all causal inferences based on statistical models are implicitly based on a causal structure; the DAG approach simply makes the assumptions explicit. As an example, the causal DAG in Figure 2a may be incorrect and one alternative is illustrated in Figure 6a. In this causal diagram, we have added a causal link from previous injury to pre-game proprioception, and indicated the additional conditional associations that occur due to this change using dotted lines. If Figure 6a represents the true causal diagram, traditional regression/stratification using only neuromuscular fatigue and tissue weakness, with or without previous injury, will introduce bias for the following reason.
Previous injury is now an ancestor of warm-up exercises (previous injury causes pre-game proprioception which causes warm-up) and is therefore not deleted in Step 2 and this leads to two important features. First, contact sport is now a common cause of warm-up (contact sport -previous injury -pre-game proprioceptionwarm-up) and of injury (contact sport -intra-game proprioception -injury) and therefore including only neuromuscular fatigue and tissue weakness will still provide a biased estimate. Second, the conditional association between Team Motivation/Aggression and Contact Sport exists whether or not we condition on previous injury because we have already conditioned on a descendant of previous injury in this DAG (i.e. warm-up). Therefore, although adding previous injury or pre-game proprioception to the statistical model would block the bias due to the common cause "contact sport", the inclusion of either of these variables does not block the conditional association that now exists between Team motivation/Aggression and Contact Sport; using the six-step algorithm illustrates this clearly for those not used to working with DAGs. In Figure 6b, we present a different causal diagram where we have added a causal link from pre-game proprioception to intra-game proprioception. Figure 7a shows the diagram after step 4 has been completed, and Figure 7b shows the result after completing all the algorithmic steps once we condition on Tissue Weakness, Neuromuscular Fatigue, Previous Injury and Contact Sport. The presence of a path through the variables Warm-up Exercise, Pre-game propri-oception (directly or through Team Motivation/Aggression), and Intra-game proprioception to Injury means that we still have a biased estimate. Authors who make causal inferences without explicitly using the DAG approach are assuming a specific DAG (i.e. causal structure) without consideration of other possibilities. Drawing causal DAGs can be challenging. Causal DAGs represent theory, and theory needs to be developed within the context of all the evidence (basic science, observational and clinical trials) available. Because of this, generating a causal DAG necessarily requires the collaboration of methodological experts, clinicians, physiologists, and others (e.g. psychologists, sociologists) depending on the particular question. The inclusion of latent (unmeasured) variables poses additional problems [21,22]. For many conditions, it is likely that even after reviewing all the evidence, we still won't have enough information to determine if one particular DAG is more appropriate than another DAG. Under these conditions, it is necessary to draw each of the possible DAGs and determine if the same choice of covariates yields an unbiased estimate for each. If not, then one should present each of the interpretations and future research will determine which causal diagram, and which interpretation is correct. Not using the causal approach because of uncertainty on which is the correct DAG simply means that one is allowing chance rather than rational deliberation to make the choice among the different causal diagrams. A further corollary of the structural approach to bias is that an understanding of biological mechanisms and basic science is necessary for appropriate epidemiological studies, and that cross-discipline collaborations should be encouraged. The DAG approach requires a "node" for each of the covariates. 
Effect modifiers or covariates that interact with other covariates in a synergistic or antagonistic manner are not currently indicated as such in a DAG. Although there is some theoretical work currently being done in this area (e.g. [23,24]), one can conceptually think of two binary variables that interact as a single variable with multiple levels, and include them as a single node in the DAG. This is somewhat analogous to treating socio-economic status as one variable in a model even though it represents the two distinct constructs of sociological and economic influences. As is known, when two variables both cause a third variable, there is interaction on either the multiplicative scale, additive scale, or both. Therefore, if a DAG were to model synergism or antagonism, one would need different DAGs for different measures of effect (e.g. risk ratio versus risk difference). Finally, issues related to sufficient and component causes have also recently been addressed elsewhere [25]. The DAG approach is not a statistical technique that yields an estimate of effect. However, it will allow users of traditional stratification and regression techniques to reduce the magnitude of the bias in the estimate. Although researchers should generally not adjust for a covariate (or a marker for a covariate) that lies along a causal pathway when assessing the total causal effect, this may not be the case for researchers interested in decomposing total causal effects into direct and indirect effects. In these cases, one may sometimes need to include covariates that lie along the causal path, but this is a process that needs to be carefully thought out or incorrect inferences may occur [26,27]. We also think it is important to highlight the role of newer statistical techniques to assess the total causal effect, like marginal structural models [28], that are often necessary in special situations, such as when the covariate is affected by exposure or when a covariate is both a "collider" and a "confounder" at the same time [29,30].
Conclusion
The traditional approach to confounding bias by determining only associations and avoiding discussions related to causation is problematic and has led to inappropriate data analysis and interpretation [10,13]. The DAG approach can be used to help choose which covariates should be included in traditional statistical approaches in order to minimize the magnitude of the bias in the estimate produced. Investigators should become aware of the other statistical causal approaches available so that the appropriate technique is used to answer the appropriate question.
Glossary
2. Common Effect (also known as collider): A common effect is a covariate that is a descendant of two other covariates. The term collider is used because the two arrows from the parents "collide" at the node of the descendant.
Figure 6 (a-b): Figure 6a is an example of an alternative causal diagram to figure 2a. The only difference between the two is an additional causal relationship where previous injury causes a decrease in pre-game proprioception (we have also included the additional conditional associations that occur as a result of this change with dotted lines). We are still interested in the causal effects of warm-up on injury risk.
Because previous injury is an ancestor of warm-up exercises (previous injury causes a decrease in pre-game proprioception, which causes an increase in warm-up exercises), it is not deleted in Step 2. This leads to two effects. First, contact sport is now a common cause of exposure and outcome. Second, there are additional conditional associations in Step 4 (dotted lines) even if "Previous Injury" is not conditioned on in the statistical model, because one is already conditioning on a descendant of previous injury (i.e. the main exposure of interest, warm-up); the effect estimate of warm-up on injury is biased if the statistical model includes only warm-up, neuromuscular fatigue and tissue weakness. Figure 6b shows the same causal diagram as 6a (without the conditional associations), but now a causal link is added from pre-game proprioception to intra-game proprioception.
Figure 7 (a-b): Figure 7a represents the causal diagram in Figure 6b after step 5 (the dark dotted line represents the additional conditional association due to the new causal link in figure 6b), and Figure 7b shows the result after step 6 if one conditions on Tissue Weakness, Neuromuscular Fatigue, Previous Injury and Contact Sport.
2014-10-01T00:00:00.000Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "31bf858a22af665d285588328b7abc7239e39491", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "CiteSeerX", "pdf_hash": "7df702b5bbefddb3defafc1a7656ebd11774a3d7", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
251767291
pes2o/s2orc
v3-fos-license
A successful treatment of a lobar atelectasis in a patient with cystic fibrosis Abstract Lobar atelectasis may be a complication of pulmonary exacerbations in cystic fibrosis (CF). There are no established guidelines on the management of this condition in patients with CF. Therapeutic bronchoscopy with recombinant human deoxyribonuclease (rhDNase) instillation has been described to be successful in patients not responding to conservative measures. We describe a case of a young man with CF, with previously mildly impaired lung function, presenting with cough, desaturation, and worsening dyspnea, persisting for over 6 weeks despite conservative therapy. Thoracic imaging showed right lower lobe atelectasis, which was successfully treated with bronchoscopy and instillation of rhDNase. Long-term resolution of the atelectasis was confirmed with chest magnetic resonance imaging follow-up. Pulmonary exacerbations in CF may be complicated by lobar atelectasis, mainly occurring as a result of mucus plugging. There are no established guidelines on the management of atelectasis in people with CF (pwCF). The most common approach consists of first-line treatment with intravenous (iv) antibiotics, associated with enhanced chest physiotherapy; if no improvement is obtained, a second-line treatment consists of therapeutic bronchoscopy with intrabronchial recombinant human deoxyribonuclease (rhDNase) instillation. 1,2 These procedures might be ineffective if persistent atelectasis has already led to parenchymal fibrosis. CASE REPORT A 21-year-old male patient with CF (R347P/4382delA), with pancreatic sufficiency, was admitted to our regional CF reference center for progressive dyspnea and worsening of pulmonary function tests. He had a good nutritional status (BMI 26), and his FEV1 values in the previous 5 years varied from 75% to 96% of predicted. During the same period he was chronically infected with methicillin-sensitive Staphylococcus aureus and intermittently with Pseudomonas aeruginosa, but his last sputum culture tested positive for methicillin-resistant S. aureus (MRSA). The patient was regularly treated with nebulized rhDNase and inhaled budesonide-formoterol for a history of seasonal allergies and asthma. Before hospitalization he was evaluated as an outpatient for frequent cough; SpO2 was 94% in room air, and FEV1 was markedly reduced (55% of predicted, as compared to 75% at the previous visit). The patient refused hospitalization and was therefore given oral cotrimoxazole. After about 6 weeks he presented to the emergency department due to the persistence of symptoms, associated with low SpO2 (91%) and dyspnea. Blood tests revealed a negative C-reactive protein and a normal white blood cell count. Total immunoglobulin E (IgE) was 452 kUA/L, while Aspergillus fumigatus-specific IgE and IgG levels as well as the Galactomannan index were negative. Chest x-ray showed a right paracardiac hypodiaphany (area of reduced radiolucency) of uncertain significance, described as a possible area of dysventilation. The triangular wedge-shaped opacity corresponding to the collapsed lower lobe was poorly visible, and accessory signs of atelectasis (downward shift of the hilum, mediastinal shift, and diaphragm obscuration) were not present. On admission, he was started on iv vancomycin and meropenem. On Day 5, pulmonary magnetic resonance imaging (MRI) was performed (Figure 1A,B), which showed right lower lobe atelectasis. 
DISCUSSION In CF, atelectasis occurs as a consequence of poor clearance of inflammatory debris, smooth muscle constriction, and edema of the bronchial walls, leading to complete intrabronchial obstruction. The atelectatic lung region is perfused, generating a physiologic shunt, which may explain the hypoxemia observed in our patient. Hypoxic vasoconstriction may act as an adaptive mechanism, diverting blood flow from the nonventilated lung region, and it may be speculated that, once fibrosis has developed, blood flow is fully diverted from the atelectatic lung regions, with the restoration of normal SpO2. 3 In our patient SpO2 did not normalize, suggesting that the atelectatic lung was still viable. Patients with atelectasis generally respond to iv antibiotics and to intensive chest physiotherapy. Even if there are no controlled trials to support the role of bronchoscopy with rhDNase instillation for the treatment of lobar atelectasis, this approach has been successfully applied in pwCF not responding to standard medical management and has been shown to achieve resolution even 7 weeks after the onset of symptoms. 1,2,4,5 Recent advances in MRI have made this tool comparable to CT in detecting morphological changes in the CF lung. MRI has the advantage of being nonionizing and therefore repeatable without any radiation risk, making it suitable for frequent longitudinal monitoring and for assessing personalized patient care. Furthermore, lung MRI also offers functional evaluation and provides ventilation and perfusion assessment in the same setting. Finally, it is possible to improve image quality with proper MRI protocols and to reduce MRI scan time with judicious planning and tailored MR protocols. 6,7 In conclusion, lobar atelectasis may be a complication of pulmonary exacerbations in CF. Bronchoscopy with rhDNase instillation represents a valid therapeutic option in pwCF failing to respond to conservative treatment, particularly in those with persistently suboptimal SpO2 values, since this nonadaptive condition is present until atelectasis becomes irreversible. In these cases, chest MRI has proved to be a valid diagnostic and follow-up tool that avoids repeated irradiation.
FIGURE 1 Chest magnetic resonance imaging T2-weighted images. (A and B) The right lower lobe atelectasis (indicated by the white arrows) in the coronal and transverse sections, respectively. (C and D) The resolution of the atelectasis after bronchoscopic instillation of rhDNase, in coronal and transverse sections, respectively.
2022-08-25T06:17:59.636Z
2022-08-24T00:00:00.000
{ "year": 2022, "sha1": "23101116c3803a61ac5fb4bbd5e7da817f4decd3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "b126b420882af3d8d39709d766e29bd745fff348", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14314449
pes2o/s2orc
v3-fos-license
Production and Flow of Identified Hadrons at RHIC We review the production and flow of identified hadrons at RHIC with a main emphasis on the intermediate transverse momentum region ($2<p_{T}<5$ GeV/$c$). The goal is to unravel the dynamics of baryon production and resolve the anomalously large baryon yields and elliptic flow observed in the experiments. Introduction This paper explores the relativistic heavy ion collisions at RHIC and the medium produced in these collisions using hadronic observables. Being most abundantly produced, hadrons define the bulk medium behavior, which is governed by soft, non-perturbative particle production. Analysis of identified hadron spectra and yield ratios allows determination of the kinetic and chemical properties of the system. Hydrodynamics models [1] have been successful in reproducing identified hadron spectral shapes and their characteristic mass dependence at low $p_T$, as well as the azimuthal anisotropy of particle emission. Notably, in order to match the data the models require rapid equilibration of the produced matter and a QGP equation of state. The particle abundances also point to an equilibrated system and are well described by statistical thermal models [2]. The chemical freeze-out at $T_{ch} \approx 170$ MeV is suggestive, as it is at the phase boundary of the transition between hadron gas and QGP, as predicted by lattice QCD calculations [3]. Above $p_T \approx 2$ GeV/$c$, hard-scattering processes become increasingly important. After the hard scattering, a colored object (the hard-scattered quark or gluon) traverses the medium produced in the collision and interacts strongly. As a result, it loses energy via induced gluon radiation. This phenomenon, known as jet quenching, manifests itself as a suppression of the yields of high-$p_T$ hadrons when compared to the production in pp collisions, and as a weakening of the back-to-back angular correlations between the jet fragments. The yield suppression is measured in terms of the nuclear modification factor $R_{AA} = (\mathrm{Yield}_{AA}/N_{coll})/\mathrm{Yield}_{pp}$, where the number of binary nucleon-nucleon collisions, $N_{coll}$, is introduced to account for the nuclear geometry. In this paper, we use the ratio $R_{CP}$, which is obtained from the $N_{coll}$-scaled central to peripheral spectra and carries similar information. Jet quenching was discovered at RHIC both in the suppressed hadron production at high $p_T$ [4] ($R_{AA} < 1$) and in the vanishing back-to-back jet correlations [5]. Another discovery, unpredicted by theory, is a large enhancement in the production of baryons and anti-baryons at intermediate $p_T \approx$ 2-5 GeV/$c$ [6,7], compared to expectations from jet fragmentation. This is in contrast to the suppression of $\pi^0$ [8]. In central Au+Au collisions the ratio $p/\pi$ is of the order of 1, a factor of 3 above the ratio measured in peripheral reactions or in pp collisions. In this region of $p_T$, fragmentation dominates the particle production in pp collisions. It is expected that fragmentation is independent of the colliding system; hence the large baryon fraction observed at RHIC comes as a surprise. At RHIC, the medium influences the dynamics of hadronization, resulting in enhanced baryon production, but the exact mechanism is not yet completely understood. This paper reviews the latest experimental results relevant to this subject. Radial flow at intermediate $p_T$. The most common conjecture that is invoked to explain the large $p/\pi$ ratios observed by PHENIX [7] is the strong radial flow that boosts the momentum spectra of heavier particles to high $p_T$. 
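As a small numerical aside (not taken from the experiments), the centrality scaling behind $R_{AA}$ and $R_{CP}$ can be sketched in a few lines of Python; the per-bin yields and the Glauber $N_{coll}$ values below are hypothetical, and $R_{CP} \approx 1$ would indicate binary-collision scaling while $R_{CP} < 1$ indicates suppression:

import numpy as np

def r_cp(yield_central, yield_peripheral, ncoll_central, ncoll_peripheral):
    # R_CP = (Yield_central / N_coll^central) / (Yield_peripheral / N_coll^peripheral)
    central = np.asarray(yield_central, dtype=float) / ncoll_central
    peripheral = np.asarray(yield_peripheral, dtype=float) / ncoll_peripheral
    return central / peripheral

# Hypothetical invariant yields per pT bin (arbitrary units) and N_coll values.
pt_bin_centers = np.array([2.25, 2.75, 3.25, 3.75, 4.25])   # GeV/c
central_yield = np.array([4.1e-3, 1.6e-3, 7.0e-4, 3.2e-4, 1.5e-4])
peripheral_yield = np.array([5.5e-5, 2.0e-5, 8.5e-6, 3.8e-6, 1.7e-6])
print(r_cp(central_yield, peripheral_yield, ncoll_central=780.0, ncoll_peripheral=9.5))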
In this scenario, the soft processes dominate the production of (anti)protons at 2-4.5 GeV/c, while the pions are primarily produced by fragmentation of hard-scattered partons. In Fig. 1 we compare the 10% central spectra of π ± , K ± ,p, and p to a hydrodynamics model [9] that has been fitted to the data. The free parameters in the model are the kinetic freeze-out temperature T f o , the transverse flow velocity β T and the absolute normalization. The line drawn through the φ-meson spectrum is the model's prediction obtained after fitting all other particle species. We see that: 1) Hydrodynamics gives a good description of the p and p spectral shapes up to ≈ 3 GeV/c, and 2) the φ-meson spectrum can be described by the same parameter set as the protons.For lighter particles, the deviation from hydrodynamics happens at lower p T . These results may lead to the conclusion that the enhanced p/π ratio is a mass effect and the intermediate p T (anti)protons are primarily produced in soft processes. We now examine the scaling of the yields in different centrality classes. We expect that for soft production the yields will scale as the number of nucleons participating in the collision, while for hard processes the scaling is with N coll . In Fig. 2 the p T distributions for (anti)protons and φ are scaled down each by their respective N coll . To isolate mass effects from baryon/meson effects we compare a heavy meson to the protons. The result is rather surprising. At intermediate p T the p + p yields scale with N coll as expected for hard processes. The φ yields do not scale. Although the shape of the p + p and φ spectra is the same and is well reproduced by hydrodynamics, the absolute yields for φ grow slower with centrality. When the central and peripheral yields are used to evaluate the nuclear modification factor (Fig. 3), the (anti)protons show no suppression (R CP ≈ 1) , while the φ are suppressed similar to π 0 . This result rules out the radial flow (and the mass) as the sole factor that is responsible for the baryon enhancement. The similarity in the centrality dependence of φ and π production suggests an effect related to the number of constituent quarks rather than the mass. The STAR experiment also observed a clear baryon/meson distinction in R CP of K * , K 0 s , Λ, and Ξ [12]. Recombination and empirical scaling of elliptic flow. Recently, several quark recombination models [13,14,15] have been proposed to resolve the RHIC baryon puzzle. In the dense medium produced in central Au + Au collisions, recombination of quarks becomes a natural hadronization mechanism. When produced from the same underlying thermal quark distribution, baryons get pushed to higher p T than mesons due to the addition of quark momenta. At intermediate p T recombination wins over fragmentation for baryons, while mesons are still dominated by fragmentation. After fitting the inclusive hadron spectra to extract the thermal component, the models are able to reproduce a large amount of data on identified particle spectra, particle ratios and nuclear modification factors. The most spectacular success of the recombination models comes from the comparison with the data on elliptic flow. At low-p T hydrodynamics describes both the magnitude and the mass dependence of v 2 . However, at p T > 2 GeV/c the mass ordering of v 2 changes,namely : v 2 (p) > v 2 (π) [16] and v 2 (Λ) > v 2 (K s ) [12]. 
In addition, the size of the signal is too big to be explained by asymmetric jet absorption [17]. The recombination models solve the problem by assigning the elliptic flow signal to the quarks instead of the hadrons. Then the baryon/meson split in $v_2$ is naturally explained. It has been demonstrated empirically that the flow per quark is universal. Recent results from the STAR experiment [18] that include the measurement of multi-strange baryons are shown in Fig. 4. A clear baryon/meson difference is observed in the data at $p_T > 2$ GeV/$c$. The results from typical hydrodynamic model calculations [19] are shown with a band. After re-scaling of both axes in Fig. 4 to represent the quark flow, the data fall on a universal curve, as demonstrated in Fig. 5. Jet correlations with leading baryons or mesons The recombination models resolve most of the baryon/meson effects observed in the data. However, from spectra, particle ratios and elliptic flow it is difficult to infer whether the recombining quarks come from the thermal bath (soft processes) or from hard scattering. To unravel the nature of the baryon enhancement and to test the recombination approach, the PHENIX experiment examined the two-particle angular correlations with an identified meson or baryon trigger particle [20]. The momentum of the trig- Fig. 6 represents an upper limit to the centrality dependence of the jet partner yield from thermal recombination. The data clearly disagree with both the centrality dependence and the absolute yields of this estimation, indicating that the baryon excess has the same jet-like origin as the mesons, except perhaps in the highest centrality bin. The bottom panel of Fig. 6 shows the conditional yield of partners on the away side. It drops equally for both trigger baryons and mesons going from p+p and d+Au to central Au+Au, in agreement with the observed disappearance [5] and/or broadening of the dijet azimuthal correlations. It further supports the conclusion that the baryons originate from the same jet-like mechanism as mesons.
Fig. 6. Yield per trigger for associated charged hadrons between $1.7 < p_T < 2.5$ GeV/$c$ for the near- (top) and away- (bottom) side jets [20]. The dashed line (top) represents an upper limit of the centrality dependence of the near-side partner yield from thermal recombination.
The description of the data in the pQCD framework would require an in-medium modification of the jet fragmentation functions. For recombination models, the experimental results imply that shower and thermal partons have to be treated on an equal basis [14]. Summary We reviewed the results on hadron production and flow in relativistic heavy ion collisions at RHIC. The production mechanisms at low $p_T$ and high $p_T$ are relatively well understood in terms of soft and hard processes, respectively. The intermediate $p_T$ region ($2 < p_T < 5$ GeV/$c$) is marked by a number of puzzling experimental observations, most notably by the baryon excess over the expectation from vacuum fragmentation functions. By comparing spectra and centrality scaling of (anti)protons and φ-mesons, we established that the excess of anti-protons with respect to pions is not due to the larger mass of the anti-proton, but is related to the number of constituent quarks. The recombination models get a beautiful confirmation in the empirical scaling relation of the elliptic flow results. Jet correlations with trigger baryons or mesons show a similar hard-scattering component in both. 
This observation is also in line with the $N_{coll}$ scaling observed in the yields of protons and anti-protons. However, it implies that protons originate from partons that experience little or no energy loss, while pions come from partons that have suffered large energy loss. This result is conceptually difficult, unless baryons and mesons have very different formation times and thus the original partons have different times to interact with the medium. Recombination models which combine hard-scattered partons with thermal ones give the most likely explanation of the experimental results as a whole. The baryon excess is clearly an effect of the medium produced in Au+Au collisions and may, through the comparison with recombination models, give evidence for its partonic nature.
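A minimal numerical sketch (hypothetical values, not STAR or PHENIX data) of the constituent-quark scaling test discussed above: dividing both $v_2$ and $p_T$ of each species by its number of constituent quarks should collapse baryons and mesons onto a single curve:

import numpy as np

def quark_scaled(pt, v2, n_quarks):
    # Rescale both axes by the number of constituent quarks.
    return np.asarray(pt) / n_quarks, np.asarray(v2) / n_quarks

pt = np.array([1.0, 2.0, 3.0, 4.0])                  # GeV/c
v2_lambda = np.array([0.06, 0.14, 0.19, 0.21])       # hypothetical baryon, n_q = 3
v2_kshort = np.array([0.05, 0.11, 0.13, 0.14])       # hypothetical meson,  n_q = 2

print(quark_scaled(pt, v2_lambda, n_quarks=3))
print(quark_scaled(pt, v2_kshort, n_quarks=2))
# If the empirical scaling holds, the two rescaled curves overlap within errors.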
2014-10-01T00:00:00.000Z
2004-11-18T00:00:00.000
{ "year": 2004, "sha1": "20d4c76a03c3cfa89254672fe4da67e17ab4b87a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "20d4c76a03c3cfa89254672fe4da67e17ab4b87a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118531135
pes2o/s2orc
v3-fos-license
An extension of Poincar\'e group abiding arbitrary acceleration The class of accelerated reference frames has been studied, on the basis of Fermi-Walker coordinates. The infinitesimal transformations connecting two of these frames has been obtained, and also their commutation relations. The outcome is an infinite dimensional extension of the Poincar\'e algebra. Although this extension turns out to be Abelian, and hence trivial, it is noteworthy that, contrarily to what happens with Lorentz boosts, acceleration boost generators commute with each other and with translation generators. Introduction According to the principle of relativity of Galileo, the laws of [Newtonian] mechanics hold in all inertial reference frames and are covariant under Galilei's transformations. Any couple of these reference frames are in uniform rectilinear motion with respect to each other. However, this principle of relativity can be extended to arbitrary Newtonian frames, that are mutually related by coordinate transformations like where R i j (t) is an orthogonal matrix and s i (t) arbitrary functions of time. Indeed, the laws of Newtonian mechanics are preserved by these transformations, provided that the necessary inertial force fields -dragging, Coriolis, centrifugal, . . . -are also included. In the special theory of relativity the laws of physics hold in all Lorentzian reference frames, the relative motion of any couple of these frames is rectilinear and uniform, and the coordinates in any pair of these frames are connected by a Poincaré transformation. Furthermore, every Lorentzian frame of reference is based on an outfit of synchronized clocks which are stationary at every point in a Euclidian reference space. In his attempt to set up a theory of gravitation consistent with his theory of relativity, Einstein initially adressed the generalization of relativity to accelerated motions [1], but he soon abandoned it in favour of the principle of general covariance which obviously allows for a much larger invariance group than Poincaré group, namely spacetime diffeomorphisms. However, as soon was pointed out by Kretschmann [2], [3] "since any theory, whatever its physical content, can be rewritten in a generally covariant form, the group of general coordinate transformations is physically irrelevant" [4]. More recently other authors have insisted in the convenience of restricting genral covariance [5], [6]. Moreover, in Kretschmann's view, special relativity is the one with the relativity postulate of largest content; indeed, its isometry group is a ten-parameter group, which is the largest group of isometry in four dimensions, whereas for generic spacetimes in general relativity the isometry group reduces to the identity. An alternative way to extend the class of Lorentzian frames could consist of requiring the reference space to be rigid (in the sense of Born), i. e. that the infinitesimal radar distance [7], [8] keeps constant. But, as it was clear very soon [9], even in Minkowski spacetime the only permited rigid motions are: (i) rectilinear uniform motions, (ii) uniform rotational motions around a fixed axis and (iii) arbitrary accelerated motions without rotation. It was clear that arbitrary rotational motions posed a genuine obstruction in two respects: the impossibility of synchronizing stationary clocks along a space path surrounding the rotation axis and in that the space geometry associated to the infinitesimal radar distance is neither Euclidean nor rigid [10]. 
To avoid this "no go" we shall here restrict to non-rotational motions with arbitrary acceleration in Minkowski spacetime and use Fermi-Walker reference frames to embody accelerated systems of reference, for they are often seen as the natural general-relativistic generalization of inertial Cartesian coordinates [11], [12]. We shall then prove that these are the only synchronous frames having a flat space. We shall then characterize the transformations connecting two Fermi-Walker coordiante systems as generalised isometries [13] of the spacetime metric and, by solving the corresponding generalised Killing equation, we shall obtain an infinite dimensional extension of Poincaré algebra which includes acceleration and may be taken as the mathematical counterpart of an extension of the principle of special relativity. Fermi-Walker coordinates Let z µ (τ ) be a proper time parametrized timelike worldline in ordinary Minkowski spacetime, which we shall take as the space origin (we take c = 1 ,) µ = 1 . . . 4 , u µ =ż µ (τ ) the unit velocity vector and a µ =z(τ ) the proper acceleration. (We refer to Lorentzian components unless the contrary is explicitely stated.) A 4-vector w µ (τ ) is said Fermi-Walker (FW) transported [14] along z µ (τ ) if If z µ (τ ) is a straight line (a µ = 0 and uniform rectilinear motion), Fermi-Walker transport coincides with parallel transport and the equation above reduces to w µ =constant. It is obvious that u µ (τ ) is FW transported along z µ (τ ) whereas, generally, it is not parallel transported. Thus FW transport is the minimal modification of parallel transport such that the proper velocity 4-vector is FW transported. On its turn, proper acceleration a µ (τ ) is FW transported only if worldline z µ (τ ) is contained in a plane, i. e. one-directional motion. Let us now consider an orthonormal tetrad e µ (α) (τ ) , α = 1 . . . 4 , which is FW transported along z µ (τ ) and such that e µ (4) = u µ . From the transport law (1) it follows that: where That is de µ For a given point in spacetime with Lorentzian coordinates x µ , the Fermi-Walker coordinates [15], [16] with space origin on z µ (τ ) are: The time τ (x ν ), given as an implicit function by The space coordinates , defined by By differentiating these two equations, it easily follows that where, for the sake of abbreviation, the ordinary vector notation in three dimensions has been adopted, namely ξ · a = 3 i=1 ξ i a i . As e µ (α) is an orthonormal base, dx µ = u µ 1 + ξ · a(τ ) dτ + 3 i=1 e µ (i) dξ i and the invariant interval in FW coordinates is The Fermi-Walker reference frame associated to z µ (τ ) As any FW transported tetrad e µ (α) is a solution of the linear ordinary differential system (1), which only depends on the origin line through u µ and a µ , we shall have that where Λ µ ν (τ ) is a solution of the differential system Therefore two tetrads, e µ (α) and e ′µ (α) , which are FW transported along the same worldline can only differ in their initial values and as, besides e µ (4) = e ′µ (4) = u µ , these initial values are connected by a space rotation being a constant orthogonal matrix. 
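The transport law and the invariant interval referred to above (equations (1) and (8) of the original) did not survive text extraction. A plausible reconstruction, assuming the standard flat-spacetime Fermi-Walker conventions and the signature diag[1,1,1,-1] used later in the text, is

\frac{dw^{\mu}}{d\tau} = \left( u^{\mu} a_{\nu} - a^{\mu} u_{\nu} \right) w^{\nu},
\qquad
ds^{2} = \delta_{ij}\, d\xi^{i}\, d\xi^{j} - \bigl( 1 + \vec{\xi}\cdot\vec{a}(\tau) \bigr)^{2}\, d\tau^{2},

which is consistent with the flat radar distance dl^2 = d\xi^2 and with the proper-time factor 1 + \xi \cdot a(\tau) quoted below; the sign conventions here are an assumption and may differ from the original.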
Combining the latter with (9) we have that Hence all FW transported tetrads along a given worldline are the same apart from an initial space rotation and, according to the definition (5-6) the Fermi-Walker coordinates based on any of these tetrads will differ at most in a constant rotation: Given a FW coordinate system based on z µ (τ ) , the worldlines ξ i =constant, τ ∈ R are the "history" of a place in the reference space associated to the FW coordinates and, as commented above, in any other FW coordinated system based on z µ (τ ) we shall still have that According to the invariant interval (8), the infinitesimal radar distance [8], [7] between two close places ξ j and ξ j + dξ j in the Fermi-Walker reference space is dl 2 = d ξ 2 . The reference space is therefore rigid and flat, and Fermi-Walker coordinates are cartesian coordinates in this space. We shall thus profit of the ordinary vector notation ξ = (ξ 1 , ξ 2 , ξ 3 ). In Lorentzian coordinates the worldline of the place ξ reads The proper time rate at this place is τ is the time ticked by a stationary atomic clock at ξ and it only coincides with τ at the origin, where ξ = 0. In general,τ = τ and usually the readings of proper timeτ by two stationary clocks at two different places ξ 1 = ξ 2 will not keep synchronized. This will happen only if which only admits a solution if all directions a(τ ) = 0 keep in the same plane. It will be thus convenient to use the synchronous time τ instead of the local proper timeτ . The factor 1 + ξ · a(τ ) is relevant in connexion with the domain of the FW coordinates, which does not embrace the whole Minkowski spacetime. Indeed, the procedure to obtain the coordinates of a point relies on solving the implicit function (1), which requires that the τ -derivative of the left hand side does not vanish, that is 1 + ξ · a(τ ) = 0 . The unit velocity vector (with respect to a Lorentzian frame) of the worldline ξ =constant in the FW reference frame at the synchronous time τ is and we shall restrict the domain of the FW coordinates to the region is the horizon of the Fermi-Walker coordinate system. As for the proper acceleration of the worldline at ξ, we have and the invariant proper acceleration a(τ, ξ) = a µ (τ, ξ) a µ (τ, ξ) is which differs from one place to another. It is worth to mention here that Einstein's statement [17]: «... acceleration possesses as little absolute physical meaning as velocity », does not hold avant la lettre. As a matter of fact, every place ξ in a FW reference space has a proper acceleration (13) which is measurable with an accelerometer. However, the laws of classical particle dynamics also hold in the accelerated reference frame provided that a field of inertial force −a µ (τ ; ξ) is included; the passive charge for this field being the inertial mass of the particle. It is only with this specification that two accelerated reference frames are equivalent from the dynamical (or even physical) viewpoint. Also notice that, as local proper acceleration is different from place to place, there is not such a thing as the acceleration of a FW reference frame and, when we use this expression, the origin acceleration should be understood. Uniqueness We shall now see that the form of the invariant interval (8) is unique in Minkowski spacetime provided that there is a synchronous reference whose reference space is flat. Proposition 1 If in a given coordinate system the Minkowski metric components are g ij = δ ij , g 4j = 0 , i, j = 1 . . . 
3 , then there exist three functions a j (τ ) such that Indeed, let (ξ j , t) be coordinates for such a reference frame, being ξ j cartesian coordinates for the flat reference space. As the frame is synchronous, the invariant infinitesimal interval is It easily follows that all Christoffel symbols vanish except and the only non-vanishing components of the curvature tensor are As in Minkowski spacetime the curvature tensor is null, we have that e φ = B(t) + ξ · a(t) , for some functions B(t) and a i (t). Now, if B(t) = 0 the time coordinate can be redefined so that dτ = B(t) dt , and the proposed result immediately follows. ✷ The following Proposition is a kind of converse result: Proposition 2 If in some coordinate system (ξ j , τ ) the Minkowski spacetime invariant interval is (8), then (ξ j , τ ) are the Fermi-Walker coordinates based on some origin worldline. Indeed, take the three functions a i (τ ) in the time rate of the interval (8), then set up the matrix W β α as indicated in the linear ordinary differential system (4) and solve it for some arbitrary initial data e µ (β) (0) . Take then u µ (τ ) = e µ (4) (τ ) as the proper velocity and obtain the worldline z µ (τ ) by integration for some initial z µ (0). It is obvious that the tetrad e µ (β) (τ ) is Fermi-Walker transported (2) along z µ (τ ) and the Fermi-Walker coordinate transformation (12) connects the given coordinates (ξ j , τ ) with Lorentzian coordinates. As the above procedure allows for arbitrary choices concerning initial data the result is not unique: the origin worldline is determined apart form the initial position z µ (0), and the initial values of the tetrad e µ (α) (0) can be arbitrarily chosen. ✷ Generalised isometries To derive explicit expressions for the transformations connecting two different FW coordinate systems would require to invert the transformation law (12), which generally is not feasible in closed form, except if the accelerations of both frames are constant and parallel to each other. In such a case the problem would be essentially one-dimensional and we could choose the space axes in the standard configuration, that is mutually parallel, with X and X ′ parallel to the accelerations; then we should proceed as in Section 3 of ref. [18]. In the general case, a i (τ ) variable and arbitrary, we can only derive expressions for infinitesimal transformations and, to this end, the notion of generalized isometry advanced by Bel [13] is helpful. In any FW frame the invariant interval has the generic form (8) where a i (τ ) are some arbitrary functions of one variable. Therefore, the transformation formulae (ξ i , τ ) ←→ (ξ ′j , τ ′ ) connecting any two FW frames must preserve the form (14), perhaps with two different triples of functions (a i (τ )) and (a ′ i (τ ′ )) , i. e. it is not an isometry because the metric is not invariant but almost invariant because only the functions a i (τ ) change whereas the overall form is preserved. To find the infinitesimal generalized isometries -or generalized Killing vectors-we write the interval as . . m being a number of arbitrary functions -in our particular case m = 3-and consider the infinitesimal transformations Then, as g µν (x ′ , a ′ i (x ′ ))dx ′µ dx ′ν = g αβ (x, a i (x))dx α dx β , and keeping only first order terms in ε , we have that or, equivalently, the generalized Killing equation [13] where " " means covariant derivative. 
In the particular case of the interval (8), which contains only three function a i (τ ) depending on only one variable, we have that the above equation reduces to cross block, The space block implies that the spatial dependence of the components X i = X i is the same as for a Killing vector of flat Euclidean metric δ ij , that is: where the usual 3-dimensional vector notation has been adopted for brevity, and f (τ ) and ω(τ ) are arbitrary functions. On its turn, the cross block gives the expressions for the three spatial derivatives ∂ i X 4 , which carry as integrability conditions that˙ where a "dot" means ∂ τ . Substituting (19) in equation (17) and including (20), we arrive at where g(τ ) is an arbitrary function. Finally, introducing the expressions (19) and (21) in the time block and including the separation of space and time variables, we obtain the following conditions on g, f and ω: These equations are solved in Appendix A in terms of 4-dimensional variables, namely the 4-vector f µ (τ ) = f , g and the skewsymmetric tensor Ω µν (τ ) formed with F =˙ f + g a as electric part and ω as magnetic part -see equation (39). The solutions (40) and (41) depend on ten constant parameters, f µ 0 and Ω 0 αβ , plus three arbitrary one-variable functions, A i (t). Introducing then these solutions in the expressions (19) and (21), we have that the infinitesimal generator is where∂ i is the partial derivative with respect to ξ i and∂ 4 = 1 1 + ξ · a(τ ) ∂ τ , that is a kind of normalized partial derivatives. This infinitesimal generator acts on a manifold coordinated by (ξ j , τ, [a i (t)]) , where a i ∈ C 0 (R) and (ξ j , τ ) ∈ R 4 satisfy the condition 1 + ξ · a(τ ) > 0 . As the generator depends on the constant parameters f µ 0 and Ω 0 αβ and three arbitrary functions A i (t), we can separate this dependence as where: (The matrices Λ ν µ (τ ) and G µ ν (τ, t) are defined in Appendix A.) The derivation of the Lie brackets between pairs of commutators is simple and we shall omit the details. However, as it is tedious and intricated, we explicite some useful intermediate formulae in Appendix B. The commutation relations are: Thus, the algebra of the infinitesimal transformations connecting Fermi-Walker coordinate systems is an abelian extension of Poincaré algebra. Conclusion and outlook We have introduced a class of reference frames with arbitrary accelerated motion, namely Fermi-Walker frames. Each one is determined by the worldline representing the frame's spatial origin and, as it can be easily checked, if the origin is in uniform rectilinear motion, the reference frame is Lorentzian. Each Fermi-Walker frame is characterized by the coordinates τ , ξ 1 , ξ 2 and ξ 3 , and also by the three components of the [proper] acceleration of the origin. These quantities are three functions of time which are measurable from inside the frame, i. e. without referring to anything external, by means of accelerometers. The transformations connecting the coordinates of any pair of frames in the Fermi-Walker class preserve the form (8) of the spacetime interval, maybe with different functions a i (τ ). Thus we refer to these transformations as generalised isometries. Infinitesimal generalised isometries satisfy the generalised Killing equation (15), whose solution is an infinite dimensional Lie algebra that contains Poincaré algebra and acceleration boosts as well. 
A close look at the commutation relations reveals that it is an Abelian extension of Poincaré algebra; hence it is rather trivial from a mathematical viewpoint. Nevertheless it is curious that, contrarily to Lorentz boosts, acceleration boosts commute with each other and with any other Poincaré generators. We must also remark that the notion of generalized isometry [13] permits to go beyond Kretschmann's idea that, since special relativity admits the widest isometry group, it contains the largest relativity postulate. Our approach here has led to an intermediate group, namely the group of generlised isometries of the interval (8), which is larger than Poincaré group but much smaller than the whole diffeomorfism group. where W µ ν is the matrix in (4). Consider now the solution Λ µ α (τ ) of (10). As the tetrad e µ (α) (τ ) = Λ µ α (τ ) also satisfies equation (4), we shall have thatΛ and, being Λ µ ν a Lorentz matrix, its inverse L µ ν = Λ ν µ is a solution oḟ where the fact that W β µ = −W β µ has been included. (Indices are raised and lowered with η µν = diag[1, 1, 1, −1].) Hence, L µ ν (τ ) C ν = C ν Λ µ ν (τ ) , with C ν constant, is a solution of the homogeneous part of equation (32). We can then solve the complete equation by the method of variation of constants, so obtaining: acts as a kind of matrix Green function. This solves the first pair of equations (31) provided that F (τ ) is known. It is worth noticing that, although the matrices Λ µ ν (τ ) , τ ∈ R are not in general a one-parameter subgroup of Lorentz group (except in the case of one-directional motion, as commented above), the matrices G µ ρ (τ, t) do have the group property: and moreover: To solve the second pair of equations (31) we need the following result whose proof is straightforward. Lemma 1 Let Ω 0 µν be a skewsymmetric matrix and Λ µ ν (τ ) a solution of (33). Then the matrix is skewsymmetric and satisfiesΩ Consider now the skewsymmetric matrix Ω µν set up with ω and F as magnetic and electric parts, respectively In terms of it, the second pair of equations (31) becomes: where A µν is a skewsymmetric matrix whose coefficients all vanish except A i4 = −A 4i = −A i (τ ) . Lemma 1 gives the general solution of the homogeneous part of the latter equation and the solution of the complete equation easily follows by the method of variations of constants, namely where G ρ µ is obtained by raising/lowering the indices in the above matrix Green function (36), and the definition of A µν has been included. Finally, as F µ (t) = Ω µ 4 (t) , equation (35) leads to where (37) has been included. Appendix B Although we do not give an explicit derivation of the commutation relations (30), we next list some formulae which are of great help to this task. From equations (28) and (29) it easily follows that: and also: ∆ j (t, t, ξ) = ξ j and ∆ 4 (t, t, ξ) = 0 The commutators between the normalized partial derivatives and the functional derivatives are It is also easy to see that: On this basis we can obtain a differential equation satisfied by the functional derivative of Λ ν µ (τ ), namely and, as the initial condition is Λ ν µ (0) = δ ν µ , the initial condition for the above differential system is that the functional derivative of Λ ν µ (0) vanishes. Whence it easily follows that Besides, the 4-vector k ν (τ, ξ) defined in (26) has the nice properties that
2015-12-23T13:35:45.000Z
2015-12-23T00:00:00.000
{ "year": 2015, "sha1": "c4bcbd1529d944273fc14a8919b3322f35e34102", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c4bcbd1529d944273fc14a8919b3322f35e34102", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
212638200
pes2o/s2orc
v3-fos-license
Fine-Grained Lung Cancer Classification from PET and CT Images Based on Multidimensional Attention Mechanism Lung cancer ranks among the most common types of cancer. Noninvasive computer-aided diagnosis can enable large-scale rapid screening of potential patients with lung cancer. Deep learning methods have already been applied for the automatic diagnosis of lung cancer in the past. Due to the restrictions caused by single-modality image datasets, as well as the lack of approaches that allow for a reliable extraction of fine-grained features from different imaging modalities, research regarding the automated diagnosis of lung cancer based on noninvasive clinical images requires further study. In this paper, we present a deep learning architecture that combines fine-grained features from PET and CT images to allow for the noninvasive diagnosis of lung cancer. The multidimensional (regarding the channel as well as the spatial dimension) attention mechanism is used to effectively reduce feature noise when extracting fine-grained features from each imaging modality. We conduct a comparative analysis of the two aspects of feature fusion and attention mechanism through quantitative evaluation metrics and the visualization of the deep learning process. In our experiments, we obtained an area under the ROC curve of 0.92 (balanced accuracy 0.72) and a more focused network attention, which shows the effective extraction of fine-grained features from each imaging modality. Introduction In the 21st century, cancer is still considered a serious disease as the mortality rates are high. Among all cancer types, lung cancer ranks first regarding morbidity and mortality [1,2]. There are two main categories of lung cancer: non-small-cell lung cancer (NSCLC) and small cell lung cancer (SCLC). For non-small-cell lung cancer, a subcategorization into lung squamous cell carcinoma (LUSC) and lung adenocarcinoma (LUAD) is further used. These types of cancers account for approximately 85% of lung cancer cases [3]. Compared with the diagnosis of benign and malignant nodules, a further fine-grained classification of lung cancers into LUSC, LUAD, and SCLC is of great significance for the prognosis of lung cancer. Accurately determining the category of lung cancer in the early diagnosis directly influences the effect of the treatment and thus the patients' survival rate [1,4]. Positron emission tomography (PET) and computed tomography (CT) are both widely used noninvasive diagnostic imaging techniques for clinical diagnosis in general and for the diagnosis of lung cancer in particular [4]. Immunohistochemical evaluation is considered the gold standard for lung cancer classification. However, this procedure requires a tissue biopsy, an invasive procedure with the inherent risk of a delayed diagnosis and thus exacerbation of the patient's pain. Advances in artificial intelligence research have enabled numerous studies on the automatic diagnosis of lung cancer. The use of data in lung cancer-type classification is roughly divided into three categories: CT and PET image data as well as pathological images [5]. The well-known data science community Kaggle provides high-quality CT images for participants with the task of distinguishing malignant from benign pulmonary nodules. Kaggle competitions have repeatedly produced excellent deep learning approaches for these tasks [6,7]. 
With the progress in research on automatic lung cancer diagnosis, studies are no longer limited to the classification of benign and malignant nodules, and data sets are no longer limited to CT images [8][9][10][11][12]. Wu et al. [9] use quantitative imaging characteristics such as statistical, histogram-related, morphological, and textural features from PET images to predict distant metastasis of NSCLC, which shows that quantitative features based on PET images can effectively characterize intratumor heterogeneity and complexity. Two recent publications propose the application of deep learning to pathological images to classify NSCLC and SCLC [10] and to classify transcriptome subtypes of LUAD [11]. The complexity of the clinical diagnosis of lung cancer is also characterized by the wide range of imaging modalities employed in the diagnosis [13,14]. Previous research has already proved that deep learning approaches can not only use the feature distribution patterns from different pulmonary imaging modalities but even merge different features to achieve computer-aided diagnosis. Liang et al. [15] employ multichannel techniques to predict the IDH genotype from PET/CT data using a convolutional neural network (CNN), while other approaches use a parallel CNN architecture to extract several features of different imaging modalities [16,17]. Compared with the classification of benign and malignant nodules, the classification of the three types of lung cancer from medical images is more suitably framed as a fine-grained image recognition problem, as diverse distributions of features and potential pathological features need to be considered. Because fine-grained features need to be extracted from the images while the lesion region is only a small part of the whole image, the deep learning framework is susceptible to feature noise. At present, most methods based on various deep learning frameworks have proved to have a certain bottleneck in fine-grained problems. In order to solve this problem, previous research mainly implements the attention mechanism along the two dimensions (channel and spatial) of the feature representation. The channel attention mechanism models the relationship between feature channels [18], while the spatial attention mechanism ensures that noise is suppressed by weighting the feature representation spatially [19][20][21]. So far, the spatial attention mechanism has been used in medical image processing to enhance extracted features [20,21]. The channel attention mechanism has been used in the detection and classification of pulmonary disease [22,23]. The presentation of these attention mechanisms illustrates the sources of feature noise from different perspectives. There are few related studies on how to use the attention mechanism more effectively on images from different imaging modalities, so deep learning models based on multimodality datasets still struggle with fine-grained problems. In this paper, we use noninvasive clinical images to achieve the computer-aided diagnosis of fine-grained lung cancer on the basis of deep learning. Our network architecture consists of two parallel three-dimensional DenseNets, and each DenseNet corresponds to one input imaging modality. To more effectively extract the fine-grained features in the different modalities, we combine the 3D DenseNet with multidimensional (channel and spatial) attention mechanisms to further enhance the extraction of fine-grained features. 
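As a rough illustration of this two-branch design (a minimal Keras sketch, not the authors' implementation; the backbone layers, feature sizes, and input shape are placeholders), the overall wiring could look like:

import tensorflow as tf
from tensorflow.keras import layers

def single_path_backbone(inputs, name):
    # Placeholder for one 3D DenseNet branch with attention; here a tiny 3D CNN.
    x = layers.Conv3D(32, 3, padding="same", activation="relu", name=name + "_conv1")(inputs)
    x = layers.MaxPooling3D(2, name=name + "_pool1")(x)
    x = layers.Conv3D(64, 3, padding="same", activation="relu", name=name + "_conv2")(x)
    return layers.GlobalAveragePooling3D(name=name + "_gap")(x)

ct_in = tf.keras.Input(shape=(16, 112, 112, 1), name="ct")
pet_in = tf.keras.Input(shape=(16, 112, 112, 1), name="pet")
ct_feat = single_path_backbone(ct_in, "ct_branch")
pet_feat = single_path_backbone(pet_in, "pet_branch")

fused = layers.Concatenate()([ct_feat, pet_feat])      # stand-in for the gated fusion
output = layers.Dense(3, activation="softmax")(fused)  # LUAD / LUSC / SCLC
model = tf.keras.Model([ct_in, pet_in], output)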
is network architecture is used to extract features from different imaging modalities in parallel. rough the fusion of features, the fine-grained feature representation of different modalities is used to achieve the final classification. We evaluate our method and prove the effectiveness of our fine-grained lung cancer classification approach. Furthermore, the visualization experiment of deep learning network reveals the benefits of different attention mechanisms for different imaging modalities, demonstrating the effectiveness of multidimensional attention. Methods e network construction is mainly divided into the following two parts: (1) e multidimensional attention mechanism proposed in the single-path network architecture is the method of fine-grained feature extraction for each modality. (2) On the basis of the single-path network architecture, a parallel network architecture is established to achieve parallel extraction and fusion of multimodality features. Fine-Grained Feature Extraction Network Based on Multidimensional Attention Mechanism. 3D CNNs [24] were used for early cancer detection to preserve the spatial relationship between neighboring CT slices [25,26]. DenseNet [27] has been applied to numerous problems within the medical field [28,29] because of its connectivity pattern and the small number of parameters needed. For these reasons, we use 3D DenseNet as our baseline model in the approach presented in this paper. To address the problem of noise in the extraction of finegrained features, we proposed a multidimensional attention mechanism embedded in a single-path network. Our network structure consists of three main components: a 3D DenseNet block, SE block, and a spatial attention-gated module. Figure 1 shows the structure of our single-path model. e main part of our network is composed by a 3D DenseNet [27]. Each of the dense block consists of a specific number of three-dimensional convolutional layers. e parameter quantity of DenseNet is determined by the feature channel (growth rate k) output by each convolutional layer. To ensure feature depth, the number of convolutional layers is set to 4 in different dense blocks. e k is set to 16, making the parameter in the network a small number to mitigate overfitting. e feature map after each dense-block contains all features of the previous convolutional layer. e SE block [18] is used after each dense block to employ channel attention. e SE block in the 3D model is calculated according to equation (1) as follows: Gap refers to the 3D global average pooling operation and σ refers to the sigmoid function. W 0 ∈ R C/r×C and W 1 ∈ R C×C/r compose a multilayer perceptron (MLP) with one hidden layer and r as the reduction ratio. e spatial attention-gated module [20] uses high-level semantic features and the feature after each SE block to generate the corresponding spatial mapping. Moreover, it weights all feature channels spatially to suppress noise originating from a nonlesion area. e spatial attention mechanism computes as defined in equation (2). e flow chart of the attentiongated module is shown in Figure 2: where δ denotes the ReLU activation function, ⊙ denotes the element-wise multiplication, and U denotes the upsampling operation. F l refers to a feature map from different SE blocks. M l (F l ) ∈ R H1×W1×Z1×G , M g (F) ∈ R H2×W2×Z2×G , and M c ∈ R H1×W1×Z1×G describe the convolution operation with the specific channel output. 
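As a concrete illustration of the channel-attention computation in equation (1) above, a 3D squeeze-and-excitation block can be sketched in Keras as follows (an illustrative sketch, not the authors' code; the reduction ratio, tensor shapes, and layer choices are assumptions):

import tensorflow as tf
from tensorflow.keras import layers

def se_block_3d(x, reduction_ratio=4):
    channels = x.shape[-1]
    # Squeeze: 3D global average pooling over the depth and spatial dimensions.
    s = layers.GlobalAveragePooling3D()(x)
    # Excitation: two-layer MLP with a bottleneck of size channels // r, then sigmoid.
    s = layers.Dense(channels // reduction_ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    # Reshape to (1, 1, 1, C) and rescale every feature channel.
    s = layers.Reshape((1, 1, 1, channels))(s)
    return layers.Multiply()([x, s])

# Usage on a hypothetical dense-block output of shape (depth, height, width, channels):
feature_map = tf.keras.Input(shape=(16, 28, 28, 64))
recalibrated = se_block_3d(feature_map, reduction_ratio=4)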
As shown in Figure 2, the spatial attention mechanism generates the attention mapping through the element-wise add operation following the sigmoid activation function. Subsequently multiplied by F l , the spatial weighted feature representation is generated. As F l holds the feature map after SE block, we obtain the feature map weighted among the feature channel and spatial through this operation. e global average pooling operation used after each attentiongated module is to achieve the feature dimension reduction. Parallel CNN Architecture Based on Multimodality Feature Fusion. In this section, we employ a parallel network architecture to extract and fuse features from multimodality data. e overall network structure and the flow chart of GMU are shown in Figure 3. As illustrated in Figure 3, each single-path network equals the single-path network described before. We employ the gated multimodal unit (GMU) fusion strategy [30] for the fusion of different modality features. In contrast to the widely used connection operation, GMU allows to use hidden structures and gate controls to learn the intermediate representation of the multimodality features, thus enabling the prediction layer to assign weights to features that have intrinsic associations better. e calculation process of GMU is shown in the following equations: Complexity where x c refers to the feature extracted from the CT image, while x p refers to the feature extracted from the PET image; H 1 and H 2 are the hidden states that are reached after the fully connected layer with ReLU activation function δ. Let the W 2 , W 3 ∈ R C′×C int and W 4 ∈ R C′×2C int . [·, ·] refers to the connection operation and Z refers to the nonlinear weight learned from the combined features, which reveal the intrinsic relationship between the two modalities. e fused feature H is finally constructed by a linear weight between H 1 and H 2 . Inspired by the deep-supervision [31] method, the loss from each single-path network is, respectively, calculated and finally integrated with the loss from the feature fusion representation to obtain the joint optimization. Under the premise of a final classification, this training method forces the network to extract better high-level features from each modality for the generation of spatial attention to avoid trapping in a local minimum because of the use of feature from each level. Experiments and Results Our experiments mainly demonstrate the methods presented in this paper considering three aspects: (1) the validity of multimodality data. (2) e validity of multidimensional attention mechanism on each modality. (3) e validity of the feature fusion strategy. We evaluate the results under certain evaluation criteria to reflect the effectiveness of these methods. Regarding experimental details, batch normalization is employed prior to the Leaky-ReLU activation function [32]. e stochastic gradient descent (SGD) with a momentum of 0.9 is the optimizer. In the final fully connected layer of DenseNet, L1-regularization and dropout strategies are used to prevent overfitting. is framework is executed using Keras under a TITAN V 12 GB GPU. To demonstrate the generalization power of our proposed network, we statistically analyzed the performance of the model in tenfold cross-validations. e area under the ROC curve (AUC), a metric that is widely used in medical image classification, objectively reflects the ability to classify positive and negative samples correctly. 
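The gated multimodal fusion described above can be sketched as follows (an illustrative Keras sketch, not the authors' implementation; the hidden dimension and input feature sizes are assumptions):

import tensorflow as tf
from tensorflow.keras import layers

def gated_multimodal_unit(x_ct, x_pet, hidden_dim=128):
    # Hidden representation of each modality.
    h_ct = layers.Dense(hidden_dim, activation="relu")(x_ct)
    h_pet = layers.Dense(hidden_dim, activation="relu")(x_pet)
    # Gate learned from the concatenated inputs; it weights the two modalities.
    z = layers.Dense(hidden_dim, activation="sigmoid")(layers.Concatenate()([x_ct, x_pet]))
    one_minus_z = layers.Lambda(lambda t: 1.0 - t)(z)
    # Fused feature: gate-controlled combination of the two hidden states.
    return layers.Add()([layers.Multiply()([z, h_ct]),
                         layers.Multiply()([one_minus_z, h_pet])])

ct_features = tf.keras.Input(shape=(256,), name="ct_features")
pet_features = tf.keras.Input(shape=(256,), name="pet_features")
fused = gated_multimodal_unit(ct_features, pet_features)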
In addition, accuracy is also used as a criterion of the model. e final performance of the model is given by the average value of ten cross-validations. Data Preprocessing. e PET and CT data used in our experiments are provided by the Department of Radiology of the Henan Provincial People's Hospital, a governmental and public medical institution in China. In case patients explicitly requested that their data may not be shared for research purposes, the respective data samples were excluded when creating the dataset. For all data samples, the corresponding patient has a confirmed diagnosis. For the patients in our dataset, both CT and PET examinations are implemented in the same stage to ensure that the lesion's tissue morphology and metabolic levels are consistent. e datasets consist of data samples of 397 patients in total, 91 patients with SCLC, 103 patients with LUSC, and 203 patients with LUAD. Example lesion slices for three different types of lung cancer from CT and PET examinations are shown in Figure 4. For three types of lung cancer, not only the lesion and its surrounding areas have important discriminative information but also some global information (such as location information), which is also helpful for classification in clinical diagnosis. So we use the whole image as input of CNN to preserve useful information and extract fine-grained features. For each patient's lesion, a varying number of slices (between 39 slices at maximum and 3 slices at minimum) were available in the direction of the vertical axis, which poses a variable input scale. We defined a fixed slice amount (P) for network input and provided the corresponding number of slices through sampling along the direction of the vertical axis in the 3D lesion area. PET and CT devices obtain images of different resolution: for CT images, the resolution of each slice in the direction of the vertical axis of the 3D image is 512 × 512 pixels; for PET images, this resolution is 256 × 256 pixels. We resize each slice to 112 × 112 pixels and normalize the range of pixel values to [0, 255]. rough this preprocessing, data samples of each modality are converted to a 3D image of size 112 × 112 × P. As a small data set, we used data augmentation during the network training. rough the random combination of flipping up or flipping right, it is equivalent to expanding the data set by 4 times. Analysis of Multimodality Data Validity. In the first experiment conducted, we use a single-modality model, a 3D DenseNet, on either a PET or a CT dataset. For the preliminary evaluation regarding the effectiveness of multimodality (both CT and PET) approach, we use the multimodality feature network named MF-DenseNet. Each parallel network of MF-DenseNet is equal to the 3D Den-seNet with four dense blocks and uses the GMU as the feature fusion. e results are shown in Table 1. e variance term in the table reflects the variance of the AUC values between each round of cross-validation, and the average score term reflects the mean value of the AUC between each round of cross-validation. e balanced accuracy evaluates the balanced performance. e experimental results show that the extraction and fusion of features from different modalities improve the performance considerably. e best average AUC score for single-modality model was reported as 0.678, which is achieved by the PET dataset. Comparing the performances achieved for CT and PET data shows that features from PET images even more facilitate for the classification. 
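The evaluation metrics reported here (one-vs-rest multi-class AUC and balanced accuracy) can be computed, for example, with scikit-learn; the labels and predicted probabilities below are purely hypothetical:

import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

y_true = np.array([0, 0, 1, 2, 1, 2, 0, 1])      # class indices: 0=LUAD, 1=LUSC, 2=SCLC
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.2, 0.7],
                   [0.3, 0.5, 0.2],
                   [0.2, 0.3, 0.5],
                   [0.6, 0.3, 0.1],
                   [0.4, 0.4, 0.2]])              # each row sums to 1

auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
bal_acc = balanced_accuracy_score(y_true, y_prob.argmax(axis=1))
print(auc, bal_acc)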
e combination of multimodal features has achieved the best performance under both AUC and accuracy verification metrics. e average AUC score of our MF-DenseNet is 0.810, and the accuracy is 0.68. Smaller variances between each round of cross-validation also show the effectiveness of multimodality data. ey further demonstrate that different modalities show different feature distribution patterns, which need to be extracted. From the perspective of balanced accuracy indicators, our model also has a relatively balanced performance. Analysis of Multidimensional Attention Mechanism Validity. To extract fine-grained features and further improve the network performance, we propose a multidimensional attention mechanism for MF-DenseNet. MFSE-DenseNet (r) consists of the MF-DenseNet with the SEblock, for which the parameter r indicates the reduction ratio of the SE-block. e MFSA-DenseNet on the other hand consists of MF-DenseNet with a spatial attention mechanism. e MFSCA-DenseNet employs both a spatial attention mechanism and a channel attention mechanism. e results are listed in Table 2, and the ROC curve of the different attention mechanisms is shown in Figure 5. Comparing the MFSE-DenseNet (r � 4) and the MFSE-DenseNet (r � 16) with the MF-DenseNet shows that the channel attention mechanism has the ability to improve the overall performance. Although the AUC values of the model are similar between different reduction rates, it can be seen from the variance that the model has a more stable generalization performance when r � 4. e performance of the MFSA-DenseNet can be improved by the spatial attention, but the AUC value between classes remains unbalanced. Under these conditions, the best AUC score of 0.920 (accuracy � 0.82) was achieved using the MFSCA-DenseNet (r � 4). Although variance of 10-fold crossvalidation is not the smallest (variance � 0.05), it is also close to variance of MFSA-DenseNet (variance � 0.04). In addition to the highest average AUC and accuracy score, this model also provides a more generalized performance through the smaller variance value of cross-validation. By analyzing the ROC curve, when the false-positive rate is reduced, that is, the misdiagnosis rate reduces, the SE module is less sensitive to LUAD than the other two types. When the reduction rate increases, the sensitivity of the model to SCLC increases simultaneously, but the sensitivity of the model to LUAD and LUSC decreases. On the contrary, the spatial attention module has a high sensitivity to LUAD when the misdiagnosis rate decreases. is also reflects the advantages of the two attention mechanisms in feature extraction for different categories. rough the combination of the two-dimensional attention mechanisms, our model relieves the sensitivity difference between categories, which also balances the constraints of the fine-grained feature extraction between categories. We will conduct a more detailed discussion and analysis in the Discussion section. Evaluating Different Feature Fusion and Loss Supervision Strategies. We compare the following feature fusion strategies: (1) optimization strategy and (2) fusion strategies. Deep-supervision strategy [31] has been introduced as an effective method for fine-grained feature extraction from a Complexity 7 single modality. e idea is to implement loss supervision on different feature outputs to achieve deeper optimization of the network. 
Inspired by this idea, we apply separate loss supervision to the output of each modality and combine it with the loss on the final fused feature to achieve joint optimization. The loss supervision applied after the high-level semantic features of each modality enhances the effectiveness of the spatial attention mechanism. For the multimodality feature fusion, we employ the GMU as the fusion strategy. The quantitative results of this experiment are listed in Table 3. In this table, joint optimization refers to the loss supervision in which the loss of each single-path network is integrated with the loss of the final fused feature to obtain the joint optimization. The connection entry refers to fusing the features by direct concatenation, while the GMU models the correlated features of the two modalities. As can be seen from the results in the table, the performance obtained with the GMU (variance = 0.05) shows smaller fluctuations in the model predictions than the feature fusion by the connection operation (variance = 0.12). The result shows that the combination of GMU and joint optimization provides the best end-to-end prediction for multimodality data. Visualization Experiment and Discussion. In order to verify the role our attention mechanism plays for each modality, we use Grad-CAM [33] to generate class activation maps (CAMs) for the network. More concentrated and precise CAM responses mean a greater reduction of noise in the feature extraction. Figures 6 and 7 show the CAM on each 2D slice. We visualize the CAMs in the different modalities and conclude that the multidimensional attention forces the network to focus on the lesion area and thereby reduces feature noise. Thus, the multidimensional attention mechanism accurately extracts features from the lesion area while excluding interference from the surrounding tissue. This is especially important when the entire image, instead of a segmented image, is used in the automatic diagnosis. To make optimal use of these densely distributed features for the classification, it is necessary to ensure an accurate extraction of these characteristics from the lesion area. It can be seen from the results that our proposed attention mechanism better concentrates the features extracted by the network in certain areas. Because of the differences in the feature distributions of the various imaging modalities, we further investigate the impact that the two attention mechanisms have on the network's CAM. The results of this comparison are shown in Figures 8 and 9. Our observations show that the type of attention mechanism has a strong influence on the CAMs of the different modalities. For CT images, the CAM generated by the spatial attention mechanism appears similar to the CAM of models without an attention mechanism. In contrast, the application of the channel attention mechanism leads to a concentrated CAM with the least amount of feature noise, but it lacks localization accuracy and deviates from the true position of the lesion area. For PET images, the channel attention mechanism cannot focus the attention map; the spatial attention mechanism, however, proves useful in this regard. CT images show intricate structures that are represented by complex spatial features. While their identification poses a challenging task, modelling the feature weights along the channel dimension is more effective here. In contrast, PET images, which are close to binary images, contain fewer feature types, and these are difficult to distinguish on the basis of the feature channel.
However, the spatial dimension facilitates the modelling of feature weights in PET images. This experiment and the different behavior we observed for PET and CT images demonstrate the complementarity of the two attention mechanisms. Using LUAD, whose sensitivity fluctuates the most (Figure 5), we try to explain why attention mechanisms in a deep learning network are effective. The metabolic level of the lesion area is measured in PET images using the standardized uptake value (SUV). This measurement plays an important role in clinical diagnosis because, in general, LUSC and SCLC have higher SUV values than LUAD. Because the channel attention mechanism is poor at localizing features in PET images, models based on it are not sensitive to LUAD. The spatial attention mechanism extracts features from the PET image more effectively, which greatly improves the sensitivity of the model to LUAD. Compared with LUAD, SCLC lesions have low density, and there is no clear edge information in the CT image. These clinical features of SCLC make models based on spatial attention mechanisms less sensitive to it. Channel attention can more effectively extract complex features from CT images and reduce feature noise, so it performs better on SCLC. Different types of lung cancer have different characteristics in the different imaging modalities, which demonstrates the necessity of using multimodal images. On the other hand, the complexity of feature extraction from the different modalities also illustrates the necessity of a multidimensional attention mechanism for the different image feature extractions. Conclusions. In this paper, we propose an approach for the classification of lung cancer using multimodality noninvasive clinical images (CT and PET). A parallel network for automatic lung cancer diagnosis is proposed. Furthermore, we optimize the network for the extraction of fine-grained features in both the channel and spatial dimensions, and we utilize the GMU to account for the intrinsic correlation between the different modalities. We consider two attention mechanisms in the different modality images and visualize the results to provide a comparison between them. In future work, we will address the following topics to improve our approach further: we plan to expand the dataset used in training to reach a clinically applicable level. In addition, we will collect more segmentation labels for the data in our dataset and complete an objective evaluation of our weakly supervised detection approach. Data Availability. Part of the data used in this research can be obtained from https://pan.baidu.com/s/1FBH7WZ5PoeggvcJrvX_0ug. Conflicts of Interest. The authors declare no conflicts of interest.
2020-01-23T09:08:13.854Z
2020-01-20T00:00:00.000
{ "year": 2020, "sha1": "5dcc7b395422986abd851ef785dcc9e47febbdd8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/6153657", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "49b1c9b08bcd3751406f5b4faa87182123185903", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119274052
pes2o/s2orc
v3-fos-license
Developing Atmospheric Retrieval Methods for Direct Imaging Spectroscopy of Gas Giants in Reflected Light I: Methane Abundances and Basic Cloud Properties Upcoming space-based coronagraphic instruments in the next decade will perform reflected light spectroscopy and photometry of cool, directly imaged extrasolar giant planets. We are developing a new atmospheric retrieval methodology to help assess the science return and inform the instrument design for such future missions, and ultimately interpret the resulting observations. Our retrieval technique employs a geometric albedo model coupled with both a Markov chain Monte Carlo Ensemble Sampler (emcee) and a multimodal nested sampling algorithm (MultiNest) to map the posterior distribution. This combination makes the global evidence calculation more robust for any given model, and highlights possible discrepancies in the likelihood maps. As a proof-of-concept, our current atmospheric model contains 1 or 2 cloud layers, methane as a major absorber, and a H$_2$-He background gas. This 6-to-9 parameter model is appropriate for Jupiter-like planets and can be easily expanded in the future. In addition to deriving the marginal likelihood distribution and confidence intervals for the model parameters, we perform model selection to determine the significance of methane and cloud detection as a function of expected signal-to-noise in the presence of spectral noise correlations. After internal validation, the method is applied to realistic spectra of Jupiter, Saturn, and HD 99492 c, a model observing target. We find that the presence or absence of clouds and methane can be determined with high confidence, while parameter uncertainties are model-dependent and correlated. Such general methods will also be applicable to the interpretation of direct imaging spectra of cloudy terrestrial planets. INTRODUCTION Space-based telescopes equipped with coronagraphic imagers can separate light scattered by orbiting planets from that of their primary stars. The detection of light that penetrates deeply into an atmosphere rather than Roxana.E.Lupu@nasa.gov merely skimming its upper layers, as with transit methods, potentially permits more extensive and informative characterization of atmospheric gaseous absorbers as well as cloud and haze layers. However the interpretation of the scattered light signal will in practice be limited by a multitude of uncertainties beyond the basic limitations 2 of data quality. Among these are the uncertain or unknown planetary radii, masses, and cloud layers. Here, in the first of what we plan to be a series of papers, we present the initial development of an atmospheric retrieval methodology that quantifies the resultant uncertainties and clarifies the precision with which the planet's gravity, composition, and cloud structure and other parameters can be discerned. Direct imaging offers the possibility of characterizing planets around nearby stars and at larger orbital distances than is possible for transit observations. Directly imaged planets see less stellar irradiation than traditional transit observation targets and can either be young, warm, and self-luminous, or older and much colder than those studied by transit methods. While a multitude of space coronagraph missions have been studied or proposed over the last two decades, the only mission currently in development by NASA with the capability of imaging cool giant planets in reflected light is WFIRST (Spergel et al. 2015). 
Current estimates are that a coronagraph-equipped WFIRST mission will be able to obtain photometry and spectra for at least a dozen known radial velocity (RV) planets as well as search for lower mass planets (Traub et al. 2016). An example of the diversity of the known RV planets favorable for direct imaging is shown in Figure 1. This sample was drawn from the Exoplanet Encyclopedia and will likely increase with future discoveries from RV or WFIRST surveys. In this figure the known M sin i, measured by RV methods, is plotted against the estimated blackbody radiating temperature (or effective temperature) in order to illustrate the phase space of atmospheric conditions that might be expected among these most favorable planets. The effective temperatures have been calculated using an evolution model for the range of masses and the age ranges of the stars, accounting for both internal heat sources and the incident flux (Marley et al. 2014). The planets' inclinations (i) will be determined from the direct imaging observations, thereby constraining their approximate masses and, with the aid of the mass-radius relationship, their surface gravities. Vertical color bands show the approximate ranges over which various atmospheric compounds form clouds. While many Jupiter- and Saturn-like worlds with ammonia clouds are expected, some planets with water, alkali, and even methane clouds may also be observed.

Figure 1. M sin(i) and ranges of estimated effective temperature (T_eff) of a selection of announced RV planets that are favorable for direct imaging. The orange circle represents Jupiter, while the green one hints at Uranus, which actually falls below the lower axis. Estimated T_eff is computed from the planet orbits, Jupiter's Bond albedo, and estimated internal heat flows given available constraints on the ages of the primary stars. Bands show the major cloud species expected in various ranges of T_eff. The existence of two of the planets shown, Ups And e and Eps Eri b, is controversial.

The Coronagraph Instrument onboard WFIRST, in combination with an Integral Field Spectrometer (Traub et al. 2016), is currently planned to provide us with images (430-970 nm) and low-resolution (spectral resolution R ∼ 70) reflected light spectra of gaseous planets around nearby Sun-like stars (600-970 nm). Unlike transit spectroscopy, which only probes the top of the atmosphere to ∼ 1 mbar (e.g., Kreidberg et al. 2014), reflected light can probe deep into the atmosphere of these gas giants (e.g., Marley et al. 2014), and therefore offers a more comprehensive view of composition and cloud layers. Most planets in Figure 1 have effective temperatures of ∼ 150 − 350 K. Assuming these worlds are comparable to Solar System gas giants, their 600−970 nm spectra will be dominated by cloud decks of water or ammonia and by gaseous absorption from methane and possibly water. Photochemical hazes will doubtless be important as well.
There is a long and comprehensive history of interpretation of such spectra of Solar System planets dating back to Sato & Hansen (1979) and before. For Jupiter-like atmospheres the continuum scattered flux level at these wavelengths is set by scattering from the bright clouds, while Rayleigh scattering is more important at the bluest wavelengths. The bright continuum is punctuated by gaseous methane absorption features of varying strengths. The relative strengths of the various methane absorption bands, combined with the continuum flux level set by the clouds, together constrain the cloud properties and the methane column abundance. Shortward of 600 nm, the photometric measurements will give us information about the shape of the continuum, dominated by Rayleigh, haze, and cloud scattering. If both CH4 and H2O features are present in the spectra, we can constrain the C/O ratio, a value related to the planet's formation location in the circumstellar disk (Bond et al. 2010; Helling et al. 2014; Öberg et al. 2011). Extracting such information from low to moderate spectral resolution data at modest signal-to-noise ratios will be a challenge. Cloud properties and location, absorber abundances, planetary radius (and thus gravity), and the atmospheric thermal profile will all be unknown. While forward modeling techniques, such as Cahoy et al. (2010), can give insight into the range of possible spectra, extraction of cloud properties and absorber abundances will require the application of retrieval methods to the available data. We aim to develop the necessary theoretical and computational framework to enable such retrievals. As this will be a complex endeavor we approach the problem in steps. Here we present a first step in the development of this framework, focusing on the retrieval of gross cloud properties, surface gravity, and methane mixing ratio. In future papers we will add retrievals for orbital phase, star-planet distance, planet size, additional absorbers, and the atmospheric thermal profile. In the remainder of this paper we provide more detailed background on reflected light spectra of giant planets, present the conceptual model and the Markov chain Monte Carlo retrieval method, and give the results of this study. The paper is organized as follows: Section 2 provides more context and background to the problem. Section 3 describes our albedo code and the forward models used in the retrievals; Section 4 describes the noise model used to generate the input datasets; Section 5 contains the Bayesian retrieval scheme, followed by its validation in Section 6. Other retrieval results for more realistic spectra of known gas giants are shown in Section 7, and the conclusions are summarized in Section 8. BACKGROUND In this section we provide a brief overview of a few of the key concepts used throughout the remainder of the paper. Geometric Albedo The analysis of extrasolar planet reflection spectra owes much to the Solar System literature. However this literature also brings its own set of conventions, not all of which translate smoothly to the exoplanet context. For expediency we nevertheless choose here to follow these conventions, although we recognize that as exoplanet direct imaging evolves into its own sub-field this terminology will likely evolve to shed some vestigial structures. A foremost concept is the geometric albedo, the ratio of light received from a planet when observed at full phase to that which would be measured from a perfectly reflective Lambert disk of the same size as the planet. Because the angular distribution of light scattered by a real atmosphere differs from that scattered by a Lambert disk, the geometric albedo of even a perfectly scattering atmosphere is not unity. For a conservative, infinitely deep Rayleigh scattering atmosphere the geometric albedo is 0.750. The fractional reflectivity measured at a star-planet-observer angle differing from 180° is given by the product of the geometric albedo and the planetary phase function. Theoretical calculations of reflected light spectra for extrasolar giant planets have been performed to date by Marley et al. (1999); Burrows et al. (2004); Burrows (2014); Cahoy et al. (2010); Greco & Burrows (2015), showing the wide variations determined by metallicity, effective temperature, cloud presence, and orbital phase angle. There are two important reasons why "geometric albedo spectra" will not be directly measured for directly imaged exoplanets. First, while transiting planets can be observed at full phase just before they are eclipsed on the "far" side of their orbits, directly imaged planets will never be observed even close to full phase because they would lie too close to the primary star to be resolved from the star. Second, the radius of a planet will not be directly measured; rather, only the product between the planet's area and its reflectivity as a function of wavelength. Thus it is an oversimplification to discuss "geometric albedo spectra" for directly imaged extrasolar planets. Nevertheless, to simplify the model development for this work, we consider here only the planetary spectrum at full phase, cast as "geometric albedo spectra". In the second paper of this series (Nayak et al., submitted) we will explore issues arising from the phase dependence of planetary reflectivity (see Cahoy et al. (2010)) and the unknown planetary radius. Figure 2 shows model geometric albedo spectra we calculated for three typical planet cases following the methods described in this paper. Depending on the temperature and composition of the planet, certain species can condense to form cloud decks (mostly alkalis, methane, ammonia, and water for the RV planets shown in Figure 1). As known from our Solar System (e.g., Jupiter, Titan), a haze layer can also form in the upper layers of the atmosphere under the action of stellar ultraviolet radiation. The figure compares computed geometric albedo spectra without clouds (black), with the expected cloud deck (blue), and with a cloud deck plus a haze layer (red). Cloudy giant planets are brighter in reflected light at red wavelengths as incoming photons are scattered before they can be absorbed (Marley et al. 1999).

Figure 2. Model geometric albedo spectra for three example cases: cloud-free (black), a single optically thick cloud deck (blue), and one cloud deck plus an optically thin haze layer (red). All models assume a CH4 abundance of 10^−3 and a surface gravity of 25 m s^−2. The cloud deck is at a depth of 1.8 bars in both the red and blue examples and has an albedo of 0.95. The simulated haze layer in the red model has an optical depth of 0.2, an albedo of 0.6, and occupies the region between 0.2 and 0.5 bar.

Figure 3. Model geometric albedo spectra comparing the effects of increasing methane abundance (left) and surface gravity (right) for a cloud-free planet. In the left plot the surface gravity is kept constant at 25 m s^−2, while in the right plot the methane abundance is kept constant at 10^−3. The thermal profile is kept constant in all cases.
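As a small illustration of the last point, the quantity accessible away from full phase is the geometric albedo multiplied by a phase function. The sketch below assumes a simple Lambert-sphere phase function purely for demonstration; this is not the scattering treatment used in the albedo code, which works at zero phase angle.

```python
import numpy as np

def lambert_phase(alpha):
    # Lambert-sphere phase function (an assumed illustrative choice)
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

def apparent_reflectivity(geometric_albedo, alpha):
    # fractional reflectivity at a star-planet-observer angle of pi - alpha:
    # the geometric albedo multiplied by the planetary phase function
    return geometric_albedo * lambert_phase(alpha)

# example: a Rayleigh-like geometric albedo of 0.75 observed at quadrature (alpha = 90 deg)
print(apparent_reflectivity(0.75, np.pi / 2))
```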
Figures 3 and 4 present additional model geometric albedo spectra for varying atmospheric parameters, that can be expected given the diversity of extrasolar planets. These plots emphasize the changes that can be expected in the albedo spectra given variations in methane abundance and surface gravity, as well as cloud albedo and depth in the atmosphere when the atmosphere is not clear of clouds. More spectral variations as a function of mass, orbit, metallicity, and phase are described in detail in Cahoy et al. (2010) and Sudarsky et al. (2000). Distinctive differences diagnostic of important atmospheric processes between the spectra of known planets can clearly be expected. This study explores how well an instrument like the coronagraph on WFIRST would be able to constrain planet atmospheric composition. Retrieval Approaches Our atmospheric retrieval procedure involves combining a well-tested planetary albedo code Marley et al. 1999;Cahoy et al. 2010) that can take into account multiple absorbers, cloud and Rayleigh scattering, and arbitrary incident and observed angles, with state-of-the-art Bayesian inference tools, namely the Markov chain Monte Carlo (MCMC) ensemble sample emcee (Goodman & Weare 2010;Foreman-Mackey et al. 2013) and the multimodal nested sampling algorithm MultiNest (Feroz & Hobson 2008;Feroz et al. 2009Feroz et al. , 2013) that can be used interchangeably. We believe that this is the first time such powerful retrieval techniques have been designed to simultaneously measure molecular abundances and cloud properties and their correlations from scattered light spectra. NEME-SIS (Rodgers 2000;Irwin et al. 2008) is the only other existing retrieval method for planetary atmospheres in reflected light that has been applied to exoplanet characterization (Barstow et al. 2014). By contrast to our Bayesian approach, NEMESIS uses non-linear optimal estimation to derive the best-fit model parameters and their uncertainties, and for exoplanet characterization did not include cloud properties explicitly as free parameters in the retrieval process. Instead, the effect of cloud properties on the retrieval results was investigated separately by calculating the χ 2 goodness-of-fit over a large grid spanning cloud particle size, optical depth, and base pressure (Barstow et al. 2014). Recently, cloud properties have been introduced in the NEMESIS retrieval scheme to analyze the scattering properties of Uranus (Irwin et al. 2015). In this new approach the code retrieves the imaginary refractive index spectrum together with a Gamma distribution for particle size, characterized by a mean radius and variance. The extinction cross-section, single scattering albedo and phase function spectra are then calculated using standard Mie theory. Such parameterization allows for a more physical and self-consistent description of cloud and haze layers. Our method goes in the opposite direction, retrieving optical properties (optical depth, scattering albedo, and asymmetry factor) and cloud depth as model parameters, but not linking them to a physical model of cloud composition (such as particle size). As shown later in this paper, the presence of clouds naturally leads to degeneracies between methane abundance, cloud positions, and surface gravity. Irwin et al. (2015) also highlight this degeneracy and constrain the cloud properties only by using a fixed, previously measured, methane abundance profile. As shown by Line et al. (2013Line et al. 
( , 2014, the Bayesian inference tools are better equipped to handle highly nongaussian posterior distributions that are expected for future exoplanet observations, given the limited data and complex atmospheric models. Moreover, clouds play a significant role in the atmospheres of both gas giants in our Solar System and the exoplanets considered as future observing targets, given their expected effective temperatures. By including simple cloud properties (optical depth, albedo, depth in the atmosphere, etc.) as model parameters alongside molecular abundances, we can fully explore the degeneracies in the atmospheric structure, given the spectrum. For our initial retrieval tests we constructed two highly idealized cloud models, one with a single cloud deck of arbitrary opacity, and the other with a scattering haze overlying a completely opaque cloud layer. Such atmospheric 6 models are adequate for the types of planets addressed in this paper, and unquestionably can be improved in future work. Our goal is to determine if consistent results for scientifically interesting quantities (abundances, cloud properties) can be obtained using reflected light spectra from a space based coronagraph, given the likely modest signal-to-noise and spectral resolution. FORWARD MODEL Our geometric albedo code for giant planets was originally developed by Marley et al. (1999) and is based on the methods of McKay et al. (1989). This code was subsequently modified and improved by Cahoy et al. (2010), who investigated the albedo variations as a function of star-planet distance, metallicity, mass, and phase angle. This original albedo code uses as input parameters the exoplanet's gravity and depth-dependent temperature, pressure, composition, and cloud properties which are in turn computed by a 1-D radiative-convective equilibrium model (Marley et al. 1999;Cahoy et al. 2010). The atmosphere is divided in 60 layers, with the bottom pressure marking the point beyond which photon scattering is negligible. This pressure level is taken from the radiative-convective equilibrium model for HD 99492c, and from the measured pressure-temperature profiles for Jupiter and Saturn (Seiff et al. 1998;Tyler et al. 1982). In all these cases, this pressure level is below the observable cloud decks. In summary, P bottom is 40 bars for HD 99492 c and the cloud free and 1-cloud validation cases, 10 bars for Jupiter and the 2-cloud validation case, and 251 bars for Saturn. In the full forward model the clouds are parametrized by wavelength-dependent optical depth τ cld , single scattering albedo (ω cld ), and scattering asymmetry factor (ḡ cld ), obtained from a full Mie scattering treatment of particle sizes predicted by a cloud model (Ackerman & Marley 2001). The single scattering albedo represents the ratio between the amounts of scattering and total particle extinction, and the asymmetry factor, g cld , is a measure of the degree of forward scattering. To simulate a spherical planet, we cover the illuminated surface of a sphere with 100 plane-parallel facets (Cahoy et al. 2010), where each facet may have different incident and observed angles, µ 0 = cos θ 0 and µ 1 = cos θ 1 , where θ 0 and θ 1 are the angles between the local normal vector and the star and observer, respectively. 
Although the ability to use different combinations of incident and observed angles allows for arbitrary planet phase angles, we modeled the planet as observed at 0-degree phase angle (face-on), in which case the observer and the source are collinear and µ0 = µ1 for every facet. Increasing the number of facets proportionally increases the computing time, and only leads to a modest increase in accuracy. In this case, the albedo code takes about 3 s to run, which is reasonable for use in combination with an MCMC sampler. Although the general case permits θ0 ≠ θ1, for the work reported here we set θ0 = θ1 in order to compute the geometric albedo, which by definition is the reflectivity at zero phase angle. In a future work (Nayak et al., submitted) we will consider observations at arbitrary phase angle. Following the approach of Horak (1950) and Horak & Little (1965), we use two-dimensional planetary coordinates and Chebyshev-Gauss integration to integrate over the emergent intensities and calculate the albedo spectra. The radiative transfer is performed point by point for each of the points sampling the planetary disk. The scattering source function (Toon et al. 1989; Meador & Weaver 1980), given in Equation 1, includes the contributions of both diffuse and direct scattering. Here F0 is the solar flux at the top of the atmosphere, normalized to 1, and p(µ1, µ2) is the scattering phase function; the two terms on the right-hand side of Equation 1 represent the single and multiple scattering components, respectively. We use a two-stream quadrature (Toon et al. 1989) to solve for the diffuse, angle-independent radiation field. This solution is then used as an approximation to the source function, which is then back-propagated to the top of the atmosphere, while adding the angular dependence given by the scattering phase function. This is a completely scalar approach and does not include any polarization effects. Based on our experience and the results of Cahoy et al. (2010), we expect that the most important model parameters for Jupiter-like exoplanets in reflected light will be the methane abundance, surface gravity, and cloud properties. In a future paper we will consider other gaseous opacity sources. The code uses the opacity for methane in the visible following Karkoschka (1994), and the collision-induced absorption (CIA) for H2-H2, H2-He, and H2-CH4 as summarized in Freedman et al. (2008). The total gaseous absorption optical depth is then τ_abs = τ_CH4 + τ_CIA. In spite of newer methane line lists, difficulties remain in calculating the high-energy transitions of methane, and Karkoschka (1994) is still the best reference for the methane opacity in the visible; it is also used to reproduce Solar System measurements. We define τ_total = τ_scat + τ_abs, where the total optical depth to scattering is τ_scat = τ_Ray + τ_cloud. Following Cahoy et al. (2010), for the direct scattering (or single scattering term in Equation 1) we use a two-term Henyey-Greenstein scattering phase function with high forward scattering and moderate backscattering,

p(Θ) = (1 − ḡ²/4) p_HG(ḡ, Θ) + (ḡ²/4) p_HG(−ḡ/2, Θ),   (2)

where p_HG(ḡ, Θ) = (1/4π)(1 − ḡ²)/(1 + ḡ² − 2ḡ cos Θ)^(3/2), Θ is the scattering angle, related to the planet's phase angle α by α = π − Θ, and ḡ is the scattering asymmetry factor associated with scattering by cloud particles, ḡ = ḡ_cld × τ_cld/τ_scat, since Rayleigh scattering is treated separately.
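A minimal sketch of this two-term phase function and of the cloud-weighted asymmetry factor is given below; it follows Equation 2 as reconstructed above, so the exact weighting of the forward and backward terms should be treated as an assumption of the sketch.

```python
import numpy as np

def p_hg(g, theta):
    # single-term Henyey-Greenstein phase function, normalized over the sphere
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta)) ** 1.5)

def p_two_term_hg(g_bar, theta):
    # strong forward-scattering term plus a weaker back-scattering term (Equation 2)
    f_back = g_bar**2 / 4.0
    return (1.0 - f_back) * p_hg(g_bar, theta) + f_back * p_hg(-g_bar / 2.0, theta)

def effective_asymmetry(g_cld, tau_cld, tau_ray):
    # cloud asymmetry factor weighted by its share of the scattering optical depth,
    # since Rayleigh scattering is handled separately
    tau_scat = tau_cld + tau_ray
    return g_cld * tau_cld / tau_scat
```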
For the multiple scattering term in Equation 1, the diffuse scattering phase function is written as a Legendre polynomial expansion, assuming azimuthal independence. In this expansion, µ and µ′ denote the scattered and incident angles, respectively, and ḡ_2 contains the Rayleigh scattering contribution, ḡ_2 = ḡ_Ray × τ_Ray/τ_scat. Here µ and µ′ are chosen such that the correct solution is obtained in the Rayleigh limit. Rayleigh scattering is calculated following Hansen & Travis (1974), with ḡ_Ray = 0.5 and ω̄_Ray = 1. The total layer single scattering albedo then becomes (ω̄_Ray τ_Ray + ω̄_cld τ_cld)/τ_total, for every layer in the atmosphere. Further details of the radiative-transfer modeling are described in Marley et al. (1999) and Cahoy et al. (2010). For retrieval purposes, we have preserved the radiative transfer and scattering prescription of the original albedo code, but made large simplifications to the input parameters. The simplified model used in the present study has constant molecular abundances throughout the atmosphere, with H2 and He in the primordial solar ratio. The pressure-temperature profile T(P) of the atmosphere is kept fixed, since we do not expect that our spectral range of interest (0.4 − 1 µm) will contain any information for constraining it (see also Barstow et al. (2014)). The wavelength dependence of the cloud parameters is also ignored (gray assumption for τ_cld, ḡ_cld, and ω̄_cld). The depth dependence is limited to parametrizing the cloud height and cloud top pressure, as described below. In actuality, of course, the temperature-pressure profile will vary with surface gravity, and this will primarily affect the atmospheric scale height. Here our variation of the atmospheric gravity, g, stands in for variations in both T(P) and g. As we add complexity to the model we will explore the sensitivity of the retrievals to a varying T(P). Cloud Models As commonly employed in solar system giant planet atmosphere retrievals (e.g., Sato & Hansen 1979), for the purposes of atmospheric retrieval we consider two different cloud treatments, as illustrated in Figure 5. The simpler of the two describes a single cloud layer, while the more complex allows for two distinct clouds/hazes. We describe each model in turn below. 1-Cloud Model The one-cloud model is parameterized as a semi-infinite layer with a cloud top at pressure P in the atmosphere and characterized by the single scattering albedo ω̄, scattering asymmetry factor ḡ, and the gray optical depth τ of the layer in which the cloud top is found. For simplicity of notation, we have dropped the subscript 'cld' from the quantities ω̄_cld, ḡ_cld, and τ_cld, as defined in the previous section. This structure is shown in panel A of Figure 5. The pressure of the cloud top is allowed to vary freely. Our typical input pressure-temperature profile has N = 60 vertical atmospheric layers. We find the model layer in which the cloud top pressure is located, j_c (1 ≤ j_c ≤ N), and scale the cloud optical depth in this layer by the position of the cloud top pressure relative to the pressure at the bottom of the layer. The next deeper layer (j = j_c + 1) will have cloud optical depth τ_j = τ_{j_c} × (P_{j+1}/P_j), where the layer number j increases with depth in the atmosphere from 0 to N and P_j denotes the pressure at the top of layer j. The cloud optical depths in the following layers, all the way to the bottom, are calculated iteratively as τ_{j+1} = τ_j × (P_{j+2}/P_{j+1}).
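The per-layer optical depth assignment just described can be sketched as follows; the handling of the partial layer containing the cloud top is simplified relative to the actual code.

```python
import numpy as np

def one_cloud_profile(p_bounds, p_cloud_top, tau_top):
    """p_bounds: layer boundary pressures (length N+1), increasing with depth;
    p_bounds[j] is the pressure at the top of layer j. Returns the per-layer cloud
    optical depth for the 1-cloud model: zero above the cloud top, tau_top in the
    layer containing the cloud top (the partial-layer scaling of the real code is
    omitted), then tau[j+1] = tau[j] * (p_bounds[j+2] / p_bounds[j+1]) below it."""
    n = len(p_bounds) - 1                       # number of layers
    tau = np.zeros(n)
    jc = int(np.searchsorted(p_bounds, p_cloud_top, side="right") - 1)
    jc = min(max(jc, 0), n - 1)                 # layer index holding the cloud top
    tau[jc] = tau_top
    for j in range(jc, n - 1):
        tau[j + 1] = tau[j] * (p_bounds[j + 2] / p_bounds[j + 1])
    return tau

# example: 60 layers spaced in log pressure from 1 mbar to 10 bar, cloud top at 1.8 bar
p = np.logspace(-3, 1, 61)
profile = one_cloud_profile(p, 1.8, 5.0)
print(profile[profile > 0][:4])
```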
Thus in this model τ is essentially a measure of how opaque the cloud top is, and the optical depth per unit mass is constant over the entire vertical extent of the cloud. Large values of τ imply a rapid transition from cloudless atmosphere to cloud, whereas small values imply a more gradual increase of cloud opacity. Other cloud profile parameterizations are of course possible and we will explore these in future work. The cloud single scattering albedoω and scattering asymmetry factorḡ are kept constant as a function of wavelength and depth in the atmosphere, below the layer containing the top of the cloud, e.g.ω j = ... =ω N =ω for j ≥ j c . This model will be referred in what follows as the "1-cloud model", and is characterized by 6 parameters: f CH4 , g, P ,ω,ḡ, and τ , where g is the planet's surface gravity, to be distinguished fromḡ, and f CH4 is the methane abundance. 2-Cloud Model Increasing complexity, we created a model appropriate for a cloud deck overlain by a haze layer with a very simple 2 layer structure shown in panel B of Figure 5. Such a model is roughly capable of reproducing the structure observed in Solar System planets, and is a slight modification of the model used in the classic analysis of Jupiter's atmosphere by Sato & Hansen (1979). The parameters describing the lower cloud are its top pressure P and single scattering albedo (ω 2 ). Following the same approach as in Section 3.1.1, the pressure of the top of the bottom cloud is found in layer j c , the optical depth below this level is scaled in the same way, except now τ = 1 in the top cloud layer, and is not variable. Thus this lower cloud has a sharply defined top layer and its total column optical depth is 1 in all cases. This ensures that the bottom cloud is always optically thick, and makes it effectively act as a reflective surface, with a reflectivity controlled byω 2 , and situated at a variable depth given by P . The position of the upper cloud (or haze layer) relative to the bottom cloud is parametrized by the pressure difference between the top of the lower cloud and the bottom of the upper cloud (dP 1 ) and the pressure difference between the top and the bottom of the upper cloud (dP 2 ). For computational convenience, these quantities are defined in log space, and are related to the size and location of the top cloud by log P bottom = P − dP 1 and log P top = P − dP 1 − dP 2 , where P top and P bottom are the pressures at the top and at the bottom of the upper cloud, respectively (see Panel B, Figure 5). Similar to the 1-cloud approach, we find the layers in which the top and bottom pressure of the upper cloud are located and the corresponding fractions, or locate the cloud in a single layer, if necessary. For all the layers be-tween the top and the bottom, the optical depth of the upper cloud is scaled as τ j = τ × (P j+1 − P j )/(P bottom − P top ), where τ is the input variable and is wavelengthindependent. The single-scattering albedoω and asymmetry factorḡ are again kept constant as a function of wavelength and for all layers between P top and P bottom . This model will be referred in what follows as the "2cloud model", and is characterized by 9 parameters: f CH4 , g, P , dP 1 , dP 2 ,ω,ḡ, τ , andω 2 . Note that the haze single scattering albedo is treated as a constant with wavelength. 
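The placement of the upper cloud in the 2-cloud model can be sketched in the same spirit, using the log-pressure bookkeeping defined above; the partial-layer handling is again simplified.

```python
import numpy as np

def upper_cloud_profile(p_bounds, p_log, d_p1, d_p2, tau):
    """Distribute the upper cloud's optical depth tau over the layers between P_top and
    P_bottom, where log P_bottom = P - dP1 and log P_top = P - dP1 - dP2.
    p_log is the log10 of the lower cloud's top pressure (the parameter P),
    and p_bounds holds the layer boundary pressures in the same units (bar)."""
    p_bottom = 10.0 ** (p_log - d_p1)
    p_top = 10.0 ** (p_log - d_p1 - d_p2)
    n = len(p_bounds) - 1
    tau_layers = np.zeros(n)
    for j in range(n):
        # overlap of layer j (p_bounds[j] .. p_bounds[j+1]) with the haze slab
        lo = max(p_bounds[j], p_top)
        hi = min(p_bounds[j + 1], p_bottom)
        if hi > lo:
            tau_layers[j] = tau * (hi - lo) / (p_bottom - p_top)
    return tau_layers
```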
Thus hazes that absorb preferentially in the blue, lowering the albedo in the short-wavelength part of the spectrum, such as are commonly found in Solar System giant planet atmospheres, are not taken into account here. These effects become more important below 0.5 µm, and are unlikely to affect the region of interest for this study (0.6 − 1 µm). We will address the wavelength dependence of the single scattering albedo in future work, especially when adding photometric points in the blue. SIMULATED DATA To simulate the direct imaging observations, we use a generic prescription for the total signal and associated noise expected in the planet's point spread function (PSF). This model is sufficient for investigating the effect of data quality (as quantified by the signal-to-noise ratio, SNR) on the size of uncertainties associated with the atmospheric parameters and on the significance of methane and cloud detection. We consider this to be a sufficiently general synthetic data model, that will be improved upon as more a detailed instrument simulator for the WFIRST coronagraph becomes available (e.g. Robinson et al. 2016). The plots in Figure 6 exemplify our simulated data for a Jupiter-like planet around a Sunlike star, at a distance of 25 pc from our Solar System, using the method detailed below. Let the total number of counts on the detector, within the planet's PSF, be the sum of planet counts n pl , raw speckle counts n spec , zodiacal light n zodi , and the total detector background counts from all other sources. The spectral bins are chosen such that the resolving power R = 70 is constant across the 0.4 − 1.0 µm bandpass. For each spectral bin, we define where n total (e − /s) = [n pl + n zodi + n raw speckle Atmospheric Retrieval for Gas Giants in Reflected Light I n total is the total number of counts within the planet's PSF, t is the total integration time, and the other quantities characterize the detector background noise, with "typical values" for an electron multiplying (EM) CCD detector: m pix = 5 pixels, D C = 0.001 e − (pixel s) −1 , N R = 3 RMS e − (pixel frame) −1 , t frame = 300 s, CIC = 0.001 e − (pixel frame) −1 , EN F = 1.414, G = 1000, and t = 14000 s. These estimated count rates are generic values, and will vary with the type of planet and wavelength. However, they are a good starting point for our study in SNR space, to scale the relative contributions of different noise sources. The factor f pp quantifies the speckle reduction efficiency that is expected in post-processing, and can take values roughly between 1/10 and 1/30 (Traub et al. 2016). We use the generally adopted value f pp = 1/20 in this paper. Assuming the stellar spectrum to be a blackbody at 6000 K, and using the model geometric albedo of the planet, we have calculated the expected number of photons in each spectral bin. This number was converted to a count rate, using estimated count rates of n pl = 0.012 e − /s, n zodi = 0.012 e − /s, n spec = 0.010 e − /s, which contain information about the expected quantum efficiency. It should be noted that here we are making the simplest assumptions on the noise model and in general n pl depends on wavelength and planet type. A more sophisticated noise model for the WFIRST coronagraph instrument has recently been made available (Robinson et al. 2016) and will be used in future work. The number counts coming from all contributions to the total signal are shown in Figure 6, top left panel. 
The observed spectrum is simulated assuming that the planet and zodi counts have a Poisson distribution (per channel), while the speckle and detector noise counts have a Gaussian distribution ( Figure 6 center left). In other words, the simulated data points are drawn from their respective distributions. In addition, we consider the possibility of noise correlations among different spectral regions. Since the speckle positions relative to the central star change with wavelength, we expect that at the position of the planet in the observed image certain wavelengths will be more affected by speckle noise than others. In our model, we assume that this will affect only the Gaussian-distributed counts, which are dominated by speckle counts, and not Poissondistributed ones, which consist of planet and zodi counts. Therefore, the total noise contribution of the Gaussiandistributed counts (their distribution around the mean) was split into 2 components, one spectrally correlated, and one spectrally uncorrelated. The correlated noise component was generated as a Gaussian random process with a squared-exponential kernel and correlation length scale of either 25 or 100 nm. These length scales are appropriate for our chosen spectral range and expected spatial resolution, and the choice of a random process reflects the existing uncertainty in the exact behavior of the speckle noise correlation. Furthermore, we assumed that both correlated and uncorrelated components have equal contributions to the total scatter in the data points, and therefore their distributions will have mean zero and equal variance. This combination of spectrally correlated and uncorrelated noise is shown in the top right panel of Figure 6. We define the signal-to-noise reference value (SNR 0 =signal/noise, from Equation 5) as corresponding to the integrated number of counts in a 6%-wide bandpass centered at 450 nm. Therefore, the integration time needed to achieve a given SNR 0 can be calculated as (7) where the index 0 denotes the fact that these values are calculated for the 550 nm reference bandpass. We calculate the integration time t 0 necessary to obtain a SNR 0 of 5, 10, or 20, respectively, which is then used to calculate the expected number of counts and scale the signal and noise across the entire bandpass. The final error bars are computed individually for each simulated data point using Equation 5. As shown in Figure 6, the resulting spectrum will have a SNR<SNR 0 on average, but we will take the SNR 0 as the reference value in what follows. The values for SNR 0 and speckle noise correlation length as defined above serve as a parametrization of the data space over which we perform our retrievals. The combination of the three SNR values and two possible speckle noise correlation lengths result in 6 simulated datasets for each planet model. Lacking more detailed information about the instrument, in the above we have assumed that the entire bandpass is observed simultaneously and the quantum efficiency (detector response) is constant across the bandpass. Although these conditions will not be satisfied in a real observation, they amount to assuming that we can achieve the final SNR distribution with wavelength shown in the bottom right panel of Figure 6. This is just one of the many possible realizations of SNR variation over the bandpass, and this is likely to be unique to each dataset, which will likely be a combination of different observing modes. 
It is to be expected that the best fit parameter values from our retrievals will depend on the noise distribution with wavelength, as well as on the individual random point generation for each simulated dataset. A complete instrument simulator will be needed to estimate the actual science return from a future mission. ATMOSPHERIC RETRIEVAL SCHEME The allowed ranges and best fit values for the forward model parameters, given the data, are determined using two Bayesian posterior sampling algorithms, namely the affine-invariant ensemble Markov chain Monte Carlo sampler emcee (Goodman & Weare 2010; Foreman-Mackey et al. 2013), and the multimodal nested sampling algorithm MultiNest (Feroz & Hobson 2008; Feroz et al. 2009, 2013). These approaches permit efficient sampling of highly correlated, non-Gaussian, and high-dimensional parameter spaces, and are readily scalable to multiprocessor computing. The different approaches taken by the two algorithms in sampling the posterior parameter space can help us avoid the pitfalls of either one. While emcee starts with a first guess and can become trapped in a local minimum, MultiNest starts with a grid of points covering the entire prior parameter space and proceeds by narrowing down the maximum likelihood regions. On the other hand, MultiNest could favor highly peaked, multimodal, Gaussian-like distributions, while emcee is more agnostic about the shape of the posterior and can reveal additional tails and correlations. The total evidence for any given model (the integral of the posterior distribution over the parameter space) is automatically calculated by MultiNest as part of the algorithm, but requires extra steps and can be tricky to compute for emcee. Ideally, the two methods will converge to the same solution. Overall, we consider the two approaches complementary, offering greater confidence in avoiding potential biases. Recently, Allison & Dunkley (2014) compared these sampling techniques in detail and found that nested sampling is more time-efficient while still providing good accuracy, and that the affine-invariant MCMC sampler can be competitive when massively parallelized. Both outperform traditional Metropolis-Hastings algorithms by far. For completeness, we provide a brief description of the two posterior sampling algorithms in the Appendix. A second component of the retrieval process consists of model comparison, with the purpose of quantifying not only the uncertainties in the model parameters, but also the evidence in support of a chosen model. In this step we can assess whether the 1-cloud or the 2-cloud model presented in Sections 3.1.1 and 3.1.2 offers a better representation of the data and calculate the significance associated with the cloud or methane detection. The choice between two competing models M_X and M_Y then comes down to comparing their probabilities by constructing the Bayes factor (Equation 8), B_XY = (Z_X/Z_Y) × [P(M_X)/P(M_Y)], where Z is the Bayesian evidence defined in the Appendix. Usually the last term in Equation 8 is 1 (both models have the same prior probability). We use the guidelines provided by Jeffreys (1961) and Raftery (1996) for assessing the evidence in support of model M_X vs. M_Y in terms of the Bayes factor; the corresponding ranges are given in Equation 9. This ranking system is equally applicable when the evidence supports model Y, in which case we simply calculate B_YX. Since the posterior distribution in general does not have an analytic form, the difficulty arises when attempting to compute Z for each model under consideration.
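Once the evidences are available (MultiNest reports ln Z directly), forming the Bayes factor is straightforward; the snippet below assumes equal prior model probabilities, so the last term of Equation 8 drops out.

```python
import numpy as np

def bayes_factor(ln_z_x, ln_z_y):
    """B_XY = Z_X / Z_Y for equal prior model probabilities.
    Working with ln Z avoids overflow for strongly favored models."""
    return np.exp(ln_z_x - ln_z_y)

# example: a model favored by 5 log-evidence units
print(bayes_factor(-120.3, -125.3))   # ~148, interpreted against the thresholds of Equation 9
```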
In general, the evaluation of the Bayesian evidence from an existing MCMC posterior is limited by the poor sampling of regions of low likelihood. This problem can be overcome using thermodynamic integration, at computational costs 10−100× higher than a regular MCMC (e.g., Trotta 2008; Calderhead & Girolami 2009). However, as long as the Bayes factor falls within the ranges in Equation 9, the precise value of B_XY is not important. In general, some rough assumptions are made about the functional shape of the prior and posterior distributions in order to approximate the value of this integral. While these approximations are not very accurate, Cornish & Littenberg (2007) show that for high signal-to-noise data (SNR ≳ 9) all methods converge toward the same values. In this paper we estimate Z using three different methods: the Schwarz-Bayes information criterion (BIC, Schwarz 1978), the Laplace approximation (Lopes & West 2004; Cornish & Littenberg 2007), and the Numerical Lebesgue Algorithm (NLA) described by Weinberg (2012). We refer the reader to the Appendix for a summary of these methods and the relevant definitions. The scatter among the results given by these three methods is indicative of the reliability of these approximations for various models and SNR regimes. In general, we observe that the values converge when the evidence for a given model is very strong. Further, these results obtained from the MCMC samples are validated by comparison with the evidence values calculated by default by the nested sampling algorithm. Priors The parameters retrieved for each of the cloud models are described in Sections 3.1.1 and 3.1.2. In addition to the cloud properties, we retrieve the methane abundance and the surface gravity. For each retrieval case, the priors on the parameters for the 1-cloud and 2-cloud models are shown in Tables 1 and 2, respectively. (Notes to the tables: a. For correspondence with P_top and P_bottom in Figure 5, P, dP_1, and dP_2 are defined such that log P_bottom = P − dP_1 and log P_top = P − dP_1 − dP_2. b. An extra prior for the 2-cloud model ensures that the sum of the layers does not exceed the height of the atmosphere. c. For clarity, the cloud optical depth parameterization is there written as τ_total, to show the difference between the two forward models; see Sections 3.1.1 and 3.1.2.) Water and alkali abundances will be included as model parameters in future work; however, for the applications considered in this paper (e.g., Jupiter, Saturn), methane is the main absorber. We define the atmospheric methane mixing ratio, f_CH4, as the volume mixing ratio of methane. Since in a giant planet atmosphere 98% of the atmospheric constituents are H2 and He, this uniquely defines the atmospheric methane content. Such an approach would of course not be possible for a terrestrial planet. We allowed gravity to vary because in the realistic case neither the size of the planet nor the planetary mass will be known precisely. We allowed an exceptionally large range of gravities to be tested by the retrievals. In a realistic case the planet mass (for RV planets) will be known to substantially better than a factor of two from the orbital astrometry solution. From the mass-radius relationship for gas giant planets and albedo scaling arguments, the radius will likely be known to within 50%, which dominates the gravity uncertainty. Thus for a Jupiter twin the gravity (g = 25 m s−2) would plausibly be known to be < 100 m s−2, not < 1000 m s−2 as is the constraint placed in most of the results shown here.
This turned out to be very important as, all else being equal, a large methane mixing ratio is required at high gravity to produce equivalent absorption band depths as a lower abundance at lower gravity. We recognize the degeneracies that will be introduced by the unknown planet radius and phase angle. In an extension of this work (Nayak et al., submitted) we are explicitly separating the mass and radius and introduce the phase angle as a new parameter. In the current work, the stellar flux is normalized to 1, such that the planet radius does not factor in directly. However, in a realistic case the radius of the planet will act as an overall scaling factor, and we expect to see degeneracies between the radius, phase angle, and planet reflectivity (hereω and/or ω 2 ). These correlations will add to the uncertainties, and have to be seen as a caveat in the present work. The only restriction on the vertical cloud structure (P , dP 1 , and dP 2 ) is that it does not exceed the total vertical extent of the atmosphere. The cloud albedos and asymmetry factor are allowed to take any value between 0 and 1, while the optical depth of the upper cloud varies between 10 −3 and 10 3 . This optical depth is also varied in the 1-cloud model, but the lower cloud in the 2-cloud model is assumed optically thick (see Section 3.1.2). The pressure-temperature profile of the atmosphere is kept constant, since there is no information in the spectra at these wavelengths (0.4 − 1.0 µm) to constrain it. We are considering replacing this fixed profile by a parametrized one, to better account for the effect of sur-face gravity (Line et al. 2013). Implementation The forward models described in Sections 3.1.1 and 3.1.2 have been coded in Fortran and converted into a Python-callable library using f2py (now part of the NumPy package). The retrieval scheme integrates this library with either emcee or PyMultiNest, alternatively. Both MCMC and nested sampling implementations are easily scalable to run from a laptop to a computer cluster. The Fortran code is also parallelizable, but this does not provide a significant increase in speed as long as the MCMC is parallelized. Our retrievals were run on the NASA Pleiades Supercomputer, where we highly optimized the code for the forward models, and took advantage of the parallel nature of the algorithms to run on up to 216 processors at the same time (one 24-core node per model parameter). The MultiNest algorithm is found to converge rapidly even when run on just 1-2 nodes. We have quantified the methane and cloud detections by calculating the ratios of their respective Bayes factors, as described in Section 5. For each case (SNR and spectral correlation length combination), a set of four different forward models was used: the 2-cloud model with 9 parameters (Section 3.1.2), the 1-cloud model with 6 parameters (Section 3.1.1), a model without clouds (the cloud subroutines are turned off in the previous models), and a model without methane (the methane abundance is set to 10 −20 in the previous models). Therefore, for each planet example, we ran a set of 24 retrievals using emcee. In addition, we performed the same retrievals using M ultiN est for the models with a spectral correlation length of 25 nm mainly to cross-check the Bayesian evidence values calculated from the MCMC chains. In cases of good convergence, M ultiN est also provided parameter constraints in agreement with emcee at a lower computational costs. 
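A minimal sketch of how such a Python-wrapped forward model can be driven by emcee for the 1-cloud model is shown below. Here albedo_1cloud is a hypothetical stand-in for the f2py-wrapped Fortran routine (its body is a placeholder, not physics), and only the prior bounds quoted in the text (gravity below 1000 m s−2, albedo and asymmetry in [0, 1], τ between 10−3 and 10³) are taken from the paper; the remaining ranges and starting values are placeholders.

```python
import numpy as np
import emcee

def albedo_1cloud(theta, lam):
    # placeholder forward model standing in for the wrapped Fortran albedo code
    log_fch4, g, log_p, omega, g_sc, log_tau = theta
    return omega * np.exp(-10.0 ** log_fch4 * 1.0e3 / lam)

def log_prior(theta):
    log_fch4, g, log_p, omega, g_sc, log_tau = theta
    if not (-6.0 < log_fch4 < -1.0):          # placeholder range for log10 f_CH4
        return -np.inf
    if not (0.0 < g < 1000.0):                # m s^-2 upper bound quoted in the text
        return -np.inf
    if not (0.0 <= omega <= 1.0 and 0.0 <= g_sc <= 1.0):
        return -np.inf
    if not (-3.0 <= log_tau <= 3.0):          # tau between 1e-3 and 1e3
        return -np.inf
    if not (-2.0 < log_p < 1.6):              # placeholder cloud-top pressure range (log bar)
        return -np.inf
    return 0.0

def log_probability(theta, lam, data, err):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    model = albedo_1cloud(theta, lam)
    return lp - 0.5 * np.sum(((data - model) / err) ** 2)

# synthetic stand-in data so the sketch runs end to end
lam = np.linspace(0.4, 1.0, 45)
truth = np.array([-3.0, 25.0, 1.0, 0.9, 0.8, 0.0])
data = albedo_1cloud(truth, lam) + 0.01 * np.random.randn(lam.size)
err = np.full(lam.size, 0.01)

ndim, nwalkers = 6, 48
p0 = truth + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(lam, data, err))
sampler.run_mcmc(p0, 2000)
```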
RETRIEVAL VALIDATION In order to validate our retrieval procedure, we generate albedo spectra using the 1-cloud and 2-cloud models presented in Section 3.1.1 and 3.1.2, respectively. We use the 1-cloud forward model to generate 2 types of spectra: one for an optically thin cloud very deep in the atmosphere, equivalent to a cloud-free atmosphere; and one for an optically thick cloud at moderate height. The third case is generated with the 2-cloud model. The model spectra are then converted to simulated observations using the noise prescription described in Section 4. For each of these three cases we investigate the ability to retrieve the input model parameters, as a function of SNR and noise correlation length. For each of the three cases we ran retrievals using the full 1-cloud and 2-cloud models, a forward model with the clouds turned off (referred to as "no clouds"; defaults to 0 for allḡ,ω, and τ 's), and a forward model with negligible methane abundance (referred to as "no methane", fCH4= 10 −20 ). For convenience of notation, we will refer to these four model retrievals as 1c, 2c, -c, and -m, where a 2c-m notation for example would stand for "2-cloud forward model without methane". Each SNR and spectral noise correlation length combination was run through the retrieval procedure four times to enable model comparison and assess the significance of methane and cloud detection. Tables 3 and 4 summarize the input parameter values for each of the simulated spectra, and the confidence intervals for each parameter obtained after running the retrieval procedure. Cloud-free Case We construct the albedo of a cloud-free planet using the 1-cloud model in Section 3.1.1, where the optical depth τ is set to 10 −8 and the top pressure of the cloud to 10 bar. The other parameters used to generate the model spectrum are listed in Table 3. Using the noise prescription in Section 4, we generate simulated datasets for SNR values of 5, 10, and 20, and spectral noise correlation lengths of 25 and 100 nm. The data realizations can be seen in the left panel of Figure 7. The retrieval is performed over the wavelength range 0.4-1.0 µm, indicated by the green line in Figure 7. Figures 8 and 9 show the retrieval results. The marginal probability distributions for the model parameters are shown in the top panel in Figure 8. The associated confidence intervals are bounded by the 16% and 84% quantiles of the cumulative probability distributions and are shown in the bottom panel of the same figure. These confidence intervals are also listed in Table 3. We find that for a cloud-free planet both the methane abundance fCH4 and surface gravity g are well constrained. The methane abundance is constrained to within a factor of ∼ 2.6 at a SNR of 5 and within a factor of ∼ 1.15 at a SNR of 20. The surface gravity is constrained to within a factor of ∼ 4 at a SNR of 5 and within a factor of ∼ 1.2 at a SNR of 20. As expected, the cloud albedoω and scattering asymmetry factorḡ are not constrained, since they do not contribute to the observed spectrum. The 2-dimensional posterior probability distributions shown in Figure 9 trace the changes in the parameter constraints as the SNR increases from 5 to 20. This is also reflected by the decrease in the size of confidence intervals shown in the bottom panel of Figure 8. The distributions clearly become narrower and more peaked as the SNR increases. This projection also shows that the pressure of the top of the cloud deck in the model is partly correlated with the optical depth τ . 
A larger top cloud pressure (deeper cloud) allows for a larger range of optical depths. This can be intuitively understood, since a deep cloud will have little effect on the observed spectrum even when its optical depth is larger.

Figure 7. Simulated data and best fit spectra for the cloud free case in Section 6.1 (left) and the single cloud case in Section 6.2 (middle), using the 1c forward model, and for the 2-cloud case in Section 6.3 (right), using the 2c forward model. The data correspond to SNR=5, 10, 20, from top to bottom, and a spectral noise correlation length of 25 nm. The results for a correlation length of 100 nm are similar. The solid and semi-transparent red regions represent 1σ and 2σ intervals, respectively. These intervals represent the standard deviation of a set of 500 spectra generated using random samples from the converged MCMC distribution. The blue line represents the median of this set. The retrieval was performed over the 0.4-1.0 µm region, as indicated by the green vertical line.

Note (see Table 3): the confidence intervals are calculated from the distribution quantiles, and do not reflect possible upper/lower limits or unconstrained parameters that can be seen in the histograms.

Figure 9. 2-D marginal posterior probability distributions for SNR=5, 10, and 20, and spectral noise correlation length of 25 nm, for the cloud free case in Section 6.1, using the 1c forward model. Since the ḡ and ω̄ parameters are unconstrained in this case, we only plot the remaining ones. The red color map corresponds to distributions obtained using the MCMC algorithm, and the blue contours to nested sampling. The black lines show the real solution.

Figure 10. Bayes factors and associated significance levels, as defined in Section A.1, for the cloud free case in Section 6.1. The vertical shading grades follow the intervals defined in Equation 9. The yellow triangles correspond to the ratios Z_1c/Z_1c-m, the blue circles to Z_1c/Z_1c-c, and the green stars to Z_1c/Z_2c. The colored symbols represent the results derived from the MCMC samples, with the solid color corresponding to a noise correlation length of 25 nm, and the semi-transparent to a noise correlation length of 100 nm. For comparison, the black symbols use the evidence values provided by the nested sampling algorithm for the cases with a noise correlation length of 25 nm. The symbols correspond to the same Bayes factors shown in color. The values calculated using nested sampling have associated error bars, but these are generally too small to see on this plot.

The range of spectra obtained using parameters drawn from the posterior probability distributions is shown by the red contours in Figure 7. We also note the excellent agreement between the MCMC and nested sampling methods, where the nested sampling results are shown by the blue contours in Figure 9, and by the black lines in Figure 8. The posterior constraints on the cloud parameters P, τ, ω̄, and ḡ already indicate that the spectrum does not support the presence of an observable cloud. This is further confirmed by the Bayesian evidence analysis. We sample the posterior probability distributions for a set of 4 models: 1c, 1c-m, 1c-c, and 2c, as defined above. The pairwise Bayes factors for these models are shown in Figure 10. Clearly, methane is detected with a high significance even when the spectral SNR is 5 (yellow triangles).
However, the presence of a cloud is not supported. The models containing two clouds, one cloud, or no clouds are equally capable of describing the data, since even in a multiple cloud model the optical depth of the clouds can be very low, effectively acting as a no-cloud model. No preference for a given cloud model in this case means that the presence of a cloud is not necessary to explain the observed spectrum. In this sense, the Bayesian evidence for all these models should be approximately equal, and the scatter in the Bayes factors in Figure 10 shows the poor performance of the evidence approximations when the significance is low. A large scatter in the Bayesian evidence calculations by different methods has also been observed by Cornish & Littenberg (2007) when SNR ≲ 7. When the support for a certain model is low, we also note a lack of correlation between the model significance and the SNR (e.g. green and blue lines in Figure 10). This shows that the retrieval results in such cases are dominated by the particular noise realization. The black symbols in Figure 10 show the Bayes factors obtained using the evidence calculated by the nested sampling algorithm. The agreement is excellent for the high-significance methane detection, but lies within the large scatter for the cloud-model comparison.

Single-cloud Case

By raising the optical depth τ to 1, and the cloud top pressure to 0.2 bar, we can use the 1-cloud model to generate the albedo spectrum of a planet with an observable cloud deck. The simulated observations of such a planet are shown in the middle panel of Figure 7. The results of this retrieval are shown in Figures 11 and 12, and in the bottom half of Table 3. In this case the methane abundance is still well constrained, although within a wider range than for the no-cloud case, namely within a factor of ∼5 for a SNR of 5 up to within a factor of ∼3 for a SNR of 20. The original abundance value is well within the predicted ranges, where the SNR=10 case with a correlation length of 100 nm seems to be an outlier. The surface gravity of the planet is no longer constrained in this case, but is found instead to correlate with the cloud top pressure (Figure 12). The power of the posterior sampling lies in discovering such correlations between model parameters. Figure 12 also shows the correlation between the cloud albedo ω̄ and scattering asymmetry factor ḡ, and between the top cloud pressure and its optical depth. Essentially, an optically thick cloud also constrains the cloud top pressure between ∼0.01 and 1 bar, while an optically thin cloud would require the cloud top pressure to be very close to the top of the atmosphere. Independent constraints on the surface gravity, such as those provided by RV measurements, would narrow the allowed range for the cloud top pressure, which in turn would constrain the cloud optical depth. Lacking this information, we obtain a lower limit for the optical depth and an upper limit for the cloud top pressure. The other very well constrained parameter is the cloud albedo ω̄. The confidence intervals on this parameter are only of order ±5% to ±2%, depending on the SNR and particular noise realization. The correlation with the scattering asymmetry factor leads to a slight asymmetry in these confidence intervals, but the range of allowed values is still remarkably narrow. On the other hand, the scattering asymmetry factor ḡ is virtually unconstrained. Similarly to the no-cloud case, there is excellent agreement between the MCMC and nested sampling results.
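The parameter ranges quoted in this and the following subsections are quantiles of the marginalized posterior samples (the 16% and 84% bounds defined above). A minimal sketch of this bookkeeping, with our own variable names, is:

import numpy as np

def quantile_intervals(samples, quantiles=(16, 50, 84)):
    # samples: (n_samples, n_params) array, e.g. the post burn-in flattened chain
    lower, median, upper = np.percentile(samples, quantiles, axis=0)
    return lower, median, upper

# Mock example with a 2-parameter Gaussian posterior:
rng = np.random.default_rng(1)
chain = rng.normal(loc=[-3.3, 1.4], scale=[0.3, 0.5], size=(10000, 2))
lo, med, hi = quantile_intervals(chain)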
The high-significance cloud detection is revealed in the Bayes factor plot in Figure 13. The Bayesian evidence is calculated for the posterior distributions corresponding to the models 1c, 1c-m, 1c-c, and 2c. The Bayes factors favor the models with clouds relative to the ones without (blue circles), and the model with methane relative to the one without (yellow triangles). The cloud detection significance is > 10σ even when the data have a SNR of 5, showing that the cloud deck is required by the observations. The methane detection significance is similar to that in Section 6.1. Similarly, the retrieval cannot distinguish between a 1-cloud and a 2-cloud model (green stars), since a 2-cloud model can be reduced to a 1-cloud model as the gap between the two cloud decks becomes small and the optical depth of the top cloud becomes large.

Figure 13. Same as Figure 10, for the 1-cloud case in Section 6.2. In this case, there is no ambiguity in model selection, with a cloud clearly detected at ∼20σ significance even when the SNR of the input data is only 5.

Figure 14. Same as Figure 8, for the 2-cloud case in Section 6.3. The parameters correspond to the 2-cloud model (2c) in Section 3.1.2. The 1σ intervals obtained using nested sampling can be affected by possible bi-modal distributions (see also Figure 15).

Two-cloud Case

The final validation case consists of a spectrum generated using the 2-cloud model in Section 3.1.2. The input parameters for the original spectrum are listed in Table 4, and the simulated datasets are shown in the right panel of Figure 7. The retrieved marginal probability distributions and confidence intervals are shown in Figure 14. In this case, the uncertainty in the methane abundance does not shrink considerably before the SNR reaches a value of 20. The confidence interval for f_CH4 extends over a factor of ∼30 (∼60-70 for nested sampling) when the SNR is 5-10, but drops to a factor of 2 when the SNR reaches 20. Similarly to the 1-cloud case, the surface gravity is not constrained by the data. The multi-dimensional correlation between f_CH4, P, and g seen in Figure 15 (at SNR=10) shows the benefit of reducing the allowed range in g, via RV and astrometry measurements, which will then propagate into narrowing the allowed ranges in P and f_CH4. For a SNR=20 dataset, the uncertainties in f_CH4 and P are simultaneously reduced (Figure 15). In this case, the pressure at the top of the bottom cloud (P) is also constrained to within a factor of ∼3. The scattering asymmetry factor ḡ of the upper cloud and its albedo ω̄ are both completely unconstrained, while the uncertainty in the albedo of the lower cloud (ω̄_2) is only 1% even when the data have a SNR of 5. The MCMC algorithm places an upper limit on the optical depth of the upper cloud, which is consistent with the lack of constraints for the other upper cloud parameters, but imposes a very tight constraint on the bottom cloud albedo. Intuitively, as seen in the previous two examples, the parameters of the upper cloud can be constrained as long as this cloud is optically thick, while the properties of the lower cloud (its albedo) can be determined as long as the upper cloud is optically thin. However, especially at lower SNR (see Figure 15), the nested sampling algorithm identifies a second set of solutions, with an optically thick upper cloud, associated with a lower methane abundance and a deeper lower cloud. This result suggests that this degeneracy will not be broken unless the scatter in the data points is greatly reduced.
Aside from this new mode identified by the nested sampling algorithm, the two Bayesian approaches are again in excellent agreement. The presence of the second mode can be further investigated by starting the MCMC chains in this part of the parameter space. We have calculated the Bayes factors and compared the models 2c, 1c, 2c-c, and 2c-m. Similar to the 1-cloud case, methane and clouds are both detected at very high significance (σ > 4) even for a dataset with a SNR of 5, as shown in Figure 16. In this case we again cannot distinguish between a 1-cloud and a 2-cloud model, since the first is a special-case limit of the second (green stars). However, both the 1-cloud and the 2-cloud models are equally favored with respect to any cloud free model (blue circles, pink triangles).

Figure 15. Sample 2-D marginal posterior probability distributions for SNR=10 and 20, and spectral noise correlation length of 25 nm, for the 2-cloud case in Section 6.3, using the 2c forward model. The red color map corresponds to distributions obtained using the MCMC algorithm, and the blue contours to nested sampling. The black lines show the real solution.

Figure 16. Similar plot to Figure 10, for the 2-cloud case in Section 6.3. The color scheme has been modified to emphasize the case where a 2-cloud structure is assumed as default. The orange triangles correspond to the ratios Z_2c/Z_2c-m, the blue circles to Z_2c/Z_2c-c, the pink triangles to Z_1c/Z_1c-c, and the green stars to Z_2c/Z_1c. As in the previous examples, the methane and cloud are clearly detected even with a SNR=5 dataset.

Importance of SNR and Spectral Noise Correlation Length

We stress that the quoted significance of a detection carries no information about the confidence intervals associated with the model parameters. These confidence intervals, as well as possible correlations and multi-modality, are clearly affected by the SNR of the dataset. The change in the confidence intervals with SNR is shown in Figures 8, 11, and 14. Overall, while the presence of methane is clearly detected even at a SNR of 5, its abundance is well constrained (to within factors of 2-3) only at a SNR of 20. At lower SNR, the uncertainty in the methane abundance is mainly related to correlations with other model parameters, such as the surface gravity and the position of the cloud deck (P). This situation is improved in the case of a clear atmosphere, where the methane abundance and surface gravity are simultaneously constrained. However, the presence of a cloud deck is easy to confirm even at a SNR of 5 (as shown by the Bayes factor plots). This suggests that when the presence of clouds is indicated by early observations, an attempt to further increase the SNR is justified in order to constrain the methane abundance. Our results do not indicate any influence of the spectral noise correlation length on the retrieval results. The uncertainties on the model parameters are similar (see Figures 8, 11, and 14, and Tables 3 and 4). There is a slight bias towards higher values for the retrieved methane abundance in the no-cloud and 1-cloud cases for a spectral noise correlation length of 100 nm, but it is not clear whether this is an effect of the noise correlation length scale or of the particular noise realization in the simulated dataset. Multiple noise realizations for a given correlation length scale would be required to validate this effect.
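Although the noise prescription itself belongs to Section 4 and is not reproduced here, one simple way to draw additional correlated noise realizations of the kind just discussed is through a multivariate Gaussian with a squared-exponential correlation over wavelength. The kernel choice in this sketch is our own assumption, not necessarily the prescription actually used.

import numpy as np

def correlated_noise(wavelengths_nm, signal, snr, corr_length_nm, seed=0):
    # One realization of spectrally correlated Gaussian noise with amplitude signal/SNR
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    rng = np.random.default_rng(seed)
    sigma = np.asarray(signal, dtype=float) / snr
    dlam = wavelengths_nm[:, None] - wavelengths_nm[None, :]
    corr = np.exp(-0.5 * (dlam / corr_length_nm) ** 2)   # squared-exponential kernel
    cov = np.outer(sigma, sigma) * corr
    return rng.multivariate_normal(np.zeros(wavelengths_nm.size), cov)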
REALISTIC TEST CASES

For the retrieval tests we used two types of input data, Solar System giants and model planets. We used the Solar System albedo spectra for Jupiter and Saturn from Karkoschka (1994), and a theoretical radiative-convective equilibrium model for HD 99492 c. All of these objects have methane-dominated optical reflection spectra. We have applied our albedo retrieval method to a set of 24 cases, comprising 6 combinations of SNR (5, 10, 20) and correlation lengths (25 and 100 nm), the same as for the validation cases. The Solar System-like planets are assumed to be at 25 pc from the Earth, while the distance to the HD 99492 c system is 18 pc. The retrievals use data between 0.6 and 1 µm to more closely match the projected bandpass of WFIRST (unlike the validation cases where we used the 0.4-1.0 µm bandpass). For each case we run the MCMC ensemble sampler with 24 walkers per parameter (see Appendix), for a total of 3800 steps, and we select the last 400 steps for determining the posterior probability distributions. We also use the nested sampling algorithm for the spectra with a noise correlation length of 25 nm.

Figure 17 (caption, partial). In the left panel, the positions of the cloud layers have been offset for clarity, with the gray regions overlapping to emphasize the fact that both P_top and P_bottom refer to the same cloud deck, while the blue regions correspond to the second cloud deck defined in Figure 5. The theoretical structure is shown in the right panel, with the region occupied by the cloud calculated using the radiative-convective equilibrium code. The pressure-temperature profile calculated by this code and kept fixed in the retrievals is shown in red in all three panels. The theoretical and retrieved CH4 abundance is shown at the top.

Figure 18. Same as Figure 14, for the HD 99492 c model in Section 7.1. In a realistic scenario, the "true" parameter values would not be known, and are therefore not shown.

HD 99492 c

We start by looking at the model planet HD 99492 c, as the real-world example most closely resembling our 1-cloud model. HD 99492 c is thought to be a gas giant with a mass of 0.36 ± 0.02 M_Jup and a semimajor axis of 5.4 ± 0.1 AU, orbiting a K2V star. However, its existence has been challenged recently due to high stellar activity (Kane et al. 2016). We first determined the pressure-temperature profile for HD 99492 c by computing a 1D radiative-convective equilibrium model following the methods of Cahoy et al. (2010), while accounting for clouds with the treatment of Ackerman & Marley (2001). This code computes a self-consistent cloud with vertically varying abundances and particle sizes of each condensible species. This theoretical structure is shown in the right-hand panel in Figure 17. We then input the resulting pressure-temperature profile into a fine-grid albedo code to produce an albedo spectrum comparable to the Solar System data. This high resolution spectrum is then converted to simulated data following the prescription in Section 4, for each chosen combination of SNR and noise correlation length. Figure 18 shows the summary of the retrieval results for the gas giant HD 99492 c, with the quantiles listed in Table 5. An example of the posterior probability distributions for the retrieval using the 2-cloud model is shown in Figure 19. In the 2-cloud scenario, the posterior is bimodal, similar to that found in Section 6.3, and we show the most important parameters for the two modes separately in the two panels.
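(As an aside, the run configuration just described maps onto emcee roughly as in the sketch below; log_posterior and the starting positions p0 are placeholders for the actual retrieval setup, not part of our released code.)

import numpy as np
import emcee  # Foreman-Mackey et al. (2013)

ndim = 9                      # 2-cloud model (6 for the 1-cloud model)
nwalkers = 24 * ndim          # 24 walkers per parameter
nsteps, nkeep = 3800, 400     # total steps, and last steps kept for the posterior

# p0 = guess + 1e-3 * np.random.randn(nwalkers, ndim)   # ball around an initial guess
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(data,))
# sampler.run_mcmc(p0, nsteps)
# flat_chain = sampler.get_chain(discard=nsteps - nkeep, flat=True)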
The notable difference is that for the mode with a low optical depth for the top cloud (τ), the albedo of the bottom cloud (ω̄_2) is very well constrained, while for the mode with a high optical depth for the top cloud, the albedo of the top cloud (ω̄) is very well constrained, to within ∼6%. This is easily understood, since in the case of low optical depth we can "see through" the top cloud, and the albedo of the bottom cloud surface is what determines the spectrum, while the opposite is true when the top cloud is optically thick. We also note that an optically thin top cloud favors a lower methane abundance, since now we integrate through the cloud, down to the bottom cloud, and thus see a greater column of atmosphere which can have a lower fractional CH4 abundance. The position of the best fit parameter values for each mode is marked in green to emphasize that the best fit parameter combination is different from the set of median values of the marginal distributions, which are listed in Table 5. The range of spectra generated using random parameter sets from the posterior is shown in Figure 20. In Figure 21 we show both the covariance plot for the retrieval using the 1-cloud model, as the more representative of the planet's vertical structure, and the best-fit spectra for the different models and modes. In the covariance plot the black lines show the parameter values that are closest to the theoretical planet structure. We note that this 1-cloud retrieval solution resembles the high-τ mode of the 2-cloud posterior, only with a tighter correlation between P and g. In this case we find a lower bound for the pressure of the cloud surface, but a lack of constraints for g. Similar to the validation case, we can see that a tighter prior on g would translate into better limits on P (via correlation), and a narrower allowed range for f_CH4. The best-fit spectra reveal the complete degeneracy of these solutions (red, blue and yellow lines overlapping). The differences between the retrieved and original spectra (black line) are due to a more comprehensive treatment of gas and cloud opacities in the original model. Additional constraints placed by available photometric points shortward of 0.6 µm will be investigated in future work.

Figure 21. Best-fit spectra and 2-D marginal posterior distributions for HD 99492 c (SNR=20, CL=25 nm), using a 1-cloud model. The 2-cloud best fit parameters for the two modes are indicated in green in Figure 19. The black lines on the left plot show the 1-cloud parameter values that best match the "theoretical model" on the right panel in Figure 17.

The degeneracy between the best-fit solution given by the 2-cloud and 1-cloud models is also apparent in Figure 17, where the two cloud decks in the left panel overlap, within the error bars, and basically occupy the same vertical regions as the 1-cloud deck in the middle panel. This plot suggests that for a planet like HD 99492 c our simple cloud model can only provide a lower bound on the pressure at the top of the cloud deck (i.e. upper bound to the height above the surface) and a lower bound on the methane abundance (i.e. the methane abundance is inversely correlated to the cloud top pressure, such that the total CH4 column is constant). Independent priors on the top cloud pressure (from equilibrium structure) and surface gravity (from radius and mass measurements) would help mitigate these uncertainties.
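The posterior-predictive spectra shown in Figure 20 (and the envelopes in Figure 7) can be generated along the following lines; the forward-model call is again a placeholder, and whether the bands are built from standard deviations or percentiles of the draws is an implementation choice.

import numpy as np

def spectrum_envelope(flat_chain, forward_model, wavelengths, n_draws=500, seed=2):
    # Median and 1-/2-sigma spectral envelopes from random posterior draws
    rng = np.random.default_rng(seed)
    picks = rng.integers(0, len(flat_chain), size=n_draws)
    spectra = np.array([forward_model(flat_chain[i], wavelengths) for i in picks])
    median = np.median(spectra, axis=0)
    sigma = np.std(spectra, axis=0)
    return median, (median - sigma, median + sigma), (median - 2 * sigma, median + 2 * sigma)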
Both Figures 17 and 21 show a retrieved CH4 abundance that is significantly higher than the one used in the theoretical model. This is in contrast to the 1-cloud validation case, where the constraints on f_CH4 are much closer to the real value. This difference may be due to the fact that the forward model spectrum exhibits relatively few CH4 bands compared to the previous test cases, with not enough constraints on the continuum level (which sets the cloud top), the methane absorption, and the atmospheric scale height determined by gravity. The cloud treatment in the inverse modeling is also very simplified. While the full theoretical model for HD 99492 c does include cloud optical depth variations with wavelength and depth in the atmosphere, these are not taken into account by the forward model in the retrieval. We note a similar bias toward high f_CH4 values in the case of Saturn below, which could be due to similar deficiencies in our simplified cloud model and will be investigated in future work. As before, we show the Bayes factors between different model choices in the top panel of Figure 22. The presence of methane and a cloud deck is confirmed at very high significance. The 2-cloud model is disfavored relative to the 1-cloud model, likely due to the presence of additional unnecessary parameters.

Jupiter

Arguably, a Jupiter-like planet is the closest real-world case to our 2-cloud forward model. We have simulated data for a Jupiter-like planet at 25 pc from the Sun using the observed Jupiter spectrum from Karkoschka (1994). The results of our retrievals are shown in Figure 23. This plot shows that the parameters that are best constrained by the data are f_CH4, P, and ω̄_2. We note the narrowing of the distributions and therefore the tightening of the constraints for SNR=20 (orange lines), also shown by the size of the confidence intervals in the bottom plot. The derived CH4 abundance is consistent with the generally adopted value of (2.37 ± 0.57) × 10^-3 (or -2.625 in log) in Jupiter (Wong et al. 2004). However, the best constraint is only obtained at SNR=20 in our examples (see also Section 6.3), suggesting that future observations should aim to achieve this SNR level. Also, the derived single scattering albedo of the lower cloud, ω̄_2, matches the observed value of 0.997 (e.g., Sato & Hansen 1979). The mean values of these parameters are sensitive to the particular noise realization of each simulated dataset. Unconstrained parameters are g and ḡ, and an upper limit is derived for τ, showing that the upper cloud is likely optically thin, again consistent with Jupiter's observed stratospheric haze properties. The confidence intervals are summarized in Table 6, and the range in spectra allowed by the posterior samples is shown in Figure 20. Although the MCMC algorithm strongly favors a single-mode posterior with an optically thin upper cloud, the nested sampling algorithm identifies two posterior modes, the second one having an optically thick upper cloud. This is reflected by the large confidence intervals shown in Figure 23 (black). The second, high optical depth mode becomes favored by the nested sampling algorithm at SNR=20.

Figure 22. Same as Figure 16, for the applications in Section 7. The plots correspond to HD 99492 c, Jupiter, and Saturn, from top to bottom. As in the previous examples, the methane and cloud are clearly detected even with a SNR=5 dataset.
Figure 24 shows posterior covariance plots for some selected parameters for the SNR=20, noise correlation length 25 nm Jupiter data, using both the 2-cloud and 1-cloud models. The black solid lines indicate the parameter values that correspond to currently adopted values for Jupiter (f_CH4 = 2.37 × 10^-3 and g = 24.79 m s^-2), while the dashed black lines show the best fit parameter values retrieved using the MCMC algorithm. The retrieved values for f_CH4, top cloud pressure (P), and cloud albedo are close to the observed values. The constraints on f_CH4 and P can be made even tighter by imposing better priors on surface gravity, following the correlation lines. The spectrum is not sensitive enough to the other model parameters, as shown by the large confidence regions. Therefore our initial guess or theoretical structure can lie far from the final best fit value. It is apparent that the nested sampling (blue contours) favors a solution that resembles the 1-cloud model, with a deep, optically thick cloud and unphysically low gravity (∼1 m s^-2). Such low gravity solutions are also identified using the 2-cloud model. However, the 2-cloud model is still consistent with more realistic values of g, while the 1-cloud model is not. Such arguments can be used to favor one model over the other in the absence of quantitative Bayesian evidence. The correlations at the top of the left panel in Figure 24 show that a narrower allowed range in g for known RV planets would both constrain the methane abundance to match the real value and strongly disfavor the second, optically thick mode. The spectra corresponding to these best-fit solutions are shown in Figure 25. This plot shows that these solutions produce degenerate spectra at wavelengths between 0.6 and 1 µm, but physical arguments can be used to eliminate certain solutions. We note the need for wavelength-dependent continuum opacity, especially for using photometry data shortward of 0.6 µm. The Jupiter cloud structure as retrieved by our 2-cloud and 1-cloud models is compared to the theoretical vertical structure for Jupiter in Figure 26. The cloud and haze layers shown in the right panel of Figure 26 approximately match the positions described elsewhere in the literature (e.g., Simon-Miller et al. 2001; Sato et al. 2013). The hazes are likely to have a wavelength-dependent continuum opacity, unlike our simple cloud model, and our notation was chosen to emphasize that the upper haze layer is likely absorbing and the lower haze/cloud layer is likely bright (reflective) at the wavelengths relevant in our study. We note that the upper cloud roughly matches the position of a hydrocarbon haze in the upper layers of the atmosphere, and the lower cloud deck overlaps with the bright haze and ammonia/water ice clouds in the deeper atmosphere. This deep cloud is also identified by the 1-cloud model retrieval, but without the opacity contribution of the upper haze/cloud, the retrieved surface gravity of the planet would be unphysically small (g = 1 m s^-2, see Figure 24). The significance of the cloud and methane detection is shown in the middle panel of Figure 22. The methane is detected at high significance for all SNR, while the cloud detection becomes very strong only when SNR > 10. Due to the degeneracy of the solutions (see Figure 25), the Bayes factor does not favor the 2-cloud vs. the 1-cloud model except at very high signal-to-noise.
However, based on the previous arguments related to the surface gravity, it is reasonable to select the 2-cloud model in this case, and we expect a clearer distinction to appear once independent constraints on the surface gravity are provided. We conclude that the two-layer cloud model is necessary for Jupiter, constraining the methane abundance to within factors of ∼20 at SNR=5 and factors of ∼3 at SNR=20, and possibly much better when tighter limits on the surface gravity are available. The single scattering albedo of the lower cloud is constrained to within 0.5% even at the lowest SNR. This gives us an indication of the composition of the lower cloud, since particles with high reflectivity are necessary to explain the large value of ω̄_2.

Figure 25. Best-fit spectra for Jupiter (SNR=20, CL=25 nm), retrieved using the 2-cloud and 1-cloud models. The legend indicates that the low optical depth fit is favored by the MCMC method, while the high optical depth fit is favored by nested sampling (see also Figure 24). The vertical green line indicates that the retrieval is performed only on data between 0.6 and 1 µm.

Figure 26. (See the Figure 17 caption.) The theoretical structure is shown in the right panel, with the cloud structure closely resembling the available literature (e.g., Simon-Miller et al. 2001; Sato et al. 2013). The pressure-temperature profile is approximated as purely radiative in the top layers of the atmosphere (dashed red line).

Saturn

Our third and final case study is Saturn, which falls between HD 99492 c and Jupiter in terms of retrieval results. We again use data from Karkoschka (1994) to generate simulated observations using the method in Section 4. The summary plots for the retrieval results are shown in Figure 27, with the confidence intervals listed in Table 7. The posterior distribution for the 2-cloud retrieval is now clearly bimodal, with one mode corresponding to a low optical depth for the upper cloud, and the other to an optically thick upper cloud. The large confidence intervals plotted in the bottom panel of Figure 27 are due to this bimodality. The range of the possible spectra with parameters drawn from the posterior is shown in the right panel of Figure 20. For clarity, the two modes have been separated and the covariances of the most relevant parameters shown in Figure 28 (middle and right panels). In the left panel of Figure 28 we show the retrieved posterior distribution for the 1-cloud forward model, with the black lines indicating the parameter values that correspond to the currently adopted properties of Saturn (f_CH4 = 4.5 × 10^-3, [...]), alongside each of the two modes. As seen in the case of HD 99492 c, the mode with low optical depth constrains the albedo of the lower cloud (ω̄_2), while the optically thick mode constrains the albedo of the upper cloud (ω̄). However, in contrast to HD 99492 c, the 1-cloud retrieval mostly resembles the low optical depth mode of the 2-cloud retrieval. In this case, the reflecting surface (P) is found relatively high (10^-3 to 1 bar), with a position correlated with the methane abundance and g.

Figure 29. Best-fit spectra for Saturn (SNR=20, CL=25 nm), retrieved using the 2-cloud and 1-cloud models. The 2-cloud posterior is bimodal, with the low optical depth and high optical depth best fit solutions shown separately (see also Figure 28).

Figure 30. (See the Figure 17 and 26 captions.) The theoretical structure is shown in the right panel, with the cloud structure closely resembling the available literature (e.g., Roman et al. 2013).
The 1-cloud model also constrains the optical depth within a relatively narrow range of ∼0.1-1. The surface gravity g is unconstrained by both the 1-cloud and 2-cloud retrievals, but independent constraints would translate into narrower confidence intervals for both P and f_CH4, as in the cases described above, especially considering the low optical depth mode. A more peaked distribution for f_CH4 is only obtained for the 2-cloud mode of low optical depth (right panel), while in the other two cases only lower limits can be inferred. The methane abundance is overall consistent with measured values, but biased towards higher values in the high optical depth mode, because the entire cloud structure is then obscuring most of the atmosphere. Figure 29 shows the complete degeneracy between the 1-cloud retrieved solution and the two modes of the 2-cloud retrieval. Photometry shortward of 0.6 µm could be helpful for constraining haze properties. Based on these data, we cannot distinguish between the two possible modes, and the presence of the second cloud is not required. The retrieved cloud structure using the 1-cloud and 2-cloud models is presented in Figure 30 and compared with the structure derived from the literature in the right panel (e.g., Roman et al. 2013). The lack of evidence for a second cloud is also suggested by the overlap of the 2-cloud structure in the left panel, similar to the situation for HD 99492 c. By contrast, the cloud optical depth is low in this case, and therefore the transition from a clear to a cloudy atmosphere is very gradual. Overall, the retrieved cloud structure strongly overlaps with the theoretical structure, and all solutions are consistent with highly reflective layers present in the atmosphere. This is supported by the Bayes factors in the bottom panel of Figure 22, where both methane and a cloud layer are detected with high significance for all SNR. The evidence for the second cloud is inconclusive, since these solutions are degenerate. We suggest that some evidence is provided by the tighter distribution in Figure 28 (right panel vs. left panel), and that a more relevant Bayes factor calculation would be between the 1-cloud model and each of the two modes of the 2-cloud model separately.

SUMMARY AND CONCLUSIONS

We have used a Bayesian retrieval method to quantify the confidence intervals on the atmospheric methane abundance and cloud structures of extrasolar giant planets, using a simple atmospheric model with either 1 or 2 cloud decks. Our results should be viewed in the light of the limitations inherent to space coronagraph observations. Notably, we are trying to reproduce complex atmospheric structures by using simple 1-dimensional model approximations and low signal-to-noise, integrated light data. The 0.4-1 µm and 0.6-1 µm wavelength ranges used in the retrievals also have limited diagnostic power, but may be supplemented by other follow-up observations. Nevertheless, we find that reflected light spectra of the quality expected from a space-based direct imaging exoplanet mission are sufficient to place interesting constraints on important planetary atmosphere characteristics, particularly the methane mixing ratio and, in some cases, the cloud albedo. In particular, the presence of clouds and/or methane absorption is detected at high significance even for a SNR of 5. However, higher SNRs, additional degeneracy-breaking constraints (e.g.
on g), and even more sophisticated cloud models will be needed to determine accurate abundances and extract useful information about mass-metallicity relationships. The retrieval methods presented are powerful for determining correlations among parameters and identifying which ones are unconstrained by the data, demonstrating the value of the synthetic datasets, even at low signal to noise ratios. We find that using both MCMC and nested sampling algorithms can provide us with better insights into the posterior probability distributions for the model parameters, especially in highly non-Gaussian and multimodal cases. We found that our retrieval methods could reliably infer methane abundances to within factors of ten of the true value when the models are a good match for the data (such as the validation tests), and can accurately constrain cloud scattering properties in specific cases, thus providing a clue to the cloud composition. Gravity, however, is not well constrained by optical spectra in the presence of clouds. Observing planets with known masses therefore removes an important source of uncertainty and allows much greater precision in the inference of atmospheric abundances. Furthermore, cases in which the cloud model was inadequate are readily apparent in the retrieval output. These limitations are particularly apparent in our realistic test cases, where the posterior probability distribution is often bimodal, and only a lower limit is inferred for the methane abundance. This prompted us to calculate the Bayesian evidence for a set of models for each simulated spectrum. This is a method to quantify the significance associated with the methane and cloud detection, and with the assumed cloud model (1-cloud vs. 2-cloud) in each case. Although time-consuming, this is a very powerful test that will become a necessity for interpreting future observations, as the complexity of our model atmospheres and our understanding of planetary diversity increase. Our preliminary applications to realistic planets show that it is worthwhile to investigate different vertical cloud structures, such as the 1-cloud vs. the 2-cloud models. This can help us address degeneracies and identify unnecessary parameters. In summary, our first study on the characterization of extrasolar giant planets in reflected light found that retrieval methods using simple, gray cloud models can be applied to optical spectra of exoplanets to obtain insights on molecular abundances and cloud properties. We found that generally the retrieval results are equally sensitive to the particular noise realization as to the chosen spectral correlation length.

Ongoing and Future Work

For this initial study we made a number of simplifications to the analysis to make our task tractable and obtain a first look at parameter correlations. However, future work should address these simplifications and their roles in the fidelity of the retrievals. Foremost among those that should be explored are planetary radius uncertainty, thermal profile uncertainty, and orbital phase uncertainty. The second paper in this series (Nayak et al., submitted) addresses the radius and phase uncertainties. In addition, the retrieval of more atmospheric abundances should be explored, particularly water and alkali gases. We will also investigate the possibility of adopting a somewhat more general cloud model.
In this work we have focused on retrieving atmospheric parameters of giant planets; nevertheless, the methods we are developing (and eventually the experience in applying them to real extrasolar planet spectra) will inform future efforts to characterize the atmospheres of lower mass planets. While detailed investigation of retrieval methods for such planets awaits future studies, we note several general conclusions. Planets with relatively flat spectra or few absorption features are, unsurprisingly, challenging. The methane-dominated spectra we studied here are well suited to retrieval methods, as multiple bands of varying strength populate the optical, permitting constraints on both cloud top pressure and abundance when well resolved (e.g., Figure 3). This may not be the case for many potential terrestrial planet atmospheres, leading to greater uncertainties in cloud top pressure and absorber column abundances. Furthermore, a lack of useful constraints on gravity, through mass determination, substantially increases the uncertainty in retrieved atmospheric abundances. Thus giant planets, even cloudless ones with steep Rayleigh scattering slopes, though not the pale blue dots we ultimately seek, do provide useful insights into the methods and limitations of our future characterization of such worlds.

(Table footnotes: a. CL here is a shorthand notation for the spectral noise correlation length. b. Numbers in parentheses show the nested sampling results.)

A. SAMPLING METHODS AND EVIDENCE CALCULATION

In Bayesian inference, the allowed ranges of model parameters are given by the posterior probability distribution of the parameter vector θ, P(θ) = L(θ)π(θ)/Z, where P(θ) ≡ Pr(θ | D, M) is the posterior, L(θ) ≡ Pr(D | θ, M) is the likelihood, π(θ) ≡ Pr(θ | M) is the prior on model parameters, and Z ≡ Pr(D | M) is the Bayesian evidence. Here D and M denote the data and the model, respectively. Normalization of the posterior distribution requires that Z = ∫ L(θ)π(θ) dθ. The calculation of Z is not necessary for parameter estimation, and best-fit parameter values with associated confidence intervals are obtained from the un-normalized P(θ). In general, the posterior P(θ) is difficult or impossible to calculate analytically, and in practice the shape of this distribution is approximated by taking a large number of samples. The methods described below are optimized to sample more efficiently the regions of parameter space where L(θ) is large, such that a good approximation of P(θ) is obtained with a minimum number of samples. The Bayesian evidence Z is by definition model-dependent, and provides the information necessary for model selection. The evaluation of this multi-dimensional integral is also often difficult, and is addressed by various approximations (Section A.1).

A.1. Model selection

In Bayesian inference, the probability associated with a given model M, given the data, is defined as Pr(M | D) ∝ Pr(D | M) Pr(M) = Z Pr(M). For two models X and Y with equal prior probabilities, the ratio of their evidences, B_XY = Z_X/Z_Y (the Bayes factor), therefore measures their relative probability. In our calculations of Bayesian evidence we have employed the approximations described below. In the Laplace-Metropolis approximation (Lopes & West 2004), Z is computed using the covariance matrix C of the posterior, or the minimum volume ellipsoid enclosing the posterior distribution, as Z ≈ L_max(θ) (2π)^(n/2) √(det C), where n is the dimension of the parameter space, and L_max(θ) is the maximum likelihood value. This approximation clearly breaks down when the posterior is multi-modal.
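For illustration, the Laplace-Metropolis estimate above can be evaluated directly from the chain samples and their log-likelihood values; this is a minimal sketch, assuming a single, roughly Gaussian posterior mode.

import numpy as np

def laplace_metropolis_log_evidence(flat_chain, log_likelihoods):
    # ln Z ~ ln L_max + (n/2) ln(2 pi) + (1/2) ln det C, with C the sample covariance
    n = flat_chain.shape[1]
    cov = np.cov(flat_chain, rowvar=False)
    _, logdet = np.linalg.slogdet(cov)
    return np.max(log_likelihoods) + 0.5 * n * np.log(2.0 * np.pi) + 0.5 * logdet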
The BIC estimate is a result obtained in the asymptotic limit for distributions in the exponential family, and gives the largest penalty to models with a large number of parameters. In this approximation, ln Z ≈ ln L_max(θ) − (n/2) ln N_D, where N_D is the number of data points. In most cases, this offers a simple, order-of-magnitude estimate for Z. Finally, the NLA computes the evidence using the equality in Equation A5, where the last term contains a Lebesgue integral with Y = L(θ)^(-1) and measure M(y). This conversion to a Lebesgue integral has the clear advantage of replacing the n-dimensional integral by a 1-dimensional one. This approach is also used by the nested sampling algorithm (Section A.3), where Z is computed as

Z = ∫_0^1 L(X) dX, with X(λ) = ∫_{L(θ)>λ} π(θ) dθ. (A7)

Since the final MCMC sample is distributed as the posterior probability P(θ), in Equation A5, M can be approximated as M(Y_i) ≈ (1/N) Σ_{j=1}^{N} 1_{Y_j > Y_i} for each L_i, where 1 is the indicator function. With this approximation we have Z ≈ [ (1/N) Σ_{j=1}^{N} 1/L_j ]^(-1), which is also known as the harmonic mean estimator (HME). The disadvantages of this estimator are well known in the literature (e.g., Raftery et al. 2007; Calderhead & Girolami 2009). Due to the presence of 1/L_j terms, this method is unstable for very small likelihood values that dominate the sum. The proposed solution is to restrict the integration space only to well-sampled regions of high likelihood. Therefore this method suffers from problems intrinsic to MCMC sampling. In addition, Calderhead & Girolami (2009) show that even in well-behaved scenarios, the HME can produce biased (lower) results. To avoid these issues, the nested sampling approach (Equation A7) is the preferred alternative to thermodynamic integration. The B_XY factor can also be estimated directly using the reverse jump MCMC (e.g., Lopes & West 2004), or the Savage-Dickey density ratio (e.g., Trotta 2007). The reverse jump MCMC is essentially a chain moving between different models, and can be either slow to converge or inaccurate for a small number of samples. The last method can provide high accuracy for nested models, as long as the parameter priors are separable, which is not generally true for our atmospheric models. To draw the analogy with the frequentist approach, the Bayes factor for nested models can be shown to satisfy the relation B ≤ −1/(e p ln p) (Trotta 2008; Sellke et al. 2001), where e = exp(1) and p is the p-value. Equivalently, this probability can be expressed as the number of standard deviations from the mean, xσ, assuming a Gaussian distribution, with p = erf(x/√2). This upper bound is the significance σ value we refer to in our model comparison examples.

A.2. Markov chain Monte Carlo

MCMC methods are widely used in investigating multi-dimensional, non-Gaussian and highly correlated posteriors, since they don't require any a priori assumption about the shape of the posterior probability distribution. The most common form is the Metropolis-Hastings algorithm, where the chain is created as a random walk towards the region of maximum likelihood. Each sample is generated from a proposal distribution centered on the current point, and accepted with a probability pr = min(1, L(θ′)/L(θ)). If the new sample is rejected, the position of the chain remains unchanged. The chain is initialized by a first guess θ_0, and after a burn-in period reaches a stationary state where the sample distribution reflects the shape of the posterior (more samples are drawn from high-likelihood regions).
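A didactic sketch of the Metropolis-Hastings random walk just described (symmetric Gaussian proposals, acceptance based on the probability ratio) is given below; this is for illustration only and is not the ensemble sampler used for the retrievals.

import numpy as np

def metropolis_hastings(log_prob, theta0, n_steps, step_scale, seed=3):
    # Random-walk Metropolis-Hastings with symmetric Gaussian proposals
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    lp = log_prob(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step_scale * rng.standard_normal(theta.size)
        lp_prop = log_prob(proposal)
        # accept with probability min(1, ratio), evaluated in log space
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain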
The un-normalized posterior distribution is simply the histogram of all the MCMC samples after the burn-in stage, and the marginal probability distributions for all parameters can be derived from it. Although much more efficient than just a simple Monte Carlo technique, MCMC still has a series of drawbacks: the convergence is not easily testable and can require a very large number of samples; due to its Markov chain nature, it is not easily parallelizable in this form; it can be sensitive to the initial guess and get stuck in local minima; and sample correlation can affect the final distribution. The affine-invariant MCMC ensemble sampler proposed by Goodman & Weare (2010) solves some of these problems. In this paper we use the version of this algorithm, emcee, implemented in Python by Foreman-Mackey et al. (2013). This algorithm uses multiple chains, or "walkers", run in parallel for a faster exploration of the parameter space. The K chains are initialized in an n-dimensional Gaussian distribution around the initial guess. At each step, the position [...] dimensions of the parameter space. This makes the algorithm unfeasible for a large number of dimensions (≳10). However, at every step new samples can be drawn in parallel, significantly increasing computational speed. In practice, we find that MultiNest requires a much shorter run time than emcee to converge, mainly because emcee does not have a self-stopping criterion and is left to run long enough to cover the entire parameter space and obtain sufficient independent samples. Similar to MCMC, in some cases the acceptance rate is low for MultiNest, and therefore convergence is also slow.
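For reference, the harmonic mean estimator discussed in Section A.1, together with the Sellke et al. (2001) bound used to convert p-values into Bayes factors, can be written compactly as follows (an illustrative sketch only; the instability of the HME noted above still applies).

import numpy as np

def harmonic_mean_log_evidence(log_likelihoods):
    # ln Z from 1/Z ~ (1/N) sum_j 1/L_j, evaluated stably in log space
    log_likelihoods = np.asarray(log_likelihoods, dtype=float)
    return np.log(log_likelihoods.size) - np.logaddexp.reduce(-log_likelihoods)

def sellke_bayes_factor_bound(p):
    # Upper bound on the Bayes factor implied by a p-value (valid for p < 1/e)
    return -1.0 / (np.e * p * np.log(p))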
Interactions among three species of cereal aphids simultaneously infesting wheat.

Interactions among greenbug, Schizaphis graminum (Rondani), Russian wheat aphid, Diuraphis noxia (Mordvilko), and bird cherry-oat aphid Rhopalosiphum padi (L.) were examined on wheat plants (Triticum aestivum L., cultivar TAM 107). Nymphs were released on the plants as conspecific and heterospecific pairs of either first or fourth instars and evaluated for survival, developmental time, fecundity, intra-plant movement, and affinity to plant tissues. Survival from first instar to onset of reproduction averaged 90-100% across all pair combinations. Diuraphis noxia developed faster as conspecifics than in any heterospecific combination, and faster as conspecifics feeding on the same plant tissue than on different tissues. Fecundity of S. graminum was higher for conspecifics that developed on the same plant tissue than for those feeding separately. There was evidence of amensalism (one species was harmed while the other was unaffected) in that D. noxia experienced delayed development feeding in tandem with S. graminum, and reduced fecundity with both S. graminum and R. padi. Furthermore, S. graminum nymphs had reduced survival when their mothers matured on the same plant with R. padi. Both D. noxia and R. padi changed position on the plant more often when developing with S. graminum. Survival of second generation S. graminum nymphs was reduced when this species developed and reproduced in tandem with R. padi. Preferred feeding locations were the primary leaf for S. graminum, the tertiary leaf for D. noxia, and the stem for R. padi, and these were not altered in any heterospecific combinations. Heterospecific aphids had no impact on fecundity or progeny survival in any species combination when fourth instars matured and reproduced on plants not previously exposed to aphid feeding, supporting the inference that systemic, aphid-induced changes in plant physiology mediated the effects observed when first instars developed and reproduced on the same plants.

Introduction

To the extent that certain herbivores can alter the physiology of their host plant, and hence its nutritional value, there exists the potential for complex interspecific interactions among herbivores that are mediated by the host plant. As aphids feed from plant phloem elements, they inject saliva that often brings about changes in plant physiology (Prado & Tjallingii, 1994), usually for their own nutritional benefit (Petersen & Sandström, 2001). When more than one aphid species feed on the same plant, the net changes in plant physiology will be some function of their combined effects. For aphid species that form closely spaced colonies, individuals may benefit from faster development when feeding in groups (Qureshi & Michaud, 2005). However, alterations in plant physiology induced by aphid feeding could potentially have positive, neutral, or negative effects on heterospecific aphids attempting to colonize the same plant. Amensalism (one species is harmed or inhibited and the other is unaffected), commensalism (one species derives some benefit while the other is unaffected), mutualism (an association in which both species benefit), and antagonism (both species are negatively affected via an indirect mechanism) are all plausible outcomes of interspecific interactions among aphid species that co-infest a host plant.
Thus, host plant-mediated interactions among aphid species, rather than any direct form of competition, may be key factors influencing patterns of aphid species co-occurrence or niche partitioning in nature. Three economically important aphids occur sympatrically in fields of wheat, Triticum aestivum L., throughout many regions of the USA where cereals are grown. These are the greenbug, Schizaphis graminum, the Russian wheat aphid, Diuraphis noxia, and the bird cherry-oat aphid, Rhopalosiphum padi (Homoptera: Aphididae) (Turney & Hoelscher, 1986; Schotzko & Bosque-Perez, 2000; Bosque-Pérez et al., 2002). It has been shown that S. graminum and D. noxia are able to alter the amino acid profile of phloem contents in susceptible wheat (cultivar Arapahoe) for their own benefit (Telang et al., 1999; Sandström et al., 2000; Sandström & Moran, 2001), although they likely do so in very different ways. Notably, Telang et al. (1999) did not find any such effect for D. noxia on resistant wheat, cultivar Halt. Havlickova (1986) reported that feeding by R. padi on young wheat plants (cultivars Mironovskaya 808 and Slavia) resulted in increased concentrations of free amino acids, sucrose, glucose, and some phenolic compounds in above-ground plant parts, but caused a reduction in the concentration of free amino acids and other compounds in the roots. In contrast, Gianoli & Niemeyer (1997) have shown that R. padi infestation triggers the induction of defensive chemicals in the first leaf of wheat seedlings and that this response can vary with the tissue infested. Thus, some aspects of antibiosis-based plant resistance to aphids may involve mechanisms that interfere with the processes whereby aphids attempt to alter host plant physiology for their own benefit. Here we refer to changes in host plant chemistry brought about by aphid feeding as 'plant induction', whether the consequences are positive or negative for the aphids. If plant inductive processes are species-specific among aphids, we also might expect plant antibiosis mechanisms to be quite specific to aphid species. Notably, wheat varieties resistant to greenbug (e.g. cultivar TAM 110) are not resistant to Russian wheat aphid, nor vice versa (e.g. cultivars Halt, Stanton). A variety of interactions have been reported for aphid species feeding on plants previously infested by conspecifics or different aphid species. For example, Messina et al. (2002) found that previous infestation of wheat by R. padi reduced the subsequent growth rate of a conspecific colony by 50%, but had no effect on a population of D. noxia. Similarly, biotype E of S. graminum had significantly improved fecundity on wheat cultivar Newton previously conditioned by D. noxia, but not on wheat conditioned by conspecifics (Formusoh et al., 1992), a finding indicative of commensalism. Qureshi & Michaud (2005) observed that developmental time was decreased for nymphs of S. graminum, and increased for nymphs of R. padi, relative to that of their respective mothers that completed development on the same plants of wheat cultivar TAM 107, whereas there was no such effect for D. noxia. Gianoli (2000) found that the reproductive rate of English grain aphid, Sitobion avenae, on tillering wheat plants was negatively affected by a previous infestation of R. padi, a result suggesting amensalism.
Interactions among simultaneously occurring cereal aphids on the same plants have rarely been studied, but studies of other phloem-feeding Homoptera have revealed interactions ranging from synergism to antagonism. Alla et al. (2001) examined interactions between R. padi and the wheat leafhopper, Psammotettix alienus. They found that infestation by R. padi resulted in delayed development and increased mortality of P. alienus on the same wheat plants. Furthermore, in the presence of R. padi, P. alienus left their preferred feeding sites on the lower part of the plant and moved to upper plant parts. In contrast, Kidd et al. (1985) found a beneficial interspecific association between the grey pine aphid, Schizolachnus pineti, and the spotted pine aphid, Eulachnus agilis, on Scots pine, Pinus sylvestris. By feeding on the same shoots and needles as S. pineti, E. agilis was found to benefit from commensalism in terms of increased survival and faster growth, presumably by exploiting the plant induction brought about by S. pineti. Schizaphis graminum, D. noxia, and R. padi have all been observed to simultaneously infest wheat plants (Bosque-Pérez et al. 2002, JAQ unpublished). In the course of rearing these species in the laboratory, we often find various combinations of these species developing together on the same plants, to the point where prevention of cross-contamination among colonies is a constant challenge. However, contamination of D. noxia colonies by S. graminum and R. padi seems to occur most frequently, as though colonies of D. noxia were preferentially invaded by the latter species. We have previously shown that these three species prefer different feeding sites on the plant and vary in their tendency to move among plant parts in the course of development (Qureshi & Michaud, 2005). In the present study, we examined interactions among S. graminum, D. noxia, and R. padi by comparing their acceptance of TAM 107 wheat, frequency of intra-plant movement, developmental time, and reproductive performance as they developed and reproduced on the same plants in heterospecific and conspecific pairs.

Stock Colonies

Stock colonies of S. graminum (Biotype 'I') were established from individuals collected at the Agricultural Research Center-Hays in western Kansas during 2003 and maintained on sorghum cultivar 'P 8500'. Similarly, colonies of D. noxia (Biotype 'I') were initiated from individuals collected at the Hays center and maintained on wheat, T. aestivum cultivar 'Tomahawk', for several years. A colony of R. padi was established from individuals infesting wheat in a greenhouse at the Hays center during fall, 2003. We used wheat cultivar TAM 107 (PI 495594), released by Texas A&M University in 1984 (Porter et al., 1987), as the host plant for rearing stock colonies and conducting all experiments, as it represents an acceptable and suitable variety for all three aphid species (Qureshi & Michaud, 2005). Stock colonies of the three aphid species were maintained in isolation for more than ten generations at 20 ± 1 ºC under 'cool-white' fluorescent lighting set to a photoperiod of 16:8 (L:D) in Percival I-36VL growth chambers. Wheat seeds were planted in soil 8 cm deep in metal trays (26 x 36 cm) and germinated in a greenhouse. Ten to 12-day-old plants were infested with aphids and then transferred to their respective environmental chambers. New trays of wheat seedlings were introduced for each colony every 12-13 days and manually infested with plant clippings from the old tray.
Trays were watered as required. Plants Nine- to ten-day-old plants of TAM 107 wheat were used for these experiments. Plastic cones (2.5 cm diameter x 16 cm deep) were filled with soil and planted with three wheat seeds in each. After planting, cones were placed in plastic racks set in plastic trays filled with water for 48 h in the greenhouse. During this period, the cones drew up enough water from the tray to support plant growth throughout the experiment. Three to four days after germination, plants were thinned to leave a single seedling in each cone. Plants were grown for 9-10 days until they reached the 2-3 leaf stage, then cut to a height of 10 cm in order to facilitate repeated observations of aphids that are easily dislodged from tall plants. Experiments Adult apterae of all three species were collected from their respective stock colonies and placed on wheat seedlings (2-3 per plant) to reproduce for a period of 24 h, yielding a synchronous cohort of 12 ± 12 h old first instar nymphs. Seedlings in individual cones (replicates) were each infested with two first instar nymphs on the stem and then covered with a ventilated clear plastic cylinder (2.3 cm diameter x 31 cm tall) and placed back in the rack. The six experimental treatments consisted of three heterospecific pairwise combinations of nymphs, with three conspecific pair combinations serving as controls. There were twenty replications of each treatment. The experiment was conducted in a growth chamber under the same environmental conditions as the stock colonies. Since the high light intensity that is optimal for maintaining plant quality also generates temperature gradients within the chamber, measurements of temperature (mean, minimum and maximum) and humidity were recorded daily using a digital temperature/humidity probe placed within a cylinder-covered cone on the same rack as the experimental replicates. The following data were recorded daily for each replicate: (1) presence or absence of the aphids on the plant, (2) location of the aphids on the plant, (3) date of first reproduction by the aphids, (4) number of first instar nymphs of each species present, and (5) total nymphs of each species present. Aphid locations on the plant were categorized as 'stem', 'primary leaf', 'secondary leaf', 'tertiary leaf', or 'flag leaf'. The number of first instar nymphs produced by each maturing aphid was tallied for eight days from the day of first reproduction. Since control treatments had two aphids of the same species reproducing together, total progeny were tallied for eight days from the first reproduction event and individual fecundity estimated by dividing by two. A second experiment was performed that was similar to the first, except that experimental plants were infested with fourth instar aphids. The rationale was to replicate the first experiment using older aphids that would interact on the plant for only a short period prior to reproduction, in contrast to the first instar aphids that would interact throughout their developmental and reproductive periods. Our hypothesis was that species interaction effects mediated by the host plant and evident in the first experiment would be absent in the second experiment, where there was less time for plant induction processes to occur. There were 18 replicates per treatment, and the data on the number of first instar nymphs and total nymphs of each species present were recorded daily for each replicate.
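The experimental design and the daily records described above lend themselves to a simple tabular representation. The sketch below is only illustrative: the treatment enumeration follows the text (three heterospecific pairs plus three conspecific control pairs), but the field names and record layout are our own shorthand, not taken from the study.

```python
from dataclasses import dataclass
from itertools import combinations_with_replacement

SPECIES = ["S. graminum", "D. noxia", "R. padi"]

# Six treatments: three heterospecific pairs plus three conspecific (control) pairs.
TREATMENTS = list(combinations_with_replacement(SPECIES, 2))
REPLICATES_PER_TREATMENT = 20   # experiment 1; 18 in the second experiment

LOCATIONS = ["stem", "primary leaf", "secondary leaf", "tertiary leaf", "flag leaf"]

@dataclass
class DailyRecord:
    """One daily observation of a single caged seedling (replicate)."""
    replicate_id: int
    day: int                   # days since infestation with two first instar nymphs
    aphid_on_plant: bool       # (1) presence or absence of the aphid on the plant
    location: str              # (2) one of LOCATIONS, when the aphid is on the plant
    first_reproduction: bool   # (3) whether first reproduction was observed this day
    new_first_instars: int     # (4) first instar nymphs of the focal species counted
    total_nymphs: int          # (5) total nymphs of the focal species counted

assert len(TREATMENTS) == 6
```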
Data Collection and Analysis Survival of first instar nymphs was measured as the proportion that achieved reproductive age, whereas the survival of their progeny was estimated as the number alive on day eight divided by the total number of first instars produced during the eight days of reproduction. Developmental time was calculated as the number of days from inoculation of first instars until the first reproduction event. Fecundity was tallied as the number of first instars produced by a foundress over eight days of reproduction. The proportion of time aphids spent on various plant parts, or off the plant, was estimated by dividing the total number of days the aphid was encountered at each location by its total developmental time. Position changes were tallied whenever an individual aphid was discovered on a plant part different from that it had been on the previous day. The frequency of position change on the plant was then calculated by dividing the number of times an aphid changed position on the plant by its total developmental time. Survival of first instars to their first reproduction was analyzed and compared across treatments as binomial responses using GLIMMIX MACRO model and PROC MIXED (Littell et al. 1996) in SAS (SAS Institute 1999-2001) and data were transformed with logit link function. All the remaining variables were analyzed for differences across treatments with a one way ANOVA using PROC GLM in SAS followed by a least significant difference (LSD) procedure (Littell et al. 1996) for separation of means. Results Mean daily temperature was 21.23 ± 0.8º C during the first experiment. Survival of first instars to first reproduction averaged 90-100% across treatments and was not significantly different for any species when conspecific pairs were compared to heterospecific pairs (P > 0.05). The GLIMMIX MACRO model used to test survival indicated a good fit for the analyzed data sets because deviance values were close to χ 2 critical values and extra dispersion scale values were above 0.9 and close to 1.0. In the first experiment, when seedling wheat plants were co-infested with first instar nymphs of all species combinations, D. noxia nymphs took longer to mature in the presence of S. graminum than in the presence of conspecifics (F = 4.41; df = 2, 55; P = 0.043), whereas their development rate in the presence of R. padi was intermediate and not significantly different from the other treatments (Fig. 1). The developmental times of S. graminum and R. padi nymphs did not differ from those of conspecific pairs when they developed in any heterospecific combination (F = 2.84; df = 2, 56; P = 0.067) (F = 1.01; df = 2, 54; P = 0.372). Diuraphis noxia maturing in conspecific pairs had higher fecundity than those maturing in pairs with either S. graminum or R. padi (F = 5.78; df = 2, 55; P = 0.005), whereas the fecundity of S. graminum and R. padi maturing in heterospecific pairs did not differ from those maturing in conspecific pairs (F = 0.02; df = 2, 56; P = 0.977) (F = 1.31; df = 2, 54; P = 0.278), (Fig 2). We also used data from the first experiment to compare the developmental time and fecundity of conspecific aphid pairs that developed on the same plant tissue for at least five consecutive days with those that developed feeding on different plant tissues. Nymphs of D. noxia feeding consistently on the same plant tissue developed faster than those that developed on different tissues, whereas S. 
graminum pairs maturing on the same tissue had higher fecundity than those that fed and developed on different tissues (Table 1). All other comparisons of developmental time and fecundity were not significantly different between aphids that fed together versus separately. The survival of progeny produced by S. graminum foundresses maturing in conspecific pairs was not different from that of foundresses that matured in the presence of D. noxia, but was lower for progeny whose mothers had matured with R. padi (F = 5.03; df = 2, 56; P = 0.009), (Fig. 3). Progeny survival for D. noxia and R. padi maturing in conspecific pairs was not different from that observed in any heterospecific combination (F = 0.25; df = 2, 55; P = 0.780 and F = 0.25; df = 2, 54; P = 0.782, respectively). There was no difference among treatments in the frequency of position change on the plant for developing S. graminum nymphs (F = 1.59; df = 2, 57; P = 0.213), (Fig. 4). Diuraphis noxia changed position on the plant more often when developing in the presence of S. graminum than in conspecific pairs, but demonstrated an intermediate value in the presence of R. padi (F = 3.27; df = 2, 56; P = 0.045), (Fig. 4). Similarly, R. padi changed position more often in the presence of S. graminum than in the presence of D. noxia or a conspecific aphid (F = 3.43; df = 2, 58; P = 0.039), (Fig. 4). Developing S. graminum nymphs spent 80-85% of their time on the primary leaf whether they were developing with a conspecific (F = 52.68; df = 5, 114; P < 0.0001) with D. noxia (F = 72.95; df = 5, 114; P < 0.0001) or with R. padi (F = 39.20; df = 5, 114; P < 0.0001). The order of plant tissue preference for S. graminum was primary leaf > stem > secondary leaf = tertiary leaf = flag leaf. Nymphs of D. noxia were present on the tertiary leaf for 57-70% of observations, significantly more than on any other plant tissue whether developing as conspecific pairs (F = 15.92; df = 5, 114; P < 0.0001) with S. graminum (F = 19.00; df = 5, 114; P < 0.0001) or with R. padi (F = 26.42; df = 5, 108; P < 0.0001). The order of tissue preference for D. noxia in conspecific pairs was tertiary leaf > primary leaf = secondary leaf > stem = flag leaf, whereas with S. graminum it was tertiary leaf > primary leaf > secondary leaf = stem = flag leaf and with R. padi it was tertiary leaf > primary leaf = secondary leaf with primary leaf > stem = flag leaf and secondary leaf = stem = flag leaf. Rhopalosiphum padi nymphs spent 66-88% of their time on the stem, significantly more than on any other plant tissue whether developing in conspecific pairs (F = 276.80; df = 5, 114; P < 0.0001) with S. graminum (F = 24.68; df = 5, 114; P < 0.0001) or with D. noxia (F = 59.41; df = 5, 108; P < 0.0001). The order of tissue preference for R. padi was stem > primary leaf > secondary leaf = tertiary leaf = flag leaf. The two tissues on which a species spent more time in all three speciespair combinations (primary leaf and stem for S. graminum, tertiary and primary leaf for D. noxia, and stem and primary leaf for R. padi) did not differ among species combinations in any case (P > 0.05). The mean daily temperature was 22.32 ± 0.34º C during the course of the second experiment. Neither the fecundity of aphids transferred to experimental plants as fourth instars, nor the survival of their progeny, were significantly different among treatments for any aphid species, when aphids in conspecific and heterospecific pairs were compared (Table 2). 
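To make the derived measures defined in the Data Collection and Analysis section concrete, the sketch below computes them from a per-aphid series of daily observations. The record layout and variable names are hypothetical; only the arithmetic follows the text (eight-day fecundity tallies, halved for conspecific pairs, and time proportions and position-change frequencies expressed relative to developmental time).

```python
def fecundity(first_instars_per_day: list[int], shared_count: bool = False) -> float:
    """First instars produced over the first eight days of reproduction.
    For conspecific (control) pairs, where two foundresses reproduce on the same
    plant, the tally is halved to estimate individual fecundity, as in the text."""
    total = sum(first_instars_per_day[:8])
    return total / 2 if shared_count else total

def progeny_survival(alive_on_day_eight: int, total_first_instars: int) -> float:
    """Progeny alive on day eight divided by all first instars produced in eight days."""
    return alive_on_day_eight / total_first_instars

def proportion_of_time(daily_locations: list[str], developmental_time: int) -> dict[str, float]:
    """Days observed at each location divided by total developmental time."""
    return {loc: daily_locations.count(loc) / developmental_time
            for loc in set(daily_locations)}

def position_change_frequency(daily_locations: list[str], developmental_time: int) -> float:
    """Days on which the aphid was found on a different plant part than the day
    before, divided by total developmental time."""
    changes = sum(1 for prev, cur in zip(daily_locations, daily_locations[1:]) if cur != prev)
    return changes / developmental_time

# Made-up observations for one aphid that matured in six days:
locs = ["stem", "stem", "primary leaf", "primary leaf", "primary leaf", "stem"]
print(position_change_frequency(locs, 6))   # 2 changes / 6 days = 0.33...
print(proportion_of_time(locs, 6))          # {'stem': 0.5, 'primary leaf': 0.5}
```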
Discussion Changes in plant physiology due to feeding by these aphid species have already been described in wheat (Havlickova, 1986; Prado & Tjallingii, 1994; Telang et al., 1999; Sandström et al., 2000; Sandström & Moran, 2001). All three aphid species appeared to initiate distinct, species-specific, plant inductive processes that were systemic to varying degrees. Consequently, interactions among these aphid species appeared to be mediated by the host plant, although direct interactions among the aphids themselves cannot be ruled out. It is unlikely that the observed effects could have resulted from exploitation competition, as the pairs of aphids employed in these experiments would not be able to extract more than a small fraction of available nutrients from the phloem, nor effect any appreciable resource depletion in the plant. Although certain aphid species exhibit direct aggression towards other insects, usually predators, this is known only in social aphids with sterile soldier castes (Rhoden & Foster, 2002) and has not been reported for these cereal aphids to our knowledge. In these experiments, using only two aphids per plant, we were able to document amensalism by both S. graminum and R. padi on D. noxia, and by R. padi on S. graminum, with all other interactions essentially neutral (Price, 1997). Of the three species, S. graminum can be considered the most aggressive in exploiting its host plant; it has the highest reproductive rate and causes rapid deterioration of plant quality via chlorosis (Gellner et al., 1990; Qureshi & Michaud, 2005). The higher fecundity of S. graminum that developed in conspecific pairs on the same plant tissue for at least five consecutive days compared to those that fed on different tissues (Table 1) suggests a benefit of conspecific group-feeding on reproductive rate that is localized within infested tissues. Schizaphis graminum is also known to significantly reduce translocation from the immediate vicinity of its feeding site without altering the integrity of phloem elements (Burd, 2002). Although both D. noxia and S. graminum feeding causes leaf chlorosis in wheat, the symptoms are slower to develop with D. noxia than with S. graminum, and plants with uncontrolled colonies of D. noxia can survive considerably longer. In contrast, R. padi feeding symptoms appear much later than those of D. noxia or S. graminum, and uncontrolled colonies do not result in plant death, although plant productivity may be affected (Kieckhefer & Gellner, 1992). Thus S. graminum negatively impacted both development (Fig. 1) and reproduction (Fig. 2) of D. noxia, whereas R. padi negatively influenced only D. noxia reproduction and the nymphal survival of S. graminum. These results are consistent with plant induction by R. padi proceeding more slowly compared to S. graminum, with more delayed effects on co-infesting aphid species. The fact that D. noxia developmental time was not significantly extended in the presence of R. padi confirms that a negative influence of S. graminum, rather than a lack of conspecific pair feeding benefits, was responsible for the delayed development. Furthermore, the higher rate of intra-plant movement by both D. noxia and R. padi in the presence of S. graminum (Fig. 4) is indicative of a response by the former species to plant induction by the latter.
Diuraphis noxia causes specific changes in host plant architecture (leaf rolling) that result in the creation of a 'cryptic niche' (Burd et al., 1998), and the species has likely adopted sedentary feeding habits to capitalize on the benefits of feeding within this protected microhabitat (Telang et al., 1999; Qureshi & Michaud, 2005). The faster development of conspecific pairs of D. noxia nymphs that remained feeding on the same leaf for at least five consecutive days compared to those that fed on different leaves (Table 1) is evidence of a benefit of group-feeding that also appears localized within the plant tissue, although D. noxia appears to have little effect on vein loading or phloem translocation at its feeding site (Burd, 2002). Consequently, it is conceivable that part of the cost paid by D. noxia developing in tandem with S. graminum was attributable to its increased frequency of intra-plant movement, either through a reduction in total feeding time, or because its own plant inductive processes were not localized on one part of the plant. This cost was apparently not paid by R. padi, possibly because this species is adapted to frequent position changes on the plant, even in the absence of heterospecific aphids (Qureshi & Michaud, 2005). Similarly, the higher fecundity of S. graminum that developed in conspecific pairs on the same tissue for at least five consecutive days compared to those that fed on different tissues (Table 1) indicates a benefit of group-feeding on reproduction in this species. The presence of R. padi not only reduced D. noxia fecundity but also reduced the survival of second-generation S. graminum nymphs, both indications that feeding by the former species induces defensive host plant responses with amensal consequences for co-infesting aphid species. Amensal interactions have been previously demonstrated between R. padi and S. avenae (Gianoli, 2000) and R. padi and the wheat leafhopper, P. alienus (Alla et al., 2001). These effects may result from R. padi feeding triggering the localized release of defensive compounds in the wheat plant (Gianoli & Niemeyer, 1997) that it seeks to avoid by frequent position change on the plant. Developing S. graminum nymphs, being relatively sedentary feeders, may have suffered from plant responses to R. padi feeding by not responding to their induction with position change on the plant. Interestingly, R. padi was the only aphid species to avoid any measurable negative impact in both heterospecific combinations, and it even displayed a trend toward higher fecundity after maturing in the presence of D. noxia (Fig. 2), although the effect was not significant because of high within-group variances in reproduction. Thus R. padi, perhaps the most generalist feeder of the three species and the only host-alternating aphid (Leather & Lehti, 1982), appeared to be the least negatively impacted in heterospecific interactions. Although inter-specific interactions mediated by the host plant were clearly evident in this study, many of the host plant inductive processes seemed localized to some degree within plant parts. These aphid species differ significantly with respect to their preferred feeding locations on the plant (S. graminum: primary leaf, D. noxia: tertiary leaf, R. padi: stem), and these preferences were not altered by the presence of heterospecific aphids. Disparate feeding locations may reflect some degree of niche partitioning among these aphid species that share a range of host plants.
The fact that there were no significant differences in fecundity or progeny survival among aphids transferred into conspecific or heterospecific pairs as fourth instars (Table 2) suggests that a period of more than several days of co-infestation by pre-reproductive aphids is required before any heterospecific impact can be realized. Thus, interspecific interactions among these aphid species through altered plant physiology might not be an increasing, monotonic function of density as envisioned for classical interspecific competition (Faeth, 1992).
The Swedish Model of Public Outreach of Linguistics to secondary school Students through Olympiads What is it that we want to achieve with our Linguistic Olympiads and how do the contests vary in different countries? The Swedish Olympiad has been running for 7 years now and is primarily focused on public outreach: spreading linguistics to secondary school students. The contest involves not only a test but also lectures, school visits and teaching material. The effort is put into promoting interest in linguistics among students through fun material and good contact with teachers of languages. This presentation contains an overview of the Swedish version of Olympiads in Linguistics as well as some concrete examples of workshop material on linguistic problems for secondary school students. Introduction This paper presents an overview of the way the Olympiad in Linguistics is run in Sweden and how one can go about public outreach of linguistics to secondary school students. This paper and presentation intend to provide useful tips on how an olympiad of this kind can be organised and also to discuss how we in the linguistics community can spread linguistics to potential students and the non-academic world. The Swedish Olympiad in Linguistics ('Lingolympiaden') started in 2006 as a project within Young Scientists Stockholm. The winners of the Swedish contest have participated in the International Olympiad in Linguistics (IOL) since 2007, and in 2010 Sweden was host country for IOL. There are a couple of things that set the Swedish contest apart from most other olympiads of linguistics around the world: 1. the contest takes place at the involved universities themselves, not in classrooms in secondary schools; 2. the contest is a whole-day event with the contest in the morning and lectures in the afternoon; 3. there are fewer participants than in many other countries participating in IOL; 4. the contest has a focus on public outreach rather than finding the best competitors for IOL; 5. we engage in other public outreach activities such as school visits, lectures and group assignments for secondary school students; 6. the primary audience is secondary school students that study languages, humanities and social science rather than the natural sciences. The aim of Lingolympiaden is to spread linguistics to secondary school students in a fun and educational way. While there are many extracurricular activities for students of the natural sciences, such as the other olympiads, summer schools, exhibitions, etc., there are relatively few alternatives for students of social sciences. Lingolympiaden intends to fill that void, showing that science does not only consist of the natural sciences and that linguistics is a fun and exciting discipline. Lingolympiaden is run as a collaborative project between Young Scientists Stockholm (YSS) and the Linguistics departments at Stockholm University and Lund University. YSS is a youth volunteer organisation devoted to spreading the interest of science and technology to the youth of Sweden (youth <26 years old). Lingolympiaden is funded by the universities, the county council of Stockholm, YSS and The Royal Swedish Academy of Letters, History and Antiquities.
The different organisers' aims and contributions The university contributes rooms and staff for creating and coordinating the problem set as well as correcting it. The incentive for the universities to be involved in this is primarily to make secondary school students aware that linguistics exists and potentially to acquire more students. YSS coordinates the entire project, contacts schools, applies for funding, etc. YSS also makes school visits, talking about linguistics and problems from the olympiads with secondary school students. The goal of YSS is to encourage the youth of Sweden to be interested in science and technology and pursue college studies. YSS receives funding from the County Council of Stockholm for this. The Royal Swedish Academy of Letters, History and Antiquities funds Sweden's participation in the international contest. The academy is interested in promoting humanities and the social sciences to the youth, and as one of the very few enterprises that actually does this, Lingolympiaden receives their funding. As all of the involved parties in Lingolympiaden are interested in promoting linguistics to secondary school students rather than obtaining the highest scores in IOL, our contest is more geared towards sparking interest than finding the best problem solvers. Lingolympiaden is more focused on finding secondary school students that are interested in languages and making them interested in social science and linguistics rather than making students that are already interested in natural science and programming interested in languages. Target audience and contact Lingolympiaden aims to reach secondary school students that are interested in languages and linguistics and encourage them to become interested in studying social sciences, especially linguistics, at university. It has become apparent that students who are interested in the other olympiads and students that are very enthusiastic about natural sciences and programming seem to find the contest even if there has been little effort to reach them. It is much harder to reach secondary school students enrolled in programmes of social sciences and humanities, as they are generally less used to there being extracurricular activities for them to be involved in. That being said, Lingolympiaden is of course open to all students (including students of non-theoretical programmes), and students from natural science or programming backgrounds are more than welcome; it is just that efforts are put where they seem to be needed (and wanted) the most. Lingolympiaden would also like to reach more students who come from families with little higher education, but as these students are less likely to choose theoretical secondary school programmes and also less likely to have passionate and driven teachers, this has proved a very hard task indeed. The primary means of contact are teachers of languages at secondary schools. In 2011 the Swedish government initiated a new secondary school programme of humanities that includes a basic course in linguistics as well as more classes in modern languages. We can take no credit for this; however, Lingolympiaden has established a stable and good contact with the Swedish Language Teachers Association as well as with teachers involved in this new secondary school programme. This has been a very fruitful connection, not only in that the contest reaches the students but also because the teachers have benefited from this in their education.
Lingolympiaden also provides teaching materials in the form of slides for classes in general linguistics, an IPA tutorial and adapted versions of old problems from the contest. Teachers and students are also able to ask questions about linguistics over email, and Lingolympiaden gives advice on literature and sites (WALS, Ethnologue, Universals Archive, MultiTree, Omniglot, LangDoc, UNESCO Atlas of the World's Languages in Danger, The Linguistics Podcasts by LinguistChris, the Five Minute Linguist podcast, etc.) that are useful to the teachers in their education. In addition to this, Lingolympiaden visits secondary schools, giving lectures in linguistics and holding workshops on Olympiad problems for a nominal fee. These services are much appreciated by the teachers and students. Maintaining an active and good relation with the teachers is crucial in recruiting contestants to Lingolympiaden, as well as a perfect opportunity to pursue our first and foremost goal: to increase the interest in linguistics among the secondary school students. The problems of Lingolympiaden Lingolympiaden aims to include problems from computational linguistics, general linguistics (primarily grammatical typology), field linguistics, phonetics and psycholinguistics. The staff at the universities are encouraged to construct problems on their area of expertise. Since there are many Ph.D. students who work on language descriptions and linguistic typology at Swedish universities, there have been many problems on minor languages such as Kuot of Papua New Guinea and interesting grammatical phenomena such as hodiernal tense. The problem set of the Swedish contest has featured adapted and translated problems from other olympiads and other sources (Speculative Grammarian, for example), but as the staff at the universities improve with every year there are fewer and fewer of these external problems. The problem constructors are advised to take into consideration that the problems are to be solvable by secondary school students with no linguistic training, but at the same time the problems shouldn't be based on pure logic alone. This is a tricky balance, and it is impossible to make sure that no-one has an unfair advantage. The problem set for the Swedish olympiad is dependent on who within the university staff has the time to spend constructing and testing problems. While the intention is to cover all areas within linguistics, it is rare that staff from all sections are able to contribute. This is not a problem that is unique to the Swedish contest; more international collaboration will hopefully improve this situation. Translating Swedish problems into English, Russian, Bulgarian, Italian or German is very feasible. Adaptation of problems into teaching material - some examples When visiting a school it isn't always possible to use problems from the Swedish or International olympiads directly as they were constructed; many problems need to be modified to better suit the conditions of the workshop. When conducting workshops, the aim isn't to test the students but rather to discuss different approaches to the problem and encourage their interest. Here are some examples of the problems that have been used in workshops at secondary schools. Lingdoku by Trey Jones Lingdoku is an IPA version of the Japanese number game Sudoku.
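Before turning to how the puzzle is used, a minimal sketch may help make the underlying constraint concrete: in the 4 x 4 adaptation described in the next paragraph, each cell holds a consonant defined by its place and manner of articulation, and no place and no manner may repeat within any row, column or 2 x 2 box. The feature labels in this sketch are invented placeholders, not the actual IPA inventory used in the workshop.

```python
from itertools import product

def units(grid):
    """Yield every row, column and 2 x 2 box of a 4 x 4 grid of (place, manner) cells."""
    for r in range(4):
        yield [grid[r][c] for c in range(4)]                  # rows
    for c in range(4):
        yield [grid[r][c] for r in range(4)]                  # columns
    for br, bc in product((0, 2), repeat=2):                  # the four 2 x 2 boxes
        yield [grid[br + dr][bc + dc] for dr, dc in product((0, 1), repeat=2)]

def consistent(grid):
    """True if no place and no manner is repeated within any row, column or box."""
    for unit in units(grid):
        filled = [cell for cell in unit if cell is not None]
        places = [place for place, _ in filled]
        manners = [manner for _, manner in filled]
        if len(set(places)) < len(places) or len(set(manners)) < len(manners):
            return False
    return True

# A partially filled grid with placeholder feature labels:
grid = [[None] * 4 for _ in range(4)]
grid[0][0] = ("bilabial", "plosive")
grid[0][1] = ("alveolar", "nasal")
grid[1][0] = ("bilabial", "nasal")   # repeats 'bilabial' in column 0 and in the top-left box
print(consistent(grid))              # False
```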
Lingdoku is a very useful type of problem since most students are already familiar with the premises of Sudoku; it gives them a nice introduction to the variables necessary to define consonants, and it is great for illustrating different approaches to problem solving. Since students enrolled in the new humanities programme are required to learn IPA, this problem is also much appreciated by teachers. Lingdoku focuses only on pulmonic consonants and two variables, manner and place. The adaptation used in Lingolympiaden's workshops has 4 x 4 different symbols to place within the grid without repeating the same manner or place in the four 2 x 2 squares or any row or column. The Lingdoku table will have some values filled in and, by regulating these, one can change the time necessary to solve the problem and also change the focus on what part of the problem solving we want to emphasise. Drehu & Cemuhî by Ksenija Giljarova This problem comes from IOL-6 (2008 in Slanchev Bryag). It has proven to be a highly useful and appreciated problem to run with students. It is quite illustrative of the kind of problems one might encounter, and it is not solvable by only applying straightforward logic. The problem consists of two sets of words in two related languages with translations into Swedish in a scrambled order. There are also a few morphemes that are given as related in the two languages but with no Swedish translation. The method of solving it involves applying several ways of comparing the languages: the students have to be able to identify that the words have developed differently in the two languages and that concepts like horizon, border, wall and beach share a certain semantic component ('boundary'). It serves as inspiration and proof to students that their knowledge, experience and general feel for language and linguistics is directly applicable to solving concrete linguistic problems. PSG Web Laboratory by Torbjörn Lager The Linguistics department at Gothenburg University has developed a simple online tool for creating context-free phrase structure grammars and testing them out. It was developed as an aid in a course on generative grammar and CFGs, but it has quickly spread to other departments. It is accessible through any web browser, no passwords necessary. The tool is very useful when one wants to illustrate the concepts of competence and performance as conceived by Chomsky, how syntactic trees can be applied and, if time permits, what CFGs are. This exercise does require a bit more effort as many students are unfamiliar with formal languages, but it is quite easy to make it fun since the freedom in defining the lexicon is limitless, something that has proven to be quite amusing. Arongo, Arongo - why have you forsaken me? Patrick Littell constructed a problem about the language on the island of Manam for NACLO in 2008. This problem deals with different ways of expressing location in space; it turns out that the language of the island of Manam has a rather unusual system. The students are given a map of the island and some sentences in the language describing the location of certain houses on the map. This problem has been adapted into a small role-playing game for the Lingolympiaden workshops. The students are playing a group of field linguists who arrive at the island to study the indigenous language. There is only one person, Arongo, on the island with whom they have a common language (Tok Pisin).
Arongo was supposed to meet them when they arrived, but since they were late due to a storm he has returned to his home. They have to locate him by deciphering what the natives are saying to them, i.e. the sentences from the original problem. They do not receive all the sentences at once; they get them in small batches along with other clues (in step 2 they learn that there is a volcano). This gives them a smaller amount of text to process at the same time, which makes it look less overwhelming. They also get to share their ideas and conclusions with each other and test their hypotheses on new data, thus learning how to approach a complex problem. If there is plenty of time, there is also the possibility of discussing other problems that field linguists might encounter. Conclusions The aim of the organisers and the nature of the secondary school system determine the shape of the Olympiad of Linguistics in a specific country. In the case of Sweden, the olympiad is primarily focused on reaching secondary school students with the message that linguistics is fun and they should pursue it further. An Olympiad more geared towards making students of natural sciences and programming interested in languages might look different (but not necessarily). Lingolympiaden makes use of the expertise of the staff at the universities not only in problem construction but also for lectures and teaching material; the universities use Lingolympiaden as a way of reaching potential students. YSS, the County Council of Stockholm and the Royal Academy of Letters support Lingolympiaden's efforts to encourage the youth of Sweden to be interested in social sciences and humanities in general and linguistics in particular. Teachers of modern languages and the new humanities programme use Lingolympiaden to improve their classes and provide extra-curricular activities for their students. And lastly: the students of secondary schools in Sweden use Lingolympiaden to try out what linguistics is all about and whether it is fun. It is their interest and their curiosity that is at the core of Lingolympiaden and other projects by YSS. Future As the contest grows, there will hopefully be more opportunities to communicate with students and teachers of secondary schools and become more involved in their education. Lingolympiaden is a small contest aimed at providing an all-day outing at the university for secondary school students, but if there is interest there might be local contests at the schools themselves as well. The Swedish government is currently considering whether or not to support Lingolympiaden in the same way that they support the Swedish Olympiad of Informatics, the Lego Robot contest, the Chemistry Olympiad, etc. If they do, that will be a great leap towards acknowledgement of the value of social and human sciences alongside the natural sciences. Linguistics is an interdisciplinary scientific field that covers classical humanities, social and human sciences, natural sciences like neurology and biology, and computational disciplines. There is no reason why an Olympiad of Linguistics should be excluded from support of Olympiads of Science.
First Cultivation and Characterization of Mycobacterium ulcerans from the Environment Background Mycobacterium ulcerans disease, or Buruli ulcer (BU), is an indolent, necrotizing infection of skin, subcutaneous tissue and, occasionally, bones. It is the third most common human mycobacteriosis worldwide, after tuberculosis and leprosy. There is evidence that M. ulcerans is an environmental pathogen transmitted to humans from aquatic niches; however, well-characterized pure cultures of M. ulcerans from the environment have never been reported. Here we present details of the isolation and characterization of an M. ulcerans strain (00-1441) obtained from an aquatic Hemiptera (common name Water Strider, Gerris sp.) from Benin. Methodology/Principal Findings One culture from a homogenate of a Gerris sp. in BACTEC became positive for IS2404, an insertion sequence with more than 200 copies in M. ulcerans. A pure culture of M. ulcerans 00-1441 was obtained on Löwenstein-Jensen medium after inoculation of BACTEC culture in mouse footpads followed by two other mouse footpad passages. The phenotypic characteristics of 00-1441 were identical to those of African M. ulcerans, including production of mycolactone A/B. The nucleotide sequence of the 5′ end of 16S rRNA gene of 00-1441 was 100% identical to M. ulcerans and M. marinum, and the sequence of the 3′ end was identical to that of the African type except for a single nucleotide substitution at position 1317. This mutation in M. ulcerans was recently discovered in BU patients living in the same geographic area. Various genotyping methods confirmed that strain 00-1441 has a profile identical to that of the predominant African type. Strain 00-1441 produced severe progressive infection and disease in mouse footpads with involvement of bone. Conclusion Strain 00-1441 represents the first genetically and phenotypically identified strain of M. ulcerans isolated in pure culture from the environment. This isolation supports the concept that the agent of BU is a human pathogen with an environmental niche. Introduction Buruli ulcer (BU), the third most common mycobacteriosis in humans after tuberculosis and leprosy is an indolent, necrotizing disease of skin, subcutaneous tissue and, occasionally, bones [1]. BU has emerged in recent times as an increasingly important cause of morbidity around the world, and has been reported in 30 countries, mostly in tropical areas [2]. This disease is caused by Mycobacterium ulcerans which is peculiar among pathogenic mycobacteria because it produces a potent necrotizing exotoxin, mycolactone, which is a major virulence factor [3]. Although incompletely understood, the epidemiology of BU strongly associates the disease with wetlands and especially slowflowing or stagnant water [4][5][6]. Indeed, there is indirect evidence that M. ulcerans is an environmental pathogen transmitted to humans from its aquatic niches; however, modes of transmission are unclear [7]. The initial hypothesis that predatory aquatic insects, including Naucoridae and Belostomatidae, were involved in transmission [8] was later reinforced by reports that the salivary glands of Naucoris were colonized by M. ulcerans when fed on infected grubs, and that bites of infected Naucoris transmitted the pathogen to mice [9]. 
The observation that non-infected humans exposed to aquatic environments in BU endemic areas have higher titers of antibodies to salivary proteins of Naucoridae and Belostomatidae than BU patients in the same areas [10] shows that these water bugs bite humans in natural settings. However, Naucoridae and Belostomatidae are carnivorous insects that normally feed on other aquatic insects, snails, and small fish and only bite humans incidentally [11]. Thus, the significance of biting by M. ulcerans-colonized aquatic insects in the transmission of BU to humans is unknown, and other forms of transmission, including skin trauma, have been considered [12]. Since the discovery of IS2404 [13], an insertion sequence with more than 200 copies in M. ulcerans [14], multiple studies have detected IS2404 in environmental aquatic samples, indicating that M. ulcerans is probably present in such samples, and supporting the concept that the etiologic agent of BU is an environmental pathogen. IS2404 was found in samples of water and detritus from swamps in Australia [15,16,17], and in aquatic plants [18], insects (Belostomatidae, Naucoridae, Hydrophilidae), crustaceans and mollusks (Bulinus sp. and Planorbis sp.), and fish (including Tilapia sp.) in western tropical Africa [8,9,18,19,20]. More recently IS2404 was detected in mosquitos in Australia [21]. However, the recent discovery of IS2404 in aquatic mycobacteria other than M. ulcerans requires re-evaluation of the use of IS2404 PCR for the detection of M. ulcerans DNA in the environment [22,23] and emphasizes the importance of the isolation of M. ulcerans from environmental sources. Numerous extensive studies have failed to isolate M. ulcerans in pure culture from the environment, even in highly endemic areas of BU, e.g. in Uganda [24], the Democratic Republic of Congo [4,5,25] and West Africa [19]. Two cultures from salivary glands of wild aquatic insects (Naucoridae) collected in BU endemic areas of Côte d'Ivoire were positive for IS2404 and were considered to be related to M. ulcerans; however, no phenotypic characteristics of these isolates were reported other than their virulence for mice [9]. In 2004, Marsollier et al. obtained IS2404 PCR positive cultures from two samples of aquatic plants collected in a BU endemic area of Côte d'Ivoire [18]. One IS2404 positive BACTEC culture inoculated into mice revealed infection compatible with M. ulcerans. The culture was, however, contaminated by Mycobacterium szulgai and M. ulcerans could not be obtained in pure culture even after passages through mice. As briefly reported previously [26], a pure culture of M. ulcerans (isolate 00-1441) was obtained from an aquatic insect from Benin. In that report no description was given of the methods employed for the isolation of M. ulcerans 00-1441 and of the phenotypic and genetic characteristics of the isolate. Here we present the detailed results of the isolation and characterization of strain 00-1441, establishing that this mycobacterium is an African type of M. ulcerans with high virulence for mice. Strain 00-1441 represents the first well characterized M. ulcerans strain isolated in pure culture from an environmental source. Environmental specimens Collection and in vitro culture. Four aquatic specimens collected in a BU endemic area of Benin and one in Togo that were part of a larger study on the frequency of mycobacteria in the environment (Portaels et al., manuscript in preparation) were cultured in vitro and inoculated into mouse footpads. 
The specimens were transported from the field to the laboratory in sterile tubes at 4°C and kept frozen until they were identified by an entomologist. Types of specimens collected and their origins are indicated in Table 1. The specimens were thoroughly diced with sterile disposable scalpels in a sterile mortar. They were further homogenized with a sterile pestle and suspended in 2 ml phosphate buffered saline (PBS) (Oxoid, Hampshire, England; pH = 7.3 ± 0.2). The mortars and pestles were used only for environmental specimens. Decontamination was performed by treatment of the suspensions for 15 minutes with an equal volume of aqueous 2.0% sodium hydroxide (NaOH) containing sodium citrate (1.45% final concentration) and N-acetyl-l-cysteine (0.5% final concentration). The suspensions were centrifuged for 20 minutes at 3,800 g and fractions of the resulting sediments were used for Ziehl-Neelsen staining (ZN), culture and PCR. Primary cultures were performed at 32°C by inoculating the sediment on Löwenstein-Jensen (LJ) solid medium and in BACTEC 12B broth (Becton Dickinson Microbiology Systems, Sparks, MD, USA) supplemented with PANTA and 1.25% egg yolk [27]. Growth index (GI) was measured weekly for 3 months with a BACTEC 460 TB instrument (Becton Dickinson, Sparks, MD, USA). When the GI reached 999, ZN was performed, and BACTEC cultures were tested by IS2404 PCR [28]. Inoculated LJ tubes were incubated for 12 months and observed every 2 weeks. PCR on decontaminated specimens and BACTEC cultures. DNA was extracted from decontaminated specimens and BACTEC positive cultures as previously described with minor modifications [29]. Briefly, 250 µl suspensions were treated with equal volumes of lysis buffer L6 (5 M GuSCN, 50 mM Tris, pH 6.4, 22 mM EDTA, 2% Triton X-100) followed by 50 µl of proteinase K (20 mg/ml). This mixture was incubated overnight at 60°C with gentle shaking. To capture DNA, 40 µl of diatomaceous earth stock solution (10 g diatomaceous earth obtained from Sigma Aldrich Chemie GmbH, Steinheim, Germany, in 50 ml of H2O containing 500 µl of 37% (wt/vol) HCl) was added to the suspension and placed in a shaker incubator at 37°C for 2 hours to avoid sedimentation of the diatomaceous earth. Author Summary Mycobacterium ulcerans infection, or Buruli ulcer, is the third most common mycobacteriosis of humans worldwide, after tuberculosis and leprosy. Buruli ulcer is a neglected, devastating, necrotizing disease, sometimes producing massive, disfiguring ulcers, with huge social impact. Buruli ulcer occurs predominantly in impoverished, humid, tropical, rural areas of Africa, where the incidence has been increasing, surpassing tuberculosis and leprosy in some regions. Besides being a disease of the poor, Buruli ulcer is a poverty-promoting chronic infectious disease. There is strong evidence that M. ulcerans is not transmitted person to person but is an environmental pathogen transmitted to humans from its aquatic niches. However, until now M. ulcerans has not been isolated in pure culture from environmental sources. This manuscript describes the first isolation, to our knowledge, of M. ulcerans in pure culture from an environmental source. This strain, which is highly virulent for mice, has microbiological features typical of African strains of M. ulcerans and was isolated from an aquatic insect from a Buruli ulcer-endemic area in Benin, West Africa. Our findings support the concept that M.
ulcerans is a pathogen of humans with an aquatic environmental niche and will have positive consequences for the control of this neglected and socially important tropical disease. The pellets were washed with 900 µl of L2 buffer (5 M GuSCN, 50 mM Tris, pH 6.4) [29] followed by 900 µl of 70% ethanol and 900 µl of acetone. The pellet was then dried at 55°C and resuspended in 90 µl TE (10 mM Tris, 1 mM EDTA, pH 8). Tubes were centrifuged and 50 µl of the DNA extract transferred to a new tube. The following procedures were employed: For the first PCR run, 5 µl of the DNA extract was added to 45 µl PCR reaction mixture containing 20 pmol of each primer (PgP1 and PgP2), 1 U of Taq DNA polymerase (Roche Molecular Systems, Brussels, Belgium), 200 µM concentrations of each deoxyribonucleotide triphosphate, 1.5 mM MgCl2, 0.1% Triton X-100, and 10 mM Tris-HCl (pH 8.4) and overlaid with mineral oil. Cycling was as follows: denaturation at 94°C for 5 min; amplification for 40 cycles at 94°C for 45 sec, 64°C for 45 sec and 72°C for 45 sec; and a final extension at 72°C for 10 min. For the second PCR run, 0.25 µl of the first-run product was amplified in a 25 µl reaction mixture with primers PgP3 and PgP4. Cycling conditions were similar to the first run except that amplification was reduced to 25 cycles. Amplified DNA (7 µl) was then submitted to electrophoresis in 2% agarose gel and detected by ethidium bromide staining and UV transillumination. Mouse footpad inoculation. The three IS2404 PCR positive BACTEC cultures (Table 1) were inoculated (0.03 ml) into the left hind footpad of three 8-week-old female mice (strain NMRI). Mice were sacrificed after 6, 9 or 12 months (Table 2). Entire feet were placed into 10% formalin for histopathological analysis, or were used for microbiological analysis as described previously [30]. For specimens 97-1455 and 98-443, mouse footpad suspensions (harvested after 9 months) were also passaged (P1) into 3 other mice. A second passage (P2) was done from P1 after 12 months. No passage was performed for specimen 98-447 because only one mouse survived for 9 months and the entire foot of this mouse was used for histopathologic analysis. Histopathologic analyses. Histopathologic analyses of mice feet were made at the indicated times after inoculation of positive BACTEC vials and, in some cases, after passage of inoculated footpads to other mice. The entire feet were intact at this stage. Specimens were decalcified with a solution containing 4% concentrated hydrochloric acid and 4% concentrated formic acid in distilled water or with a commercial solution (Thermo, Shandon TBD-1 Rapid Decalcifier, Chesire, UK) for a period of 45 min. Multiple longitudinal sections 1.5-2 mm thick were cut, processed routinely and sectioned at 4 µm. Sections were stained by hematoxylin-eosin, ZN, Grocott's methenamine-silver and Brown-Hopps Gram methods [31]. Identification of the mycobacteria. Mycobacteria cultivated directly from the aquatic specimens and the M. ulcerans isolate 00-1441 cultivated from a mouse footpad (see Table 2) were identified as described previously [32]. Analysis of isolate 00-1441 16S rRNA gene sequencing and phylogenetic analysis. The isolate 00-1441 was also identified by partial analysis of the 3′ end region as described previously [33] as well as the 5′ end region of the 16S rRNA gene by Eurogentec (Liège, Belgium) with an automated nucleic acid sequencer (Applied Biosystems, Foster City, CA, USA). Mycolactone extraction and analysis.
Acid-soluble lipids containing mycolactones were extracted from M. ulcerans strains 1615 and 00-1441 with chloroform-methanol 2:1 followed by back extraction with ice-cold acetone as previously described [34]. Lipids were resolved by silica thin-layer chromatography using a chloroform-methanol-water (90:10:1) solvent system and visualized by charring with anisaldehyde in 10% sulfuric acid. Mass spectrometric (MS) analysis of the mycolactone extracts was performed as previously described [34]. Ten milligrams of the dried acetone soluble lipid (ASL) extract were re-suspended in 1 milliliter of methanol and subsequently filtered prior to MS analysis. Using a Cole Palmer 74900 series syringe pump, the methanolic extract was perfused into an ion trap ESI Bruker-Esquire mass spectrometer at a dry temperature of 300°C, gas flow of 5 liters/min and nebulizer pressure of 15 lb/in². Mouse footpad inoculation. The present study was conducted under the guidelines and approval of the Research Ethics Committee of the Life and Health Sciences Research Institute (Braga, Portugal). Isolate 00-1441 was grown on LJ medium at 32°C for approximately 2 months, recovered from LJ slants, diluted in PBS to a final mass concentration of 1 mg/ml, and vortexed vigorously using 2-mm glass beads. In all the experiments, the number of acid-fast bacilli (AFB) in each inoculum was determined by the method of Shepard and McRae [35], using ZN staining (Merck, Darmstadt, Germany). The suspensions revealed more than 90% viable bacilli as assessed with the LIVE/DEAD BacLight Kit (Molecular Probes, Leiden, The Netherlands). Eight-week-old female BALB/c or NMRI mice were obtained from Charles River Laboratories (Barcelona, Spain) and were housed in specific pathogen-free conditions with food and water ad libitum. Both strains of mice were infected in the left hind footpad with 0.03 ml of a suspension containing 5.4 log10 AFB of M. ulcerans 00-1441. As an index of lesion development, footpad swelling was measured over time with a caliper. Bacterial proliferation was evaluated in footpad homogenates of infected mice at selected time points, as previously described [30,36]. Histological analysis of the feet was carried out as described above. Infection of murine bone marrow-derived macrophages. Bone marrow-derived macrophages (BMDM) from BALB/c mice were used as mouse primary macrophages and were prepared as previously described [36]. Briefly, both femurs were removed under aseptic conditions. Bones were flushed with cold Hank's Balanced Salt Solution (HBSS, Gibco, Paisley, UK). The resulting cell suspension was centrifuged and resuspended in Dulbecco's Modified Eagle's Medium (DMEM, Gibco) supplemented with 10 mM HEPES (Sigma, St. Louis, MO), 1 mM sodium pyruvate (Gibco), 10 mM glutamine (Gibco), 10% of heat-inactivated fetal bovine serum (Sigma) and 10% L929 cell-conditioned medium (Complete DMEM [cDMEM]). To remove fibroblasts or differentiated macrophages, the cells were cultured at 37°C in a 5% CO2 atmosphere for a period of four hours in cell culture dishes (Nunc, Naperville, IL) with cDMEM. The nonadherent cells were collected with warm HBSS, centrifuged and distributed in 24-well plates at a density of 5 × 10⁵ cells/well and incubated at 37°C in a 5% CO2 atmosphere. L929 cell-conditioned medium was added 4 days after seeding and medium was renewed on the seventh day. After 10 days in culture, cells were completely differentiated into macrophages.
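To make the dosing figures above concrete, the short sketch below converts the reported log10 AFB inoculum into absolute numbers and computes the bacilli required for a macrophage infection at a chosen multiplicity of infection (the assay described in the next paragraph used an MOI of 1:1). The helper functions are only illustrative; the numbers are those given in the text.

```python
def afb_from_log10(log10_afb: float) -> float:
    """Convert a log10 acid-fast bacillus (AFB) count into an absolute count."""
    return 10 ** log10_afb

def afb_per_ml(log10_afb_per_inoculum: float, inoculum_volume_ml: float) -> float:
    """Concentration of the inoculation suspension implied by the dose per inoculum."""
    return afb_from_log10(log10_afb_per_inoculum) / inoculum_volume_ml

def bacilli_for_moi(cells_per_well: float, moi: float) -> float:
    """Bacilli to add per well for a target multiplicity of infection (bacteria per cell)."""
    return cells_per_well * moi

# Figures from the text: 0.03 ml containing 5.4 log10 AFB per footpad,
# and 5 x 10^5 macrophages per well.
print(f"{afb_from_log10(5.4):.2e} AFB per footpad inoculum")    # ~2.5e+05
print(f"{afb_per_ml(5.4, 0.03):.2e} AFB per ml of suspension")  # ~8.4e+06
print(f"{bacilli_for_moi(5e5, 1.0):.1e} bacilli per well at MOI 1:1")
```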
Twelve hours before infection, macrophages were incubated at 32°C in a 5% CO2 atmosphere and maintained at 32°C until the end of the experimental infection. For macrophage infectivity assays, bacterial suspensions were prepared as described above and further diluted in cDMEM to obtain the selected multiplicity of infection (MOI) of 1:1 (bacteria/macrophage ratio). Cells were incubated for 4 hours at 32°C in a 5% CO2 atmosphere and then washed with warm HBSS to remove non-internalized bacteria and re-incubated in cDMEM for eight days. Genotyping methods. Genotyping methods developed to analyze the diversity among M. ulcerans and M. marinum strains from different geographical areas were applied to isolate 00-1441. IS2404 restriction fragment length polymorphism (RFLP) and PCR restriction profile analysis (PRPA) were performed as described previously [37,38]. Environmental specimens Results of ZN staining, culture and PCR studies for the 5 aquatic specimens are shown in Table 1. Table 2 shows the results of the mouse footpad inoculation with the BACTEC suspensions (98-447, 97-1455 and 98-443) that were positive by IS2404 PCR after inoculation with the aquatic specimens. Specimen 98-447: Histopathologic analysis of one mouse sacrificed after 9 months revealed a few well-formed granulomas with minimal necrosis around blood vessels, nerves and in muscle. There were large numbers of beaded AFB in the granulomas. Specimen 97-1455: Of the three mice inoculated with this BACTEC culture (P1 in Table 2), two were sacrificed 9 months after inoculation. The histopathologic analysis of the footpad of one mouse showed marked necrosis with a mild granulomatous response, inflammation of periosteum and many large clumps of AFB in necrotic areas. The footpad homogenate of the third mouse was positive for AFB and the culture on LJ was positive for mycobacteria (isolate 99-2832) but was lost due to contamination by non-acid-fast bacteria; however, PCR performed on the contaminated culture was positive for IS2404. This mouse footpad was inoculated in vitro and passaged twice into two groups of three mouse footpads. The second (P2 in Table 2) and third passages (P3 in Table 2) were negative for AFB by ZN staining and by culture. Specimen 98-443: This homogenate of a Gerris sp. aquatic insect ultimately produced the M. ulcerans isolate 00-1441 after culture in BACTEC (positive for IS2404) inoculated in mouse footpads (P1) and followed by two other mouse footpad passages (P2 and P3). Following the first mouse inoculation (P1), one animal died after 1 month and the other two were sacrificed 9 months after inoculation. Histopathologic evaluation of the footpad of one of these mice showed granulomatous changes with minimal necrosis around blood vessels and nerves. There were large numbers of scattered, short, beaded AFB in the granuloma. ZN stain and culture were negative for the footpad of the third mouse inoculated in vitro. The suspension obtained from the third mouse was used to reinoculate 3 other mice (P2). One P2 mouse died after 6 months and the other two were sacrificed 12 months later. The histopathologic study of one footpad showed minimal nonspecific inflammation. The other mouse footpad was negative for AFB and by culture and was used for a third passage (P3) into 3 mice. Two of the P3 mice were sacrificed after 6 months. One footpad used for histopathologic study showed nonspecific inflammation.
The other footpad was ZN-negative but gave a positive culture on LJ (5 colonies) after 2 months of incubation at 32°C. The isolate (00-1441) was further analyzed and identified as M. ulcerans (see below). The remaining P3 mouse, sacrificed after one year, did not reveal any histopathologic changes. Characterization of isolate 00-1441 Phenotypic characterization. The phenotypic characteristics of M. ulcerans isolate 00-1441 and those of some M. ulcerans geographic subgroups are given in Table 3. Isolate 00-1441 has the same phenotypic characteristics as M. ulcerans strains belonging to the African subgroup [33], i.e., it was scotochromogenic, did not grow on LJ containing 250 mg/ml hydroxylamine and was acid phosphatase positive. 16S rRNA gene sequencing and phylogenetic analysis. The 5′ end of the 16S rDNA sequence was 100% identical to that of M. ulcerans [41]. Sequencing results of the 3′ end of 16S rDNA for 00-1441 and the different M. ulcerans types are given in Table 3. Isolate 00-1441 was characterized by a G at position 1248 shared by M. ulcerans type 1, type 2, type 3, and Mycobacterium shinshuense, a C at position 1289 typical of M. ulcerans type 1 and type 2, and CTTT at positions 1450-1452 unique to M. ulcerans type 1 [33]. However, a point mutation was found at position 1317, with a T instead of a C typical for all M. ulcerans types and Mycobacterium marinum. Mycolactone analysis. Thin-layer chromatography of ASLs showed that isolate 00-1441 produces mycolactone A/B as a major lipid species (data not shown). Mass spectroscopy provided definitive evidence for mycolactone production as evidenced by sodium adducts at 765.9, representing the intact mycolactone molecule, and the hydrolysis product showing the core ion at 447.3 (Fig. 1). Enrichment for the intact ion using ion trap for positive ions between m/z 755-775 revealed a major peak consistent with intact mycolactone A/B (Fig. 2). The fragmentation pattern of this species following MS-MS demonstrated the presence of the characteristic mycolactone fragmentation pattern with core mycolactone at 429.4 and fatty acid side-chain at 359.3 (Fig. 3). Infection of murine bone marrow macrophages As previously described for virulent M. ulcerans strains [36], isolate 00-1441 showed cytotoxic activity against BMDM infected at an MOI of 1:1, as deduced at day 4 post-inoculation from the occurrence of mycolactone-associated cytopathic signs [42], namely cell rounding, shrinkage and detachment of the macrophages (Fig. 4). Mouse footpad inoculation. Footpads of three NMRI mice showed swelling beginning 7 days after inoculation of 00-1441. Isolate 00-1441 was also virulent for BALB/c mice. The proliferation of bacilli, assessed by AFB counts, and the level of pathologic changes, evaluated by footpad swelling and emergence of ulceration, were monitored throughout the experimental period of infection of BALB/c mice footpads. As shown in Fig. 2, swelling became apparent during the second week of infection; ulceration was observed after the fourth week post-inoculation. AFB counts in footpad homogenates increased significantly from 5.15 log10 to 7.50 log10 between days 0 and 27 post-inoculation (Fig. 5). Observation of serial footpad sections of BALB/c mice showed an acute neutrophilic inflammatory response in the subcutaneous tissue early after infection with 00-1441 (data not shown). In the course of the second week of infection, evidence of dermal edema was found (Fig.
(Fig. 6A) along with a mixed inflammatory infiltrate containing mononuclear cells and neutrophils (Fig. 6A and B), surrounding the necrotic center of the lesion (Fig. 6B). By weeks 3 to 4, the necrotic focus expanded progressively, invading healthy tissue, and numerous AFB were observed in areas co-localizing with leukocytes, predominantly of the mononuclear type (Fig. 6C). In more advanced stages of infection, and concurring with ulceration, the subcutaneous tissue of BALB/c and NMRI mice revealed extensive necrotic acellular areas with clumps of free bacilli (Fig. 6D). An important histopathological finding regarding footpad infection by strain 00-1441 in NMRI and BALB/c mice was the presence of AFB in the bones (Fig. 7A) and bone marrow (Fig. 7B) of the feet. Extensive destruction of the bone was observed with erosion of the cortex and replacement of marrow by inflammatory cells (Fig. 7C and D). These results indicate that M. ulcerans strain 00-1441 is highly virulent for mice.

Genotyping results. The IS2404 RFLP banding pattern of 00-1441 was identical to that of African M. ulcerans isolates [37]. Using PRPA, 00-1441 also yielded a profile similar to that of African M. ulcerans isolates [38]. Comparison of the MIRU-VNTR profile based on 6 loci showed that 00-1441 has a typical Atlantic African genotype [40].

Discussion
The prevailing concept that BU is associated with wetlands, especially slow-flowing or stagnant water, implies that M. ulcerans is an environmental pathogen transmitted to humans from particular aquatic niches. Historically, the presence of M. ulcerans in aquatic samples, including water, mud, aquatic plants, aquatic insects, aquatic mollusks, crustacea and small fish, has been inferred from the detection by PCR of the insertion sequence IS2404, highly represented in the genome of M. ulcerans [14]. All previous attempts to isolate fully characterized M. ulcerans from environmental samples, however, have failed, and recent evidence [22] indicates that IS2404 positivity alone is inadequate to establish the presence of M. ulcerans in environmental samples.

M. ulcerans 00-1441, isolated from a Hemiptera (Water Strider, Gerris sp.) collected from a swamp in a BU endemic region (Zagnanado, Benin), represents the first fully characterized culture of the agent of BU from an environmental source. Isolate 00-1441 was identified as M. ulcerans by the following criteria: [36] and humans [43]. Additionally, 00-1441 had been previously found to have a mycolate profile pattern similar to that of M. ulcerans, with three types of mycolates, α-, methoxy-, and ketomycolates [26]. Moreover, 00-1441 and the predominant African type share identical profiles for IS2404-Mtb2 PCR [44] and microsatellite VNTR analysis [45]. Based on nucleotide substitutions at the 3′ end of the 16S rRNA gene [33], isolate 00-1441 is an M. ulcerans type 1 strain (an African type). The mutation found at position 1317 (a T instead of a C) has not been found previously. Indeed, the 3′ end of the 16S rRNA gene of all M. ulcerans strains analyzed in 1996 [33] and of all other mycobacterial species has a C at position 1317 [Blast search on the nucleotide collection (nr/nt) database (NCBI) using the nucleotide sequence of the 3′ end 16S rRNA gene (nt 1244-1461) of M. ulcerans type 1]. In a recent study on 75
M. ulcerans isolates from 17 different countries including 10 African countries (Angola, Benin, Cameroon, Congo-Brazzaville, Côte d'Ivoire, Democratic Republic of Congo, Ghana, Nigeria, Togo and Uganda), a few isolates from patients originating from the Zou and Ouémé valleys in Benin presented a T instead of a C at position 1317 (Portaels et al., in preparation). Interestingly, strain 00-1441 was isolated from the region (Zou Department) where some of these patients lived.

The aquatic specimens analyzed in the present study likely contained very few mycobacteria, since direct smear examination after decontamination was negative for all specimens and primary cultures positive for mycobacteria other than M. ulcerans (Table 1) produced only 1 to 3 colonies. Moreover, despite the very high sensitivity of the IS2404 PCR [14], detection of IS2404 in the decontaminated specimens was negative, indicating that fewer than 10 mycobacterial cells were present in each suspension [28]. Culture in BACTEC allowed multiplication of the rare mycobacteria present in the inocula, since three of the five BACTEC-positive cultures were positive by IS2404 PCR. Our previous attempts to detect M. ulcerans in more than 1000 environmental specimens by culture have revealed numerous environmental mycobacteria belonging to species frequently cultivated from the environment [5]. However, other than the results of Marsollier et al. [9,18] and the present study, all attempts to culture M. ulcerans from the environment have failed.

As discussed elsewhere [19], there are several possible explanations for the difficulty in culturing M. ulcerans from environmental specimens, namely: (i) These specimens are heavily contaminated with other microorganisms that overgrow M. ulcerans in culture [5,13,18,24]. This is primarily because the generation time of M. ulcerans is longer than that of most other slow-growing mycobacteria that are abundant in the environment [18,19]. In the present study, successive passages in mice of BACTEC cultures may have eliminated mouse-avirulent environmental mycobacteria [30] co-existing in the specimen, allowing multiplication of M. ulcerans. (ii) All decontamination methods currently available for the isolation of M. ulcerans from contaminated environmental specimens have a detrimental impact on the viability of this pathogen [27]. (iii) Since M. ulcerans is sensitive to elevated temperatures [46,47], temperature during transportation of environmental specimens to the laboratory is critical, particularly in tropical areas where ambient temperatures often exceed 32°C. (iv) As is the case in the present work, environmental specimens used in attempts to isolate M. ulcerans may contain very few bacilli. (v) Additionally, in the environment M. ulcerans may be living in a viable but nonculturable (VBNC) state. This state may represent a survival adaptation to overcome adverse conditions, but the organism retains viability and virulence capability [48,49]. Most pathogenic bacteria of humans are known to enter the VBNC state, including those in aquatic environments [48,50,51]. The recuperation of culturability in bacteriological media by mycobacteria in the VBNC state may require a suitable resuscitation medium [52], and BACTEC may serve to resuscitate the VBNC M. ulcerans. Additional experiments are required to test for a VBNC state in environmental M. ulcerans.

The strain analyzed in the present study was isolated from a Hemiptera (Gerris sp.). Gerris sp. belongs to the worldwide family of the Gerridae.
They are elongate insects with very long mid and hind legs (Fig. 8). The latter allow them to move rapidly on water surfaces to catch their prey. They live on the surface of quiet waters and are unable to walk on the ground, but can fly from one pond or river to another [53]. Several publications have suggested that Hemiptera (Naucoridae, Belostomatidae) may play a role in the transmission of BU to humans [1,8,9]. The successful cultivation of M. ulcerans from another family of aquatic Hemiptera (Gerridae) extends the range of hypothetical hemipteran transmitters. Like other aquatic Hemiptera, Gerridae are aggressive predators of other aquatic organisms such as insects and small fish. However, there are no reports of Gerridae biting humans (Dethier M, personal communication), and these insects may be only passive, incidental and transient reservoirs of M. ulcerans without an obvious role in the transmission of BU to humans or other mammals.

MS analysis of ASLs confirmed that isolate 00-1441 produced mycolactone A/B. The virulence of M. ulcerans is largely due to the presence of the toxic macrolide, mycolactone [3]. It is now recognized that there is a family of mycolactones produced by M. ulcerans and other related mycobacterial species. Each mycolactone has a distinct structure and mass. However, all isolates of M. ulcerans from Africa produce mycolactone A/B [34]. The demonstration of mycolactone A/B in Gerridae isolate 00-1441 presented here provides additional evidence that this strain is similar to virulent strains isolated from patients throughout West Africa. Like other mycolactone A/B-producing M. ulcerans strains [36], strain 00-1441 proliferates extensively in mouse footpads and produces intense footpad swelling. Moreover, in previous mouse footpad inoculation studies on 11 isolates of M. ulcerans from patients in Benin, 5 of which were from bones of patients with M. ulcerans osteomyelitis [54], no changes were noted in the bones of the mice's feet (Portaels F and Meyers WM, unpublished observations). However, feet of NMRI and BALB/c mice inoculated with 00-1441 showed striking destruction of bone. These data regarding mouse infection suggest that strain 00-1441 is highly virulent for mice. Additional experiments in mice and ex vivo are required to compare the virulence of strains sharing the same "T" for "C" 16S rRNA gene polymorphism at position 1317 and identically treated, i.e., after several passages in mice. Such experiments are underway and will be presented in another publication.

In the present study, the main steps followed to cultivate M. ulcerans in pure culture from an aquatic insect are summarized in Fig. 9. Other methods may also be applied, such as cultures from salivary glands of wild Naucoridae [9] or aquatic plants [18], or other culture procedures such as the Mycobacteria Growth Indicator Tube (MGIT) system [55], or other decontamination methods [27]. The growth of M. ulcerans in liquid media can also be confirmed by applying VNTR analysis to the IS2404-positive liquid cultures to differentiate M. ulcerans and other IS2404-positive mycobacteria [23]. This was not done because the technique had not yet been developed when the present study was undertaken.

In conclusion, for the first time a genetically and phenotypically identified M. ulcerans has been isolated in pure culture from an environmental source, reinforcing the concept that the agent of BU is a human pathogen with environmental aquatic niches.
Robotic world models—conceptualization, review, and engineering best practices The term “world model” (WM) has surfaced several times in robotics, for instance, in the context of mobile manipulation, navigation and mapping, and deep reinforcement learning. Despite its frequent use, the term does not appear to have a concise definition that is consistently used across domains and research fields. In this review article, we bootstrap a terminology for WMs, describe important design dimensions found in robotic WMs, and use them to analyze the literature on WMs in robotics, which spans four decades. Throughout, we motivate the need for WMs by using principles from software engineering, including “Design for use,” “Do not repeat yourself,” and “Low coupling, high cohesion.” Concrete design guidelines are proposed for the future development and implementation of WMs. Finally, we highlight similarities and differences between the use of the term “world model” in robotic mobile manipulation and deep reinforcement learning. Introduction The term "world model" (WM) has been used to signify many different concepts.Over 30 years ago, the term was used to denote a software component that internally represents the state of the world (Triggs and Cameron, 1991).Already in the 1970s, Shakey the robot had such a component (Nilsson, 1984).More recently, the deep learning community has adopted the term to describe internal simulators that an agent uses to simulate the development of the world (Taniguchi et al., 2023). Given the generality of the term "world model, " it has essentially become a homonym for quite distinct concepts.The aim of this review article is to consolidate and refine the terminology for WMs and their subcomponents and classify existing approaches.We believe this is essential to enable researchers to clarify what type of WM they are referring to and facilitate the motivation for the design and implementation of WMs. The main focus of this article is on WMs for physical robots that execute actions to perform tasks.Within this scope, a concise definition is given by Bruyninckx (2021): "world model: the information the robot has about the world around it, and that needs to be shared between several activities." Here, "about the world around it" implies that the world is constrained to the local physical world in which the robot executes its actions."The information the robot has about the world" implies that the WM reflects the external physical 10.3389/frobt.2023.1253049 FIGURE 1 A WM is an internal representation that reflects relevant parts of the physical world in which the robot operates.It is shared by multiple sensors, planners, and actors. world, and sensory data are, thus, important to gather this information.The "activities" are internal processes such as planning and decision making.This implies that the internal WM must be a representation rich and accurate enough to select goal-directed actions and perform tasks. 
From a software engineering point of view, "shared between several activities" in the aforementioned definition is important.Classically, the robot's activities are categorized into three types: sense, plan, and act (Gat et al., 1998).This implies that a WM is a software component that integrates information from multiple sensors, planners, and actor components and provides consolidated information about the state of the world to the same components.Information is exchanged with a WM at its interfaces, which are generically categorized into two types: tell and ask (Tenorth and Beetz, 2013).As described in Section 4, information is added or modified through the "tell" interfaces and queried through the "ask" interfaces, as shown in Figure 1. We argue that storing and sharing information among components is the main motivation for implementing WMs.More concretely, we propose that the software engineering principles of "Design for use, " "Do not repeat yourself, " and "Low coupling, high cohesion" should guide which states and operations should be included in a WM, and even whether one is required in the first place.The aforementioned point also highlights the software engineering perspective on WMs taken in this paper.However, we also describe the more recent deep learning perspective on WMs and show how our design principles apply to and motivate WMs in a deep learning context also. There are early (Angelopoulou et al., 1992) and more recent (Landsiedel et al., 2017) reviews on robotic WMs.The nomenclature and some of the common concepts of world modeling were presented by Belkin et al. (2012).We consider the main contributions and added value of this review to be 1) to propose a terminology for common subcomponents of a WM; 2) to identify typical properties and design dimensions of WMs, as well as their underlying rationale; 3) to provide an up-to-date overview of concrete WMs in robotics, as well as their classification along the design dimensions identified; and 4) to elucidate principles and guidelines for design decisions about the organization and implementation of these components. An in-depth motivation and a model-based approach of WMs are provided in the insightful and ongoing "Building Blocks for the Design of Complicated Systems featuring Situational Awareness" (Bruyninckx, 2021).Our review provides a softwareengineering perspective and aims to be more concise by focusing on WMs. The rest of this review is structured as follows.We first bootstrap our terminology using the Kalman filter as a simple and well-known example in Section 2. To provide an intuition of robotic WMs used in real-world environments, and to demonstrate the applicability of our terminology to such complex WMs as well, we introduce two more use cases in Section 3. Based on that, we elaborate on the main components of a world model-its boundary, state representations, and operations-in Sections 4-6, respectively.Using the terminology introduced, we then conduct case studies and classify 15 works on WMs in Section 7. We then describe the underlying principles for the design and implementation of WMs in Section 8 and conclude with Section 9. World model language bootstrapping Before turning to complex robotic WMs, we bootstrap our terminology using a simple, well-known illustrative example: the Model of the Kalman filter illustrated as a WM, consisting of a state (which holds a mutable x and a constant parameter θ), operations (using A, B, and H), and a boundary. 
Kalman filter (Welch and Bishop, 1995). Kalman filters iteratively estimate the mutable state x ∈ ℝ^n by incorporating information of the control inputs u ∈ ℝ^l and measurements z ∈ ℝ^m with the following model:

x_k = A x_{k-1} + B u_{k-1} + w_{k-1},
z_k = H x_k + v_k,

where w and v denote the process and measurement noise, respectively. Figure 2 reinterprets these formulas in terms of the WM template in Figure 1. As a WM, the state, consisting of a mutable x and a constant parameter θ, is meant to reflect the real world. The multiplications with the matrices A, B, and H are operations applied to the state to integrate/extract information. The v and w could be seen to originate from θ, which is used for the operations.

The WM boundary is represented by the tell/ask interfaces; everything inside of it is considered to belong to the WM. Information can be provided to the WM via the tell interface, while the ask interface is used to extract information. Based on the aforementioned analyses, we define the components of a WM as follows.

Definition 1: A WM is segregated by a boundary and internally consists of a state and operations.

Introductory examples of robotic world models
Having bootstrapped our terminology with the Kalman filter example, we now apply it to more complex robotic WMs. This demonstrates that the terminology is applicable to different robotic WMs. Two systems from different domains are analyzed to provide a first intuition of what a WM is in a real robotic system. Section 7 lists 15 case studies using our complete terminologies that are described in Sections 4-6.

NeBula system
The NeBula system (Agha et al., 2021) is a framework for autonomous robotic exploration of unknown extreme environments. The framework was deployed on a team of multiple different mobile robots, which participated in the DARPA Subterranean Challenge and completed a task of reaching, detecting, recognizing, and localizing artifacts in various subterranean environments. Figure 3 shows a partial depiction of the architecture; this figure is adapted from Figure 6 from the study by Agha et al. (2021) according to our WM concept shown in Figure 1. The inferred world belief plays the role of the WM in this framework. It integrates the information acquired through the perception pipelines and provides relevant information for planning activities.

The state inside the WM consists of the maps of the environment as well as the robot pose. Different state representations are employed depending on the information to represent; the point clouds and occupancy grids are used for the geometry, while the graph-based representation (called the information roadmap) is used for coverage and traversability.

The simultaneous localization and mapping (SLAM) components convey a major part of the information to the WM. After preprocessing the raw sensor inputs and asking the most recent state from the WM, the perception components first infer the local map as well as the robot pose and then propagate the results into the global scene. The planning components also convey the information to the WM; they estimate the traversability and the hazard of the surrounding area based on the most recent map queried from the WM.

FIGURE 3 Architecture of the NeBula system [adaptation of Figure 6 from the study by Agha et al. (2021)].
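To make the Kalman filter bootstrapping example concrete in code, the following is a minimal sketch — our illustration, not an implementation from any of the cited systems — of the filter wrapped behind tell/ask interfaces: u and z are told, the estimate x can be asked, and A, B, H together with the noise covariances Q and R play the role of the constant parameter θ inside the boundary. All class and variable names are assumptions for illustration only.

```python
# Sketch (assumed, not from the cited works) of the Kalman filter as a world model:
# z and u are told, x can be asked; A, B, H, Q, R are constant parameters theta.
import numpy as np

class KalmanWorldModel:
    def __init__(self, A, B, H, Q, R, x0, P0):
        self.A, self.B, self.H, self.Q, self.R = A, B, H, Q, R
        self.x, self.P = x0, P0            # mutable state estimate and its covariance

    def tell(self, u, z):
        # predict with the control input u (operation using A and B)
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Q
        # correct with the measurement z (operation using H)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P

    def ask(self):
        return self.x.copy()

# 1-D constant-velocity example: position measurements are told, the full state is asked.
dt = 0.1
wm = KalmanWorldModel(
    A=np.array([[1.0, dt], [0.0, 1.0]]), B=np.array([[0.0], [dt]]),
    H=np.array([[1.0, 0.0]]), Q=0.01 * np.eye(2), R=np.array([[0.1]]),
    x0=np.zeros(2), P0=np.eye(2))
wm.tell(u=np.array([0.5]), z=np.array([0.02]))
print(wm.ask())
```

The sketch already illustrates the ask-more-than-told property discussed later: only z and u cross the tell interface, yet the full state x can be queried at the ask interface.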
resolution, continuous data but also provides abstracted knowledge through semantic annotations.The framework has been deployed on different mobile manipulation robots and enabled them to perform pick-and-place tasks in indoor environments.Figure 4 shows a partial depiction of its architecture and has been adapted to our template from Figure 6.1 from the study by Cavallo et al. (2022). As a central knowledge processing system, KnowRob maintains a belief state of the world; i.e., it serves as a WM.It employs the Resource Description Framework as a representation and stores data as subject-predicate-object triples.The ask interfaces are called computables, which are used to answer queries.Queries can ask not only to extract information from the state but also to compute new information not stored in the state using an ontology as prior knowledge. Using sensor data, the perception framework RoboSherlock (Beetz et al., 2015) tells new states to the WM based on the most recent semantic map and the object knowledge.The interface is triggered by a high-level task description as a command. The task executive CRAM (Winkler et al., 2012) and the motion planner Giskard (Fang et al., 2016) also ask the most recent belief state, the semantic map, and the object knowledge for planning.The executed task knowledge is conveyed to the WM, allowing the system to learn from episodic memories. World model boundary In Section 3, we demonstrated the applicability of Definition 1 to more complex robotic use cases.Sections 4-6 elaborate on the concepts of each constituting part of a WM: a boundary, a state, and operations. The boundary of a knowledge base is generically defined by two types of interfaces: tell and ask (Tenorth and Beetz, 2013).The tell interface is used for providing new information, e.g., experiences and beliefs based on new sensor data, to a robot's memory, while the ask interface is used for querying inferred knowledge, e.g., to enable informed decision making during task execution.In this section, we first elaborate on these tell/ask interfaces FIGURE 4 Architecture for the REFILLS project, where KnowRob (Tenorth and Beetz, 2013) is used as the WM [adaptation of Figure 6.1 from the study by Cavallo et al. (2022)]. from the perspective of robotic WMs and argue (in Section 4.3) that the boundary is not inherent to the system but a design decision. Tell interface The tell interface is the only way to pass information to the WM.It defines where and which data can be passed to the WM.It does not necessarily provide direct access to the state. Since components can provide information only through the tell interface, it offers a lot of very different inputs to the WM.Depending on the design, it may take low-level data such as raw images or very high-level data such as Planning Domain Definition Language (PDDL) symbols (Fox and Long, 2003).We identified the following types of information that are typically passed to the WM (examples are given in the parentheses): • Sensor data (images) • Interpreted sensor data (object poses) • Planning and simulation data (PDDL symbols) • Action data (expected outcomes by task execution) • Data from other knowledge bases (experience representations) • Data from outside (human operator knowledge, enterprise management system) A WM does not necessarily provide all of the aforementioned information; depending on its requirements, only a subset of them is usually provided on the tell interface. 
It should be noted that the state usually stores only relevant data abstracted through the tell interface.This implies that not all information provided on the tell interface can be retrieved again on the ask interface. Ask interface The ask interface is the only way to obtain information from the WM.It defines which data can be retrieved from the WM while not necessarily providing direct access to the state. Since components can query information only through the ask interface, it also provides very different outputs from the WM.Information that can be retrieved from the WM includes the following types (examples are given in the parentheses): • Geometric information (transformations between objects) • Physical information (load data at an end effector) • Qualitative information (which objects are on the table?) • Semantic information (which is "my" cup?) The ask interface often enables information to be retrieved that is not directly passed via the tell interface, because the WM itself can derive new information from what it is told and has stored in its state by applying operations (see Section 6).This is already the case for the minimal example of the Kalman filter, where x can be asked, but only z and u are told. Design dimension of the world model boundary When components tell/ask information to/from the WM, multiple different operations could be combined and applied on the information along the dataflow ("operations" are explained in Section 6).Therefore, given a robotic system, there are numerous ways to draw a line to segregate the operations within and outside of the WM, as shown in Figure 5.This means that the tell/ask interface can be defined at any locations, and where to set them is a design decision of the robot architecture.Design guidelines for this aspect are discussed in Section 8.2. State representation in world models Within the WM, we make a distinction between the state of the world and the operations that update this state.This interpretation is best understood in the context of object-oriented programming (OOP), where an object has an internal state represented by its instance variables, which are typically private to avoid direct access.This internal state is what we refer to as the WM state in this article.The tell/ask interface defines the public member functions the WM should implement to provide access to its state.Within the WM, operations can modify the internal state (see the gray boxes in Figure 5) but cannot be called directly from outside the WM, which are, thus, represented by private member functions.Properties of the WM state are presented in this section, and those of the WM operations are presented in Section 6. State representation: robot or environment, concrete or abstract In mobile robotics, we consider a physical mobile robot operating in a physical environment4 .Together, they constitute the physical world that the robot has to reflect in its WM. In many WM representations covered in this review, there is a (partial) separation between the state representation of the robot itself and its environment.This is possible because the robot and the environment are physically separated.This increases reusability, as separating the environment and robot models means that the latter can be easily reused in different environments. 
The state itself is always virtual in the sense that it is an internal representation of the real world. However, there is a separation between the concrete part that reflects the physical reality (e.g., a cup on a table) and the abstract part that does not (e.g., a grasp pose for this cup). The design goal of reusability again motivates this separation; concrete aspects will be valid for different tasks or scenarios, while some abstract aspects can only be used for specific cases.

Figure 6 illustrates examples of different parts of the state according to the robot/environment and concrete/abstract aspects. A robot with a manipulator whose task is to grasp a cup often needs to model the manipulator, its tool center point (TCP), the cup, and its grasp. The manipulator is a concrete property of the robot, whereas the TCP of the manipulator is set abstractly by designers to make task programming easier. The cup is a concrete object in the environment, whereas potential grasps for the cup are specific to the manipulator to perform the task. Making these distinctions explicit, both conceptually and in the software, makes WMs more modular and reusable for different robots, environments, and tasks.

FIGURE 5 Two examples of different WM boundaries (dashed green lines) that could be defined for a WM.
FIGURE 6 Examples of parts of the state according to the terminologies robot/environment and concrete/abstract.

State representation: permanent or transient, given or estimated
For all practical purposes in robotics, gravity on Earth can be assumed to be permanent. Walls in houses generally stay where they are. Furthermore, for most robots, the length of their rigid body links will not change. Assuming that such aspects of the world will not change, i.e., that they are permanent, has many advantages. It means that such aspects can be relied on even in different situations once they are given by designers or estimated by the robots. Once the gravity on Earth is provided or obtained, the robots on Earth can always measure the mass of the grasped object from the force measurement. Once the poses of the walls are obtained, the robots can always localize themselves even if humans change the layout of furniture drastically. On the other hand, transient parts of the state are assumed to potentially change and, thus, must be estimated and updated.

It is a natural design decision to represent the permanent and transient aspects as constant parameters and a mutable state within the WM, respectively. However, the mutable state could also represent variables that are assumed permanent. This is typically the case if those values are calculated during the runtime of the robot. If, for instance, a robot navigating a room creates a map of it at runtime, the poses of the walls should be part of the mutable state; even though they are assumed to be permanent in the real world, their estimated values can be updated with better estimates as the robot progresses. In such cases, the mutable state should have an internal structure to distinguish the permanent and transient aspects so that the robot can exploit the aforementioned advantages.
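As a purely illustrative sketch (the keys and values below are assumptions, not taken from any cited system), these design dimensions can be made explicit in the structure of the state itself: robot versus environment and concrete versus abstract aspects on the one hand, and constant parameters versus the mutable, estimated state on the other.

```python
# Illustrative WM state structure (assumed names) that encodes the design dimensions
# of Sections 5.1-5.2: robot/environment, concrete/abstract, parameters/mutable.
from dataclasses import dataclass, field

@dataclass
class WorldModelState:
    # constant parameters: aspects assumed permanent, typically given by designers
    parameters: dict = field(default_factory=lambda: {
        "robot/concrete/link_lengths": [0.30, 0.25],     # robot, concrete
        "robot/abstract/tcp_offset": (0.0, 0.0, 0.12),   # robot, abstract
        "environment/concrete/gravity": 9.81,            # environment, concrete
    })
    # mutable state: aspects that must be estimated and updated at runtime
    mutable: dict = field(default_factory=dict)

state = WorldModelState()
state.mutable["environment/concrete/cup_pose"] = (0.5, 0.2, 0.75)        # estimated
state.mutable["environment/abstract/cup_grasp_pose"] = (0.5, 0.2, 0.85)  # task-specific
```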
For almost all robotics applications, data are given as prior knowledge to the robot WM during its initialization or during the operation.For instance, a mobile robot might be given an initial map before it starts navigating.Such aspects that do not need to be estimated and, instead, are determined offline can be provided to the robot, e.g., in a static file.On the other hand, it is quite likely, especially for autonomous robots, that they need to estimate the (mutable) state of the world by themselves.This is crucial for the robots to operate in dynamic environments where objects move or in partially unknown environments where only a limited amount of information can be given a priori. FIGURE 7 RoboCup (Kitano et al., 1997) examples for parts of the state according to the terminologies permanent/transient in the real world, constant parameters/mutable state in the WM, and given as prior knowledge or estimated at runtime. Figure 7 illustrates examples of different aspects of the representation categorized according to the concepts and terminologies introduced so far.A robot whose task is to play football in any football field-as in RoboCup (Kitano et al., 1997)-often needs to model the field and the pose of the ball.Since all football fields share a general rectangular layout with two goals on either side, such permanent knowledge is provided by the designers.If the actual dimensions of the field to be played on are known, they can be given and encoded as constant parameters.If the exact dimensions are not known beforehand, an initial value can be provided as part of the mutable state, which is then refined and updated through estimation.The pose of the ball must be part of the mutable state and continually estimated. State representation: time As discussed in Section 5.2, the mutable state of the WM is expected to change over time.Therefore, in this section, we highlight time as an important dimension of the mutable state 6 .It is crucial to distinguish two separate types of time here: 1) the real-world time, which refers to real-world events or changes and 2) the belief time, at which the robot holds a certain belief as part of its WM state.A state can model these two time concepts independently.Modeling time is often useful for error analysis, recovery, and learning-based automatic improvements. To describe how these two time concepts are different and can be modeled, let us consider a mobile manipulation task for a robot with navigation and manipulation capabilities (see the green rectangle on the top of Figures 8-11).The robot is first initialized next to an object placed on a table (at t i ).Then, the robot grasps the object with the gripper (at t g ) and navigates to a target location.However, during navigation, the object slips and is dropped on the floor (at t l ).The robot does not observe this accident, and as a result, the placing task fails (at t p ).The robot has two opportunities to provide information to the WM: when the robot grasps the object and when the robot tries to place the object.The former is provided by an action component that executes the grasping skill, and the latter is provided by a perception component that detects an object in the gripper. 
State without time We first highlight that a WM does not always need to model time.For many use cases, it is sufficient to have a state that only holds the most recent estimate of its belief about each relevant fact of the real world.In the example given in Figure 8, the belief about the object location changes over time when new information is provided, but only the most recent belief is stored and maintained. Although it simplifies the model, it loses the past information, and thus, it could not guess the object location at t p anymore. State with belief time When the state changes, it is possible to model when the change in the belief happens.In the example given in Figure 9, the state changes similarly to Figure 8, but each belief stored is associated with time.This concept enables the state to model when the robot thought about the real world.Due to this advantage, the robot in this example could query its past belief, e.g., "when the robot held a belief at t l , where was the object then?" (the answer is "in the gripper"). State with real-world time Another time concept is about changes in the real world.In the example given in Figure 10, the state stores the information reflecting the concrete events; the object is estimated to be on the table in [t i , t g ] and at somewhere along the robot trajectory in [t g .This concept enables the state to model when the real world changed.By taking this advantage, for example, poses of a constantly moving object can be estimated by inter/extrapolation. State with both belief and real-world time Since these two time concepts are independent of each other, they could be modeled together, as shown in Figure 11.In this case, the state keeps its belief at different moments, each of which models the time in the real world.To understand this easier, it should be noted that changes in the state over time are additionally modeled compared to Figure 10; for instance, the most recent belief given in Figure 11 matches the state given in Figure 10.The combination of these two time concepts enables the WM to answer complicated questions, e.g., "when the robot held a belief at t l , where was the object at t g , and is that belief still up-to-date?"(the answer is "the object was believed to be in the gripper but it was updated afterward at t p , and it is now actually believed to be somewhere along the robot trajectory"). Literature analyses Time is not represented in the studies by Nilsson (1984), Lomas et al. (2011), Leidner et al. (2014), Milliez et al. (2014), Dömel et al. (2017), andLehner et al. (2018).Some WMs such as those studied by Triggs and Cameron (1991), Blumenthal et al. (2013), Foote (2013), Tenorth andBeetz (2013), Beetz et al. (2018), Schuster et al. (2018), Schuster (2019), andAgha et al. (2021) represent time in the state.However, the conceptual distinction of the belief time and the real-world time is not always considered explicitly.To the best of our knowledge, no WM in the literature employs both the belief and the real-world time together.The concepts introduced in this section would help WM designers be more conscious of and consistent with which time aspects they implement in the state. 
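As a hedged sketch of the combined concept (our illustration; the class, method, and key names are assumptions), a state can store, for each fact, entries indexed by both the belief time at which the belief was updated and the real-world time the belief refers to, so that questions such as "what did the robot believe at a given belief time about the world at a given real-world time?" remain answerable.

```python
# Sketch (assumed implementation) of a bitemporal WM state modeling both the belief
# time and the real-world time, as in the combined concept of Figure 11.
from bisect import bisect_right

class BitemporalState:
    def __init__(self):
        # per fact: list of (belief_time, real_world_time, value), sorted by belief_time
        self._facts = {}

    def tell(self, key, value, real_world_time, belief_time):
        self._facts.setdefault(key, []).append((belief_time, real_world_time, value))
        self._facts[key].sort()

    def ask(self, key, belief_time):
        """Return (real_world_time, value) as believed at the given belief time."""
        entries = self._facts.get(key, [])
        idx = bisect_right([b for b, _, _ in entries], belief_time) - 1
        if idx < 0:
            return None
        _, world_time, value = entries[idx]
        return world_time, value

# Mobile-manipulation scenario of Figures 8-11 with illustrative timestamps:
# grasp around t_g = 5, placing fails around t_p = 20.
s = BitemporalState()
s.tell("object_location", "on_table",   real_world_time=0.0, belief_time=0.0)
s.tell("object_location", "in_gripper", real_world_time=5.0, belief_time=5.0)
s.tell("object_location", "somewhere_along_trajectory", real_world_time=5.0, belief_time=20.0)
print(s.ask("object_location", belief_time=10.0))  # belief held between grasp and place
```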
World model operations While the previous section described the design dimensions of the WM state, this section does so for the operations that manipulate the state.The input arguments of an operation are information from the tell interface and/or the state itself and/or the output of other operations; examples were previously given in Figure 5 in gray.The output of an operation is returned through the ask interface, included in the state, or passed as an input to another operation (see also Figure 5). General classes of operations defined in this section are used in Section 7 to analyze different WM implementations and in Section 8.2.2 to discuss the rigorous design of WMs. Basic operations Based on the aforementioned generic definition, a set of basic operations can be defined.All operations can be categorized to either one of them or a combination of them. Add First, we introduce a set of operations that can write into the state7 .The add operation adds input data to the state.Therefore, after executing this operation, the input is part of the new state, meaning that this operation implements a direct write-access to the state.It should be noted that this is the minimal requirement to the add operation.Depending on the design of the WM, the add operation might be more complex.For instance, sanity-check operations might be integrated to guarantee that the new state conforms to certain rules (e.g., there must be no redundancy in the state). Remove The remove operation removes input data from the state.After execution of this operation, the data are deleted from the state and not contained in there anymore.Thus, as the add operation does, the remove operation also has a direct write-access to the state.It should be noted that the remove operation might further modify other parts of the state than the input depending on the design.For example, if the state is represented as a tree and the remove operation is applied to a node, it might remove all children of the node as well. Modify The modify operation replaces previously stored information with new information.This is technically equivalent to the combination of the add and remove operations.Semantically, however, the modification of a world state is more than a combination of the add and remove operations.Since the state reflects the real world, a modification of the state implies that either the world has changed or the belief about the world has been incorrect. Depending on the design, the modify operation might be the only operation which is allowed to write to the world state.For instance, a controller usually has a vector of a constant dimension as a state.All measurements (inputs) modify the state by replacing outdated measurements and do not allow the removal/addition of an element from/to the vector. Get From here, we introduce a set of operations that can read from the state8 .The get operation returns a subset of the state as an output.Therefore, this operation implements a direct read-access to the state without modifying it. Abstract The abstract operation reduces information from an input and/or the state to an output by abstracting the information.Since information is lost through this operation, it is not possible to reconstruct the original information from the result. 
For example, we consider the geometric relation between two objects; it is possible to abstract a qualitative relation, e.g., on, as an output, using their pose transformation as an input and a threshold of the distance as a parameter.However, even with the information that one object is on another, it is not possible to reconstruct the pose transformation between them. Transform The transform operation transforms an input and/or the state to an output.The difference to the abstract operation is that no information is lost, and thus, there exists an inverse operation.For example, we consider a situation where a robot localizes an object pose with a movable camera, and we want to obtain the object pose with respect to a robot base.To calculate it, a transform operation can be used based on the object pose relative to the camera as an input and the pose of the camera relative to the robot base from the state.The transform operation is not limited to geometric transformations; a unit conversion (e.g., to transform data from millimeter to meter) is another example. Compound operations If a WM chooses its boundary close to its state, the aforementioned basic operations are sufficient to implement the tell/ask interface.For many WMs, however, compound operations are necessary to integrate the information from the tell interface into the WM state.Likewise, compound operations enable the ask interface to retrieve information that requires interpretation based on the WM state.These compound operations can be composed of basic operations.The basic operations can be chained to allow a more complex interpretation of the data (pipeline concept).With this concept, the output of a basic operation is used as an input of the next operation.Instead of piping the data through operations, basic operations can also be used to make decisions (decider concept).In this case, the output of the basic operation is used not to modify the incoming data but to determine how to integrate them or how to extract further information.In the following, we show two examples of such compound operations, one on the tell interface and the other on the ask interface. Tell interface-object detection with a camera: The first example is to integrate new information of the object detection (see Figure 12A).When a perception component estimates a pose of an object with a camera, the pose is first estimated in the camera coordinate frame and given to the tell interface.This information could be interpreted either 1) as a detection of a new object instance or 2) as a refinement of the pose of a known object instance.This decision can be made, e.g., by calculating if the distance to the nearest known object exceeds a certain threshold.Therefore, the basic transform and abstract operations are piped through, and then, either the add or modify operations are decided to be triggered. 
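The following sketch (ours, with assumed names and a simplified translation-only transform and Euclidean-distance check) composes the basic operations into the compound tell operation described above: a detected pose is first transformed from the camera frame to the world frame, then a decider based on an abstracted nearest-object distance chooses between the add and modify operations.

```python
# Illustrative compound tell operation built from basic operations:
# transform -> abstract (decider) -> add or modify. Not from the article.
import math

class ObjectWorldModel:
    def __init__(self, camera_pose_in_world, match_threshold=0.10):
        self._camera_pose = camera_pose_in_world   # constant parameter (x, y, z offset)
        self._threshold = match_threshold          # constant parameter for the decider
        self._objects = {}                         # mutable state: name -> world pose

    # --- basic operations (private, inside the boundary) ---
    def _transform(self, pose_in_camera):
        # simplified: translation only; a full version would use homogeneous transforms
        return tuple(c + o for c, o in zip(pose_in_camera, self._camera_pose))

    def _nearest(self, pose):
        # abstract operation: reduce the stored poses to the closest (name, distance)
        best = None
        for name, stored in self._objects.items():
            d = math.dist(stored, pose)
            if best is None or d < best[1]:
                best = (name, d)
        return best

    def _add(self, name, pose):      # write access: new object instance
        self._objects[name] = pose

    def _modify(self, name, pose):   # write access: refine a known instance
        self._objects[name] = pose

    # --- compound tell operation on the boundary ---
    def tell_detection(self, pose_in_camera):
        pose = self._transform(pose_in_camera)
        nearest = self._nearest(pose)
        if nearest is not None and nearest[1] < self._threshold:
            self._modify(nearest[0], pose)
        else:
            self._add(f"object_{len(self._objects)}", pose)

wm = ObjectWorldModel(camera_pose_in_world=(0.0, 0.0, 1.2))
wm.tell_detection((0.50, 0.20, -0.45))   # first detection: added as a new object
wm.tell_detection((0.52, 0.21, -0.44))   # close by: modifies the existing object
```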
Ask interface-checking if the cup is visible: Another example is to query if a unique, specific cup is visible to the robot or not (see Figure 12B).The computation to answer this query could be a sequence of steps with increasing computational effort.First of all, it checks if the cup already exists in the state.If yes, the pose of the cup needs to be checked if it is within the field of view of the camera (using the transform operation beforehand).If this is also yes, the last step is to check if the line of sight to the cup is occluded by other objects or not.If either one of the aforementioned checks (using the abstract operations) returns "no, " the following (computationally expensive) operations can be skipped and could answer "no" at the ask interface. Case studies The two use cases in Section 3 aimed at providing concrete examples to illustrate what a robotic WM is.This section now interprets 15 WM implementations-including the two aforementioned-by applying the terminology introduced so far.Our aims are to: 1) show that our concepts are general enough to be applicable to WMs from very different domains; 2) motivate the need for a common terminology as the terminologies used now are different in each work; 3) provide an overview of four decades of work on robotic world models; and 4) classify these works (see Table 1). Shakey the robot Already in the first mobile, autonomous robot, Shakey (Nilsson, 1984), the WM was a central component.Based on this WM, the robot could plan its actions independently and represent the state of itself and its environment. WM boundary The WM boundaries are defined by four functions.On the tell interface, there are three functions, ASSERT, DELETE, and REPLACE.The ask interface is implemented by the function FETCH.The interfaces of the Shakey WM have variable distance to the WM state.Some information can be added directly to the state, e.g., a robot position, but for others, compound operations are necessary, e.g., locations of movable objects. 10.3389/frobt.2023.1253049 FIGURE 11 WM state with a time concept of its belief and the real world. State representation In Shakey, the state is represented by formulas of first-order logic.The state, thus, consists of statements which link entities from five different entity classes (Faces, Doors, Rooms, Objects, and Robot) to other entities or values via various predicates.The applicable predicates depend on the entity class.The state models the environment as well as the robot information, and it models concrete as well as abstract information (e.g., problematic locations).The state consists of parameters and mutable states that are either given or estimated by the system.The state does not represent time. Operations The tell/ask interface is similar to the basic operations add, remove, modify, and get.For some information, these can also be used directly to implement the interface.However, in addition to the state, there are axioms that define the operations on the state.For example, adding a location for an object can either add a new statement to the state, or if a location is already stored, modify the existing one, since the rule exists that an object can only be at one location.For some parts of the state, the interface is, therefore, realized by compound operations. Oxford World Model The Oxford World Model (Triggs and Cameron, 1991) is proposed as a geometric database of an autonomous navigation robot for a factory environment. 
WM boundary One of the main motivations of the Oxford World Model is to encourage a clean and modular internal software architecture where different components communicate via the WM.Therefore, the WM boundary is set so that the WM provides a uniform inter-module interface. State representation The following types of information are stored in the state: a) a factory layout and route maps; b) object positions and their 3D models; c) sensor features; d) caches of local or temporary data; and e) tasks and scheduling information.It represents the environment aspect for both the concrete part (such as poses and 3D models) and the abstract part (such as sensor features).The factory layout and route maps are permanent but stored into the state together with other information.Each type of these data is attached with a common header containing a timestamp.Although this time is described to model a "time-of-last-modification" of objects, it is not explicit if this models belief time or real-world time. Operations Basic operations such as adding and removing objects are executed via indices.In addition, generalized compound operations are provided, i.e., to perform a specific modification on every object having the same object type, or to get objects satisfying a specific spatial condition. ROS TF The Robot Operating System Transform Library (ROS TF) (Foote, 2013) provides a standard way of maintaining coordinate frames and transforming data on the open-sourced middleware ROS (Quigley et al., 2009).The main motivation is to free the programmers from the necessity of calculating the transformation between frames. WM boundary Due to its general, low-level scope as a use case, the WM boundary is set around the state.Operations needed for transform computations are included within the WM boundary, and many components require such a calculation interface with the ROS TF.More complex operations (such as how to utilize obtained transforms) are expected to be implemented on external components telling/asking the WM. State representation The ROS TF uses a tree structure; nodes represent a frame (a coordinate system), and edges represent a 6-degrees of freedom (DOF) relative pose.Since the main purpose is to calculate the transformations between frames, they do not distinguish the robot/environment, concrete/abstract aspects explicitly. For efficiency, it distinguishes the mutable state and constant parameters by using different representations, /tf and /tf_ static, respectively.While the /tf maintains transformations with the real-world timestamps and represent changes in the real world, the /tf_static expects that any transform is static and considered to be valid at any time. Operations The basic get and add operations, as well as the transform operation, are implemented.For sending (adding) information, the sendTransform operation is used.For querying ( getting) a transformation between two frames, the lookupTransform operation is used, in which the transform operations are chained.Using the spherical linear interpolation (SLERP) algorithm (Shoemake, 1985) (for orientations) and a linear inter/extrapolation (for positions), lookupTransform approximates the changes of frames given a timestamp. AIMM system DLR's Autonomous Industrial Mobile Manipulator (AIMM) (Dömel et al., 2017) focuses on performing transportation tasks autonomously in partially unstructured factory environments with its mobile platform and manipulator. 
WM boundary The AIMM employs a navigation system, an object detector, a motion planner, and a flow-control system.They must share the world state, and thus, the WM boundary can be drawn between them.The object detector provides information about the pose, category, size, and detection quality of objects at the tell interface.The motion planner, on the other hand, queries information about rigid body kinematics such as free space and collision at the ask interface.The flow-control system exchanges information with the WM at both the tell and ask interfaces.The ask interface is typically used for making decisions depending on the current state.Once the flow-control system executes a skill, the tell interface is used to reflect the high-level effects expected by the execution. State representation The representation focuses on providing not a detailed geometric description but a topological-level description of the relation between different types of objects.For this purpose, the WM state employs a tree representation.Each node represents an object with labels specifying its types, and edges connecting nodes represent a 6-DOF transformation 9 .Nodes representing the robot have a type Robot, which are distinguished from the environment aspects.The object ownership information (such as which object is grasped by the manipulator), which is abstract, is modeled by edges constructing the parent-child relations.Other abstract aspects, e.g., grasps and approaches for manipulation, are modeled as child nodes of a concrete node with a type RigidBody.9 Although this is similar to the ROS TF (see Section 7.3), the important difference to the ROS TF is that this WM aims to represent the world in a quasi-static, object-based manner with a higher level of abstraction. Since components require only the most recent state, time is not modeled. Operations There are basic operations having the write access to the state, such as to add a new node, remove an existing node (and its children), modify the property of a node or the 6-DOF pose of an edge, and to reassign a node to another node. To have the read access to the state, there are not only basic operations to simply get a node (or its properties) but also compound operations.One of them is to query a pair of nodes satisfying certain conditions by utilizing the tree structure.By specifying types and/or properties of nodes as well as a maximum/minimum number of hops between them, only a relevant subset of the state is effectively extracted.For instance, the motion planner queries only Robot and RigidBody objects under the branch of the current scene node. 
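A hedged sketch (ours; node and type names are assumptions) of such a quasi-static, object-based scene tree and a compound query that returns nodes of selected types within a maximum number of hops, similar in spirit to what a motion planner might ask at the boundary of a tree-structured WM like the one described above.

```python
# Illustrative scene tree with typed nodes and relative poses, plus a compound
# query by node type and hop count. Not the AIMM implementation.
class SceneNode:
    def __init__(self, name, node_type, pose=None):
        self.name, self.node_type, self.pose = name, node_type, pose
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

def query_by_type(start, types, max_hops):
    """Breadth-first compound 'get': nodes matching a type within max_hops of start."""
    frontier, results = [(start, 0)], []
    while frontier:
        node, hops = frontier.pop(0)
        if node.node_type in types and hops > 0:
            results.append(node)
        if hops < max_hops:
            frontier.extend((child, hops + 1) for child in node.children)
    return results

scene = SceneNode("current_scene", "Scene")
robot = scene.add_child(SceneNode("manipulator", "Robot", pose=(0, 0, 0)))
cup = scene.add_child(SceneNode("cup", "RigidBody", pose=(0.5, 0.2, 0.75)))
cup.add_child(SceneNode("cup_grasp_top", "Grasp", pose=(0, 0, 0.1)))

# e.g., what a motion planner might ask: Robot and RigidBody nodes below the scene node
print([n.name for n in query_by_type(scene, {"Robot", "RigidBody"}, max_hops=2)])
```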
WM boundary The WM boundary can be drawn on the input side between typically stateless low-level perception components, such as stereo matching, and the modules of the SLAM system itself that contain the state related to local and global localization and mapping, such as the keyframes of visual odometry, the state vector of a local reference filter, the pose graph for global optimization, the local obstacle maps for navigation, and the global 3D maps for exploration planning and multi-agent coordination.Thereby, the SLAM WM receives preprocessed sensor data from proprioceptive sensors such as inertial measurement units (IMUs) and exteroceptive sensors such as stereo cameras via its tell interface.On the output side, the WM boundary can be drawn between the SLAM system providing pose and map information via its ask interface and Frontiers in Robotics and AI 15 frontiersin.orgthe typically higher-level components consuming these, such as path planning, exploration planning, mission coordination, or visualization modules. State representation Sensor properties, such as noise models, sensor calibrations, and certain environment characteristics, such as the local gravity constant, are assumed to be permanent in the real world during the system runtime and, thus, provided as parameters to the WM.The state variables contain information about the concrete aspects of the world, such as robot poses and 3D geometry, as well as abstract annotations, such as submap poses, which are mainly used for SLAM-internal purposes.Real-world time is used in the local estimation both for sensor data acquisition timestamps as well as state prediction timestamps, which are important to relate measurements and estimates to events in the environment and to the measurements and estimates by other robots.As the SLAM is concerned with moving agents, non-static poses are always accompanied by real-world timestamps.For the resulting maps, timestamps are, however, not necessarily required as the consuming modules are only interested in the most up-to-date model of the environment available. Operations Most complex operations integrating measurements or estimates into the state variables are triggered by the tell interface.In contrast, most operations triggered by the ask interface are simple get operations to return parts of the state, accompanied by minor abstractions and transformations.However, expensive complex computations, such as global map compositions, could also be triggered on demand to abstract or/and transform state variables and modify caches for intermediate results.Perception data are typically pushed into the WM, and the WM itself not only pushes out new state estimates (e.g., poses and maps) to consumers but also provides pull-based interfaces, e.g., in inter-robot communication to provide missing data to other agents demanding them. 
LRU2 DLR's Lightweight Rover Unit 2 (LRU2) robot (Schuster et al., 2017;Schuster et al., 2020), being a mobile manipulator, uses two separate WMs, one for navigation, similar to the LRU1 (see Section 7.5), and an additional one to support mobile manipulation tasks, similar to the AIMM (see Section 7.4).The information required for manipulation tasks are substantially different from that used by navigation and exploration components.Therefore, following the "Low coupling, high cohesion" principle (see Section 8.1.3)as well as the guideline given in Section 8.2.1.2,a WM focusing on quasi-static scene representation is additionally employed.Regardless of the difference of the application domain (from the factory automation to the planetary exploration), the AIMM WM can be employed with ease due to its generally designed boundary. There is a unique data exchange on the WM boundary between these two WMs.The navigation WM queries the real-world dynamics of manipulated objects at the ask interface of the AIMM WM.For example, when LRU2 manipulates a box with fiducial markers, the navigation WM requires to be informed of the changes accordingly since objects with the markers are used as landmarks for the SLAM system. NeBula system As described in Section 3, NeBula is an autonomy solution for navigation, mapping, and exploration of unknown extreme environments (Agha et al., 2021). WM boundary The purpose of the system is similar to the LRU1 (see Section 7.5), and thus, the WM also has many similarities, which can also be implied by the "Design for use" principle (see Section 8.1.1).Similar to the LRU1, the WM boundary can be drawn at the output of the low-level, stateless sensor-fusion component HeRO and at the input of high-level planning components. State representation The state models belief over the internal robot state such as its pose, as well as the external surrounding environment state such as maps and hazards.The state is an abstracted representation of not only spatial information but also temporal information.This allows for maintaining probability distributions over various state domains.As a unique representation of this system, the state includes the hierarchical belief representation called the Information Roadmap (IRM) for making decisions on where to explore, taking uncertainty into account.The IRM is a graph representation containing concrete information such as occupancy and abstract information such as coverage and traversal risks. Operations Similar to the LRU1, complex operations are mostly for integrating inputs from the odometry source and perception results (such as loop closures) at the tell interface.As one of the compound operations, factor graph optimization employs a method to identify and discard incorrect loop closures that can arise while working in environments with poor perceptual conditions.At the ask interface, the operations are mostly basic ones to get part of the information from the state. 
KnowRob system As described in Section 3, KnowRob is a knowledge processing system specifically designed for autonomous robots to perform manipulation tasks in various environments (Tenorth and Beetz, 2013;Beetz et al., 2018).Its primary function is to maintain a belief state of the world, serving as a WM that helps the robot understand and interact with its surroundings.It features a modular architecture that interfaces components such as the perception framework RoboSherlock (Beetz et al., 2015;Bálint-Benczédi et al., 2019), the task executive CRAM (Beetz et al., 2010), and the motion planner Giskard (Fang et al., 2016). WM boundary The WM boundary of KnowRob is not tightly confined around the belief state.Instead, it encompasses various kinds of state representations and operations that interact with the belief state.Information can be asserted through tell interfaces to the belief state from different components such as RoboSherlock (Beetz et al., 2015;Bálint-Benczédi et al., 2019) or CRAM (Beetz et al., 2010).The ask interfaces are also queried by RoboSherlock (Beetz et al., 2015;Bálint-Benczédi et al., 2019), CRAM (Beetz et al., 2010), and Giskard (Fang et al., 2016). State representation KnowRob predominantly employs a sophisticated symbolic and semantic representation of the belief state, which encompasses both the robot and its environment in a cohesive manner.The semantic concepts of ontology can be regarded as parameters, while the instantiated data within the belief state can be considered mutable.Moreover, time plays a significant role, particularly in the context of episodic memory knowledge.However, no explicit distinction is made between the real-world occurrence timestamps and the robot's belief update timestamps.Additionally, the belief state does not discriminate between concrete and abstract information explicitly, but both representations are collectively represented in the knowledge graph.Nonetheless, the system features a component that primarily focuses on symbolic abstract data, as well as an inner WM that represents the concrete state through a physics simulation. Operations On both the tell and ask interfaces, operations are used to query the belief state and assert information to it.The operations that query the knowledge base range from basic operations to compound ones, depending on the reasoner (ProLog, SPARQL, … ) used.Additionally, further compound operations are implemented as computables that can query the belief state. Rollin' Justin The WM of the humanoid robot Rollin' Justin serves as a fundamental element in the hybrid planning and reasoning methodology, as detailed by Leidner et al. (2014). WM boundary The WM boundary for both the tell and ask interfaces can be tightly drawn close to the world state.For example, the localization component updating the robot's position within the world state based on a pre-computed environmental map communicates through the tell interface.Another perception component leverages fiducial markers to adjust the geometric pose of detected objects via the tell interface during execution.The hybrid planner functions as both a planning and acting module, utilizing the tell interface to modify the symbolic state representation during execution while also querying via the ask interface the current symbolic world state for task planning purposes.The motion planner similarly queries via the ask interface the geometric world state, thus enabling the hybrid planning approach. 
State representation

The state representation adopts an object-centric perspective of the environment, associating both geometric and symbolic information with each object. Within the hybrid planning approach implemented on Rollin' Justin, PDDL predicates are necessary for high-level task planning, while the geometric state representation facilitates geometric planning. The object database (Leidner et al., 2012; Leidner, 2017) serves as a knowledge repository containing information on object affordances and appearances, embodying a combination of assumed permanent parameters in the form of an object taxonomy and transient, mutable state, such as object-specific abstract grasp frames. The world representation component (Leidner, 2017) comprises the mutable state variables and is instantiated from this knowledge.

Operations

State information access relies primarily on basic operations. The aforementioned components directly add, remove, or modify both geometric and symbolic state representations. Within the world representation component, geometric and symbolic representations are modeled independently. However, various external approaches exist to ensure consistency between these representations (Bauer et al., 2018; Bauer et al., 2020; Lay et al., 2022).

RSG system

Blumenthal et al. (2013) proposed the Robot Scene Graph (RSG) WM as a central, complete geometrical 3D representation for autonomous robots.

WM boundary

The goal of this WM is to centralize knowledge from the planning, perception, control, and coordination components. The WM boundary is set quite far away from the state. As described in the following sections, the WM takes raw sensor data at the tell interface and caches intermediate data. On the ask interface, for instance, components can query collision models of the entire scene.

State representation

The state uses a directed acyclic graph (DAG) to represent a full 3D scene. In addition to a unique ID and a list of attributes (which are abstract), each node holds a bounding collision geometry to represent the concrete environment aspects. Any geometric data are assumed to be permanent and stored as parameters. Poses are represented not by the edges but by dedicated transform nodes. Since these transforms tend to vary over time, each transform node stores its data with associated timestamps to represent the real-world time. The time concept is utilized by operations for tracking and prediction. The state stores not only the final resulting data but also raw sensor data and intermediate data created by operations.

Operations

The unique IDs are used for providing basic operations such as setData to add information and getData to get information. As an example of compound operations, the RSG WM provides an operation to calculate spatial information. Given a starting node and a targeted node, the operation queries a path connecting these two nodes. Based on the transform nodes along the path, it connects transform operations in a pipeline manner (a minimal code sketch of this composition is given below). Since each node can have multiple parent nodes, it is possible that multiple paths are discovered. In this case, different policies could be employed, e.g., either to use the path containing the most recent timestamps or to use semantic tags to decide on the most reliable path.

World model for multi-robot teams

Roth et al. (2003) proposed a robotic WM for a team of robots under high communication latency. Since the application domain is to let the team of robots play soccer in RoboCup (Kitano et al., 1997), the WM is quite specific to this domain. Nevertheless, it still addresses unique challenges common to any dynamic environment in a multi-robot setup.
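The following is a minimal, hypothetical Python sketch of the RSG-style scene graph and path-composition operation described above. The node structure, the method names (set_data, set_transform, relative_pose), and the single-parent path policy are simplifications invented for this illustration; this is not the actual RSG implementation by Blumenthal et al. (2013).

```python
import numpy as np


class SceneGraph:
    def __init__(self):
        self.nodes = {}    # id -> {"attrs": {...}, "data": ...}
        self.parents = {}  # child id -> list of parent ids

    # Basic operations keyed by unique IDs.
    def set_data(self, node_id, data, attrs=None, parents=()):
        self.nodes[node_id] = {"attrs": attrs or {}, "data": data}
        self.parents[node_id] = list(parents)

    def get_data(self, node_id):
        return self.nodes[node_id]["data"]

    # A transform node stores (timestamp, 4x4 matrix) pairs.
    def set_transform(self, node_id, stamp, matrix, parents=()):
        node = self.nodes.setdefault(
            node_id, {"attrs": {"type": "transform"}, "data": []})
        node["data"].append((stamp, np.asarray(matrix)))
        self.parents.setdefault(node_id, list(parents))

    # Compound operation: compose transforms along the parent chain from
    # `target` up to `root`.  This sketch follows the first parent only;
    # a real system would choose among multiple paths by policy
    # (e.g., most recent timestamps or semantic tags).
    def relative_pose(self, root, target):
        pose = np.eye(4)
        node_id = target
        while node_id != root:
            node = self.nodes[node_id]
            if node["attrs"].get("type") == "transform":
                _, latest = max(node["data"], key=lambda tm: tm[0])
                pose = latest @ pose
            parents = self.parents.get(node_id, [])
            if not parents:
                raise ValueError("no path to root")
            node_id = parents[0]
        return pose


g = SceneGraph()
g.set_data("world", data=None)
g.set_transform("base_tf", stamp=1.0, matrix=np.eye(4), parents=["world"])
g.set_data("robot", data={"mesh": "rover.stl"}, parents=["base_tf"])
print(g.relative_pose("world", "robot"))
```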
WM boundary The WM takes two sources of information at the tell interface: vision and communication.The vision component detects all relevant objects in the world by their colors and produces information about pose in the local coordinate frame of the robot.The communication is possible but with only limited bandwidth and high latency.Therefore, the robots neither synchronize data streams nor have a centralized external server, and instead, they communicate only relevant data on an as-needed basis.At the ask interface, planning components query information for decisionmaking activities. State representation Each robot maintains its individual world state and a global world state.The individual one contains the robot, concrete aspects such as the robot's position and heading, as well as the environment, concrete aspects such as positions of teammates and opponents.As parameters, an a priori map of the field is given initially and utilized for localizing the robot.The global world state consists of the environment, concrete aspects such as the location of the teammates and the ball, as well as the environment, abstract aspects such as if a robot is the goal keeper and if a robot saw the ball. Operations As in the study by Stulp et al. (2010), the individual state is updated by three different policies: either from vision information, from communicated information, or combination of both.The updates to the location of the robot and the opponents come directly from the vision information.The location of the teammates is given by the communicated information.The ball position is estimated by using both sets of information via a Kalman filter. SPARK system SPAtial Reasoning and Knowledge (SPARK) (Milliez et al., 2014) is a WM that maintains a state of the world to enable a robot to plan and act, especially collaborating with human workers.The goal is to generate symbolic facts from the geometric state representation, allowing for the creation of collaborative plans and generation of efficient dialog. WM boundary The WM gathers geometric information from perception components at the tell interface and provides symbolic information about the robot's belief about human beliefs at the ask interface.The symbolic information is utilized by the human-aware symbolic task planner and the execution controller (Warnier et al., 2012). State representation In the state, it maintains the robot pose, the quasi-static objects in the environment, and the positions of humans including their hands, head, and shoulders.The state is mostly concerned with the concrete aspects, and only minor abstract features are represented for checking the identity of humans and objects.No time is modeled, and the state assumes that the most recent belief holds true if no further information is available. Operations There are several complex operations employed in SPARK.The first example is to track and manage object locations in the environment.Using the decider concept, it combines basic operations and either modifies the pose of the object, removes the object from the state, or keeps the state without any change.An important compound operation is to estimate human beliefs, e.g., to compute the visibility and reachability of an object from a human perspective. 
Deep learning: introducing world models

The term "world model" was first introduced to the deep learning community by Ha and Schmidhuber (2018), where the WM refers to a compressed spatial and temporal representation of the environment. The work focuses on learning latent spaces as abstractions of the input sensor space so that they can be used as an internal simulator, which builds on very early work on learning such models (Schmidhuber, 1990).

WM boundary

In the architecture proposed by Ha and Schmidhuber (2018), the world model boundary can be drawn around the vision (V) and the memory (M) networks. The V network serves as a gateway for sensory input, consuming RGB images, which can be considered the tell interface. The M network takes as input the embeddings from the V network as well as the actions from the control (C) network. Since the world model boundary in their paper is drawn around the V and M networks, this propagation of actions from the C network back to the M network can also be considered a tell interface. The embeddings from both the V and the M networks serve as input to the C network, and one can interpret this as the ask interface.

State representation

The state consists of the network embeddings of both the V and M networks and is mutable.

Operations

Both the V and M networks can be viewed as operations. The V network can be interpreted as an operation that abstracts the input RGB image into a latent state representation. Another abstraction happens in the recurrent structure of the M network (a minimal code sketch of this V-M-C flow follows after the DayDreamer case study below).

Deep learning: DayDreamer

Wu et al. (2022) introduced a WM that learns a forward model of the environment, predicting the environment dynamics. The neural network is trained on a replay buffer of past sensory inputs to the architecture.

WM boundary

The boundary can be drawn tightly around the neural network, where sensory information such as RGB, depth, proprioceptive joint readings, and force sensor readings is passed as input to the WM. This sensory input is passed to an encoder network, which can be viewed as a tell interface. The actor-critic algorithm learns a policy using the latent representation of the encoder network provided through an ask interface. There is another ask interface in the form of a decoder network that decodes the same latent representation into human-readable information.

State representation

The state is represented as the latent encoding of sensory information and predictions from previous states based on the learned forward model. Therefore, the state is mutable.

Operations

The different components in the neural network architecture can be interpreted as operations. The encoder network operates as an abstraction, taking the sensory input and adding it to the latent state representation. Acting as another operation, the dynamics network performs an additional abstraction in the form of a projection to future dynamics of the environment, i.e., it applies the learned forward model to the latent state representation. Finally, the reward network adds an additional layer of abstraction to the latent state representation to forecast future rewards.
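The following is a small, framework-free Python sketch of how the V-M-C architecture of Ha and Schmidhuber (2018), described above, maps onto the tell/ask reading used in this review. The three functions are placeholders standing in for the trained V, M, and C networks; the dimensions and return values are invented for illustration only.

```python
def v_encode(rgb_image):
    # V network (tell): abstracts an observation into a latent code z.
    return [0.0] * 32              # placeholder 32-dimensional embedding


def m_predict(hidden, z, action):
    # M network (tell): recurrent dynamics over latents and the last action.
    return [h for h in hidden]     # placeholder hidden-state update


def c_act(z, hidden):
    # C network (ask): the controller queries the WM state to choose an action.
    return 0                       # placeholder action


hidden = [0.0] * 64
for rgb_image in [None, None, None]:       # stand-in for an observation stream
    z = v_encode(rgb_image)                # tell: sensory input enters the WM
    action = c_act(z, hidden)              # ask: policy reads the WM state
    hidden = m_predict(hidden, z, action)  # tell: the chosen action is fed back
```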
Deep learning: vision Another recent interpretation of the "world model" was given by LeCun (Author anonymous, 2022;LeCun, 2022) in a "proposal for a modular, configurable architecture for autonomous intelligence." It should be noted that this work presents a vision for modularization of deep learning modules for autonomous machine intelligence, which has not been implemented.We include it nevertheless, as it highlights a trend toward the formalization of WMs, as shown in Figure 1. The WM is one of the six main modules of a configurable architecture for autonomous intelligence.The WM "is a kind of simulator of the part of the world relevant to the task at hand." LeCun, thus, emphasized the use of WMs as internal simulators more than using them to reflect the actual state of the outside world.Indeed, the state itself is not stored in the world model module but in a module called "short-term memory." In our terminology, the "short-term memory" would rather be included in the WM boundary. 8 Discussion: world model design As highlighted by the case studies, the design space for WMs is huge.This raises many practical questions for their design and implementation.Which types of queries will the planning components need in order to generate their plans?Which aspects of the world are to be considered transient or permanent, and which of the latter do we know in advance?Should derived intermediate states be cached any time new information is provided or only when a query is made that requires the update?What should be the internal organization of the state?A list, a tree, a relational database, or an ontology? These questions are difficult to answer generally.They vary based on the robot's tasks, sensors, and other system components.In this section, we offer guidelines for these design considerations. Principles from software engineering First, as a basis for the design guidelines, we highlight the following software engineering design principles that we consider relevant to WM design. Principle: design for use Since a WM provides information to other software components, its state should be rich enough to produce the information relevant to these components.This does not mean, however, that the WM should store as much information as possible; there is no reason to store data in the state that will never be used during any tell/ask requests.In the extreme case, if there are no components to ask any information from a WM, there is no reason to have a WM at all, for instance, in the subsumption architecture (Brooks, 1986). The design of the WM is determined, thus, not by what the designer assumes is relevant but what the system actually requires.This is the "Design for use" (DfU) principle, which is also used to guide the design of ontologies (Smith et al., 2004).This principle is applicable not only to the contents of the state but also to the operations and the boundary as well. 
Principle: do not repeat yourself As shown in Section 6, deriving the state of the world from multiple sources involves many operations, such as abstractions from low-level sensor data to higher-level representations, or transformations to convert the data into another form.If different software components implement these operations separately, repetition and redundancy are inevitable.This hinders the reuse of implemented functionality in another component, and furthermore, it makes debugging activity tedious due to the redundancy (Blumenthal et al., 2013).This is applicable not only to the operations but also to the state elements.For example, in the NeBula system (Agha et al., 2021), both the local and long-range motion planning components need the current robot pose, and it would be highly redundant if each consumer implemented its own SLAM algorithm to infer this current pose. A dedicated WM component should bundle such state elements and operations in one place and enable other components to update and query it so that they must not re-implement duplicated data or redundant computations (Blumenthal et al., 2013).This is the "Do not repeat yourself " (DRY) principle in software engineering, which is formulated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system" (Hunt and Thomas, 1999). Principle: low coupling, high cohesion The DRY principle helps decide whether a shared WM is required for a robot or not.The "low coupling, high cohesion" (LCHC) principle helps decide how many.This principle states that components in a component-based system should have few couplings between different components and high cohesion within a component (Yourdon and Constantine, 1979) as a tight coupling of components would cause invasive changes on them when modifications are needed (Heim et al., 2015). As an example of applying the principle, our LRU2 robot (Lehner et al., 2018;Schuster et al., 2020) (see also Section 7. 6) is a mobile manipulation platform that navigates based on SLAM and path planning components and manipulates objects using visual pose estimation and sampling-based motion planning.As manipulation is only performed when the robot is not moving, the tasks of navigation and manipulation and, thus, the decision-making components for executing these tasks, are entirely decoupled.It is thus reasonable to have two distinct WMs because the components for navigation and manipulation require disjunct queries on the WM and use different hardware (mobile base vs. articulated arm) and software (the path planner in the 2.5D geometric space for the robot base vs. the motion planner in the configuration space for the manipulator).As the WMs for navigation and manipulation are disjunct, merging them into one WM would lead to a component with low cohesions; therefore, a separation into two is preferable according to the LCHC principle. If the task specification is changed to require the robot to be able to grasp objects whilst moving, this would require a reevaluation of having two WMs.Since the two WMs have to be synchronized quite frequently for such a use case, having two will lead to high coupling of the WMs and, thus, will be considered to be a non-optimal design decision.It is suggested that the two WMs should be merged into one. 
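Before turning to the concrete guidelines, the following hypothetical Python sketch compactly illustrates the DRY principle discussed above: one shared WM provides the current pose to both a local and a long-range planner, so neither has to run its own localization. All names are invented for illustration and do not refer to any of the cited systems.

```python
class SharedWorldModel:
    def __init__(self):
        self._pose = None

    def tell_pose(self, pose):
        # Updated in one place, by the localization/SLAM component.
        self._pose = pose

    def ask_pose(self):
        # Queried by every consumer instead of re-estimating the pose.
        return self._pose


class LocalPlanner:
    def __init__(self, wm):
        self.wm = wm

    def plan_step(self):
        return ("local_step_from", self.wm.ask_pose())


class LongRangePlanner:
    def __init__(self, wm):
        self.wm = wm

    def plan_route(self):
        return ("route_from", self.wm.ask_pose())


wm = SharedWorldModel()
wm.tell_pose((1.0, 2.0, 0.0))          # x, y, heading
print(LocalPlanner(wm).plan_step())
print(LongRangePlanner(wm).plan_route())
```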
Guidelines for world model implementations The principles introduced in Section 8.1 are applicable to all aspects of the WM: the state representation, the operations, and the boundary.However, these principles sometimes contradict each other, and a trade-off between them must be found.Therefore, the aim of this section is to clarify where such conflicts exist and to concretize more specific guidelines for the implementation of WMs.We propose the following guidelines based on our experiences obtained during the development of the AIMM WM (Dömel et al., 2017), which is described in detail in Section 7.4.Although the guidelines are considered applicable to the WM approaches reviewed in Section 7, there might be robotic systems where the guidelines are not always applicable or not feasible to follow in practice.Nevertheless, the guidelines are our suggestions for supporting design decisions and not strict regulations a WM must always satisfy. Implementation: state representation 8.2.1.1 Reducing redundancy If information in a state can be generated by applying operations on other parts of the state, such information is redundant.Since redundancy is a violation of the DRY principle, it should not be cached into the state in principle.The redundancy often makes maintenance difficult; if a part of the state is modified, the corresponding, redundant part must be updated accordingly to avoid information mismatch.However, this guideline often contradicts the "minimize calling frequency and computational load" guideline (Section 8.2.2.2).Thus, the trade-off is between reducing redundancy and computational load. In the AIMM WM (Dömel et al., 2017), intermediate information is not cached into the state to avoid redundancy since the operations are called mostly in an event-driven manner. Keeping abstraction levels homogeneous If a single WM must contain information at different levels of abstraction, multiple abstraction levels within the state must be simultaneously kept up-to-date.Having multiple representations within a state, however, makes it difficult to keep the state consistent if the different levels of abstraction represent the same aspect of the world.Ideally, abstraction operations are, thus, computationally inexpensive so that they can be called regularly on demand.This is a specific case of the "reduce redundancy" guideline (Section 8.2.1.1). In the AIMM WM (Dömel et al., 2017), edges of its tree representation are used only for geometry and not for symbols.If necessary, symbols can be computed via operations without high computational load. Choosing the abstraction level as high as possible, as low as necessary The appropriate level of abstraction for the state representation is determined by the DfU principle.If the abstraction level is too low, high-level action information cannot be integrated at the tell interface.On the other hand, if the abstraction level is too high, e.g., using only symbolic predicates, a motion planner could not query metric information about the geometry of the world at the ask interface.Thus, an appropriate level of abstraction must be chosen and employed; concretely, it should be as high as possible but as low as necessary. 
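In the spirit of the preceding guidelines, the following hypothetical Python sketch derives a symbolic on-top-of relation from purely geometric state on demand, instead of caching symbols alongside the geometry. Object names, poses, and thresholds are invented for illustration.

```python
def derive_on_top_of(poses, sizes, tol=0.02):
    """Derive symbolic on(a, b) facts from geometric object poses.

    poses: dict name -> (x, y, z) of the object's bottom-center, in meters.
    sizes: dict name -> height of the object, in meters.
    """
    facts = []
    for a, (ax, ay, az) in poses.items():
        for b, (bx, by, bz) in poses.items():
            if a == b:
                continue
            horizontally_aligned = abs(ax - bx) < 0.1 and abs(ay - by) < 0.1
            resting_on_top = abs(az - (bz + sizes[b])) < tol
            if horizontally_aligned and resting_on_top:
                facts.append(("on", a, b))
    return facts


poses = {"cup": (0.50, 0.20, 0.75), "table": (0.50, 0.20, 0.00)}
sizes = {"cup": 0.10, "table": 0.75}
print(derive_on_top_of(poses, sizes))  # [('on', 'cup', 'table')]
```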
In the AIMM WM (Dömel et al., 2017), the abstraction level to describe a topological relation between different types of objects is employed so that the sensing components can tell information about detected objects, the planning components can ask collision-free spaces, and the acting components can tell expected effects of skill execution. Decreasing dependency among state elements Even if a state has a homogeneous level of abstraction without redundancy, a modification in one part of the state may require updates to other parts.For example, we assume a state has a list of object poses with respect to the world coordinate frame.If an object A is placed on another object B in the real world, and if the pose of the object B changes, the state has to update the pose of both objects A and B. Such dependencies among state elements should be avoided as much as possible since they increase the number of operation calls (see Section 8.2.2.2).In this example, employing a tree as a data structure of the state could be one of the solutions [as the AIMM WM does (Dömel et al., 2017)]. Using simple time representations Finding appropriate representations for time in a WM is challenging.As described in Section 5.3, there are two kinds of time.Furthermore, the time in the real world is by nature continuous, while the state and its modification are usually associated with discrete time.Due to such complexity, careless introduction of the time representation could easily increase the model complexity without providing practical advantages.Following the DfU principle, the time representation should be employed only according to the actual necessity from the application.For many WMs, no explicit modeling of time is needed.Instead, they often assume that the current world state represents the current real world. If modeling of time is required, special attention must be paid to which type of time is represented and how the discretization of time is handled in the model.It is also important to decide carefully which aspects of time to represent in the state and which to implement by operations.For example, the time interval during which an object is at a certain position can be modeled directly in the state.Alternatively, for example, we can store into the state only the moments at which the position of the object has changed and let an operation determine the position at certain points of time by intra/extrapolation.This guideline suggests using a simple representation, and which is simpler depends, thereby, strongly on the application. In the AIMM WM (Dömel et al., 2017), no time needs to be modeled since having access to the most recent belief about the most recent world satisfies the requirements from components interfacing with the WM. Implementation: operations 8.2.2.1 Modularizing and reusing As discussed in Section 6, operations can be combined into compound operations.If some operational routines are common to multiple operations, they should be modularized and reused, following the DRY principle. In the AIMM WM (Dömel et al., 2017), basic operations are provided as a Python library, allowing for implementing compound operations reusing the basic ones as a module. Minimizing calling frequency and computational load Calling operations causes computational load.Thus, keeping the call frequency low, especially for computationally expensive calls, is important to achieve better performance. 
As shown in Figure 13, when operations are triggered, information flows can be decided by either the information source (producers) or their dependents (consumers) (Bainomugisha et al., 2013) 10 .Producers can push information any time it is available, or consumers can request (pull) the information when they need it (Bainomugisha et al., 2013).Which way is preferable depends on the use case, i.e., the design decision should be made by the DfU principle.In general, we see that the flow of task-specific information that is only relevant to one or a few consumers is best implemented via a pull and the flow of general information that is relevant to many consumers via a push. It should be noted that the push and pull are implementation models of how operations are triggered and how the tell/ask interfaces are called.Although it is counter-intuitive, for example, the ask interface can be implemented by the push approach, where the WM periodically publishes information to other components.Similarly, the tell interface can be implemented by the pull approach, where the WM triggers a perception component in a polling manner to update the state periodically. As mentioned in Section 8.2.1.1,caching intermediate results in the state causes data duplication.However, it can contribute to minimizing the computation load by operations, as shown in Figure 14A.Without caching, the operations f1, f2, and f3 must be called every time information flows from the producer (at the top) to the consumer (at the bottom).If the results of these functions are cached in the state (gray boxes), they must only be called in case information is out-of-date. Caching also enables push and pull information flows to be decoupled, as shown in Figure 14C.This means that operations can be implemented to either push or pull information, depending on their frequency and computational cost.We consider, for instance, that f1 and f2 are computationally cheap and not called very often.Then, any time new information is provided by the producer, it is opportune to immediately push by calling f1 and f2 and then to store the intermediate result in the state.If f3, on the other hand, is computationally expensive, the state would be pushed no further.Rather, f3 would only be applied to the cached, pushed results of f1 and f2 upon request by one of the consumers, and the "pulled" result of f3 would be cached as well. In the AIMM WM (Dömel et al., 2017), as written in Section 8.2.1.1,no intermediate information is cached into the state.Since the WM represents semi-static environments, components basically push information at the tell interface and pull information at the ask interface. Implementation: boundary 8.2.3.1 Keeping the boundary abstracted from components If the WM boundary is set too close to the producer and consumer components, operations only specific to them will be included within the WM.Since such component-specific operations are not useful for other components, this is against the DfU principle.This is especially critical when the same WM is used for different robotic systems because of differences in software/hardware components. The AIMM WM (Dömel et al., 2017) keeps the generality of operations exposed at the boundary, and thus, the same WM can also be used for our planetary exploration rover LRU2 (Lehner et al., 2018) (see also Section 7.6). 
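A minimal Python sketch of the caching pattern discussed in Section 8.2.2.2 above (Figures 14A-C): the cheap operations f1 and f2 are applied eagerly when the producer pushes new data, their result is cached in the state, and the expensive f3 runs lazily only when a consumer pulls while the cache is stale. The function bodies are placeholders; only the push/pull structure is the point of the example.

```python
class CachedWorldModel:
    def __init__(self):
        self._intermediate = None   # cached result of f2(f1(raw))
        self._final = None          # cached result of f3(intermediate)
        self._final_fresh = False

    # tell interface (push): cheap abstractions run immediately.
    def tell(self, raw):
        self._intermediate = self._f2(self._f1(raw))
        self._final_fresh = False   # invalidate the downstream cache

    # ask interface (pull): expensive f3 runs only on demand.
    def ask(self):
        if not self._final_fresh:
            self._final = self._f3(self._intermediate)
            self._final_fresh = True
        return self._final

    @staticmethod
    def _f1(raw):            # e.g., filter raw sensor data (cheap)
        return [x for x in raw if x is not None]

    @staticmethod
    def _f2(filtered):       # e.g., abstract into a summary (cheap)
        return sum(filtered)

    @staticmethod
    def _f3(summary):        # e.g., an expensive global composition
        return {"global_result": summary}


wm = CachedWorldModel()
wm.tell([1, None, 2])
print(wm.ask())   # f3 runs here
print(wm.ask())   # cached; f3 is not recomputed
```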
Setting the boundary where operations are reusable

The aforementioned guideline could lead to an extreme design in which the WM boundary is drawn right around the state. However, this prevents the reuse of operations; i.e., it is against the DRY principle and the guideline in Section 8.2.2.1. Thus, the WM boundary should be set so that the operations that are common and useful for multiple components are included within the WM.

Conclusion

In this review, we provided an overview of four decades of work on robotic WMs and classified concrete implementations in Sections 3 and 7. The main contributions are to make the design dimensions of WMs explicit (Sections 5, 6) and to highlight the underlying principles that guide decisions on these WM design dimensions (Section 8.1) as well as the trade-offs that may be necessary when they are in conflict (Section 8.2).

We learned that "the" WM does not exist. Even when following the same guiding principles, good WM designs highly depend on the robotic system, the environment in which it operates, and the tasks it is expected to perform. The principles may well lead to a design where one robot has more than one WM, for instance, one for navigation and another for manipulation, as highlighted in Section 8.1.3.

A current trend is that the deep learning community is increasingly embracing the design of modular neural architectures and has also identified the WM as a necessary component within such designs. This trend, which we believe to be driven by the "low coupling, high cohesion" principle, is very much aligned with the stance in this review, even if the underlying implementation of states and operations as deep neural networks is quite different from the robotic WMs focused on in this review.

In Section 4, we saw that different WM boundaries lead to quite distinct interpretations of what a WM is and what it should encompass. However, all of these interpretations can well be referred to as a "world model," and the term is, thus, essentially a homonym for different concepts. By refining the terminology for WMs and their subcomponents and classifying existing approaches, our aim is to enable researchers to clarify what type of WM they are referring to and to facilitate the motivation for their design and implementation.

One of the future research challenges we foresee is how to design a WM for future robotic systems to perform tasks in more realistic, less controlled scenarios. System complexity will increase by employing more novel and diverse hardware/software components, and new requirements will be posed to WMs. We expect the principles and guidelines discussed in Section 8 to promote the invention of novel, unforeseen WM designs.

FIGURE 2. Model of the Kalman filter illustrated as a WM, consisting of a state (which holds a mutable x and a constant parameter θ), operations (using A, B, and H), and a boundary.
FIGURE 8. WM state without a time concept.
FIGURE 9. WM state with a time concept of its belief. The beliefs are stored in the state associated with time.
FIGURE 10. WM state with a time concept of the real world. The belief, modeling changes in the real world with time, changes over time, but only the most recent belief at t_p is stored in the state and shown in the figure.
FIGURE 12. Examples of compound operations. (A) Integrating an object detection result into the state via the tell interface. (B) Querying whether a unique, specific cup is visible to the robot through the ask interface.
FIGURE 14. (A) Information flow without caching intermediate data in the state. (B) State caching results of operations. (C) Caching allows for decoupling of push and pull.
TABLE 1 (Continued). Classification of representative robotic world models. System coupling means how many components communicate with the WM. Interface types are considered to be many if a wide range of different, heterogeneous data are provided/received at the tell/ask interfaces. Perspective types categorize the referenced work by the perspective from which it describes the WM, i.e., as a component within a system or as a framework.
2023-11-04T15:14:24.096Z
2023-11-02T00:00:00.000
{ "year": 2023, "sha1": "9175f0c93b4cb98466cff72586ea1e20cba1f085", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/frobt.2023.1253049/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07cad83628271e5f7347eee256937c573ddf369d", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
247458096
pes2o/s2orc
v3-fos-license
Psychotic Disorder as the First Manifestation of Addison Disease: A Case Report Introduction Addison disease is a relatively uncommon endocrine disease resulting from adrenal insufficiency. Psychiatric symptoms are among its rare primary and particularly isolated clinical symptoms. This report presents a case with adrenal insufficiency manifested by the psychotic syndrome. Case Presentation A 28-year-old Iranian female with a history of immune thrombocytopenic purpura (ITP) and asthma since childhood presented with a 13-month history of progressive depression with insomnia and nightmare symptoms. After being prescribed haloperidol, clomipramine, and clonazepam for eight months, abdominal pain and weight loss due to anorexia started. Her physical examination showed skin hyperpigmentation in the elbow, knee, ankle, and buccal mucosa. Physical examination and initial laboratory tests suggested adrenal insufficiency. Addison disease was confirmed according to the laboratory tests and abdominal CT. The symptoms were significantly improved using intravenous hydrocortisone treatment. The patient remained calm and had a normal sleep without depressive symptoms or psychosis after 72 hours of treatment. During one year of follow-up, the patient was in good general condition without psychological symptoms. Conclusions This report shows that psychotic disorder can be the first manifestation of Addison disease. Therefore, physicians should be informed about the neuropsychiatric symptoms of adrenal insufficiency, especially when the patient lacks a family or personal history of psychiatric illness. Introduction Primary adrenal insufficiency or Addison disease (AD) is a rare illness resulting from the failure of glucocorticoids with or without deficits in mineralocorticoids and adrenal androgens. The incidence rate of AD is about 35 to 120 cases per million people. Its symptoms depend on the level of stress, glucocorticoid, mineralocorticoid, androgen deficiency, and t comorbidities. Diagnosis in the symptomatic period is not difficult; however, it is challenging in the initial phase. Symptoms are not specific at onset and may include generalized weakness, lassitude, anorexia, fatigue, vertigo, chronic malaise, preferential salty meals, orthostatic hypotension, weight loss, and muscle and joint pain (1). AD is a relatively uncommon disease; however, its prevalence has grown in the last three decades. Based on an expressive registry, autoimmune adrenalitis is the commonest etiology of AD (2). Skin hyperpigmentation and neuropsychiatric manifestations, like irritability, apathy, cognitive impairment, depressive symptoms, sleep disorders, and delusions, are observed during adrenal insufficiency. Psychiatric symptoms and isolated clinical manifestations are rare at the onset of AD (3). Instead, they are commonly accompanied by the cardinal signs of adrenal insufficiency associated with the disease severity. Although this problem is essential, several physicians, such as endocrinologists and psychiatrists, are not informed about the correlation between such diseases. This report presents a case with adrenal insufficiency manifested by the psychotic syndrome. and nightmare symptoms. After a specialized psychiatric consultation, haloperidol, clomipramine, and clonazepam were prescribed. After eight months, the patient's symptoms worsened, and abdominal pain and weight loss due to anorexia started. Because of GI symptoms, the psychologist requested a gastroenterologist consultation. 
She was an active teacher in primary school, formerly without any problem. Due to tiredness and inability to teach, the patient had applied for retirement. The patient complained of amenorrhea and lost 10 kg weight in eight months. During the first visit, she was cachectic and agitated. She had mild dehydration with an arterial blood pressure of 85/40 mmHg, a respiratory rate of 18/min, a blood glucose level of 90 mg/dL, and a pulse rate of 120/min (Table 1). Her physical exam showed hyperpigmented elbow, knee, ankle, and buccal mucosa in the mouth ( Figure 1). Physical examination and initial laboratory tests suggested adrenal insufficiency. A specialized laboratory test was performed to confirm the diagnosis (Table 1). Laboratory test and abdominal CT confirmed AD ( Figure 2, Table 2). Therefore, she was admitted to the ward due to her general condition and examination. A complete blood count test was performed ( Table 2). The percentages of neutrophils, lymphocytes, monocytes, and eosinophils were 38%, 51%, 3%, and 8%, respectively. Mean corpuscular volume (MCV) was normal; therefore, we did not check ferritin since there was normocytic normochromic anemia. Hormonal tests were conducted due to the patient's menstrual irregularity (amenorrhea). This was performed to check that amenorrhea is not because of a pituitary problem. According to the results (Table 2), the serum level of cortisol was low (1 ng/mL), and plasma ACTH level was very high (355 pmol/mL), showing that the patient had primary adrenal insufficiency (primary adrenal disease), characterized by menstrual irregularities. The patient also was suffering from hypokalemia due to anorexia and vomiting. The exact information about her vomiting duration and frequency was not recorded, but she vomited several times a day. To correct hypokalemia, we started intravenous hydrocortisone (100 mg every eight hours) and normal saline (3 L/day). The symptoms significantly improved on the third day. The patient remained calm and had a normal sleep without depressive symptoms or psychosis after 72 hours of treatment. The patient's depression medication was discontinued based on psychiatrist consultation. The patient was discharged with hydrocortisone (30 mg/day) and fludrocortisone tablets (0.1 mg/day). During one year of follow-up, she was in good general condition without psychological symptoms and continued her work as a teacher. The patient's menstrual dis-order disappeared during this period, and she gained 6 kg weight. Discussion In the current case report psychiatric disorder was a significant symptom of chronic adrenal insufficiency. This case showed no severe clinical manifestation and cardinal symptoms of this metabolic disorder, like the Addisonian crisis. Klipel, in 1899, for the first time reported AD psychiatric manifestations defined as "Addisonian encephalopathy" (4). Limited similar case series (n = 25) published from the 1940s to 1950s indicated a 64 -85% correlation between AD and psychiatric diseases. Nonetheless, nowadays, this association is not widely considered (2). The majority of case studies of AD associated with psychiatric symptoms represented that male patients are slightly more influenced than females, and such symptomatology commonly can start before treatment and diagnosis of adrenal insufficiency. Depression, as the most common disorder, is accompanied by adrenal insufficiency, with commonly mild mood manifestations, motivation reduction, and behavior change. 
On the other hand, catatonia, psychosis, delirium, disorientation, and memory deficits are not commonly observed. Psychosis correlates with Addisonian crisis and severe AD symptoms, and merely nine case reports have shown a correlation between a psychotic symptom and the primary clinical picture of adrenal insufficiency.

The cause of psychiatric disorders in AD has not been well addressed, and there are many theories suggested in some case reports. Hyponatremia is linked to brain swelling and encephalopathy, leading to memory, consciousness, and thinking disorders. Decreased glucocorticoid, which can be found in the brain, particularly the hippocampi, causes memory deficits and frontal circuit dysfunction with reduced processing speed, executive function, reasoning, and thinking. A reduction in cerebral glucocorticoid stimulation increases neural excitability, leading to an improvement in sensory information detection. Also, proopiomelanocortin (POMC), an anterior pituitary hormone, is produced secondary to reduced glucocorticoid. In addition, elevated endorphin levels are associated with psychosis and hallucinations (4).

In the diagnosis of adrenal insufficiency, each cause is associated with inadequately low cortisol production. Serum cortisol levels are elevated in the early morning (about 6 AM) and range from 10 to 20 mcg/dL (275 to 555 nmol/L) compared to other times during the day. A low serum cortisol level (lower than 3 mcg/dL or 80 nmol/L) in the early morning suggests adrenal insufficiency. The patient is then diagnosed with primary adrenal insufficiency (primary adrenal disease). In primary adrenal insufficiency, plasma ACTH at 8 AM is high (over 4000 pg/mL; 880 pmol/L). In patients with mineralocorticoid deficiency, besides cortisol deficiency, plasma renin level or activity should be assessed; aldosterone concentrations are low, with elevated serum potassium and reduced serum sodium concentrations. On the contrary, in secondary or tertiary adrenal insufficiency, plasma ACTH levels are low or low normal (Table 1). The normal level of ACTH at 8 AM commonly ranges from 20 to 52 pg/mL (4.5 to 12 pmol/L) in the two-site chemiluminescent assay (5, 6).

Previous studies have shown that AD is associated with other autoimmune diseases; thus, a medical history of coeliac disease, thyroid disease, vitiligo, or atrophic gastritis increases suspicion of another autoimmune diagnosis and the probability of an autoimmune polyendocrine syndrome (APS).

Hydrocortisone in divided doses (total dose: 10 to 12 mg/m2/day) is considered the glucocorticoid of choice to manage chronic primary adrenal insufficiency. The glucocorticoid dose relieves symptoms of glucocorticoid deficiency. Most patients with primary adrenal insufficiency eventually need mineralocorticoid replacement using fludrocortisone. Dehydroepiandrosterone (DHEA) therapy is suggested, when needed, for females with an impaired sense of well-being or low mood despite optimal glucocorticoid and mineralocorticoid replacement.
The main points investigated in this study are: (1) Keep in mind that psychiatric disorders, such as insomnia, agitation, and depression, can be the manifestations of AD, but they may rarely be the first manifestations; (2) Adrenal insufficiency should be considered while treating a psychiatric patient (with depression and psychosis) with no family or personal history of psychiatric illness or cases without failed medical treatment for depression; (3) Addison disease is an autoimmune disease that can be seen with other autoimmune diseases (type 1 diabetes, autoimmune thyroiditis, celiac disease, premature menopause, Graves' disease, pernicious anemia, and Sjögren's disease). In conclusion, we reported a rare case of psychiatric disorder as a significant symptom of chronic adrenal insufficiency. This case indicated that physicians should be informed about the neuropsychiatric symptoms of adrenal insufficiency. This should be considered while treating a psychiatric patient (with depression and psychosis) with no family or personal history of psychiatric illness or cases who failed medical treatment for depression. Psychiatric disorders have a high prevalence, and adrenal insufficiency is a rare disease; therefore, an appropriate workup is recommended in exceptional cases such as the presence of some clinical manifestations of AD.
2022-03-16T15:29:57.430Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "fe64928960860023371d286a93a817326fd02b35", "oa_license": "CCBYNC", "oa_url": "https://brief.land/ijem/cdn/dl/822771f2-a392-11ec-ac98-9362db7569df", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "52a568c875e78938d327a85d178886f2ee59e1d2", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
52802194
pes2o/s2orc
v3-fos-license
Regulating the prostate

TGF-β signaling (brown) keeps stem cells dormant but alive in the proximal prostate.

Prostate cancer and benign prostatic hyperplasia result from excessive proliferation of cells in this organ, likely due to deregulation of stem cells. Salm et al. (page 81) find that cells in the proximal region of the prostate, where the stem cells reside, respond differently to TGF-β relative to distal cells in mice.

Text by Rabiya S. Tuma rabiya@nasw.org

The degranulation two-step

Degranulation of mast cells involves a two-step process, report Nishida et al. on page 115. First, antigen stimulation triggers microtubule polymerization and granule translocation to the cell surface in a calcium-independent process. Second, the granules fuse with the plasma membrane in a well-characterized calcium-dependent process.

Mast cells are so full of granules that degranulation was thought to occur through granule-to-membrane fusion and granule-to-granule fusion, without the need for granule transport. However, inhibition of microtubule polymerization blocked degranulation. In response to antigen stimulation, Nishida et al. found that tubulin staining increased and fluorescently tagged granules translocated to the cell surface before exocytosis. Removal of calcium from the culture medium prevented granule fusion to the membrane but had no effect on microtubule polymerization or granule movement, suggesting that the steps are distinct.

When the team stimulated mast cells deficient for Fyn or Lyn tyrosine kinase signaling proteins, the cells had reduced degranulation, as previously reported. However, only the Fyn mutant cells showed a disruption in microtubule polymerization and granule movement, suggesting that Fyn signaling directs microtubule polymerization. The scientists are now looking for proteins that link granules to microtubules. They reason that such molecules might provide a relatively specific target for drugs aimed at blocking unwanted histamine release from mast cells.

Degranulation relies on transport by an induced microtubule network (green, right).

Sun and survival with Hif1α

Stabilized Hif1α helps cells to survive in the face of low oxygen. Now, Busca et al. (page 49) report that elevated cAMP, which occurs in melanocytes downstream of UV irradiation from the sun, also leads to higher Hif1α. This should help cells survive the sun's insults, but may also aid in melanoma development.

Oxygen-dependent Hif1α regulation occurs primarily at the posttranslational level. But Busca et al. report that cAMP acts as an independent stimulant of Hif1α expression by increasing transcription of the Hif1a gene (which encodes Hif1α) in melanocytes. The elevated cAMP up-regulates expression of MITF, a key melanocyte-specific transcription factor. When the team performed a series of reporter gene experiments using MITF and Hif1a constructs, they saw that MITF was necessary and sufficient to turn on Hif1α expression. Furthermore, MITF bound directly to Hif1a promoter DNA, based on ChIP analysis. Significantly, cells that had elevated cAMP were resistant to cell death triggers, but only when Hif1α was present. Although cAMP-MITF activation of Hif1α was not detected in other cell types (presumably due to the absence of MITF), the scientists think similar transcriptional regulation of the Hif1a gene likely occurs elsewhere. If that is true, researchers may have one more tool to turn off Hif1α's pro-survival role in tumor cells.

Regulating the prostate

TGF-β signaling (brown) keeps stem cells dormant but alive in the proximal prostate.

Prostate cancer and benign prostatic hyperplasia result from excessive proliferation of cells in this organ, likely due to deregulation of stem cells. Salm et al. (page 81) find that cells in the proximal region of the prostate, where the stem cells reside, respond differently to TGF-β relative to distal cells in mice. A regulatory teeter-totter between TGF-β expression, which inhibits cell proliferation, and pro-growth cytokines maintains stem cell quiescence in the healthy organ.
The researchers looked first at androgen-expressing animals. They found that cells in the proximal region of the prostate had more TGF-β-mediated signaling than did cells in the distal region. This should keep the stem cells quiescent, but the cells still survive, probably because of the high expression of anti-apoptotic Bcl-2 in the region. The distal region normally has lower TGF-β signaling, but androgen withdrawal reversed the pattern. Now the high TGF-β in the distal region induced apoptosis, whereas the lowered TGF-β signaling in the proximal region allowed stem cells in this area to respond to pro-growth cytokines. The balance between TGF-β and pro-growth cytokines likely maintains the quiescent state of stem cells in a healthy prostate, and deregulation may lead to prostate disease. A similar balancing act may also be present in the epidermal and hematopoietic stem cells already known to be regulated by TGF-β.
2019-03-22T16:10:25.046Z
2005-07-04T00:00:00.000
{ "year": 2005, "sha1": "814df839f643919e5947f7c2b27310122e543d56", "oa_license": "CCBYNCSA", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2254846", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "d363fcf316f2cf42f215b67ca08250227130ade3", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
225723781
pes2o/s2orc
v3-fos-license
Dimensions and skeletopy of kidneys in two populations of Cerdocyon thous (Linnaeus, 1766)

Article history: Received 23 March 2020; Accepted 28 April 2020

Cerdocyon thous is the South American canid with the widest geographic distribution. To the south of Ecuador, two isolated populations have been identified, living under different average temperatures and food availability. The objective was to measure the length, width, thickness, and volume of the kidneys and the length of the renal vessels, and to verify the renal skeletopy in two populations of C. thous. Kidneys and renal vessels were measured from 34 cadavers collected on highways in Brazilian territory. From the Atlantic Forest biome (latitude 22°), 14 specimens (seven males and seven females) were analyzed, and from the Pampa biome (latitude 29°), 20 specimens (eight males and twelve females). On average, in the right antimere the kidneys measured 49.9 × 25.2 × 24.4 mm and had a volume of 16.5 cm³; the renal artery measured 21.3 mm and the renal vein 19.4 mm. In the left antimere, the kidneys measured 49.3 × 24.4 × 22.8 mm, with a volume of 14.6 cm³, and the artery and vein measured 21.0 mm and 28.4 mm, respectively. The right kidney was always cranial and predominantly positioned ventrally to vertebrae L1–L3, while the left one was positioned ventrally to vertebrae L2–L4. There was no difference in the comparison between sexes or antimeres. Most renal dimensions were significantly higher in the specimens from the Pampa biome, possibly due to body size and type of diet. This result calls for caution when interpreting diagnoses that depend on the evaluation of normal renal dimensions in this species.

INTRODUCTION

Cerdocyon thous, also known as the "crab-eating fox," is the wild canid with the largest distribution area on the South American continent. With a great capacity to adapt to habitats in the Neotropics, C. thous inhabits areas of closed or open vegetation (COURTENAY; MAFFEI, 2004; HUNTER, 2011; KASPER et al., 2014). Two disjunct dispersion areas are recognized in South America: one to the north and the other to the south of Ecuador. To the north of Ecuador, the dispersion area is distributed in Colombia, Venezuela, Guyana, and Suriname; to the south of Ecuador, it is found in Brazil, Paraguay, Argentina, Uruguay, and Bolivia (COURTENAY; MAFFEI, 2004). C. thous has a body mass of 5-9 kg and can measure up to 1.2 m from snout to tail (KASPER et al., 2014). An omnivore, it eats fruits, insects, crustaceans, small vertebrates, and eggs (HUNTER, 2011). Although its conservation is not threatened, C. thous suffers pressure from hunting by rural producers to obtain the skin, is run over by cars, and is infected with diseases transmitted by domestic dogs (HUNTER, 2011; KASPER et al., 2014).

The kidneys are paired, sublumbar, retroperitoneal organs located ventrally to the hypaxial musculature and laterally to the aorta and caudal vena cava. Each kidney has two surfaces (dorsal and ventral), two poles (cranial and caudal), and two margins (lateral and medial) (NICKEL; SCHUMMER; SEIFERLE, 1979). The medial margin contains the renal hilum, which defines a space, the renal sinus, containing the ureter, blood vessels, lymphatic vessels, and nerves (NICKEL; SCHUMMER; SEIFERLE, 1979). The renal artery has a dorsal position in relation to the renal vein. In dogs, duplication of the renal vein is more common than duplication of the renal artery (EVANS; LAHUNTA, 2013). The high occurrence of C.
thous in nature and its high frequency in zoos and private collections reflects in the hundreds of annual veterinary services to individuals of this species (COURTENAY; MAFFEI, 2004;SILVA et al., 2014). Consequently, several studies report diagnoses of diseases (FREDO et al., 2015;RIBEIRO;VEROCAI;TAVARES, 2009), biochemical test values (MATTOSO et al., 2015SILVA;NOGUEIRA;SANTANA, 1993), ultrasound evaluation (SILVA et al., 2014, and surgical procedures (PICCOLI et al., 2017) to the kidneys of C. thous. Knowledge of the renal dimensions is fundamental for diagnosis in nephrology and urology (BARR; HOLT; GIBBS, 1990;KONDE et al., 1984). Although there are reports on kidney measurements in domestic LAHUNTA, 2013;NICKEL et al., 1979;STOCCO et al., 2016) and wild carnivores (EVANS; SOUZA et al., 2018), no studies report the normal dimensions of the kidneys of C. thous. In anatomical terms, one study describes the sectoral intrarenal arterial branching in one female specimen of C. thous (MENEZES et al., 2011) and another reports on one case of duplicity of the left renal artery (PEÇANHA et al., 2020). In addition, the C. thous size has significant intra-specific variation, tending to be larger in populations farther south of the South American continent (BUBADUÉ et al., 2016;MARTINEZ et al., 2013). Therefore, it seems plausible to assume that the normal average dimensions of the kidneys may differ in different populations of the same species. The aim of this study was to establish kidney and renal vessel measurements and renal skeletopy in individuals from two isolated populations of C. thous, to subsidize procedures in wildlife medicine, and to clarify if the size of the kidneys varies depending on the population's habitat. MATERIAL AND METHODS Thirty-four adult specimens of C. thous were collected dead on highways of the Atlantic Forest biome (State of Rio de Janeiro, Brazil), under the authorization of the Ethics Committee on Animal experimentation (protocol 018/2017), and of the Pampa biome (State of Rio Grande do Sul, Brazil), under the authorization of IBAMA/SISBIO (number 33667). From the Atlantic Forest biome, 14 cadavers were collected (seven males and seven females) and from the Pampa biome, 20 cadavers (eight males and twelve females). The collection sites of the specimens of the Atlantic Forest biome were around latitude 22º and those of the Pampa biome, 29º, about 1.520 km in a straight line ( Figure 1). Source: adapted from Google Maps ® After collection, the specimens had their thoracic cavities opened, the thoracic aorta identified for cannula placement, and fixed by intravascular injection of a 50% formaldehyde solution. Next, the latex solution stained with red pigment was perfused in the vascular bed and the cadavers were immersed in polyethylene tanks with a 10% formaldehyde solution to finish the process of fixing and conserving the specimens. After at least seven days of fixation, the cadavers were washed in running water and the peritoneal cavity was opened for dissection and for measuring the kidneys and renal vessels (Table 1). A digital caliper (0-150 mm, 0.01 mm resolution, accuracy ± 0.02 mm, Eda ® ) was used to obtain measurements of craniocaudal length, lateromedial width, and dorsal-ventral thickness in both kidneys. Renal volume was calculated by applying the equation for the volume of an ellipsoid stipulated by Barr (1990), where Volume (V) = length (L)  Width (W)  Thickness (T) x 0.523. 
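A quick worked example of the ellipsoid volume formula quoted in the Materials and Methods above (Barr, 1990), V = L × W × T × 0.523, applied here to the mean right-kidney dimensions reported in the abstract. The small discrepancy with the reported mean volume (16.5 cm³) is expected, since the study averages per-specimen volumes, whereas this example multiplies the mean dimensions.

```python
# Ellipsoid volume estimate for the mean right kidney (values from the abstract).
length_mm, width_mm, thickness_mm = 49.9, 25.2, 24.4

volume_mm3 = length_mm * width_mm * thickness_mm * 0.523  # Barr (1990) formula
volume_cm3 = volume_mm3 / 1000.0

print(round(volume_cm3, 1))  # ~16.0 cm3, close to the reported mean of 16.5 cm3
```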
The lengths of the renal artery and vein were also measured in both antimeres, taken between the abdominal aorta or caudal vena cava to the renal hilum. Only kidneys and renal vessels in perfect macroscopic conditions were included in the present study and, therefore, in some cadavers it was not possible to measure both kidneys and vessels bilaterally. To determine skeletopy accurately, the poles (cranial and caudal) of each kidney were marked with radiopaque pins before cadavers were radiographed. The radiographic images were obtained in a lateral projection with the Phillips ® brand device, Aquilla Plus 300 model. Radiographs were taken in a Kodak ® Direct View Computerized cassette system, exposure of 40 KV, 200 mA in 0.1 s and saved in DICOM ® format. After viewing with Radiant Dicom Viewer ® software version 1.6.8, the files were converted to JPEG format. The position of both poles projected to the vertebrae was noted. The BioEstat 5.3 ® software was used to obtain descriptive statistics data (standard deviation and arithmetic mean) and unpaired Student's t-test to compare measurements between specimens of the two biomes (Atlantic Forest  Pampa), between sexes and antimeres, considered significant when p < 0.05. DISCUSSION The ellipsoid volume of the kidney of the C. thous, estimated between 14 cm 3 in the left kidney and 16 cm 3 in the right kidney, was bigger than of L. gymnocercus (12 cm 3 in the left kidney and 11 cm 3 in the right kidney) (SOUZA et al., 2018) and also bigger than in the domestic cat (9-12 cm 3 in the left kidney and 9-11 cm 3 in the right kidney) (STOCCO et al., 2016). Such comparisons seem pertinent to the size of the species: C. thous tends to be smaller than a medium-sized dog, but slightly bigger than L. gymnocercus and a lot bigger than domestic cats and M. p. furo. Among the data of the present study, probably what is most striking is the fact the kidneys of the specimens of C. thous from Pampa biome are significantly bigger in practically all the dimensions than those of individuals from the Atlantic Forest biome. Previous studies have shown that the skull size of C. thous specimens is inversely proportional to temperature-related variables; it means that in populations to the south of Ecuador, the greater the latitude the greater the size of individuals (MARTINEZ et al., 2013). This finding is consistent with the results of the present study, where specimens of C. thous collected near latitude 29º (Pampa) have kidneys significantly bigger than specimens collected near latitude 22º (Atlantic Forest). The collection areas are about 1.500 km away and differ in vegetation and annual average temperatures. The Pampa biome has more challenging periods of low rainfall than the Atlantic Forest, which is known to be humid (PENEREIRO et al., 2018). Perhaps, this relationship between the selective pressure of the environment on kidney morphology can play a role in understanding the differences found in renal dimensions between the two populations. Based on the geographic distribution previously proposed (CABRERA, 1931), it is possible the two populations analyzed in the present study constitute distinct subspecies: C. t. entrerrianus at Pampa biome and C. t. azarae at Atlantic Forest biome. In agreement with the comparison of skull dimensions performed in a previous study (MARTINEZ et al., 2013), the kidneys also did not differ between individuals of the same subspecies. It has been recognized three heterogeneous groups of C. 
thous when it comes to the morphometric patterns of the skull, and the specimens of the present study were also in distinct groups (MACHADO; HINGST-ZAHER, 2009). When analyzing skulls of C. thous, one study added that the skull of the specimens to the south had a larger fixation area for the temporal muscle, and thinner and sharper extra molar or premolar teeth (BUBADUÉ et al., 2016). These characteristics indicate a greater bite force and greater drilling capacity of the prey, suggesting preference for carnivorous diet in individuals to the south. The authors speculate that the sympatry of C. thous with another canid of similar size, L. gymnocercus, creates a demand for competitive adaptations that would not occur in other regions. Higher protein intake influences renal hemodynamics and increases the glomerular filtration rate, increasing the volume of the glomeruli and resulting in increased volume and renal mass in several mammal species, including rats, dogs and humans (FINCO, 1999;HAMMOND;JANES, 1998;SCHRIJVERS et al., 2002). Thus, it can be speculated that the kidneys of C. thous populations to the south are bigger not only in proportion to body size but also in adapting to a diet with a higher protein content than individuals in lower latitudes. The confirmation of this hypothesis would depend on a meta-analysis of the diet of these groups and comparative renal histomorphometry or serum biochemistry exams, or both, which are beyond the scope of this study. The significant differences found in the renal dimensions between individuals of the same species from different biomes generate one more aspect to be considered in the cautious interpretation of the findings of imaging and necropsy examinations in veterinary practice. In addition to geographical origin, age and diet may eventually interfere with renal dimensions, and should be considered while interpreting. The sample group of the present study is composed of young free-living adults, while captive animals can have a differentiated diet and live substantially longer. Renal dimensions did not differ significantly between antimeres in C. thous, as reported in domestic dogs (SAMPAIO; ARAUJO, 2002) and New Zealand rabbits (SANTOS-SOUSA et al., 2015), and different from human species, whose left kidney is significantly bigger (FERNANDES et al., 2002;MANDARIN-DE-LACERDA, 1989). In L. gymnocercus, the left kidneys had a tendency to be longer and wider than the right ones, although the difference was not significant (p = 0.07 and p = 0.06, respectively) (SOUZA et al., 2018). In domestic cats, only males had the left kidney bigger (STOCCO et al., 2016). Most studies that present renal dimensions in dogs have methodology based on imaging methods. Some suggest indexes where the kidneys of male dogs are significantly bigger than of the females (LEE; LEOWIJUK, 1982;LOBACZ et al., 2012;MARESCHAL et al., 2007;ARAUJO, 2002;SOHN et al., 2016). In C. thous, no significant difference was found in the renal measures between sexes, when all specimens were compared and when separated by biomes. The absence of differences in renal dimensions between sexes was also documented in L. gymnocercus (SOUZA et al., 2018), New Zealand rabbits (SANTOS-SOUSA et al., 2015) and in human species (CHEONG et al., 2007;EMAMIAN et al., 1993). In C. thous, the skeletopy of the cranial pole of the right kidney predominated at L1 level, while the left kidney in L2, that is, the right kidney, has a more cranial position. This positioning makes the kidneys of C. 
thous, especially the left one, free of costal coverage which would facilitate palpation of the abdominal wall during physical examination and its visualization in imaging diagnostic tests. This positioning was identical to what was described in domestic dogs and cats, including the right kidney located at the T13 level in some dogs (EVANS; LAHUNTA, 2013). According to a previous report, the feline renal skeletopy would be more caudal than in dogs and C. thous, with the cranial pole of the right kidney predominating at the level of L2 and the left at the level of L3 (STOCCO et al., 2016). In Mustela putorius furo, the cranial pole of the right kidney is at the level of T14 and the left kidney at the level of L2 (EVANS; AN, 2014). In New Zealand rabbits, the male's cranial poles of right kidneys predominated at level T13, while female's predominated at L1, and left kidneys at level L2 in males and L3 in females (SANTOS-SOUSA et al., 2015). The cranial advancement of the right kidney is common to all domestic mammals, except in pigs (KONIG; MAIERL; LIEBICH, 2016). In C. thous, LLV was significantly bigger than LRV, which is explained by the greater distance from the left kidney in relation to the caudal vena cava. This finding was identical in domestic dogs (EVANS; LAHUNTA, 2013), L. gymnocercus (SOUZA et al., 2018), domestic cats (STOCCO et al., 2016) and New Zealand rabbits (SANTOS-SOUZA et al., 2015). For the same reason, it would be expected that LRA was bigger than LLA, which did not occur in C. thous, in L. gymnocercus (SOUZA et al., 2018), not even in domestic felines (STOCCO et al., 2016), although mentioned in the domestic dog (EVANS; LAHUNTA, 2013). On the other hand, in New Zealand rabbits, LLA was significantly bigger than LRA (SANTOS-SOUZA et al., 2015). In absolute dimensions, the renal arteries of C. thous were about 5 mm smaller than those of L. gymnocercus (SOUZA et al., 2018), while the renal veins practically did not differ. CONCLUSIONS Finally, it can be concluded that the average size of the kidneys of C. thous differed in the two disjoint populations analyzed. This result reinforces the findings of measurements previously performed on the skulls of this species and by other factors to be clarified, such as the differences in protein or hydric intake. Veterinary procedures concerning the kidneys of this species should consider possible differences in size, depending on the habitat, captivity or age of the specimen.
First observations of warm and cold methanol in Class 0/I proto-brown dwarfs We present results from the first molecular line survey to search for the fundamental complex organic molecule, methanol (CH$_{3}$OH), in 14 Class 0/I proto-brown dwarfs (proto-BDs). IRAM 30-m observations over the frequency range of 92-116 GHz and 213-280 GHz have revealed emission in 14 CH$_{3}$OH transition lines, at upper state energy level, E$_{upper}\sim$7-49 K, and critical densities, $n_{crit}$ of 10$^{5}$ to 10$^{9}$ cm$^{-3}$. The most commonly detected lines are at E$_{upper}<$ 20 K, while 11 proto-BDs also show emission in the higher excitation lines at E$_{upper}\sim$21-49 K and $n_{crit}\sim$10$^{5}$ to 10$^{8}$ cm$^{-3}$. In comparison with the brown dwarf formation models, the high excitation lines likely probe the warm ($\sim$25-50 K) corino region at $\sim$10-50 au in the proto-BDs, while the low-excitation lines trace the cold ($<$ 20 K) gas at $\sim$50-150 au. The column density for the cold component is an order of magnitude higher than the warm component. The CH$_{3}$OH ortho-to-para ratios range between $\sim$0.3-2.3. The volume-averaged CH$_{3}$OH column densities show a rise with decreasing bolometric luminosity among the proto-BDs, with the median column density higher by a factor of $\sim$3 compared to low-mass protostars. Emission in high-excitation (E$_{upper}>$ 25 K) CH$_{3}$OH lines together with the model predictions suggest that a warm corino is present in $\sim$78\% of the proto-BDs in our sample. The remaining show evidence of only the cold component, possibly due to the absence of a strong, high-velocity jet that can stir up the warm gas around it. INTRODUCTION Complex organic molecules (COMs) have been widely observed across different stages of star formation towards high-mass and lowmass star-forming regions (e.g., Ceccarelli et 2022). Their chemistry is of great interest because organic molecules are the proposed starting point of an even more complex prebiotic chemistry during star and planet formation, thus linking interstellar chemistry with the origins of life (e.g., Ceccarelli et 2022, Mathew et al. 2022, Öberg et al. 2009, Münoz Caro et al. 2014. Methanol (CH 3 OH) is a fundamental COM and a precursor of more complex COMs. Purely gas-phase models cannot reproduce the abundance of methanol in dark clouds (e.g., Garrod et al. 2006). Gas-grain models have shown that under the physical conditions prevalent in dense and cold molecular clouds and pre-stellar cores, with n(H 2 )≥10 5 cm -3 and T ∼10 K, methanol can efficiently form on ice grains via surface chemistry and is then released into the gas phase via thermal and/or non-thermal desorption mechanisms (e.g., Tielens et al. 1991;Charnley et al. 1992;Watanabe & Kouchi 2002;Hidaka et al. 2004). In chemical models that consider only grain surface chemistry, the icy ★ E-mail: briaz@usm.lmu.de mantles covering the interstellar grains are processed by UV/cosmic ray irradiation forming radicals that react and form methanol and other COMs at warm temperatures of ∼20-30 K (e.g., Garrod et al. 2008). At cooler temperatures of ∼10-20 K, non-diffusive processes have been proposed for radicals to react and form methanol and other COMs (e.g., Garrod & Hebst 2006;Garrod et al. 2008;Jin & Garrod 2020). 
The main challenge in the chemical models is to understand the thermal and non-thermal desorption mechanisms at low temperatures that are able to remove COMs from the grain mantles and inject them into the gas-phase where they are detected (e.g., Herbst & Garrod 2022). Thermal evaporation is extraordinarily slow and obeys a standard Boltzmann law using desorption energy barriers for species heavier than hydrogen and helium typically on the order of 1000 K, or 0.1 eV. In case of methanol, the desorption or the binding energy is ∼3770-8618 K (e.g., Minissale et al. 2022). Dark cloud temperatures (∼10 K) are far too cold for this mechanism to be very significant, and thus non-thermal desorption processes must be more effective. In the inner regions of highly accreting objects with accretion to bolometric luminosity ratios of ∼50%-70%, dust grains can be heated to >20-40 K. Only weakly-bound species such as CO ice can desorb. Non-thermal desorption mechanisms such as photo-desorption triggered by absorption of UV photons and cosmic-rays induced sputtering are required. However for methanol, the UV absorption does not result in intact methanol molecules but rather in smaller species (Cruz-Diaz et al. 2016). Species such as ions and radicals resulting from UV absorption that remain in the grain mantle can react further to form more complex species. Sputtering by strong jets and shock emission knots can spot raise the grain temperature and inject frozen intact methanol into the gas-phase (e.g., Draine 1995, Wakelam et al. 2021, Paulive et al. 2022). An additional desorption mechanism is the excess formation energy of newly formed species that allows them to desorb immediately into the gas phase, the so-called chemidesorption . The exothermicity of the hydrogenation reaction that ultimately forms methanol on grains, CH 2 OH + H → CH 3 OH, is ∼4.1 eV, which is about an order of magnitude larger than the desorption energies of the products (∼0.18 eV). Laboratory experiments performed for a standard dark cloud model conditions have shown that only about 1% of the chemically desorbed methanol can reproduce the observed gas phase methanol levels (e.g., Garrod et al. 2006). We present here results from the first molecular line survey to search for methanol in early-stage Class 0/I proto-brown dwarfs (proto-BDs), with an aim to understand the chemical complexity during the early phases of brown dwarf formation. Proto-BDs are high-density (n(H 2 )≥10 5 cm -3 ) and cold (≤10 K) objects and are also predicted to posses a warm corino (∼20-70 K) in the inner dense regions (Machida et al. 2009; 2019) that can be observed using high-excitation molecular lines. Our recent survey of deuterium chemistry in proto-BDs has revealed emission in various deuterated species, including the doubly-deuterated D 2 CO molecule (Riaz & Thi 2022abc). These surveys have shown that proto-BDs show enhanced deuterium fractionation and high molecular D/H ratios in the range of ∼0.02-4.5. Low temperature and high densities can significantly enhance the molecular D/H ratio compared to the ISM elemental value (∼2.8×10 -5 ). The high ratios for the proto-BDs thus confirm a cold and dense environment in these objects. 
Since methanol is proposed to form on the surfaces of interstellar dust grains and in icy grain mantles, the physical conditions in proto-BDs should be ripe enough for the efficient formation of this fundamental COM, and the predicted presence of a warm corino should result in thermal/non-thermal desorption of the methanol thus formed to the gas phase. Section 2 describes the sample and the IRAM 30m observations of multiple methanol transition lines. Data analysis and results are presented in Sections 3 and 4. Section 5 presents a comparison of the observations with the predictions from the brown dwarf physical model, a discussion on the CH 3 OH ortho-to-para ratio, and the dependence of the methanol column densities on the extent of CO depletion and the bolometric luminosity of the system. SAMPLE, OBSERVATIONS, DATA REDUCTION Our sample consists of 14 Class 0/I proto-BDs (Table 1). A detailed discussion on the target selection, classification and measurements of mass, luminosity, H 2 number and column density for these objects can be found in Riaz et al. (2018) and Riaz & Thi (2022a). The distance to the targets is 436±9 pc in Serpens, 144±6 pc in Ophiuchus, and 294±17 pc in Perseus (Ortiz-Leon et al. 2017a, b;Dzib et al. 2010;Mamajek 2008;Schlafly et al. 2014;Zucker et al. 2019). The observations were obtained at the IRAM 30-m telescope between 2017 and 2022. We used the EMIR heterodyne receiver (E090 and E230 bands) and the FTS backend in the wide mode, with a spectral resolution of 200 kHz. The observations were taken in the frequency switching mode with a frequency throw of approximately 7 MHz. The source integration times ranged from 3 to 4 hours per source per tuning reaching a typical RMS (on T * A scale) of ∼0.01-0.03 K. The telescope absolute RMS pointing accuracy is better than 3 ′′ (Greve et al. 1996). The absolute calibration accuracy for the EMIR receiver is around 10% (Carter et al. 2012). The telescope intensity scale was converted into the main beam brightness temperature (T mb ) using standard beam efficiency of ∼76% at 86 GHz and ∼57% at 230 GHz. The spectral reduction was conducted using the CLASS software (Hily-Blant et al. 2005) of the GILDAS facility. The standard data reduction process consisted of averaging multiple observations, extracting a subset around the line rest frequency, and fitting a low-order polynomial baseline which was then subtracted from the average spectrum. DATA ANALYSIS Our observations cover the frequency range of 92-116 GHz and 213-280 GHz. While this frequency range covers several low-and high-excitation methanol lines, none of the CH 3 OH transition lines with upper state energy level E upper > 50 K were detected for our sample. Table 2 lists the CH 3 OH transition lines that were detected towards at least one of the targets. The CH 3 OH spectra are shown in Section A. We have measured the parameters of the line center, line width, the peak and integrated intensities using a single-or double-peaked Gaussian fit. A double-peaked profile was required for the case of J182952 and J162625 that show an extended wing in some of the spectra (Fig. A3). The wing may be the signature of wind/outflow. The line parameters are listed in Section A. The uncertainty is estimated to be ∼10%-20% for the peak and integrated intensities and Δv, and ∼0.02-0.04 km s -1 for V lsr . The errors on the line parameters are due to uncertainties in fitting the line profile, and mainly arise from the end points chosen for the Gaussian fit. 
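The line-parameter extraction described above can be illustrated with a minimal, self-contained sketch (Python with NumPy/SciPy; the synthetic spectrum and all numerical values are illustrative and are not the survey data): a Gaussian is fitted to a baseline-subtracted spectrum, and the peak intensity, V$_{lsr}$, Δv (FWHM) and integrated intensity are read off the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, T_peak, v_lsr, sigma):
    """Gaussian line profile in main-beam temperature vs. velocity."""
    return T_peak * np.exp(-0.5 * ((v - v_lsr) / sigma) ** 2)

# Illustrative synthetic spectrum: a 0.5 K line at 7.2 km/s with 0.02 K noise
rng = np.random.default_rng(1)
v = np.linspace(0.0, 15.0, 300)                                   # km/s
spec = gaussian(v, 0.5, 7.2, 0.35) + rng.normal(0.0, 0.02, v.size)

popt, pcov = curve_fit(gaussian, v, spec, p0=[0.3, 7.0, 0.5])
T_peak, v_lsr, sigma = popt
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma                   # line width (Delta v)
W = T_peak * sigma * np.sqrt(2.0 * np.pi)                         # integrated intensity
errs = np.sqrt(np.diag(pcov))                                     # formal 1-sigma fit errors

print(f"T_mb={T_peak:.2f} K, V_lsr={v_lsr:.2f} km/s, "
      f"dv={fwhm:.2f} km/s, W={W:.2f} K km/s")
```

A double-peaked profile, as used for J182952 and J162625, is handled in the same way by fitting the sum of two such components.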
The half power beam width of the telescope beam is ∼10′′ at 230 GHz and ∼28′′ at 100 GHz. The fluxes have been corrected for the beam filling factor, which is the ratio of the source size to the beam size. The source sizes are in the range of ∼3′′-7′′. The beam dilution factors range between ∼0.3-0.7 at 230 GHz and ∼0.1-0.25 at 100 GHz. The CH$_{3}$OH column density and rotational temperature, T$_{rot}$, were determined by constructing a rotational diagram, as described in Goldsmith & Langer (1999). Both the upper-level degeneracies and the partition function values used to construct the rotational diagrams are taken from the CDMS database for consistency. The degeneracies in the CDMS database account for the spin degeneracy (g$_{I}$=4). We have assumed optically thin lines at a single temperature. The assumption that the CH$_{3}$OH lines are optically thin has been tested in low-mass protostars using $^{13}$CH$_{3}$OH observations and found to be valid (e.g., Öberg et al. 2014). There are no $^{13}$CH$_{3}$OH line detections in our sample, so this assumption cannot be tested for the proto-BD targets. Figure 1 shows the rotational diagrams for the proto-BDs with detections in at least two CH$_{3}$OH transition lines. The beam-averaged column density and T$_{rot}$ (or T$_{ex}$) determined from these diagrams are listed in Tables 3 and 4. The uncertainty on T$_{rot}$ and N(CH$_{3}$OH) is estimated to be approximately 10%-20%, and is propagated from the errors on the line intensities and on the slope and intercept of the linear fit to the data. The errors are larger for the weaker lines with a ∼3σ-5σ detection. We have included the uncertainties to obtain the linear fits. (Table 1 notes: errors on L$_{bol}$ are estimated to be ∼20%; the first, second, and third classification values use, respectively, the criteria based on the integrated intensity in the HCO$^{+}$(3-2) line, the physical characteristics, and the SED slope.) The CH$_{3}$OH molecular abundances relative to H$_{2}$, [N(X)/N(H$_{2}$)], are listed in Table 5, and have a measurement uncertainty of approximately 20%-25%. The measurements of N(H$_{2}$) are provided in Riaz & Thi (2022a). Note that these are beam-averaged abundances derived from single-dish observations. We have assumed that the methanol lines trace the same volume in the proto-BDs, considering their small source sizes compared to the large beam sizes. Interferometric observations can provide a better insight into the physical scale of the peak emission in these lines. RESULTS Most objects show emission in only the low-excitation (E$_{upper}$ ≤ 20 K) CH$_{3}$OH lines, and these can be fitted at a single T$_{ex}$ in the range of ∼5-15 K (Table 3). This indicates that a large mass fraction of the gas is cold in these proto-BDs. For J182959, J182952, J182856, J032838, and J032859, all of the points cannot be fitted at a single T$_{ex}$, and we see a break in the relation around E$_{upper}$ ∼ 20 K (Fig. 1). The fit to the points at E$_{upper}$ ≤ 20 K is steeper than the flat fit to the E$_{upper}$ > 20 K points. The T$_{ex}$ derived for the E$_{upper}$ ≤ 20 K points is ∼5-7 K, while that for the high-excitation (E$_{upper}$ ∼ 21-50 K) lines is ∼17-28 K (Table 3). This indicates the presence of both a cold and a warm gas component in these proto-BD systems. The χ$^{2}$ values of the fits are notably different; for example, the fits to the cold (E$_{upper}$ ≤ 20 K) and warm (E$_{upper}$ > 20 K) points for J182952 have χ$^{2}$ values of 1.7 and 0.95, respectively, while the fit to all points has χ$^{2}$ = 6.6. The quality of the fit is thus better using two separate fits.
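A minimal sketch of the rotational-diagram analysis summarised above (Python; the line list, integrated intensities and the partition-function value are placeholders rather than the measured quantities, which in the paper come from the observations and the CDMS database): assuming optically thin emission, the upper-level column density of each line follows from its integrated intensity, and a straight-line fit of ln(N$_{u}$/g$_{u}$) against E$_{upper}$ gives T$_{rot}$ from the slope and the total column density from the intercept.

```python
import numpy as np

k_B, h, c = 1.380649e-16, 6.62607015e-27, 2.99792458e10   # cgs units

def upper_level_column(W_K_kms, freq_GHz, A_ul):
    """N_u (cm^-2) of an optically thin line:
    N_u = 8*pi*k*nu^2*W / (h*c^3*A_ul), with W the integrated intensity."""
    W = W_K_kms * 1e5            # K km/s -> K cm/s
    nu = freq_GHz * 1e9          # Hz
    return 8.0 * np.pi * k_B * nu ** 2 * W / (h * c ** 3 * A_ul)

# Placeholder line list: (W [K km/s], freq [GHz], A_ul [s^-1], g_u, E_u [K])
lines = [(0.60,  96.741, 3.4e-6, 20,  7.0),
         (0.25, 254.015, 1.9e-5, 20, 20.1),
         (0.10, 241.700, 6.0e-5, 44, 47.9)]

y   = np.array([np.log(upper_level_column(W, f, A) / g) for W, f, A, g, _ in lines])
E_u = np.array([l[-1] for l in lines])

slope, intercept = np.polyfit(E_u, y, 1)   # ln(N_u/g_u) = ln(N/Q) - E_u/T_rot
T_rot = -1.0 / slope
Q_Trot = 50.0                              # placeholder: interpolate Q(T_rot) from CDMS
N_total = Q_Trot * np.exp(intercept)
print(f"T_rot = {T_rot:.1f} K,  N(CH3OH) = {N_total:.2e} cm^-2")
```

In practice the points are weighted by their uncertainties, a beam-dilution correction is applied to the intensities, and Q(T$_{rot}$) is interpolated from the same CDMS tables used for g$_{u}$, so that the spin-degeneracy factor cancels consistently.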
A detailed discussion on the sinlge versus two temperature components is provided in Section C. The column density for the cold component is about an order of magnitude higher than the warm component, indicating a large mass fraction of the gas is at a cold temperature (Table 4). An interesting feature is a rise in the column density at ∼30-40 K seen for J182856 and J032859 (Fig. 1), which is in contrast to the gradual decline with increasing E upper seen for the other objects. A sudden jump in the column density is also seen for J182959 at E upper ∼45 K. Such anomalies could be explained by the presence of warm clumpy material or possible shock emission knots in the line of sight. We note that for the sources with few line detections (e.g., J182854, J163136), fits to just two or three data points cannot provide a good constraint to the derived values for T rot and column density. However, the derived values for these sources are consistent within the uncertainties with the average values of T rot = 9.8±2.9 K and N(CH 3 OH) = (0.4±0.3)×10 14 cm -2 for the whole sample. The strongest emission is seen in the 2 02 -1 01 A line at the lowest E upper ∼7 K, and this line is detected in all objects. Among the highexcitation CH 3 OH lines with E upper ≥40 K detected in this work, half of the proto-BDs show emission in the 4 23 -3 12 E (E upper ∼45 K) and 5 15 -4 14 E (E upper ∼40 K) lines, 4 objects show emission in the 5 15 -4 14 A (E upper ∼49 K) line, while the 5 05 -4 04 E (E upper ∼48 K) line is detected in only two objects. The detection in a particular transition line and the peak line strength are dependent on E upper , A jk , and the critical density, n crit ( Table 2). Another factor is the location of the origin of emission. For instance, consider the case of J182856 that shows emission in 11/14 CH 3 OH lines probed in this work. This object shows strong emission in the 2 02 -1 01 A line with a peak intensity of ∼0.8 K, while the peak intensity in the 0 00 -1 11 E line is ∼0.15 K (Table A1). These lines are at a similar E upper ∼13 K but the 0 00 -1 11 E line has an order of magnitude higher n crit and a ∼6 times higher A jk than the 2 02 -1 01 A line, making the detection more difficult in the former case. J182856 shows the weakest emission in the 2 11 -1 10 A, 4 23 -3 12 E and 5 05 -4 04 E lines (Table A1). While the 4 23 -3 12 E and 5 05 -4 04 E transitions have E upper > 40 K and so are more difficult to detect, weak emission in 2 11 -1 10 A could be due to ortho vs. para emission (Sect. 5). Cold and warm gas component: predictions from brown dwarf formation model Simulations on the formation of brown dwarfs via gravitational collapsing core predict a "warm corino" region in close proximity to the central proto-BD (Machida et al. 2009;Machida & Basu 2019). Figure 2 shows the internal density and temperature structure of a proto-BD predicted by the brown dwarf formation model by Machida & Basu (2019). 1 The figure shows that, in addition to a low-velocity outflow, a strong high-velocity jet is driven by the inner Keplerian disk that is enclosed by the pseudo-disk. The density and temperature in the envelope and the outer pseudodisk regions is 10 5 -10 7 cm -3 and ∼10 K, respectively. In the region near the proto-BD (<50 au), there is a sudden rise in the temperature to >20 K and the density is as high as 10 8 -10 10 cm -3 . This is the predicted warm corino in the innermost, high-density and warmest regions in proto-BDs. 
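A minimal way to reproduce the single- versus two-temperature comparison discussed above (Python; the data arrays are illustrative, not the measured values) is to split the rotational-diagram points at E$_{upper}$ ≈ 20 K, fit the cold and warm subsets separately, and compare the χ² of the single fit with the summed χ² of the two-component fit, as was done for J182952.

```python
import numpy as np

def fit_component(E_u, y, y_err):
    """Weighted straight-line fit of ln(N_u/g_u) vs E_upper (K).
    Returns the implied rotational temperature and the chi^2 of the fit."""
    coeffs = np.polyfit(E_u, y, 1, w=1.0 / y_err)
    chi2 = np.sum(((y - np.polyval(coeffs, E_u)) / y_err) ** 2)
    return -1.0 / coeffs[0], chi2

# Illustrative rotational-diagram points (ln(N_u/g_u)) with a break near 20 K
E_u   = np.array([7.0, 13.0, 20.1, 28.0, 40.0, 45.5, 48.9])
y     = np.array([27.8, 26.9, 25.8, 25.5, 25.1, 24.9, 24.8])
y_err = np.full(E_u.size, 0.15)

cold = E_u <= 20.0
T_all,  chi2_all  = fit_component(E_u, y, y_err)
T_cold, chi2_cold = fit_component(E_u[cold], y[cold], y_err[cold])
T_warm, chi2_warm = fit_component(E_u[~cold], y[~cold], y_err[~cold])
print(f"single: T={T_all:.1f} K, chi2={chi2_all:.1f}; "
      f"two-component: T_cold={T_cold:.1f} K, T_warm={T_warm:.1f} K, "
      f"chi2={chi2_cold + chi2_warm:.1f}")
```

A lower combined χ² for the two-component fit, as found for several sources here, is what motivates quoting separate cold and warm column densities in Table 4.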
The temperature in the warm corino may not be as high as hot corinos (>100 K) in protostars, but it is predicted to reach temperatures as high as ∼70-80 K. The temperature rise is mainly due to the adiabatic contraction of the gas and subsequent ejection of the high-density and hightemperature gas by the jet. Without the appearance of the jet, the hot gas is distributed only in a tiny domain near the protostar (r 10 au, for details, see Masunaga & Inutsuka 2000), which should be difficult to be observed. The jet can blow the high-temperature and highdensity gas from the region in close proximity to the central proto-BD. Thus, the high-temperature and high-density gas is entrained and pumped up by the jet from the region near the proto-BD, as seen in the white dotted box in the right panel of Figure 2. In addition, part of such gas falls again and again in the region far from the proto-BD (Tsukamoto, Machida & Inutsuka 2021). Although the thermal heating or radiative feedback from the central object is not considered in the simulation, these may not alone heat up the central region to temperatures >20 K due to the very low luminosity of the proto-BD. This implies that the warm corino region at temperatures >20 K wouldn't exist if there is no jet. However, we cannot definitely deny that when the radiative feedback from the central object is included, the temperature could rise to warmer temperatures (>30 K) at large radii of >50 au. We explain our scenario by comparing the observations and the theoretical studies. The barotropic equation of state in the brown dwarf simulations (Machida et al. 2009, Machida & Basu 2019 was modelled according to the one-dimensional calculation perfomed by Masunaga & Inutsuka (2000). The gas temperature in the models is a function of density, due to which the maximum temperature is determined by the maximum density that can be reached in the innermost regions. The highest temperature in the proto-BD model is ∼70-80 K and is reached in a narrow region at close-in radii of within 10 au of the central proto-BD without considering the jet. If we consider the pumping-up of the high-temperature gas by the jet, the temperature of ∼25-50 K appears at a radius of ∼10-50 au, and further to ∼7-20 K at ∼50-150 au, as seen in the right panel of Fig. 2. At larger radii (>150 au) or in the outer envelope, the temperature remains <6 K. The high-temperature (∼70-80 K) gas component probes high densities of ≥10 10 cm -3 . The warm-temperature (∼25-50 K) gas component traces moderate densities of the order 10 8 -10 10 cm -3 , and the low-temperature or the cold (∼7-20 K) gas component probes the low-density material at ≤10 8 cm -3 . However, the mass of the high-temperature component is expected to be quite low (∼0.0005-0.015 M ⊙ ) compared to the warm-temperature (∼0.006-0.02 M ⊙ ) and the cold gas component (∼0.015-0.04 M ⊙ ), indicating the pumping-up of the high-and warm-temperature components by the jet is not very effective. Comparing these models with our observations, the low-excitation (E upper ≤20 K) CH 3 OH lines with n crit of the order of 10 5 -10 8 cm -3 (Table 2) are probing the low-density cold gas component at ∼50-150 au in the proto-BD systems (Fig. 2). Most notably, all proto-BDs show emission in the 2 02 -1 01 A line (E upper ∼7 K; n crit ∼0.5×10 5 cm -3 ) and ∼86% show emission in the 0 00 -1 11 E line (E upper ∼13 K; n crit ∼1×10 8 cm -3 ). This indicates that a considerable gas mass is cold and at a low-density in these objects. 
The remaining CH$_{3}$OH lines detected in this work, with E$_{upper}$ ∼28-49 K, are probing the warm-temperature gas component at ∼10-50 au in the proto-BDs. This is the warm corino region surrounding the jet launching zone. In particular, detection in the 2$_{11}$-1$_{01}$ E line (n$_{crit}$ ∼2×10$^{9}$ cm$^{-3}$; E$_{upper}$ ∼28 K) indicates that, despite being a small fraction by mass, a high-density warm-temperature gas component originating from the warm corino region is present in the proto-BDs, as predicted by the models. The critical densities for some of these warm lines are, however, at 10$^{6}$-10$^{7}$ cm$^{-3}$ (Table 2), and thus they do not all trace the moderately high density material. As noted, if the radiative feedback is included, the central proto-BD can heat the low-density gas distributed at larger radii of >50 au, and also above and below the pseudo-disk midplane, to >30 K (e.g., Tomida et al. 2015). This could make the low-density gas component warmer than expected. For the objects with evidence of both a cold and a warm gas component as seen in the rotational diagrams, the column density in the cold component is about an order of magnitude higher than in the warm component (Table 4). This is consistent with the model prediction of a higher mass fraction in the cold compared to the warm gas component. We have shown here, through observations of multiple transitions of CH$_{3}$OH, that a warm corino exists in ∼78% of the proto-BDs (Fig. 1). The remaining ∼22% of the proto-BDs show evidence of only the cold gas component. If all proto-BDs had a similar temperature structure, then the relatively higher excitation CH$_{3}$OH lines with E$_{upper}$ > 20 K should be detected in all of them. The non-detection does not directly imply the absence of a warm corino; rather, the detection of the warm corino in a few proto-BDs could be due to a favourable inclination that allows a more direct view of the jet launching zone. Episodic accretion as well as a high-velocity jet can also raise the temperature around a proto-BD. If we only see the cold gas component, this suggests either that the system is very young and embedded, so that thermal and non-thermal heating is not yet efficient, or that the inclination is close to edge-on, which gives the least direct view of the warm corino zone while placing more of the colder envelope/pseudo-disk material in the line of sight. None of the CH$_{3}$OH transitions at E$_{upper}$ >50 K are detected in any of the proto-BDs. This suggests an upper limit on the gas kinetic temperature of ∼50 K in these objects. A sub-thermal population is less likely since we have detected lines with very high critical densities. The high-excitation methanol lines can be observed if a high-velocity, powerful jet is associated with the proto-BD, because the jet stirs up the high-temperature gas around it. The strength (or momentum flux) of the jet is also related to the existence of the hot gas. Therefore, the non-detection of the high-excitation lines may be related to whether or not a powerful jet is associated with a proto-BD. It may also be the case that the gas with a high temperature of >50 K is embedded in high-density gas and thus optically thick. As predicted by the models, the high-temperature (∼70-80 K) gas component has the lowest mass fraction and is thus too low in abundance, compared to the cooler gas, to be detected.
It also traces the innermost and narrower regions at ≤10 au in the proto-BD than the cold component, and thus is more likely to be beam-diluted in single-dish observations. CH 3 OH ortho-to-para ratios In general, we see more ortho (E) lines detected than para (A) CH 3 OH (Table 2) at a similar range in E upper ∼7-49 K and A jk ∼ (0.3-6)×10 -5 . The para lines have n crit ∼ 10 5 -10 6 while the ortho have 10 5 -2×10 9 . The ortho CH 3 OH lines are thus tracing the warmer and denser gas mass, and likely a larger mass reservoir or column density than the para lines. Basically, detection in 5 para but 9 ortho transitions indicates that both spin symmetries may not trace common physical conditions. We can roughly estimate the fraction of the total CH 3 OH emission from the integrated fluxes of the A and E isomers. The CH 3 OH ortho-to-para (o/p) ratios for the proto-BDs range between ∼0.3-2.3 (Table 6). We have used the ortho and para column densities (Table 4) to calculate the o/p ratios for the objects that show emission in at least three ortho and para lines. For the rest of the objects, the o/p ratios are calculated from the integrated fluxes (Appendix A). The o/p ratios are consistent within 10% when calculated from the column densities or integrated fluxes, likely due to the lines being optically thin (Riaz et al. 2019). The CH 3 OH o/p ratios of less than unity suggest the presence of spin conversion processes that may be induced by molecular interactions on molecular ices or collisions in the gas phase, favouring an overabundance in the para spin symmetry (e.g., Sun et al. 2015). As shown in this laboratory study, in the gas phase, molecular collisions could induce the interconversion of spin in methanol molecules exhibiting a rate that decreases as the pressure increases with the number of collisions. Perhaps strong shock emission could result in the nuclear spin conversion of A and E symmetries, which is considered to be a rare event and could only be affected by non-reactive collisions and grain surface mechanisms (e.g., Willacy et al. 1993). This effect is expected to be more pronounced at earlier star formation stages (e.g., Minh et al. 1993;Wirstrom et al. 2011). Although there are some claims that different ortho-to-para ratio may reflect conditions where the molecule is formed, it is not clear whether nuclear spin temperature is kept as it is during the formation and desorption of this molecule. Such hypothesis are denied at least for H 2 O molecule recently (e.g. Hama et al. 2018). Figure 3 probes the dependence of the CH 3 OH column densities, abundances, and T ex for the proto-BDs on the CO depletion factor, f D , listed in Table 6. The details on the f D measurements are provided in Riaz & Thi (2022a). The uncertainty on the f D measurements is estimated to be ∼20%-30% (Riaz & Thi 2022a). For f D <1, CO is expected to be in the gas-phase, while higher values of f D ∼10 indicate a large fraction (>80%) of CO is frozen onto dust (Fig. 3ab), and the full range in values measured for these parameters are observed for f D ≤5. The lack of any definitive trend suggests that the gas-phase abundances of CO and CH 3 OH may not reflect their ice-phase abundances, possibly due to the different desorption timescales of these species. Figure 4 shows a tight correlation between the CH 3 OH and H 2 CO column densities for the proto-BDs. The H 2 CO data is from Riaz . The H 2 CO column densities are lower than CH 3 OH by a factor of ∼3-17. 
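The rough flux-based estimate described above simply compares the summed emission of the E- and A-type lines (with E taken as ortho and A as para, following the convention used here); a minimal sketch (Python; the flux values are illustrative):

```python
# Integrated intensities (K km/s) keyed by transition; the values are illustrative
fluxes_E = {"2_02-1_01 E": 0.42, "0_00-1_11 E": 0.15, "2_11-1_01 E": 0.08}
fluxes_A = {"2_02-1_01 A": 0.60, "2_11-1_10 A": 0.05}

o_to_p = sum(fluxes_E.values()) / sum(fluxes_A.values())
print(f"CH3OH o/p (E/A) = {o_to_p:.2f}")
```

Where at least three lines of each symmetry are detected, the ratio of the separately derived E and A column densities is used instead, which is the more robust estimate when the two symmetries do not sample identical excitation conditions.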
A tight correlation is expected since H 2 CO is an intermediary in the formation of CH 3 OH. The H 2 CO o/p ratios for the proto-BDs range between ∼0.8-4 (Table 6). Low H 2 CO o/p ratios of <3 suggest formation on dust grains while H 2 CO o/p ≥3 indicate gas-phase formation of this molecule (e.g., Riaz et al. 2019). Correlations No clear correlation is seen between the CH 3 OH and H 2 CO o/p ratios; for e.g., J163143 has a CH 3 OH and H 2 CO o/p ratio of ∼0.9 and ∼4, respectively, while J163152 has similar CH 3 OH and H 2 CO o/p ratios (Table 6). A possible explanation for the lack of correlation between these ratios could be due to the presence of spin conversion processes, as discussed in Section 5.2. Figure 5 plots the N(CH 3 OH) and T ex for the proto-BDs. For a quantitative comparison of the correlations between the various parameters, we have calculated the Pearson correlation coefficient, r, from the best-fit to the observed data points, and the value for r is noted in each plot. A strong anti-correlation (r = -0.7) is seen between N(CH 3 OH) and L bol , with a clear rise in N(CH 3 OH) with decreasing bolometric luminosity, whereas the CH 3 OH abundances show a weaker anti-correlation (r = -0.3) with L bol . A weak correlation (r = 0.3) is also seen between T ex and L bol , with a slight decline in T ex with decreasing bolometric luminosity. Together, these trends suggest that the cool and dense physical conditions are ripe for efficient formation of methanol in proto-BDs, and the presence of a warm corino allows a large fraction of the methanol formed to be released in the gas-phase. Figure 6 shows a comparison of N(CH 3 OH) and T rot for the proto-BDs with low-mass protostars. Data for the low-mass protostars is also from IRAM 30m observations presented in Öberg et al. (2014) and Graninger et al. (2016). A comparison of the methodology used in our work for deriving the column densities and T rot with Graninger et al. (2016) is presented in Section D. Despite the large spread in the values for these parameters, the median N(CH 3 OH) and T ex for the proto-BDs are 4.0×10 13 cm -2 and 9.5 K, respectively. These values are comparatively higher than the median N(CH 3 OH) and T ex of 1.5×10 13 cm -2 and 7.2 K, respectively, for the protostars. There may be a tentative trend of a rise in N(CH 3 OH) and a decline in T ex with decreasing bolometric luminosity. SUMMARY We have conducted a molecular line survey with IRAM 30-m to search for methanol in 14 Class 0/I proto-BDs, with an aim to understand the chemical complexity during the early phases of brown dwarf formation. Our observations cover the frequency range of 92-116 GHz and 213-280 GHz. We have detected 14 CH 3 OH transition lines with E upper ∼7-49 K, and n crit ∼10 5 -10 9 cm -3 . Detection in a particular transition line is more a function of E upper than n crit . • All proto-BDs show emission in the transition lines at E upper < 20 K, indicating that a considerable gas mass is cold in these objects. None of the CH 3 OH transitions at E upper >50 K are detected in any of the proto-BDs. • Comparing the observations with the brown dwarf formation models indicates that the low-excitation (E upper ≤20 K) CH 3 OH lines likely trace the cold gas component at ∼50-150 au, while the higher excitation lines at E upper ∼25-49 K probe the warmtemperature gas component at ∼10-50 au in the proto-BDs. This is the warm corino region surrounding the jet launching zone. 
• The CH$_{3}$OH column density for the cold component is at least an order of magnitude higher than for the warm gas component. • The CH$_{3}$OH ortho-to-para ratios for the proto-BDs are in the range of ∼0.3-2.3. Ratios of less than unity suggest the presence of spin conversion processes. • We find a clear rise in the CH$_{3}$OH column densities with decreasing bolometric luminosity among the proto-BDs, indicating that the cool and dense physical conditions are ripe for efficient formation of methanol in these objects. • The median CH$_{3}$OH column density for the proto-BDs is a factor of ∼3 higher than the median for low-mass protostars. • Observations of CH$_{3}$OH lines from levels with upper energy > 25 K, together with the model predictions, suggest that a warm corino is likely present in ∼78% of the proto-BDs in our sample. The remaining targets show evidence of only the cold (< 20 K) gas component, possibly due to the absence of a high-velocity jet that can stir up the warm-temperature gas around it. Further non-LTE studies with a radiative-transfer code, as well as interferometric observations, are needed to confirm these suggestions. Figure 6. The CH$_{3}$OH column densities (a) and T$_{ex}$ (b) vs. the bolometric luminosity for the proto-BDs (red points) and low-mass protostars (black points). These are the values derived from fitting all data points in the rotational diagrams. ACKNOWLEDGEMENTS B.R. acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG), Projekt number RI-2919/2-3. This work is based on observations carried out with the IRAM 30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). DATA AVAILABILITY The data underlying this article are available in the IRAM archives through the VizieR online database. APPENDIX A: SPECTRA AND LINE PARAMETERS The observed CH$_{3}$OH spectra are shown in Fig. A1. The line parameters derived from these spectra are listed in Table A1. The uncertainty is estimated to be ∼10%-20% for the peak and integrated intensities and Δv, and ∼0.02-0.04 km s$^{-1}$ for V$_{lsr}$. Figure A3. The observed CH$_{3}$OH spectra (black) with the single or double Gaussian fits (red). The black dashed line marks the V$_{lsr}$ listed in Table A1. Figure A3. Continued. APPENDIX B: CRITICAL DENSITIES The critical densities listed in Table 2 were computed by taking the Einstein coefficient of the transition in the numerator and the collision rate between the upper and the lower level of the transition in the denominator. This is an approximation that we tested against a full computation with a radiative transfer code. We can define the critical density as the density of the collision partner (here H$_{2}$) above which the excitation temperature is equal to the kinetic temperature. We ran a grid of models with the escape probability code RADEX, assuming a kinetic temperature of 10 K, a turbulent width of 1 km/s, and a background temperature of 2.73 K. The column density of either methanol-A or methanol-E is 10$^{12}$ cm$^{-2}$, so that the emission is optically thin. Compared with the approximate values listed in Table 2, the densities above which T$_{ex}$ ≃ T$_{kin}$ are slightly higher. The values also illustrate the possibility of a level being overpopulated (T$_{ex}$ >10 K). APPENDIX C: RADEX MODELS One of the main assumptions for the LTE rotational diagram analysis, and a main unknown in non-LTE modelling, is the hydrogen gas density of the emitting gas. Here we discuss the different tests and computations made to estimate this gas density.
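The two-level approximation used above for the critical densities, n$_{crit}$ ≈ A$_{ul}$/C$_{ul}$(T), where C$_{ul}$ is the downward collisional rate coefficient with H$_{2}$, amounts to the following one-liner (Python; the example coefficients are placeholders, not the Rabli & Flower values):

```python
def critical_density(A_ul, C_ul):
    """Two-level critical density (cm^-3) of the collision partner:
    the density at which collisional de-excitation matches radiative decay."""
    return A_ul / C_ul

# Illustrative values: A_ul in s^-1, C_ul(T_kin) in cm^3 s^-1
print(f"{critical_density(6.0e-5, 5.0e-11):.1e} cm^-3")   # ~1e6 cm^-3
```

As the RADEX check described in this appendix shows, the densities at which T$_{ex}$ actually approaches T$_{kin}$ in a multi-level calculation are somewhat higher than this two-level estimate.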
We have generated grids of RADEX uniform sphere models and calculated line intensity ratios for values of n H 2 between 10 3 and 10 8 cm -3 and T kin between 10 and 50 K using the python package (Holdship et al. 2021, in prep.). The column density for all the models is set to 5 × 10 13 cm -2 based on the rotational diagram analysis to keep the emission lines optically thin. In this limit, the intensity of each line is directly proportional to the column density. Therefore, the line ratios are only sensitive to the density of the collision partner H 2 and the gas kinetic temperature and not on the assumed column density. The collision partner is H 2 with collision rates computed by Rabli & Flower (2010) for gas between 10 and 200 K. The line width is 1 km/s and the background temperature is 2.73 K. Table B1. Excitation temperatures for the observed methanol-E lines with a column density of 10 12 cm -2 , T kin =10 K, a turbulent width of 1 km/s, and a background temperature of 2.73 K. Figure C1. RADEX line ratio diagrams. The left panel includes the ratios 2 02 -1 01 E / 2 12 -1 11 (indicated by the red lines) and 5 15 -4 14 E / 2 12 -1 11 (indicated by the black lines) and the right panel includes the ratios 2 02 -1 11 E / 5 15 -4 14 E (red lines) and 2 11 -1 01 E / 2 02 -1 11 E (black lines). Observed line ratios when available are shown by the blue points. The second diagram (right panel of Fig. C1) suggests that the gas around J182952 can be as warm as 18 K with a density of ∼ 3 × 10 6 cm -3 . Interestingly, J032838 has ratios 2 02 -1 11 E / 5 15 -4 14 E of 0.89 and 2 11 -1 01 E / 2 02 -1 11 E of 0.62, which are inconsistent with the diagram. This suggests that a single density, single temperature RADEX model is insufficient to explain the entire set of lines. In order to investigate if the T rot (or T ex ) derived using the rotational diagram is an indicator of the gas kinetic temperature, T kin , we have derived column densities of the CH 3 OH 2 02 -1 11 transition line for J182856 using the non-LTE radiative transfer code RADEX (van der Tak et al. 2007). For a fixed H 2 number density and line width, we set T kin at 5, 10, 15, 20, 30 K, and varied the column density to match with the observed peak line intensity, T mb . This resulted in a range of T ex ∼ 4-8 K, and column density in the range of (3-5)×10 13 cm -2 . While a two-component analysis is a better fit to the data points for J182856, a single temperature component fit in the rotational diagram results in a T ex of 8.7 K and a column density of 5×10 13 cm -2 (Tables 3; 4), which are within the range measured with RADEX. Thus, T kin is close to T ex under the assumption of a single temperature gas component in the rotational diagram. The rotational diagram method could be over-estimating the cold component column density by a factor of ∼2 compared to the volume-averaged column density derived using RADEX. A more prominent difference is seen between the T ex ∼ 25 K for the warm component determined from the rotational diagram for J182856 and the T ex ∼ 4-8 K derived from RADEX. The warm gas component is predicted to have a narrow spatial coverage and would make a much lower fraction by mass compared to the cold component, as discussed in Sect. 5. Thus it could be diluted in the volume-averaged RADEX calculation while fitting it separately in the rotational diagram could provide a better characterization of the physical conditions in the system (Fig. 1). 
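The grid of line-ratio models described here can be organised as in the sketch below (Python). The function that evaluates RADEX is a mock stand-in, since the wrapper used in the paper (Holdship et al., in prep.) is not part of this text; only the grid and ratio bookkeeping, and the way an observed ratio selects a locus of (n$_{H2}$, T$_{kin}$) values, are illustrated.

```python
import numpy as np

def run_radex_ratio(n_h2, t_kin):
    """Stand-in for a RADEX call. The real calculation would fix
    N(CH3OH) = 5e13 cm^-2, dv = 1 km/s and T_bg = 2.73 K and return the
    modelled intensity ratio of two transitions; this mock just returns a
    smooth function of (n, T) so the bookkeeping below runs as written."""
    return (t_kin / 10.0) * n_h2 / (n_h2 + 1.0e6)

densities    = np.logspace(3, 8, 11)      # cm^-3
temperatures = np.arange(10, 55, 5)       # K
ratio_grid = np.array([[run_radex_ratio(n, t) for n in densities]
                       for t in temperatures])

# An observed ratio (with its uncertainty) picks out the matching grid cells,
# i.e. the locus of (n_H2, T_kin) traced by the contours in Fig. C1.
observed, err = 1.3, 0.2
match = np.abs(ratio_grid - observed) < err
for i, j in zip(*np.nonzero(match)):
    print(f"T_kin = {temperatures[i]:2d} K, n_H2 = {densities[j]:.1e} cm^-3")
```

In the optically thin limit assumed here, each line intensity scales linearly with the column density, so the ratios depend only on density and temperature, which is what makes this two-dimensional grid sufficient.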
Figure C2 shows the RADEX model for four sources with sufficient CH 3 OH-E detected transitions. The RADEX analysis uses data in the Leiden-Lambda database, which uses degeneracies without the spin factor. We fixed the density n H at 10 6 cm -3 and the methanol column density at 5×10 13 cm -2 . We vary the gas temperature between 10 and 50 K in steps of 5 K. Apart from J032859, the single density, temperature, column density model cannot fit the data for the targets. All the other sources seem to require more than one component. Interestingly, the RADEX modelling derived parameters for J032859 (T=10-15 K, 10 6 cm -3 , N(CH 3 OH) = 5×10 13 cm -2 ) are consistent with the rotational diagram analysis ( Fig. 1; Tables 3; 4). Figure C2. Comparisons between observed and modelled line intensities using RADEX for different sources. The models (circles) assume a single component with a constant kinetic temperature, a density of 10 6 cm -3 , and an average column density for methanol of 5×10 13 cm -2 . The temperature is varied between 10 and 50 K in a 5 K increment. The observed values are plotted with a star symbol. APPENDIX D: COMPARISON WITH PROTOSTARS In Fig. 6, we have compared the CH 3 OH column densities and T rot for the proto-BDs with low-mass protostars. The data for the low-mass protostars is also from IRAM 30m observations presented in Öberg et al. (2014) and Graninger et al. (2016). We have tested our rotational diagram methodology by re-analysing the values in Table 1 in the erratum of Graninger et al. (2016) for low-mass protostars that have more than 2 detected transitions. As shown in Table D1, our re-analysis of the Graninger et al. (2016) data is consistent within the numerical uncertainties (for example the partition function value is interpolated) with their calculations. Therefore, we can confidently compare our derived column densities and T rot for the proto-BDs with their measurements for the protostars. We note that we have taken the degeneracies and the partition functions from the CDMS database that includes the spin degeneracies in their upper level degeneracy values. Compared to the Leiden-Lambda database, the values we used in rotational diagrams are indeed a factor 4 lower in the CDMS database. Since the partition function values are also taken from the CDMS database, the extra factor cancels-out in the g up /partition function ratio. Therefore, the values plotted in the y-axis of the rotational diagrams do not affect our computation. This paper has been typeset from a T E X/L A T E X file prepared by the author.
Frequency stability control method for multi-energy system considering storage characteristics and transmission inertia of hot and gas pipeline network . The grid-connected capacity of renewable energy generation in multi-energy microgrid is increasing. This leads to a decrease in the inertia level in the microgrid, which has a great impact on the frequency stabilization control. This article proposes an adaptive control method for frequency control of inertia in multi-energy microgrids. Firstly, the system frequency fl uctuation problem is addressed. Analyze the response characteristics of virtual inertia and the in fl uence of physical inertia of rotating equipment on system frequency dynamics in multi-energy microgrid. Secondly, study the energy transfer characteristics of electric, thermal and gas systems in multi-energy microgrids. The energy coupling model between the subsystems in the multi-energy microgrid is established. And according to the difference of energy transmission time inertia of electric, heat and gas subsystems, the microgrid inertia response time model of different energy systems is established. Then, according to the energy balance stability criterion of multi-energy microgrid, combined with the current operating state of the system and the state of the higher-level distribution network, the fast response adaptive over-compensation control of multi-energy microgrid cluster is carried out to realize the inertia allocation in multi-energy microgrids. Finally, the proposed advanced frequency control method of multi-energy microgrid considering inertia demand is veri fi ed by simulation. Introduction With the development of new power system.The high proportion of power electronic devices is connected to the multi-energy microgrid, which seriously weakens the inertia support and frequency regulation ability of the power system under the same power disturbance.In order to cope with the risks caused by various uncertainties in the multi-energy system and to avoid the system inertia deficiency caused by fault conditions.To improve the level of system frequency stability, there is an urgent need for frequency control of multienergy microgrids by analyzing the coupling characteristics of energy among energy sources, the inertia time characteristics of energy subsystems, and the relationship between distributed power sources and frequency regulation.[1][2][3]. 
The current domestic and international research on inertia frequency control strategies for power systems has made certain breakthroughs, but the analysis and summarization of the frequency change characteristics of low-inertia power systems still lacks in-depth and comprehensive elaboration and exploration.In order to solve the problem of frequency fluctuation in multienergy microgrids, the rate of change of frequency (RoCoF) index is generally used to estimate the critical value of power system inertia.Literature [4] proposes a difference calculation method for inertia assessment.The power of each regional contact line after disturbance and the system frequency are utilized to assess the inertia demand of the system online, but the assessment needs to be completed before, so that the system frequency safety can be guaranteed.Literature [5][6][7] obtains the real-time system frequency through dynamic monitoring and assesses the system inertia demand online.Literature [8] obtains the system real-time frequency and power values by selecting the appropriate sliding step, and utilizes the frequency response equation to obtain the online estimated value of system inertia.Literature [9], considering the inertia support and primary frequency regulation ability of photovoltaic units, a frequency response model of power system considering the inertia support and primary frequency regulation ability of new energy is established.Literature [10][11][12][13], the inertia control model of doubly-fed wind turbine with stator flux oriented vector control is derived.A cooperative control strategy based on virtual inertia control and rotor speed control is proposed.The virtual inertia control module and rotor speed control module are established in Matlab / Simulink. Literature [14], an adaptive virtual inertia and damping coefficient cooperative control scheme based on VSG is proposed.By establishing the mathematical model of VSG, the influence of virtual moment of inertia J and damping coefficient D on the stability of the system is analyzed.The cooperative adaptive control strategy of inertia and damping is obtained by improving the traditional VSG control strategy.The parameters of the adaptive control system are determined.Literature [15] proposed a multi-energy microgrid-based optimal regulation and control method, which analyzed the interconnection relationship between gas, heat, and power grids, and established the nonlinear equations of the gas system, heat system, and power system.The corresponding parameters in the system are obtained by solving the tidal equations of the gas, heat, and power systems, so as to realize the purpose of real-time optimization and control of power among multi-energy microgrids.The above research mainly focuses on the control of the frequency regulation characteristics of the power subsystem after adding clean energy into the multi-energy microgrid, or the research on the coordinated and optimized scheduling control of the grid primary frequency regulation and multi-energy microgrid by adding energy storage devices.In order to meet the inertia demand of the multi-energy microgrid system and improve the frequency cooperative support capability of the system, it is necessary to analyze the influencing factors of the inertia demand of the multi-energy microgrid cluster, and to realize the inertia frequency stability of the system through the coordination of multi-energy [16][17][18][19]. 
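Several of the estimation schemes cited above reduce, in their simplest form, to reading the aggregated system inertia from the post-disturbance rate of change of frequency through the swing equation, RoCoF = dP * f0 / (2 * H * S_base). A minimal sketch of that relation (Python; the numbers are illustrative only):

```python
def inertia_from_rocof(delta_p_mw, rocof_hz_s, f0_hz=50.0, s_base_mva=100.0):
    """Aggregated swing equation df/dt = dP * f0 / (2 * H * S_base),
    solved for the inertia constant H in seconds.
    delta_p_mw: size of the power imbalance, rocof_hz_s: initial df/dt."""
    return delta_p_mw * f0_hz / (2.0 * s_base_mva * abs(rocof_hz_s))

# Example: a 10 MW generation loss producing an initial RoCoF of -0.5 Hz/s
print(f"H = {inertia_from_rocof(10.0, -0.5):.2f} s")   # 5.00 s
```

Real estimators differ mainly in how they filter the frequency measurement and how they time the RoCoF window relative to the disturbance, which is why the assessment must be completed quickly if frequency security is to be guaranteed.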
Research on frequency control of multi-energy systems.Some references use the energy function method.The energy function method is a method to judge the transient stability of the system directly from the energy point of view without relying on numerical integration.The energy function method is faster than the time domain simulation method, and can obtain the stability margin information. This paper examines the time inertia characteristics of various energy subsystems in a multi-energy microgrid that affect the grid frequency.It then performs dynamic frequency control based on the inertia time characteristics of different energy sources.The first step is to establish a model for the multienergy microgrid's inertia demand characteristics.Secondly, we establish the stability criterion for energy supply and demand balance in the multi-energy microgrid by analyzing the real-time state of each energy flow.We then calculate the real-time energy supply satisfaction degree of the system.The system energy demand is quantified by over-assessment using the time scale difference of different energy transmission inertia, and adaptive over-compensation coefficients are preset for the energy balance control of the multi-energy microgrid.The simulation verifies that the multi-energy microgrid can achieve the inertia dynamic frequency control of the system through adaptive overrun control under different fault conditions. The main research contents of the article as follows: Section II analyzes the inertia demand of multi-energy microgrids.Obtained the effect of transmission time inertia on grid frequency for heat and gas networks in multi-energy microgrid.Section III research on adaptive frequency control of multi-energy microgrid.In the section IV, the simulation verification of adaptive control in multi-energy microgrid inertia dynamic frequency control is carried out.Section V is the conclusion. Multi-energy microgrid inertia analysis When the system is disturbed.Taking advantage of the long inertial support time and support capacity of the gas system and the thermal system.Provide power subsystem minute frequency modulation service.The inertia in the multi-energy microgrid mainly contains power-side inertia and load-side inertia.As the inertia in the system is reduced and the inertia distribution is changed, the frequency deviation propagation becomes faster.When the new energy source does not participate in frequency control, the system frequency amplitude becomes larger.If the new energy source participates in frequency control, its output power responds quickly to the frequency change and acts quickly on the generator rotor, which can reduce the frequency amplitude. When the multi-energy microgrid operates under autonomous conditions, the multi-energy microgrid should be able to cope with the active shocks caused by the interruption of the external contact line through its own regulation, and keep the system frequency change stable within the safe range. 
The generalized inertia is composed of rotational inertia and simulated inertia. The rotational inertia includes the rotational inertia of synchronous and asynchronous machines. The simulated inertia is the equivalent inertia provided by power electronic interface devices through improved control strategies; simulated inertia resources include grid-following and grid-forming converter resources. The composition of the generalized inertia resource system and the typical equipment of the various inertia resources are shown in Figure 1. The inertia of a multi-energy system is determined by its total rotational kinetic energy. With the widespread use of renewable energy sources, distributed generators based on non-rotating power electronics significantly reduce the overall inertia level of the system. To measure the active power regulation capability of power electronic interfaces, the existing concepts of rotational inertia and primary frequency regulation coefficient are generalized [20,21].

Let J_e denote the equivalent inertia of a power electronic device in the multi-energy system. Compared with the instant t = 0^-, the energy increment injected by the power electronic equipment into the grid at time t is equivalent to the kinetic energy released by a rotating mass with moment of inertia J_e whose angular frequency changes from \omega_e(0^-) to \omega_e(t):

\int_0^t \left[ P_{out}(\tau) - P_{in,s} \right] d\tau = \frac{1}{2} J_e \left[ \omega_e^2(0^-) - \omega_e^2(t) \right]   (2)

where \omega_e is the angular frequency of the rotating magnetic field corresponding to the grid-side current i_abc of the converter, P_{in,s} is the set value of the input power of the grid-connected power electronic equipment in the multi-energy microgrid, and P_{out} is the output power of the power electronic equipment in the microgrid.

The swing equation of a synchronous generator in the power system is

T_J \frac{d\Delta\omega}{dt} = P_m - P_e - D\Delta\omega, \qquad \frac{d\delta}{dt} = \omega - \omega_s   (3)

and the swing equation of the synchronous generator during the transient process is obtained as

\frac{2H}{\omega_s} \frac{d^2\delta}{dt^2} + D \frac{d\delta}{dt} = P_m - P_e - \Delta P_e \sin(\omega_d t)   (4)

where T_J is the inertial time constant of the generator set, \delta is the rotor angle of the generator, \Delta\omega is the deviation between the rotor angular velocity and the synchronous angular velocity, \omega_s is the synchronous angular velocity, H is the inertia constant, P_m is the mechanical input power of the generator, P_e is the electromagnetic output power of the generator, D is the equivalent damping coefficient, and \Delta P_e and \omega_d are the amplitude and disturbance frequency of the periodic disturbance load, respectively.

When there is a sudden load fluctuation in the power subsystem, the energy demand in the system is expressed as

\Delta W = \int_{\delta_0}^{\delta_1} P_a \, d\delta   (5)

where \delta_0 is the initial value of the generator power angle, \delta_1 is the generator power angle after the system reaches equilibrium, and P_a is the acceleration power of the generator.

Analysis of Frequency Influences on Multi-Energy Microgrids Considering Gas Grid Response

According to the momentum conservation equation of the natural gas pipeline, combined with its flow-pressure relationship and by analogy with the power network, the node flow and node pressure functions of the gas pipeline network are obtained. The momentum balance equation is

\frac{\partial(\rho_g v_g)}{\partial t} + \frac{\partial p_g}{\partial x} + \frac{\lambda \rho_g v_g |v_g|}{2D} + \rho_g g \sin\theta = 0

where \rho_g is the density of the transport medium in the gas network, v_g is the flow velocity of the medium, p_g is the gas network pressure, \lambda is the friction coefficient of the gas network (usually a function of the gas flow rate), D is the pipe diameter, \theta is the inclination of the pipeline, g is the acceleration of gravity, and t and x are the time and space coordinates, respectively.
Using the ideal-gas relation \rho_g = p_g/(RT) and the mass flow definition G = \rho_g v_g A, the flow velocity can be written as v_g = G/(\rho_g A) = GRT/(p_g A), where R and T are the gas constant and temperature of the natural gas and A is the cross-sectional area of the pipeline. The gas system flow and pressure function model is then obtained as

p_0^2 - p^2 = \frac{\lambda R T l}{D A^2} G^2

with an additional hydrostatic correction associated with the vertical elevation h of the pipe, where G is the gas flow out of the end of the gas pipe network, G_0 is the gas flow into the head end of the network, p_0 is the pressure at the head end of the pipeline at the pumping station, l is the length of the pipe, and h is the vertical height of the pipe. In the gas subsystem the rotational inertia is mainly provided by the gas turbine, so the fluid inertia model of the gas subsystem at the different nodes is obtained as

\frac{\pi^2 M_r}{1800} \frac{d n_r^2}{dt} = P_T - P_{ap} - P_{GT}

where M_r is the moment of inertia of the gas turbine rotor and n_r its rotational speed, P_T is the power of the turbine, P_{ap} is the compressor power consumption in the gas network system, and P_{GT} is the gas turbine output power; the turbine power and compressor consumption depend on the gas compression coefficient \psi and the compressor pressure ratio \varepsilon.

Response time analysis of multi-energy microgrids considering thermal network inertia

The main indicators of the energy balance in the thermal system are the pressure of the pipe network and the temperature of the transmission medium. By calculating the pressure of the heating network and the temperature of the medium, the transmitted energy of the heating network is obtained and the energy function model of the thermal system is established.

Similar to the energy function of the power network, the momentum conservation equation of the thermal system is established as

\frac{\partial(\rho_h v_h)}{\partial t} + \frac{\partial p_h}{\partial x} + \frac{\lambda \rho_h v_h |v_h|}{2D} + \rho_h g \sin\theta = 0

where \rho_h is the density of the transmission medium in the heating network, v_h is the velocity of the hot medium, D is the diameter of the pipe network, p_h is the rated pressure of the heat pipe network, \lambda is the friction coefficient of the pipe network, \theta is the inclination angle of the pipe network, g is the acceleration of gravity, and t and x denote time and space, respectively.

From the above equation, the relationship between the medium flow rate G and the pressure in the system can be obtained. The pressure balance equation of the thermal system is

p_{h0} - p_h = \frac{\lambda l}{2 D \rho_h A^2} G^2 + \rho_h g h

where p_h and p_{h0} are the pressures at the end and the head of the pipeline and A is the cross-sectional area.

When the pumping station in the piping system provides pressurization, the relationship between the pressure difference \Delta p_h at the two ends of the pipe and the flow rate is

\Delta p_h = a_1 \omega_{hp}^2 + a_2 \omega_{hp} G + a_3 G^2

where a_1, a_2 and a_3 are the intrinsic coefficients of the pressurized pumping station and \omega_{hp} is the rotation frequency of the pressurized pump.

The pressure energy required during medium transfer is expressed as the product of the volumetric flow of the medium and the pressure rise, W_p = q \Delta p_h. By analyzing the pressure energy lost as the medium flows through the pipe network and the thermal energy transferred, the thermal inertia time in the control process of the thermal subsystem is obtained from the above analysis as

T_v = \frac{A l}{q}

where T_v is the transmission response time constant of the thermal system, i.e. the transport delay of the heating medium through the pipe, and q is the volumetric flow rate of the medium in the system.

Multi-energy flow cooperative control mechanism

When analyzing, at a given time scale, how the total energy balance affects the power angle of the power subsystem in the multi-energy microgrid, the energy balance of the multi-energy microgrid is divided into two parts: electric energy balance and non-electric energy balance. Because electric energy can be converted into non-electric energy such as gas and heat, non-electric energy can provide balancing support when electric energy is unbalanced at a specific time scale, and electric energy can likewise provide balancing support when non-electric energy is unbalanced.
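To make the pipeline and heat-network relations above concrete, the following sketch evaluates the simplified isothermal flow-pressure relation and the transport-delay expression for the heating medium. It is only an illustration under the stated simplifications; all numerical values except the 8 km length and 0.6 m diameter later quoted for the case-study gas pipeline are assumptions.

```python
# Minimal sketch (illustrative): evaluating the simplified isothermal pipeline relation
#   p0^2 - p^2 = (lambda * R * T * l / (D * A^2)) * G^2
# to see how much outlet pressure margin a gas pipe retains for a given mass flow.
# All numerical values below are assumptions, not data from the paper.
import math

def outlet_pressure(p0, G, lam, R, T, l, D):
    """Outlet pressure [Pa] of a horizontal pipe for mass flow G [kg/s]."""
    A = math.pi * D**2 / 4.0                      # pipe cross-sectional area [m^2]
    drop_sq = lam * R * T * l / (D * A**2) * G**2 # p0^2 - p^2 term
    if drop_sq >= p0**2:
        raise ValueError("flow exceeds the pipe's transport capability")
    return math.sqrt(p0**2 - drop_sq)

def transport_delay(l, D, q):
    """Thermal-network transport delay T_v = A*l/q [s] for volumetric flow q [m^3/s]."""
    return (math.pi * D**2 / 4.0) * l / q

if __name__ == "__main__":
    # 8 km pipe, 0.6 m diameter (dimensions quoted in the paper's case study), assumed gas data
    p = outlet_pressure(p0=4.0e6, G=20.0, lam=0.01, R=500.0, T=288.0, l=8000.0, D=0.6)
    print(f"Outlet pressure: {p/1e6:.2f} MPa")
    print(f"Heat-network transport delay: {transport_delay(1000.0, 0.3, 0.05):.0f} s")
```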
When the energy balance of one energy carrier changes at a specific time scale, it is first judged whether the total energy of the system is balanced. If the total energy of the system is balanced, support for the energy imbalance can be provided by conversion between different energy forms. If the total energy of the system is also unbalanced, it is necessary to increase the energy supply output or reduce the load demand to ensure the energy balance of the system. The control time scale of the power subsystem in the multi-energy microgrid is small, while the energy balance control time scale of the heat and gas systems is large. When the energy of the power subsystem fluctuates, the energy stability of the multi-energy microgrid can be maintained through the energy transmission inertia between the pipe-network pressure energy of the heat and gas subsystems and the electric energy, improving the frequency stability of the grid.

Analysis of adaptive frequency stability control

Consider the path from unbalanced power to recovery of a stable system frequency. The frequency dynamics of the traditional power system are determined by the kinetic energy increment of the synchronous machine rotors, the unbalanced power and the rotational inertia: the unbalanced power and the rotational inertia determine the frequency acceleration, and the rotor kinetic energy increment further determines the frequency offset. This mechanism generalizes to power systems with a large number of power electronic interfaces, where the frequency dynamics are determined by the total energy storage increment, the unbalanced power and the equivalent inertia of the system.

For energy stability evaluation in the multi-energy microgrid, and considering the potential energy at the coupling nodes between the power subsystem and the other energy systems, the energy function model is established as the sum of a kinetic term and a potential term, W = W_k + W_p. Here P, Q, V and \theta are the active power, reactive power, voltage amplitude and phase angle, respectively; taking P as an example of the subscript convention, P_{gi} and P_{lj} are the active power of generator node i and load node j, P_{ij} is the active power flowing from node i to node j, D is the generator damping, and \omega represents the angular velocity of the generator rotor. When the energy in the system fluctuates, the deviation between the current grid frequency and the rated frequency is

\Delta f(t) = f_n(t) - f_N

and, according to the current system frequency, the system power angle \delta is

\delta = \delta_0 + \Delta\theta, \qquad \Delta\theta = 2\pi \int_0^t \Delta f(\tau)\, d\tau

where \Delta\theta is the offset of the system phase angle after the frequency offset and f_n(t) is the current frequency of the system. To ensure the energy balance of the whole energy system, the value of the energy function W must satisfy the following criterion: when W < 0 the system is stable, when W = 0 the system is in the critically stable state, and when W > 0 the system is unstable. When the system is unstable, it is adjusted in the following steps.

(1) When the multi-energy microgrid is disturbed, the initial inertial response (t_0 ~): the total equivalent inertia of the synchronous generator sets and the virtual-synchronous-machine-interfaced sources determines the initial acceleration of the frequency.

(2) When the system frequency fluctuates, the equivalent inertia response (~ t_p1) is provided by controlling the power electronic devices: once the rate of change of the frequency deviation reaches the threshold, power-electronic-interfaced sources with additional inertia provide equivalent inertia to slow the acceleration of the frequency drop.
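A minimal sketch of the sign-of-W stability check described above is given below. The energy-function terms are deliberately simplified stand-ins (a rotor kinetic-energy increment compared against an assumed balancing-energy reserve), so this is only meant to show how a sampled frequency trace is turned into a W < 0 / W > 0 verdict, not to reproduce the paper's exact energy function.

```python
# Minimal sketch (illustrative): the W-sign stability check described above, applied to a
# sampled frequency trace. The energy-function terms here are simplified stand-ins; the
# thresholds and names are assumptions, not the paper's exact formulation.
import numpy as np

def phase_offset(freq, dt, f_nom=50.0):
    """Delta_theta = 2*pi * integral of (f_n(t) - f_N) dt  [rad]."""
    return 2.0 * np.pi * np.cumsum(freq - f_nom) * dt

def stability_margin(freq, dt, H=4.0, s_base=1.0, reserve_energy=0.02, f_nom=50.0):
    """Return W(t): kinetic-energy increment of the rotors minus the balancing reserve.

    W < 0 -> stable, W = 0 -> critically stable, W > 0 -> unstable (per the criterion above).
    """
    w_pu = freq / f_nom                                   # per-unit rotor speed
    kinetic_increment = H * s_base * np.abs(w_pu**2 - 1)  # |delta(0.5*J*w^2)| in p.u.*s
    return kinetic_increment - reserve_energy

if __name__ == "__main__":
    dt = 0.02
    t = np.arange(0, 5, dt)
    freq = 50.0 - 0.4 * (1 - np.exp(-t / 1.5))            # synthetic post-disturbance dip
    W = stability_margin(freq, dt)
    verdict = "unstable" if W[-1] > 0 else "stable"
    print(f"max |phase offset| = {np.max(np.abs(phase_offset(freq, dt))):.2f} rad")
    print(f"W at t=5 s: {W[-1]:+.4f}  ->  {verdict}")
```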
(3) When the frequency fluctuation exceeds a certain range, primary frequency regulation (t_p1 ~ t_p2) is carried out: the frequency drop exceeds the dead band of primary frequency regulation, and the synchronous generator sets and virtual synchronous generator sets increase their output.

(4) Secondary frequency drop (t_p2 ~): since the equivalent inertia of the system has been reduced to its minimum at this point, the frequency may decrease rapidly again, and the extreme value of the secondary drop may be lower than that of the first drop.

(5) Primary frequency recovery: the system frequency is restored to a safe range by increasing the power output in the multi-energy microgrid.

(6) Secondary frequency regulation: when primary frequency regulation leaves too large a regulation deviation, the generator output set values are adjusted, acting on the units and wind turbine blades to increase the generation output of the system, or the corresponding load is reduced to a certain extent according to the heat and gas transmission time inertia, until the rated frequency of the system is restored.

Figure 2 Frequency adjustment of the multi-energy system

Adaptive control and parameter adjustment model of the multi-energy microgrid

In adaptive parameter cooperative control of the inertia frequency of the microgrid, the output of new energy sources in the multi-energy microgrid is highly random and volatile, which leads to frequent jitter of the system frequency and poses a threat to safe and stable operation. In the dynamic control process of the multi-energy microgrid, because energy supply and demand fluctuate, it is difficult for the control system to maintain the real-time energy balance of the system with a fixed compensation coefficient. To make the compensation coefficient adapt to these variations, a compensator that provides an adaptive lead compensation coefficient is designed; it can be expressed by the lead-compensator function

D_C(s) = \frac{1 + a T_c s}{1 + T_c s}, \quad a > 1   (28)

whose leading angle can be written as

\varphi_{max} = \arcsin\frac{a - 1}{a + 1}   (29)

The phase of the transfer function (28) equals the phase contributed by its numerator minus that of its denominator; the corresponding transfer function given in references [22-24] is denoted (30), in which the numerator represents the compensator phase that must be provided in advance. T_c is the time constant related to the system dynamics, and the size of T_c directly determines the disturbance rejection of the system: increasing T_c increases the gain and phase of the system to a certain extent. The lead compensation of all control loops in the multi-energy microgrid is expressed as

D_C(s) = \sum_i W_i D_{Ci}(s)   (31)

where W_i is the weight factor of the lead compensator D_{Ci}, and the root of the numerator is used as the phase of the lead compensation, so that formula (32) can be obtained.

In the control process of the multi-energy microgrid, adding the lead element can enhance the accuracy of the system compensation, thereby increasing the stability of the system. By matching the weight factors of the coefficients on both sides of (31), a set of equations is obtained. When m = 2n + 1, W_id can be uniquely obtained by solving the neural network, where T_1 and T_m are the minimum and maximum lead times of the control system. Therefore, the lead-compensation adaptive control based on the set value considering demand response is shown in Figure 3.
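The following sketch evaluates the frequency response of the lead compensator in (28) and of a weighted sum of such compensators as in (31). The parameter values and weights are assumptions chosen only to show how a and T_c set the maximum phase lead.

```python
# Minimal sketch (illustrative): frequency response of the lead compensator
# D_C(s) = (1 + a*Tc*s)/(1 + Tc*s) and of a weighted sum of several such compensators.
# Parameter values and weights are assumptions, not taken from the paper.
import numpy as np

def lead_response(w, a, Tc):
    """Complex frequency response of (1 + a*Tc*jw)/(1 + Tc*jw)."""
    s = 1j * w
    return (1 + a * Tc * s) / (1 + Tc * s)

def max_phase_lead(a):
    """phi_max = arcsin((a-1)/(a+1)), attained at w = 1/(Tc*sqrt(a))."""
    return np.degrees(np.arcsin((a - 1) / (a + 1)))

if __name__ == "__main__":
    a, Tc = 10.0, 2.0
    w_peak = 1.0 / (Tc * np.sqrt(a))
    print(f"Analytical max lead: {max_phase_lead(a):.1f} deg at w = {w_peak:.3f} rad/s")
    print(f"Numerical phase there: "
          f"{np.degrees(np.angle(lead_response(w_peak, a, Tc))):.1f} deg")
    # Weighted combination of two compensators with different time constants (assumed weights)
    w = np.logspace(-2, 1, 5)
    combo = 0.6 * lead_response(w, a=8, Tc=1.0) + 0.4 * lead_response(w, a=8, Tc=4.0)
    for wi, ci in zip(w, combo):
        print(f"w = {wi:7.3f} rad/s  phase = {np.degrees(np.angle(ci)):6.1f} deg")
```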
Example simulation

In this paper, the simulation uses the 21-node distribution network and 9-node gas distribution network system shown in Figure 4. The multi-energy microgrid system includes a power-to-gas device with a capacity of 0.7 MW; the capacity of each gas turbine is 1 MW, and the maximum transmission capacity of the tie line between the multi-energy microgrid and the upper-level power grid is 30 MW. The equivalent capacity of the thermal power unit in the upper-level grid is 100 MW, and the per-unit values of the adaptive lead equivalent frequency regulation parameters are set to M = 6.48, K = 28.12, T_R = 6.95 and F_H = 0.41. The length of a single pipeline of the gas system is 8 km and the pipe diameter is 0.6 m. The adaptive lead compensation control proposed in this paper is verified by example simulation.

Figure 4 Multi-energy coupling system

Even without considering the heat and gas inertia in the multi-energy microgrid, the gas turbine can provide active power support to the system through its own output control, while the power-to-gas device in the system produces a certain amount of natural gas and can be regarded as a negative natural gas load. The gas pressure dynamics at each node of the gas distribution network under normal operation and under continuous low-frequency/high-frequency perturbation of the multi-energy microgrid are shown in Fig. 5; the lightly shaded part is the fluctuation range of the gas pressure at each node, and the upper/lower bounds are the extreme-point curves of the gas pressure dynamics under continuous low-frequency/high-frequency perturbation. Given the gas system inertia, the gas pressure at the nodes of the whole gas distribution network remains within a reasonable range under inertia control, which avoids the adverse impact of node gas pressure violations caused by the participation of the multi-energy microgrid in power-frequency regulation.

Figure 5 Pressure dynamic curve

In the multi-energy microgrid, the inertia level of the power subsystem is affected by the adaptive lead control of heat and gas, which in turn affects the power angle oscillation characteristics of the generators in the system. In particular, the inertia time control of heat and gas affects the inertia demand of the system and thus the power angle oscillation of the units. The electric power shortage, gas load, synchronous machine output, wind power output and photovoltaic output in the multi-energy microgrid system are shown in Fig. 6.
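The per-unit parameters above (M = 6.48, K = 28.12, T_R = 6.95, F_H = 0.41) have the form of an aggregated system-frequency-response model, so the sketch below simulates such a model for a step power deficit to show the kind of frequency nadir and quasi-steady-state deviation involved. The damping value, the disturbance size and the model structure itself are assumptions; the paper's own simulations are carried out on the full multi-energy system, not this reduced model.

```python
# Minimal sketch (illustrative): an aggregated system-frequency-response model parameterized
# with the per-unit values quoted above, simulated for a step power deficit. D, dP and the
# reduced model structure are assumptions, not the paper's exact simulation setup.
import numpy as np

def sfr_step(dP, M=6.48, K=28.12, T_R=6.95, F_H=0.41, D=1.0, f0=50.0,
             t_end=30.0, dt=0.01):
    """Simulate frequency for a sudden per-unit power deficit dP."""
    n = int(t_end / dt)
    dw = 0.0      # per-unit frequency deviation
    z = 0.0       # governor/reheat lag state
    f_trace = np.empty(n)
    for k in range(n):
        u = -dw                                   # primary-control input
        dPm = K * (F_H * u + z)                   # lead-lag: K*(1+F_H*T_R*s)/(1+T_R*s)
        dw += dt * (dPm - dP - D * dw) / M        # swing: M*d(dw)/dt = dPm - dPL - D*dw
        z += dt * (u - z) / T_R
        f_trace[k] = f0 * (1.0 + dw)
    return np.arange(n) * dt, f_trace

if __name__ == "__main__":
    t, f = sfr_step(dP=0.10)
    print(f"Frequency nadir: {f.min():.3f} Hz at t = {t[np.argmin(f)]:.1f} s")
    print(f"Quasi-steady-state frequency: {f[-1]:.3f} Hz")
```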
In order to analyze the influence of multi-energy microgrid coupled demand response on system frequency stability, the system frequency variation under the coordinated control of electricity, heat and gas is obtained by analyzing the inertia control of the multi-energy microgrid under different scenario conditions. The following three scenarios are simulated.

Figure 6 Load demand and wind power and photovoltaic output

Scenario 1: The renewable energy output in the multi-energy microgrid is reduced at 10 s while the other energy supplies and the load demand in the system remain unchanged, and the adaptive control method is applied with different lead compensation coefficients. With lead times of 0 s, 20 s and 40 s, the system frequency simulation results are shown in Figure 7. In the case of reduced power supply, the gas supply is increased and the gas turbine supplies the corresponding power and provides a certain inertia support, so the power angle of the upper-level system quickly regains stability. When the system fluctuates, the power angle fluctuation of the synchronous generator with and without the time inertia control of the multi-energy microgrid is compared as follows.

Scenario 2: Lead times of 0 s, 20 s and 40 s are again taken respectively, and the resulting system frequency is shown in Figure 9. Owing to the large inertia of the gas network, there is no instantaneous large energy fluctuation in the multi-energy microgrid when the fault occurs. Through adaptive lead control, and reducing the gas supply only as far as the pipe network pressure allows, the upper-level system increases the output power of the synchronous machine; the power transmission capacity of the tie line is increased and the frequency is restored to a stable value. The power angle change with and without the time inertia control of the multi-energy microgrid is compared.

Scenario 3: The grid node 21 and the gas supply network node 7 in the multi-energy microgrid fail at the same time, with lead times of 0 s, 20 s and 40 s. To verify the dynamic performance of the designed control method, a 20 % load disturbance is applied to the multi-energy microgrid at t = 30 s, and the system response curve is shown in Fig. 11. When the power and gas networks in the system fail at the same time, the energy storage device is enabled and, under the large time inertia response of the gas system, the frequency fluctuation of the system is clearly reduced with 20 s lead compensation control. The control system can perform lead compensation control before the energy of the multi-energy microgrid changes greatly, ensuring the energy stability of the multi-energy microgrid. The comparison between the time inertia control considering the multi-energy microgrid and the control without it shows that the power angle of the synchronous generator can be quickly restored to stability using the proposed method.

Through the analysis of the simulation results, a lead time-scale control method for the multi-energy microgrid based on the energy function is obtained. It keeps the sum of the electricity, heat and gas energy demand in the system consistent with the sum of the actual control output and ensures the energy balance. In the process of setting the lead time, the control effect of the system is optimal at 20 s. The adaptive lead energy-function control method can reduce the power angle oscillation of the system.
Multi-scenario simulation shows that, in the case of system disturbance or failure, the lead control method based on energy flow can reasonably control the transmission and conversion of energy according to the renewable energy output and load power in the system. The heat and gas networks in the system coordinate with the traditional power sources in the region to participate in stability balancing control, thereby suppressing the influence of system disturbances and faults on system stability.

Figure 13 Influence of the power adjustment speed of the heat and gas networks on the frequency change

Analysis of the frequency curves in Fig. 13 shows that the inertia demand of the multi-energy microgrid system increases as the response time of the gas network power adjustment in the system lengthens. When the power adjustment time of the gas network in the multi-energy microgrid exceeds 1 min, the inertia demand of the multi-energy microgrid system is greater than the inertia reserve inside the system, and the danger of active power imbalance caused by anticipated faults cannot be resolved by committing a higher inertia level alone. If the gas network load can respond quickly after receiving the control signal within a period of time before the system fluctuation, the power adjustment task can be completed and the inertia demand in the multi-energy microgrid is greatly reduced. With the lead control proposed here, the heat and gas networks in the multi-energy microgrid system complete the power adjustment within a reasonable time. When the inertia time is between 2 and 5 min, the quasi-steady-state frequency of the system is low; in this case the proportional coefficient of the gas network inertia control can be increased to raise the quasi-steady-state frequency of the system.

Conclusion

(1) To address the problems of reduced inertia and the difficulty of frequency stability control in multi-energy microgrids, this paper establishes a generalized inertia model of the multi-energy microgrid and analyzes the classification of inertia within it. Combining the differences in energy transmission time scales of the various energy subsystems, the energy function of the multi-energy microgrid is established to analyze the frequency stability process, the sources of inertia energy in the multi-energy microgrid are identified, and sufficient energy support is provided for the power system to maintain frequency stability.

(2) Considering the different transmission time-scale characteristics of the power grid, heating network and gas network in the multi-energy microgrid, the short-term storage and pressure adjustment capability of the pipe network system is exploited: a lead control coefficient is set to realize the power balance of the multi-energy microgrid within a short time and maintain the frequency stability of the system. In the lead control method adopted in this paper, the system stability is best when the lead time is set to 20 s.

(3) To maintain the stability of the system frequency in the multi-energy microgrid when a fault causes a shortage of inertia and active power, the control time scale of the heat and gas systems is obtained by calculating the latest allowed intervention time of primary frequency regulation. The simulation results show that stable control of the system power angle can be effectively realized by adaptive lead control.
Figure 7 Frequency fluctuation simulation diagram
Figure 9 Frequency fluctuation simulation diagram
Figure 10 Power angle fluctuation of synchronous motor
Figure 11 Frequency fluctuation simulation diagram
Hepatic Encephalopathy and Treatment Modalities: A Review Article Hepatic encephalopathy (HE) is a condition that is commonly seen in individuals suffering from liver cirrhosis. After excluding brain illness, HE is described as a range of neuropsychiatric disorders in individuals with liver impairment. It is characterized by personality changes, intellectual impairment, and a depressed level of consciousness. Toxins that are typically eliminated from the body by the liver build up in the blood and eventually reach the brain, causing HE. Many signs and symptoms of HE may often be treated if caught early and treated properly. It is important to remember that not everyone who is affected may experience every symptom mentioned below. Affected individuals should speak with their doctor and medical staff about their specific disease, associated symptoms, and general prognosis. Many people only have minor symptoms, known as minimal HE. The exact pathophysiology of HE is still being debated, with the primary theories focusing on neurotoxins, reduced neurotransmission caused by alterations in brain energy metabolism, systemic inflammatory response, and blood-brain barrier (BBB) disturbances in liver failure, as well as metabolic irregularities. Introduction And Background Hepatic encephalopathy (HE), which can manifest as many different neurological or mental diseases, from asymptomatic to coma, is a generic term for brain dysfunction brought on by hepatic insufficiency and/or portal-systemic shunting [1]. The underlying cause of liver illness is not taken into account in this definition of HE. However, chronic liver diseases (CLDs), including viral hepatitis, primary biliary cholangitis, alcohol-related liver disease, and non-alcoholic fatty liver disease, can all harm the brain through mechanisms unrelated to liver loss or dysfunction. A liver transplant (LT) is considered to be able to reverse the metabolic disorder known as HE. However, several studies have demonstrated that HE is characterized by neuroinflammation and neuronal cell death, and that extended durations of overt HE might have irreversible effects. These manifest as ongoing neurological issues following LT [2]. More significantly, regardless of the severity of the liver disease, HE has been related to a high risk of mortality, suggesting that it is a sign of hepatic insufficiency, but it may also have distinct implications for pathophysiology and prognosis [3]. Review Pathogenesis and causes According to data from recent studies, ammonia continues to play a significant role in the pathophysiology of HE [4]. In most cases, ammonia is changed in the liver to urea and eliminated through the urine. The brain is severely harmed by ammonia. Despite the fact that ammonia is considered to play a part in the development of HE, some individuals with high ammonia levels may not exhibit symptoms, indicating that additional factors may be at play. Inadequate functioning of central nervous system cells known as astrocytes which help in maintaining the blood-brain barrier (BBB), causes dysfunction of the BBB, leading to the entry of harmful substances into the brain parenchyma causing permanent damage to it. To identify the precise underlying processes that cause HE and the symptoms that go along with it, more research is required. 
HE can be brought on by low oxygen levels in the body; the use of specific medications, especially those that act on the central nervous system, such as benzodiazepines and other sleeping aids, antidepressants, and antipsychotics; as well as dehydration, decreased bowel movements, gastrointestinal bleeding, binge drinking, septicemia, renal irregularities, and more. In rare situations HE can be triggered by surgery. Gastrointestinal bleeding is the most frequent trigger associated with HE, most likely because this condition is more common in patients with cirrhosis than in the general population (Figure 1).

FIGURE 1: A hypothesis on how different precipitating causes could cause HE. According to this concept, low grade cerebral edema, which affects astrocyte hydration, is a critical event and one of the main mechanisms that causes astrocyte dysfunction and the clinical symptoms of HE.

Ammonia

Ammonia is the most well-studied neurotoxin in relation to HE. The gut produces ammonia as a byproduct of bacterial urease activity, protein digestion, and amino acid deamination. As a result, the concentration of ammonia in the systemic circulation is regulated by a functioning urea cycle in a healthy liver [5]. Any liver pathology that decreases liver function allows ammonia to build up in the system and thus leads to encephalopathy. Ammonia levels affect prognosis, are a crucial part of the development of HE, and are a treatment target. In individuals with cirrhosis, astrocyte swelling brought on by hyperammonemia may be a crucial factor in the development of HE [6]. The conversion of ammonia to glutamine by astrocytes, which increases intracellular osmolarity, may be one cause of brain edema. Biochemical and postmortem evaluation in acute liver disease shows markedly elevated brain glutamine content [7].

Dysfunction in neurotransmission

Neuronal disinhibition is caused by alterations in neurotransmission systems. Recent research also suggests that the development of HE may be influenced by a dysfunctional glymphatic system, which aids in the elimination of different chemicals that build up in the brain [8].

Oxidative stress and inflammation

A damaged BBB and increased neuroinflammation are caused by the diseased liver, which also exacerbates systemic inflammation and encourages superimposed infection and gut bacterial translocation. Oxidative stress, a generalized condition that commonly exists in cirrhosis, may affect BBB permeability because lipids, proteins, and DNA are highly reactive with reactive oxygen (and nitrogen) species [9].

Bile acids

Recently, it was shown that the cerebrospinal fluid (CSF) of cirrhotic individuals with HE contains large quantities of bile acids [10]. Regional cerebral edema has been demonstrated in rats with acute galactosamine-induced liver failure, showing that the BBB has lost its barrier function, at least in part [11].

Buildup of manganese

With high plasma levels brought on by the liver's inability to eliminate it, manganese has also been linked to the etiology of HE and has been found to accumulate in the basal ganglia. A correlation with pallidal signal hyperintensity has been established on MRI in cirrhotic patients [12].
Inflammation It is critical to emphasize that brain cell destruction contributes to the development of HE as well as being one of its side effects. It has been demonstrated that under these conditions, astroglia produces tumor necrosis factor (TNF)-, followed by the release of glutamate and the activation of microglia. The proliferation of microglia and the production of pro-inflammatory cytokines including TNF-, interleukin-1 (IL-1), and interleukin-6 (IL-6) are often observed after microglia activation [13]. Animal and human research has provided evidence that elevated ammonia only causes HE when there is a systemic inflammatory response syndrome (SIRS) [14]. Thus, it is generally agreed that altered nitrogen metabolism caused by sepsis, as well as the release of pro-inflammatory mediators, might cause HE in cirrhotic patients [15]. Symptoms and signs A wide range of generalized neurological and psychological symptoms are brought on by HE. The signs of brain dysfunction include poor mental clarity and confusion. In the early stages, modest changes in behavior, demeanor, and logical reasoning can be seen. The person's attitude might shift, and their judgment could be clouded. Possible disruptions might occur to regular sleep habits. People could experience anxiety, depression, or irritability. They can struggle to focus. The individual may have a musty, sweet breath smell at any stage of encephalopathy. One of the signs of mild HE is musty or sweet breath, along with analytical difficulties, personality changes, poor focus, difficulties writing or losing other tiny hand movements, disorientation, amnesia, and poor judgment. Confusion, sleepiness or lethargy, anxiety, convulsions, profound personality changes, exhaustion, incoherent speech, trembling hands, and sluggish movements are some symptoms of severe HE. As the condition worsens, patients find it difficult to maintain their hands firm when they extend their arms, which causes their hands to flail around crudely (asterixis). People may jerk their muscles unintentionally or after being subjected to a sudden noise, light, movement, or other stimuli. The name for this jerky is myoclonus. Additionally, patients frequently experience confusion and drowsiness, as well as slow motions and speech. It is typical to feel disoriented. Encephalopathy patients seldom feel angry or aroused. They may eventually go unconscious and into a coma as their liver function continues to decline. Despite therapy, coma frequently results in death (Figure 2) [16]. Treatment modalities in patients with hepatic encephalopathy Depending on how severe HE is, several therapy modalities are used. The goal is to maximize the body's ability to remove ammonia from the bloodstream while reducing ammonia production, which remains the major priority. However, ammonia metabolism is intricate and controlled by several organs, including the brain, muscles, kidneys, and liver. In order to optimize their efficacy, medications used to treat HE must be well studied and put through controlled clinical studies. A significant barrier to effective therapy for HE is the absence of interventions for other triggering variables such as oxidative stress, inflammation, or other brain changes. Managing the consequences of encephalopathy, while actively detecting the underlying cause, is crucial. The first step in therapy is to address any potential precipitating causes, such as infections, electrolyte imbalances, dehydration, etc. 
after the patient is well after the initial episode, it is time to address the recurrence of HE. Even though mild and episodic HE are minor in nature, they can significantly impact a patient's ability to live a normal life. Sadly, only overt hepatic encephalopathy (OHE) is currently regularly treated, and there are few other medical alternatives for treating HE as a whole. Finally, a more individualized course of therapy will need to be created for the patients, taking into account not only the stage of HE but also the underlying disease and its past. Non-absorbable Disaccharides and Polyethylene Glycol Standard therapies that try to lessen the quantity of ammonia absorbed into the bloodstream include lactulose and, to a lesser extent, lactitol. Lactulose has several effects, including acting as a laxative and producing a hyperosmolar environment that hinders the colon's ability to effectively absorb ammonia. Despite the paucity of research supporting the use of lactulose in the treatment of acute HE, a recent metaanalysis demonstrated the efficacy of non-absorbable disaccharides in the management and prevention of HE. Other advantages include a decrease in fatalities from all causes and significant liver-related morbidities [17]. It also demonstrated that non-absorbable disaccharides therapy might lessen major liver disease-related side effects such as liver failure, hepatorenal syndrome, and variceal hemorrhage [18]. Antibiotics There has been studies on oral antibiotics being used to regulate intestinal flora and lessen ammonia generation as a treatment for HE. Neomycin, an aminoglycoside antibiotic, helps to inhibit glutaminase and reduce ammonia levels while being poorly absorbed and reaching large amounts in the stomach [19]. The use of antibiotics in HE was first made popular by this substance. Neomycin's usage is currently not permitted in clinical practice. A semi-synthetic, non-absorbable antibiotic derived from rifamycin is known as rifaximin. When it comes to lowering blood ammonia levels, rifaximin has been demonstrated to be at least as effective as neomycin with fewer side effects [20]. It has anti-inflammatory qualities and exerts its effects via altering the makeup and metabolism of the gut microbiota, among other ways [21]. It has been demonstrated that rifaximin and lactulose combined treatment is more successful than rifaximin alone [22]. Probiotics Live bacteria supplements known as probiotics are thought to help intestinal dysbiosis and reduce ammonia output. Probiotic therapy may help in the onset of overt HE and reduce the plasma ammonia levels, according to a Cochrane systematic review, but it has minimal impact on mortality [18]. L-Ornithine L-Aspartate (LOLA) The production of glutamine and the urea cycle, two crucial processes during ammonia detoxification, are stimulated when L-ornithine L-aspartate (LOLA) is used as a supplement [23]. Comparatively to placebo or no-intervention controls, findings based on meta-analyses show that in cirrhotic individuals with covert hepatic encephalopathy (CHE) and HE, L-ornithine-L-aspartate is superior at symptom recovery and blood ammonia level reduction [23]. Intravenous Albumin Infusion After studies demonstrated that it improves outcomes for those having cirrhosis with hepatorenal syndrome or spontaneous bacterial peritonitis, it became widely used in patients with the disease. 
The alleviation of oxidative stress reduction and plasma expansion to cause vascular dysfunction is hypothesized to be the mechanism of action [23]. Nutritional Supplement Branched chain amino acids (BCAA) such as valine, leucine, isoleucine, etc, are commonly deficient in patients with liver cirrhosis. Skeletal muscles also help in the detoxification of ammonia with the amidation process for glutamine synthesis using BCAAs. So, it is essential to give BCAA supplements to patients with liver cirrhosis. Antioxidant Supplement These help in improving the general health of patients, reduce oxidative stress on the body, especially the liver, and help in faster regeneration. Proper nutrition and antioxidants help reduce the chances of carcinoma and other complications related to the hepatic system [20]. Conclusions In individuals with cirrhosis associated with end-stage liver disease, HE plays a substantial role in morbidity. The unexpected nature of HE has a significant negative influence on patients and reduces their quality of life. First, ask the patient to avoid further alcohol consumption and start lactulose solution as soon as possible to reduce the blood ammonia levels and prevent further irreversible brain parenchymal damage. New and forthcoming therapeutic options have been created as a result of research into the complexity of HE. The mainstays of therapy continue to be avoiding HE precipitants and a mixture of lactulose and rifaximin. For acute increase in blood ammonia levels, LOLA should be prescribed as it led to a rapid decrease in blood ammonia levels. Nutrition and antioxidant supplement should be continued for the general health of patients and to prevent further progress of cirrhosis into carcinoma. The main aim in patients with HE is to reduce the blood ammonia levels to prevent brain parenchymal damage which can lead to permanent mental abnormalities. Future research should focus on identifying further innovative pathways and therapeutic targets in the hopes of really helping people with HE. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Key components influencing the sustainability of a multi-professional obstetric emergencies training programme in a middle-income setting: a qualitative study Background Multi-professional obstetric emergencies training is one promising strategy to improve maternity care. Sustaining training programmes following successful implementation remains a challenge. Understanding, and incorporating, key components within the implementation process can embed interventions within healthcare systems, thereby enhancing sustainability. This study aimed to identify key components influencing sustainability of PRactical Obstetric Multi-Professional Training (PROMPT) in the Philippines, a middle-income setting. Methods Three hospitals were purposively sampled to represent private, public and teaching hospital settings. Two focus groups, one comprising local trainers and one comprising training participants, were conducted in each hospital using a semi-structured topic guide. Focus groups were audio recorded. Data were analysed using thematic analysis. Three researchers independently coded transcripts to ensure interpretation consistency. Results Three themes influencing sustainability were identified; attributes of local champions, multi-level organisational involvement and addressing organisational challenges. Conclusions These themes, including potential barriers to sustainability, should be considered when designing and implementing training programmes in middle-income settings. When ‘scaling-up’, local clinicians should be actively involved in selecting influential implementation champions to identify challenges and strategies specific to their organisation. Network meetings could enable shared learning and sustain enthusiasm amongst local training teams. Policy makers should be engaged early, to support funding and align training with national priorities. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-021-06385-5. Background Multi-professional obstetric emergencies training is one of the most promising strategies to improve global maternity care [1,2]. Effective training has been associated with improved perinatal outcomes [3][4][5] as well as positive impacts upon organisational and human factors [6]. Improvement in knowledge and attitudes does not always translate into improved outcomes [7]. Sustainability of effective training programmes following successful implementation projects remains a challenge [8,9]. Healthcare staff are often faced with competing initiatives and priorities, [10] and many initiatives are not sustained after the initial funding ends [11]. Understanding, and incorporating, key sustainability components into the implementation process could improve the implementation of local training programmes and embed them within healthcare systems, [12] reducing the burden of promising but short-lived interventions on limited resources and funding. Successful implementation of obstetric emergencies training programmes in high, low and poor socioeconomic areas of middle income settings is well described [5,[13][14][15][16][17][18][19][20]. However, there are few data describing implementation in middle-income settings, where a significant proportion of the world's births occur. 
Middleincome settings have specific challenges, as despite a modest level of resource and income they often experience persistently high mortality and morbidity; hence findings from high and low-income settings may not be directly generalisable to middle-income settings. Interventions should be adapted to individual contexts for successful implementation [21] and therefore a better understanding of middle-income settings is required. PRactical Obstetric Multi-Professional Training (PROMPT) is a multi-professional obstetric emergencies training programme with robust evidence of effect [13][14][15][16]. Local PROMPT courses are one-day, in-house training courses for multi-professional maternity staff, comprising lectures and practical scenarios focusing on emergencies such as pre-eclampsia, haemorrhage and shoulder dystocia. However, there is minimal experience of implementing PROMPT in middle-income settings despite successful implementation of PROMPT in a variety of high and low-income contexts internationally. The Philippines PROMPT Project was developed to address the needs of maternity care in the Philippines, a middle-income setting. Women birth in a range of public and private healthcare settings, including home births with or without the presence of skilled birth attendants, community birthing centres, rural health units, first level referral hospitals and tertiary hospitals. Maternal and neonatal mortality rates have declined in the Philippines in recent years (currently 120:100,000 live births and 12.6:1000 live births respectively) [22,23] but failed to meet the targets set by the Millennium Development Goals [24]. There is an ambition at both local and government levels to continue to improve perinatal outcomes and progress towards achieving the Sustainable Development Goals for perinatal health [25,26]. The Philippines PROMPT Project was a feasibility study investigating implementation of local PROMPT courses in seven pilot tertiary hospitals. The selection criteria for these hospitals included: tertiary urban hospitals, varied birth rates and a mix of public and private hospitals. The PROMPT training package was adapted to local clinical practices and needs following reflexive feedback from multidisciplinary teams [27]. Local PROMPT training was successfully implemented between December 2015 and February 2017: 31 local PROMPT courses delivered training to 816 multiprofessional staff, 87% of all maternity staff in the hospitals. The aim of this follow-up study was to identify the key components that influence the sustainability of PROMPT in the Philippines, a middle-income setting, using focus group methodology. Methods This was a 2 year project with seven maternity units participating in the Philippines PROMPT Project. Three units were selected for the qualitative study. This allowed for a representative sample of the seven units. The number of units interviewed was limited due to limited research resources. The units were purposively sampled (Table 1) as a representative sample of private, public and teaching hospital settings. The researchers planned to include additional units if data saturation was not achieved. However this was not required. Focus group participants were recruited by distributing study information leaflets to all staff undergoing training in each unit, aiming for six to eight multi-professional volunteers for each focus group. Staff interested in participating were asked to contact the local PROMPT trainers. 
Local PROMPT trainers were responsible for ensuring multi-professional representation ( Table 2). Focus groups were conducted rather than individual interviews, to reflect the multidisciplinary nature of the training. All focus group participants gave written consent. Two focus groups were held in each unit in September 2016. The trainers and training participants were interviewed in separate focus groups to allow more open discussions amongst the groups. Focus Group 1 comprised local PROMPT trainers Focus Group 2 comprised local PROMPT training participants. KG, the lead researcher of the Philippines PROMPT Project, facilitated the focus group discussions using a semi-structured topic guide [see 'Supporting Information: Topic Guide Facilitators, Topic Guide Participants']. Topics relevant to the group were used for discussion. This topic guide was adapted from a topic guide developed for and piloted in a parallel-process evaluation of PROMPT training in Scotland. ML, a Research Midwife, made field notes to record key phrases and non-verbal communications. Nobody else was present other than the researchers and participants. The focus group participants were aware that KG and ML were female PROMPT faculty members from the United Kingdom but did not know any personal information. KG and ML had prior experience of focus groups and Qualitative Research Methodology. Focus groups were conducted in English and lasted 25-54 min and were audio recorded. All participants were fluent in English. All the training resources were in English. Transcripts were produced by an independent transcription company. Transcripts were not returned to participants for comments or correction and the participants were in agreement with this plan. NViVo 10 software (QSR International) was used to manage the data. Reflexive thematic analysis process was used for analysis. Transcripts were reviewed and initial codes were developed by KG. Three researchers independently coded the same transcript and compared the codes. Any discrepancies were addressed by identifying the code that most suited the research question. This process was carried out for three transcripts until consistency of interpretation was achieved. The codes from the complete dataset were collated. Codes that occurred more frequently or were infrequent but felt to be significant to the study question were identified and initial themes were identified. These themes were refined through an iterative process to identify the best fit for themes that had an influence on sustainability of the project [28]. The qualitative researcher and RB had not been involved in delivering the training bringing a more independent interpretation of the data. Results The focus groups comprised multidisciplinary participants including obstetric nurses (17), resident obstetricians (13), consultant obstetricians (10), resident anesthesiologists (4) and consultant anesthesiologists (3). All the participants had attended the PROMPT training that was delivered in English and had good spoken English. The facilitators were familiar with working together within their teams and therefore all contributed freely to the focus group discussions. Within the training participants' focus groups, not all staff were familiar with working together and the consultant obstetricians and anaesthesiologists were notably more forthcoming during the discussions. A conscious effort was made to involve all participants, encouraging them to give their views and ensuring multidisciplinary representation. 
Three main sustainability themes and nine sub-themes were identified (Fig. 1). These were: attributes of the local champions, multi-level organisational involvement and addressing organisational challenges. Theme 1: attributes of the local champions Implementing the Philippines PROMPT Project required clinical leads from the participating hospitals to nominate a team of local champions responsible for coordinating and running the local PROMPT training in their maternity units. Each hospital had five multi-professional local PROMPT champions, representing each staff group contributing to maternity care: Obstetric consultant and resident, Obstetric nurse, Anesthesiologist and Training Officer. Three key sub-themes related to local champions emerged from the focus groups: motivation and commitment, their influence within the hospital and how they promoted the training. (Fig. 2). Motivation and commitment Driven by their belief in the value of the training, the champions demonstrated their commitment by meticulously organising the training days, rescheduling their clinical commitments and even sacrificing other incentives. Where there were challenges to seeking approval from hospital committees, the champions showed their commitment by persistence and using their initiative outside of the organisational process. The champions not only planned how to deliver the training, but also considered additional service improvements and expansion of the training to include other specialties. Influence within the maternity unit The champions were often respected local leaders who could influence their colleagues within the maternity department and engage senior staff in the training, particularly senior consultants, some of whom were initially reluctant to attend. In two units, local champions were able to engage senior managers to support the training. In one unit, the hospital chairman promoted the training by sending personalised text messages to their consultant colleagues. Promoting the training The champions took initiative and used various techniques to promote training participation. These included regular updates at staff meetings, texts and innovative publicity materials such as T-shirts designed with the Project logo. Another theme that emerged from the focus groups was multi-level organisational involvement, which included clinical leads of the specialty involved, as well as members of the Hospital Trust board, clinical support services, support from local policy makers and endorsement from national organisations. (Fig. 3). Engagement within the hospital Multi-level involvement within the hospital enhanced the implementation process and led to local service improvements that could be employed to further reinforce the perceived utility of the training. Examples of successful initiatives included: organisation of emergency equipment into emergency boxes and allocation of a dedicated anesthesiologist to the labour ward, ensuring readily available anaesthetic expertise. Support from the senior leadership team, such as Obstetrics Department Leads, seemed to validate the value of training to staff and provided useful role modelling. Senior leadership incentivised staff to attend by ensuring that the training occurred during paid working hours. Multi-level support from clinical services not directly involved in maternity care was improved after local PROMPT training and helped to create sustainable service improvements. 
For example, staff attending training identified that some medications required in an emergency were not readily available as they had to be prescribed and dispensed off-site, and this caused delays in providing potentially life-saving treatment. The problem was conveyed to the pharmacy department and a system was organised so that these medications were stored on the Labour Ward, replacing the previous off-site dispensing system. Support from local and national organisations Each focus group identified the benefit of official support from local policy makers and national organisations such as Department of Health (DoH) or Philippine Obstetrical and Gynecological Society (POGS) mandating the training. They suggested that embedding the training into the DoH national curriculum would provide traction for all DoH-governed hospitals to implement the training. A mandate from POGS would call obstetricians to action although potentially may not appeal to nonobstetricians, contrary to the multi-professional nature of PROMPT training. Unit C, who had relied mainly on the resourcefulness of the champions to implement the training in the absence of the Medical Director's support, suggested ideas for involving and getting traction with local leaders. Theme 3: addressing organisational challenges Each hospital identified similar, and apparently universal, challenges to sustainability: promotion, dissemination and adoption of the tools introduced through the training, securing staff attendance on the training days, managing changes in staff personnel and the cost implications for releasing staff for training. (Fig. 4). Implementing the tools During the training, tools such as a Labour Ward Board, a Maternity Early Warning Score (MEWS) chart and emergency boxes for clinical emergencies such as preeclampsia and postpartum haemorrhage were introduced. These practice tools were extremely popular and although all of the trainers wanted to implement these tools, they recognised challenges with their availability and funding. MEWS charts utilise a red and amber warning score system to aid early recognition of the unwell woman [29] but staff were concerned that colour printing these single-use charts would incur significant costs. Emergency boxes organise key equipment, drugs and algorithms required for specific emergencies into accessible, portable boxes. Some clinicians questioned how the replenishment of supplies would be funded and who would be responsible for refilling the boxes after each use. Although senior staff were keen to implement the tools, at ground level some Obstetric residents expressed concerns that high patient volume and heavy workload impacted upon the time available to implement these tools during routine clinical practice. Ensuring all staff attend training The focus groups reported challenges with local staff attending training due to staffing pressures restricting access to protected study leave, staff rotation in and out of the maternity departments, and securing the attendance of staff that had initially signed up to attend training. In Unit B, only 1 anesthesiologist attended the training due to insufficient staffing to cover clinical commitments. Unit C staff were often requested to leave the training early to fulfil clinical demands. Unit C suggested restructuring the training day into two half-days to reduce the length of time that staff were away from the wards in a single session, therefore reducing the likelihood of staff being removed from training. 
Differences in public and private hospital challenges were apparent. Unit A, as a private hospital, did not receive emergency patient referrals and had an international accreditation that enabled training to be embedded into the departmental policy. Units B and C, as tertiary public hospitals, received emergency patient referrals from outlying units, therefore patient volume, workload and staff availability for training could be unpredictable. Unlike Unit C, Unit B is a University teaching hospital with a structured programme for Training and Research that facilitated the implementation process. Unit A staff explained that they infrequently experience obstetric emergencies; this is likely related to fewer births and patient selection. Unit A recognised the need for their staff to attend training despite experiencing fewer morbidities and mortalities compared to public hospitals. They valued the training as an opportunity for their staff to refresh these less frequently used skills, which may improve clinical outcomes. Managing changes in personnel and the cost implications of releasing staff to attend training Suggestions to manage changes in personnel and regular staff rotations included recruiting more trainers to build a larger faculty and running more local PROMPT courses, although the cost implications of releasing staff to attend further training was highlighted. Suggestions to improve attendance included making training mandatory for all staff and providing regular outcome-based feedback to demonstrate clinical improvement to encourage 'buy-in' from the staff, particularly those who may not perceive the training as valuable. Discussion Three key themes influencing sustainability were identified by the focus group participants: attributes of local champions, multi-level organisational involvement and addressing organisational challenges. These themes were universal across all three hospitals and agreed amongst the individual staff groups. The themes resonate with findings from studies in high and low-income settings [10,11,30]. Similar challenges and facilitators were identified, such as staffing and funding resources and involvement of senior leadership and local policy makers. Our study adds the perspectives of local maternity unit staff to the current literature on sustainability of healthcare interventions, provides an exemplar of implementation and associated challenges within middle-income settings and outlines potential strategies to improve sustainability of local training. Adoption of an intervention in one setting may not be generalisable to other settings [1]. There is a call for a deeper understanding of the underlying social processes and 'active ingredients' supporting implementation [31]. This unit-level study directly explores ground-level staff experiences of the implementation process. These staff are immersed in their own context and most likely to understand issues specific to their unit. They also have an integral knowledge of the staff and system that external organisations may struggle to acquire. This study highlights the advantages of harnessing local expert knowledge to select influential champions to lead implementation of the intervention and address local challenges with effective strategies. The local champions were the driving force of the training, and apparently key to the success of initial implementation, as well as sustainability. 
There was no specific methodology to objectively assess the suitability of the champions, however this subjective system of the unit leaders selecting the champions appears to have been successful. Further research is indicated to identify this intuitive selection of champions as this is likely to have an impact on the implementation of the project. Implementation is influenced by multiple interacting factors at numerous levels within, and across, healthcare systems [12]. In this study, backing from senior leaders within the hospital setting was crucial for embedding the training into hospital policy. Their perceived involvement was not essential for implementation but it was evident in Unit C that without this support, greater efforts were required from the champions to implement the training. Involvement from other clinical services, such as the pharmacy department, enabled service improvements that enhanced the perceived value of the training and reinforced the importance of it being locally run. A national mandate for training was recommended, particularly by Unit C. However, some studies suggest that top-down approaches can fail to engage groundlevel staff who may not perceive the value of the intervention or may feel change is being imposed upon them [32]. Tailoring implementation to an organisation's needs increases the likelihood that staff will adopt the intervention [33]. This study demonstrates a bottom-up approach utilising the expertise of local staff to mobilise ground-level staff and promote local ownership, with support from external organisations to enable scale-up and sustainability. All hospital staff groups described similar challenges regarding staffing and resources and presented different strategies for overcoming them. Increasing the number of training faculty can build a critical mass of trainers sufficient to maintain sustainability beyond any changes in training personnel. Regular knowledge and outcomebased feedback was considered as a method to validate the training and encourage 'buy-in' from any reluctant staff, as well as policy makers. Ground-level staff anticipated difficulties implementing practice tools due to high patient turnover, time pressures and restricted funding. This study demonstrates that the feasibility of introducing these tools should be tested, possibly with a short-term service evaluation that includes feedback from staff on their perception of these tools. This process could guide the successful integration of tools that support local clinical practice. Many of our findings align with the current literature, [34][35][36][37] which is positive and may validate our methodology and findings, however we have also identified some issues specific to middle-income settings. The three hospitals are representative of all tertiary maternity care settings in the Philippines: private, public and university-affiliated tertiary urban hospitals, and we would expect our findings to be representative of other similar-sized hospitals with comparable demographics in other middle-income settings. The limitations of the study were acknowledged and addressed where possible. The team was aware of possibilities of reporting bias and this was reduced by conducting separate focus groups for trainers and participants to reduce any reporting bias. The focus group facilitators were involved in delivering the original PROMPT Train-the-Trainers (T3) programme to the participating units, and as a consequence, their involvement may have introduced a bias. 
Although Focus Group 1 participants were aware that the focus group facilitators were part of the original T3 training team, Focus Group 2 participants had not attended the original T3 programme and were not aware of the focus group facilitators' involvement. The scope of the focus groups was explained to the participants and they were encouraged to give honest opinions. The participants had good spoken English but we cannot exclude the possibility of losing subtle findings in translation. Field notes were used to capture the non-verbal interactions. It is debatable whether the effect of using the same team to run focus groups was entirely detrimental to the quality of data. The team were immersed in the setting up of the training programme and this familiarity enabled them to explore issues that an independent researcher might not have. These results may not be representative of smaller non-tertiary or rural hospitals, where the infrastructure and systems are different to the study hospitals from large metropolitan settings. However, the findings from this study are likely to be relevant to future sustainability of local training across a variety of hospital settings. A follow up of these focus groups after a period of time would be useful to identify longer term sustainability of the project and highlight new themes. Conclusions To conclude, we have identified three key themes that influenced the sustainability of a local obstetric emergencies training programme in a middle-income setting. These factors, including potential barriers to sustainability, could be usefully considered when designing and implementing training programmes in other middle-income settings. Local champions are vital to identifying challenges and develop strategies specific to their organisation. Successful strategies could be collated and made available to local training teams and across networks. In addition, policy makers and major stakeholders should be engaged early on, to gain support for funding and to align training with local or national training priorities. Abbreviations DoH: Department of health; MEWS: Maternity early warning score; POGS: Philippine obstetrical & gynecological society; PROMPT : Practical obstetric multi-professional training This study was presented at RCOG World Congress 2018 as an oral presentation. Authors' contributions KG, RB, CW, ML, ME and TD designed the study. KG, ML, NB and RI contributed to the acquisition of the data. KG, RB and IdS interpreted the data. KG, RB, CW and TD drafted and revised the manuscript. ML, ME, NB and RI revised the manuscript. All authors read and approved the final manuscript. Funding Funding was provided by Project HOPE, who received financial support from Ferring Pharmaceuticals' Corporate Social Responsibility Programme. Ferring Pharmaceuticals did not have a role in the design of the study or collection, analysis or interpretation of the data, or writing the manuscript. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Ethical approval was granted by the University of Bristol Ethics Committee, ID 41204 on August 31, 2016. All focus group participants gave written consent to participate in this study. Ethical approval was not submitted to a committee local to the study. 
The participants were given written information confirming that University of Bristol ethical approval had been granted. Consent for publication Not applicable. Competing interests TD is a Trustee, and KG and ML are Members, of the PROMPT Maternity Foundation, the charity that produces the PROMPT training package. They do not receive any financial reward for their association with this charity. CW is a Member of, and employed by, the PROMPT Maternity Foundation. NB is employed by Project HOPE, the organisation that coordinated the local implementation of the Philippines PROMPT Project and provided funding for the study. RI was an employee of Project HOPE during the time of the study.
2021-04-26T13:44:08.238Z
2021-04-26T00:00:00.000
{ "year": 2021, "sha1": "9bc0c1eea63afde1ed1987b1e824add8972b7be6", "oa_license": "CCBY", "oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-021-06385-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9bc0c1eea63afde1ed1987b1e824add8972b7be6", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
261540658
pes2o/s2orc
v3-fos-license
Creating a common language for soundscape research Much of the work into the understanding of our auditory environment, referred to as soundscape research, has emerged from international and interdisciplinary research. This has enabled growth in understanding and increased opportunities for optimising shared environments but has also formed one major obstacle: a lack of a common language to describe soundscapes. Therefore, the purpose of this study is to validate translated soundscape descriptors in Dutch as part of the Soundscape Attributes Translation Project (SATP). For this Introduction Human auditory perception is integral to how we attend, interact, respond, and transform our surroundings [1].Therefore it is no wonder that much of the work into perception and understanding of the auditory environment (referred to as soundscape research) has emerged from interdisciplinary research in the fields of acoustics, architecture, environmental studies, and psychology [2].Such an interdisciplinary approach has enabled growth in understanding soundscape perception, increasing opportunities for application by urban planners and others involved in constructing and optimising shared environments.For example, the acoustic environments of schools, parks and other facilities can be investigated to create optimal soundscapes that facilitate the purpose of learning, relaxation and more [2][3][4].Besides creating optimal environments for various purposes, soundscapes allow for research on sociocultural, attitudinal, and physiological factors in sound perception.For instance, when the relationship between attitudes toward COVID-19 and outdoor soundscape appraisal was investigated, people that were more concerned with COVID-19 were also found to be more sensitive to sound exposure [5]. 
Research on the perception of soundscapes faces a major obstacle: the absence of a common language.Previously, Axelsson and colleagues developed eight attributes to describe soundscapes in Swedish and translated them into English [6,7].They became the de-facto standard of soundscape description as noted by Nagahata [8].These attributes were translated to English as "Pleasant, Annoying, Eventful, Uneventful, Vibrant, Calm, Chaotic, and Monotonous".They were subsequently translated into over ten languages including e.g.Portuguese, Indonesian, and Greek.However, several soundscape researchers noted the lack of validation of this standard across these languages [9,10].Validation of such measures is crucial to ensure their cross-cultural and crosslinguistic applicability, since evidence suggests that similar words even within the same language can be used in functionally different ways to describe different scenarios [11].For example, previous research into the interlingual compatibility of the descriptors "noise" and "sound" found that the neutral term "sound" could obtain a negative connotation when directly translated into Japanese where it was observed to be synonymous with the traditionally negative descriptor of "noise" [9].Further, a study by Cao & Gross [12] supports the idea of cultural differences between participants in the processing of sensory stimuli.This is rooted in the social orientation hypothesis describing cognition as a source of cultural differences and social context as a determinant of the perception of sensory consequences.In a study with Dutch and Norwegian participants, it was found that differing political, historical, and cultural contexts influence the understanding of apparently straightforward notions [13].Linguistic equivalence of words does therefore not straightforwardly imply equivalence in meaning. Given the lack of validated cross-linguistic descriptors to assess soundscapes, the Soundscape Attributes Translation Project (SATP) was conceived to validate the above-mentioned attributes as proposed in ISO/TS 12913-2:2018 [14,15].Although English is the most spoken language on the planet, it is spoken only by 18 percent of people, of which fewer than a third are native speakers [16].With all intended translations within the SATP, about 2.53 billion native speakers from all around the world would be represented, enhancing the international utilisation of validated soundscape attributes in research, as well as application [14]. 
The current study is concerned with the validation of the Dutch soundscape attributes as part of the SATP, through a standardised listening experiment following ISO recommendations and the SATP protocols.The field of soundscape studies has been avidly adopted in The Netherlands and Flanders (the Dutch speaking part of Belgium).Publications (to name a few) range from applications in healthcare for people with disabilities [17] and dementia [18,19], to urban planning [20,21].But also numerous studies related to public health [22] and noise abatement policies [23] were published.Even historical soundscape studies were conducted in The Netherlands [24].While some of these studies use Dutch translations of the ISO soundscape attributes and Weber [23] included a principal component analysis as part of one of them, it seems there are no explicit formal attempts at validating the vernacular.Therefore, the aim of this study is to test the conjecture that the proposed Dutch translation of the soundscape attributes are employed similarly to the English attributes.If they indeed match on perceived meaning, the Dutch translations should lead to similar average ratings of soundscape appraisal compared to the English attributes, indicating that they are suitable to be employed in future soundscape research in The Netherlands and Flanders. Participants The study involved 32 participants (22 women, 10 men).All participants were adult Dutch native speakers and first-year psychology students at the University of Groningen, in the Netherlands.Thirteen participants were below the age of 20, 18 participants were between the ages of 20 and 25, and one participant was between 31 and 35.The recruitment of participants took place through the SONA participant pool used by the Psychology department of the University of Groningen.All participants signed up voluntarily.Participation was compensated through the assignment of SONA credits, which the participants needed to fulfil their study program's requirements.All participants indicated having no history of hearing loss, though normal hearing was not assessed through audiometry.Many participants reported that the stimuli were louder than they had expected. Translations The first step of the standardisation process involved translating the attributes from English to Dutch.To obtain these translations two expert panels with soundscape researchers were formed, one in The Netherlands (N = 4) and one in Flanders (N = 3).All members had previous experience with translating or employing the Dutch attributes in scientific studies.Both expert panels independently of each other held group discussions and provided two or three translations per attribute.Subsequently, the chairs of the Flemish and Dutch groups discussed the proposed translations, which contained as many differences as similarities.These differences were somewhat expected, as Flemish can be considered a dialect of Standard Dutch with some lexical and grammatical differences.When the translations between the groups overlapped, those words were immediately selected for further use in the validation procedure (see Table 1).After thoughtful consideration, consensus was reached on the other attributes as well.For this, the chairs of the expert groups chose words that are common in both Standard Dutch and Flemish and were likely to be used by laypersons (as opposed to picking highly technical terms). 
Throughout the process, the main focus of the translation was to secure the contextual meaning of the initial English attributes, not the literal linguistic meaning.For example, the adjective "eventful" may be applicable to sounds in English but the literal translation "veelbewogen" may not be prevalent in Dutch.This approach was supposed to eliminate the variation in soundscape description due to translation errors and word interpretation.Therefore, two translations for each attribute were developed.Furthermore, effort was made to capture the antipodal nature of the attribute pairs belonging to the orthogonal dimensions in the circumplex model (e.g.eventful -uneventful, see Fig. 1; [6,7].For one of the attributes this led to more neutral terminology in Dutch than in English: the term annoying could be literally translated as "irritant" or "hinderlijk", but both expert groups suggested translations that are closer to antonyms of pleasant, than literal translations of annoying, namely "onaangenaam" and "onprettig". Stimuli The SATP is led by the University College in London (UCL), which provided the audio files of multiple soundscapes that were used in this study.The final 27 stereo audio files were recorded between the spring and autumn of 2019 in London.They were set out to consist of a broad range of different sources and compositions of sound, such as a quiet park, a busy shopping street, or a construction site [25].In a pilot study, the final 27 audio files were selected from over 50 recordings to evenly cover all soundscape attributes.The duration of all audio files was 30 s.The audio files captured auditory environments containing mechanical, natural, and human activity sounds.Each SATP team was encouraged to use the same audio stimuli [26]. Headphones Following the SATP guidelines of standardisation, we utilised the Sennheiser HD650, over-ear, open headphones, connected to a Windows PC.To calibrate the headphone's playback level, we connected the Focusrite Scarlett 2i2 as an external sound card to the headphones and used a multimeter to set the volume of the system to a standardised voltage of 355 mV. Rating scales The web-based software Qualtrics was used for the response collection and was displayed on a monitor.The instructions provided at the top of the page were in line with the ISO recommendations and read as follows "For each of the 8 scales below, to what extent do you agree or disagree that the present surrounding sound environment is…".The eight slider scales were labelled with the Dutch translations of the soundscape attributes (see Table 1).The order of the attributes on the eight aforementioned scales were the same for all participants.A 100step slider was used ranging from 0 to 100 for every scale with a default position of the slider on 50. Procedure The procedure of the study was standardised and a detailed guide was provided by the UCL [14].Ethical procedures were followed, with formal ethical approval to conduct this study obtained from the Ethics Committee of the Psychology Department at the University of Groningen, in the Netherlands.The flowchart in Fig. 2 provides a visual overview of the entire methodological process. 
The study took place in a sound-attenuated room. When arriving at the lab, the participants were asked to leave their phones outside and then sit down in front of a screen on which the study would be displayed. After obtaining informed consent, a small test run was conducted to provide the participants with an impression of the study's trial procedure and rating scales and an opportunity to ask questions. Then, the participants were informed that the actual study would start. It began with a short survey assessing the participants' age, gender, nationality, and which languages they spoke. After that, each participant listened to the 27 audio files in random order and rated them on the eight different attributes. They were required to listen to each stimulus for thirty seconds before proceeding to their evaluation. While the rating scales were displayed, participants were allowed to replay the stimulus as many times as desired. After the rating, the participants sat in 30 s of silence before they could proceed to the next sound stimulus, to reduce interference between two consecutive stimuli. A timer was visible to them. The study was completed when each audio file had been listened to once; subsequently, the participants were asked questions regarding their experience and thoughts on the experiment. Except for the informed consent form shown at the very beginning, the whole study was conducted in Dutch. Most participants completed the study within 50 min, with a few participants taking a few minutes longer. Results Table 3 shows the means and standard deviations of the ratings on each attribute in Dutch for each audio file. These results are also visualised in Fig. 3, along with the average ratings of the English sample obtained in the study by Oberman and colleagues [26]. Visual inspection shows that, with a few minor exceptions, the ratings are fairly well matched between the two languages, indicating that the Dutch translations were used in a similar fashion to the original English attributes. Furthermore, Formulas 1 and 2 were employed to calculate a pleasantness score and an eventfulness score for each audio file. These formulas are an adaptation of the formula provided in ISO/TS 12913-3:2019 [27], which converts five-point Likert scale responses into coordinates; our adaptation consisted of adjusting the formula to work with 100-point scales. We performed the same transformation on the original English data collected by University College London [26]. Plotting both sets of scores shows small differences between the Dutch and English data, but overall presents a coherent pattern between both languages (Fig. 4). For illustration, the data from the Dutch sample were subtracted from the English sample to create a difference plot, showing that Dutch participants rated the audio files slightly more pleasant and eventful than the English participants, driven by lower ratings on the attributes Annoying and Uneventful (see Fig. 5; mind the scale difference for readability). To further inspect which audio files led to the largest differences in ratings between the two samples, the difference plot in Fig. 5 was rendered.
It shows four purple markers that indicate audio files where the difference score resulted in a change of quadrant in the circumplex. In all four cases, the change of quadrant was borderline and not significant: an audio file was never rated categorically differently on the Pleasantness or Eventfulness attribute. The files in question are W09, W15, W23a, and E10, of which the former two are dominated by mechanical and traffic sounds and the latter two clearly feature human sounds. The six audio files with the largest differences between the translations were (in order of largest to smallest Euclidean distance, with a cut-off point > 0.15) W09, CT301, W01, E01b, E12b, and HR01. All of these audio files primarily feature (monotonous) mechanical sounds. Consistent with the overall trend, these files are rated as more pleasant and eventful by the Dutch sample. Since a Null Hypothesis Significance Testing (NHST) framework makes the interpretation of a null effect difficult, we also compared the Pleasantness and Eventfulness ratings in the two samples using independent t-tests from a Bayesian framework (which does provide the possibility to evaluate the relative evidence for the null and alternative hypotheses). We performed this analysis with JASP [28]. In both cases, there was moderate evidence [14] in favour of the null hypothesis (of no difference), with Bayes factors of BF10 = 0.302 and BF10 = 0.294 for Pleasantness and Eventfulness, respectively. We repeated this for all eight attributes, and the results are shown in Table 2. For most of the attributes there is moderate evidence in favour of the null hypothesis, except for the attributes Uneventful (BF10 = 0.361) and Annoying (BF10 = 0.377), for which only anecdotal evidence was found.
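For readers who want to reproduce this kind of projection, the sketch below illustrates one plausible implementation of the coordinate transformation and the Bayesian comparison described above. It is not the authors' analysis code: the column names, the rescaled divisor 100·(1 + √2) (a 100-point analogue of the (4 + √32) divisor used with 5-point scales in ISO/TS 12913-3), and the use of the pingouin package to obtain JZS Bayes factors comparable to JASP's defaults are all assumptions made for illustration.

```python
# Hedged sketch, not the published analysis script: project mean 0-100 attribute
# ratings onto the Pleasantness/Eventfulness axes and compare the two language
# samples with Bayesian independent t-tests. Column names are assumed.
import numpy as np
import pandas as pd
import pingouin as pg  # returns a 'BF10' column alongside the frequentist t-test

COS45 = np.cos(np.radians(45.0))
SCALE = 100.0 * (1.0 + np.sqrt(2.0))  # assumed 100-point analogue of (4 + sqrt(32))

def iso_coordinates(ratings: pd.DataFrame) -> pd.DataFrame:
    """Mean ratings per audio file (columns: the eight attributes) -> (P, E) in [-1, 1]."""
    p = ((ratings["pleasant"] - ratings["annoying"])
         + COS45 * (ratings["calm"] - ratings["chaotic"])
         + COS45 * (ratings["vibrant"] - ratings["monotonous"])) / SCALE
    e = ((ratings["eventful"] - ratings["uneventful"])
         + COS45 * (ratings["chaotic"] - ratings["calm"])
         + COS45 * (ratings["vibrant"] - ratings["monotonous"])) / SCALE
    return pd.DataFrame({"P": p, "E": e}, index=ratings.index)

# dutch_means and english_means: 27-row DataFrames of per-file mean ratings.
# coords_nl, coords_en = iso_coordinates(dutch_means), iso_coordinates(english_means)
# pg.ttest(coords_nl["P"], coords_en["P"], paired=False)  # inspect the BF10 column
# pg.ttest(coords_nl["E"], coords_en["E"], paired=False)
```

Whether Bayes factors from such a sketch match JASP exactly depends on the prior settings, so the BF10 values it produces should be read as indicative only.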
At the moment of writing, researchers within the SATP have published five papers on the translation process, each adopting their own language and methodology.While the SATP provided standardised protocols and materials for the listening experiments [14,26], each research group was free to obtain the translations of the soundscape attributes as they saw fit, leading to large differences in methodologies.Some studies describe a rather straightforward approach of some kind of (expert) panel discussion like our own, such as the papers on the Indonesian [29] German translations [30].Other papers critically evaluate such approaches, which could be prone to translation errors and deviations in non-expert settings.Therefore, they address procedures that are more elaborate.For example, the authors of the Malaysian study [31] mention a combination of qualified translators, focus group discussions, in situ evaluations, and quantitative analysis on the most accurate translations.The paper on the Thai translations [32] presents a quantitative evaluation method to assess the psychometric equivalence between original and translated attributes, or in other words the translation quality.Within this mathematical endeavour, the authors focus on various evaluation criteria such as understability, clarity and anonymity.Like Gudmundsson [33] they advise that in translating psychometric instruments specific translation protocols should be implemented.Lastly, the Greek team [34] proposes elaborate crosscultural adaptation methodology, specifically meant to maintain meaning between both languages.It consists of using bilinguals in a combination of forward and backward translations, synthesis, pre-tests and a committee approach, recommended to employ prior to listening experiments.In the light of these rigorous methodologies it may be a fair criticism to question to what extent our expert panels were appropriate for translating the soundscape attributes, since the participants did not have a professional background in translating or interpreting, nor were they representative of the target audience.Suboptimal selections made by the expert panels in the initial translation process could thus have very well led to the differences found in this study. Furthermore, considering that the translation process was a joint effort of The Netherlands and Flanders (Belgium) and that it was specifically designed to suit the populations of both regions, we advise to include Flemish participants as they were not part of the listening experiments in the present study.Studies on demographic factors found that factors like gender and age are related to soundscape perception [35,36], but also that factors like social interaction and noise sensitivity influence the way people perceive their surroundings [37,38].As our sample could be rather homogeneous, it would also be advisable to continue these listening experiments with a generally more heterogeneous sample. 
Conclusion The purpose of this study was to translate eight attributes used to describe soundscapes from English to Dutch and to ascertain the validity of these translations, as part of a large international effort to establish a common language within the field of soundscape research. After comparison of the data between the Dutch sample and the English sample, the results show modest evidence indicating that the Dutch translations were used similarly to the original English attributes when rating 27 audio files. These findings imply that the contextual meaning of the attributes was largely preserved during the translation process. Despite some limitations, and while further research is necessary (specifically for the attributes Uneventful and Annoying), our findings are encouraging. They suggest that, although not perfect, the Dutch translations of the English soundscape attributes could already be useful for describing the general appraisal of a person's soundscape in The Netherlands. Fig. 1. Circumplex Model of Soundscape Attributes, including the English and Dutch terminology. Fig. 3. Average Attribute Ratings for Each Audio File. Note: average ratings on the eight attributes for each of the 27 audio files, as a function of language (Dutch in blue, English in red). The attributes are labelled only for the last radar plot, showing the mean overlap of all audio files combined. Fig. 4. Average Dutch and English Ratings of Soundscapes in Comparison. Note: ratings of the 27 audio files plotted onto the Eventfulness and Pleasantness attributes. Each dot represents the mean rating of an audio file (Dutch in blue, English in red). Crosses represent means across the 27 audio files. Fig. 5. Average Differences Between Dutch and English Ratings of Soundscapes. Note: the data of the Dutch sample were subtracted from the English sample. The cross represents the mean difference across all audio files and indicates the average tendency in rating difference on the Eventfulness and Pleasantness attributes. Purple markers indicate audio files where the difference score resulted in a change of quadrant in the circumplex. Table 1. Dutch Translations of the English Soundscape Attributes. The table includes the initial proposals of the two expert groups; overlap between the groups is indicated in bold. Table 2. Outcomes of the Independent t-Tests between the Dutch and English Samples Using a Bayesian Framework on Each Attribute. Table 3. Means and Standard Deviations of the Ratings in Dutch on Each Attribute for Each Audio File.
2023-09-06T15:18:16.904Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "74ef1d57f60cc89c8cb6d5db39034b63630ab266", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.apacoust.2023.109545", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ba9ca430a6b5abd1143cb2fb18f955d5c2281da4", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
4627413
pes2o/s2orc
v3-fos-license
Intra-cage dynamics of molecular hydrogen confined in cages of two different dimensions of clathrate hydrates In porous materials the molecular confinement is often realized by means of weak Van der Waals interactions between the molecule and the pore surface. The understanding of the mechanism of such interactions is important for a number of applications. In order to establish the role of the confinement size we have studied the microscopic dynamics of molecular hydrogen stored in the nanocages of clathrate hydrates of two different dimensions. We have found that by varying the size of the pore the diffusive mobility of confined hydrogen can be modified in both directions, i.e. reduced or enhanced compared to that in the bulk solid at the same temperatures. In the small cages with a mean crystallographic radius of 3.95 Å the confinement reduces diffusive mobility by orders of magnitude. In contrast, in large cages with a mean radius of 4.75 Å hydrogen molecules displays diffusive jump motion between different equilibrium sites inside the cages, visible at temperatures where bulk H2 is solid. The localization of H2 molecules observed in small cages can promote improved functional properties valuable for hydrogen storage applications. cages are occupied by large molecules such as tetrahydrofuran 7 and the filling of the remaining small cages by H 2 gas is realized already at a pressure of 60 bar of H 2 at 260 K. The starting temperature of H 2 deintercalation in such clathrates increases to 255 K under ambient pressure 8 . The observed differences in H 2 sorption pressure and temperature of the gas deintercalation of these two types of clathrates raise the question whether they can be related to the differences in interaction between hydrogen and host framework inside the different size cages? The direct observation of the motion on the microscopic scale of the hydrogen molecules can be an essential part of the answer. In general, a H 2 molecule in a solid or in a confinement can take part in various types of motions: rotations around its center of mass, vibrations around an equilibrium position occupied for much longer time than the period of the vibrations and jumps between equilibrium positions. The last type of motion is generically called self-diffusion 9 and has been both experimentally and theoretically extensively studied in great detail in many systems 9,10 . Note that diffusion inside closed confinement domains does not imply macroscopic material transport. On the other hand, if the diffusive motion inside confinement cages is slow, this can slow down macroscopic diffusion implying transitions from one cage to another through their common surface 9 . Thus, the macroscopic cage-to-cage migration of H 2 molecules depends on several aspects of the behavior and interactions of the H 2 molecule, including the mobility inside the confinement cages and the activation energy needed for molecules to pass from one cage to another. Previous studies determined such activation energy values 11,12 to pass through the pentagon and hexagon windows separating the cages as 26 kcal/mol and 6 kcal/mol respectively, but were unable to distinguish experimentally between the motion of the hydrogen inside different cages 8,12 . The experimental estimates of the long range self-diffusion constant for hydrogen contained in small cages of binary clathrates range 13,14 from 10 −8 cm 2 /s to 10 −11 cm 2 /s. Later, it was claimed that hydrogen does not diffuse there at all 15 . 
The cage-to-cage diffusion of confined H2 and the stability of entire clathrate hydrates have also been studied as a function of cage occupancy with the help of molecular dynamics simulations 16,17. It was found that an increase in the number of occupants leads to higher mobility. In the small cage, double occupancy results in a distortion of the unit cell and is thermodynamically unstable, leading to the expulsion of one molecule out of the cage 16. In the large cage, a higher number of confined molecules is claimed to reduce the energy barrier of the hexagonal window due to the guest-host interactions 17. Rotational motion of confined hydrogen has strong quantum features. The quantized rotational transitions E_J of molecular hydrogen with respect to the ground state are given by E_J = BJ(J + 1), where B = 7.35 meV is the quantum rotational constant and J = 0, 1, … is the rotational quantum number 17. The confinement very often leads to the lifting of the degeneracy of states, and indeed such splitting of rotational transitions has been reported for H2 in both cages of hydrogen clathrates [18][19][20]. Vibrational motion of confined hydrogen can be both localized and induced by host-lattice phonons, depending on the coupling to the clathrate hydrate framework. The coupling between H2 and the clathrate hydrate is, however, expected to be weak due to the small size of the molecules [20][21][22]. There is a rather generic notion of "rattling" motion of guest molecules inside the cage, often defined as large-amplitude anharmonic vibrations of the guest molecules [22][23][24]. In hydrogen-filled clathrates the existence of quantum rattling at frequencies centered around 10 meV (80 cm−1) has been reported for the hydrogen enclosed in the small cage [18][19][20][21]25, while the existence of such modes in the large cage has not yet been confirmed experimentally. The goal of the present work is to study the role of the confinement dimensions in the intra-cage dynamics of molecular hydrogen enclosed in nanocages of two different sizes in clathrate hydrates. Besides their high technological interest, clathrate hydrates are particularly suitable model systems for studying the role of confinement, since the interactions between the clathrate framework and H2 are of the same nature in both cages. For experimental characterization we used quasielastic neutron scattering owing to its remarkable capability to probe directly and specifically the individual motion of H2 molecules at the nanoscale. To distinguish between the contributions of hydrogen in the two different cages we used a combination of data from binary tetrahydrofuran and fully hydrogenated clathrates. The signal of hydrogen confined in small cages was established using the difference between the empty and H2-gas-loaded spectra of the binary tetrahydrofuran clathrate, where H2 occupies only the small cages. Subtracting the small-cage H2 signal from the spectrum of the pure hydrogen clathrate, with H2 in both small and large cages, we can deduce the signal of the H2 molecules in the large cages. This method is of course a first approximation, and assumes that the occupation of neighboring cages has little impact on the confined particle motion. In view of the sII structure one can expect that this is a good approximation 5,20,22, and indeed it provided us with a self-consistent ensemble of results. The average occupation of hydrogen in our study was found to be one molecule in the small cage and two in the large ones.
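As a quick, purely illustrative check of the quantized rotational levels quoted above (not part of the paper's analysis), the snippet below evaluates E_J = BJ(J + 1) with B = 7.35 meV; it reproduces the 14.7 meV J = 0 → 1 transition referred to later when discussing the accessible energy window.

```python
# Minimal sketch: rotational levels of a free H2 rotor, E_J = B*J*(J+1).
B_MEV = 7.35  # rotational constant of H2 in meV, as quoted in the text

def rotational_level(j: int, b: float = B_MEV) -> float:
    """Energy of rotational state J relative to the J = 0 ground state, in meV."""
    return b * j * (j + 1)

levels = {j: rotational_level(j) for j in range(4)}   # {0: 0.0, 1: 14.7, 2: 44.1, 3: 88.2}
print(levels[1] - levels[0])                          # 14.7 meV, the J = 0 -> 1 transition
```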
It has to be mentioned that in general the spectral intensities in molecular hydrogen depend on the composition in terms of ortho and para spin isomer states, which can be altered by the relaxation between these states. In the experiment we verified that the concentration of ortho-hydrogen stayed constant by immediately cooling the samples down to 10 K after H2 loading and measuring the scattering intensities, and by periodically repeating this process. We found the signal unchanged over the duration of the experiment, confirming our earlier systematic observation of the slowdown of ortho-to-para hydrogen conversion due to confinement. We can thus assume that the stored H2 remained in the normal 3:1 ortho-para concentration ratio, which is the equilibrium value at the loading temperature of 250 K. Since, in addition, the neutron scattering cross section of ortho-hydrogen is nearly two orders of magnitude larger than that of para-H2 in the (Q, ω) domain of our study, in the data analysis we could assume that, within error, all the signal comes from scattering on ortho-hydrogen. Results and Discussion The quantity measured in neutron spectroscopy is the dynamic structure factor S(Q, ω), which is the Fourier transform of the Van Hove space-time correlation function, weighted by the scattering strength of the various atomic nuclei (here ω stands for the neutron energy transfer and Q = k_f − k_i for the neutron momentum transfer) 26. Due to the very large incoherent scattering cross section of hydrogen, the dynamic structure factor is dominated by the signal related to the self-correlation function of molecular hydrogen. For the analysis, the contributions of the various types of motion of the H2 molecules have to be taken into account, which can be written as a convolution 9: S(Q, ω) = S_vibrations(Q, ω) ⊗ S_rotations(Q, ω) ⊗ S_diffusion(Q, ω). In the temperature and energy range studied, vibrations contribute to the spectra through a Debye-Waller factor f_DW, which in the case of an isotropic mean square displacement of the center of mass ⟨u²⟩ is equal to f_DW(Q) = exp(−Q²⟨u²⟩/3). In the temperature and momentum range of our experiment this factor was consistently found to remain close to 1. The inelastic rotational contribution consists of the spectrum of transitions between the quantized rotational states J = 0, 1, …, the first of which above the J = 0 ground state occurs at E = 14.7 meV, i.e. outside the energy range studied in this experiment. The remaining elastic scattering component in S_rotations(Q, ω) for the ground state of molecular hydrogen can be described by the standard molecular form-factor expression given in refs 27,28. In the data analysis, S_diffusion(Q, ω) was determined by normalizing the experimental data to this rotational contribution, assuming the equilibrium H-H distance. The dimensionless quasielastic neutron scattering structure factor for the diffusive motion we are thus concerned with is generally described as 9: S_diffusion(Q, ω) = A_0(Q) δ(ω) + Σ_k A_k(Q) L_k(ω), where A_0(Q) stands for the so-called elastic incoherent structure factor (EISF) and reflects the geometrical parameters of the corresponding diffusive motion, the A_k(Q) are the amplitudes of the quasielastic contributions of the different diffusional modes, and the L_k(ω) are Lorentzians whose widths are set by the characteristic times of these modes. Neutron scattering spectra of the bulk and confined hydrogen measured at the time-of-flight spectrometer NEAT 29 are presented in Fig. 2 and show clearly that by varying the confinement dimension we can decrease or even increase the hydrogen mobility compared to that in bulk hydrogen at the same temperature. The melting and boiling temperatures of bulk hydrogen at ambient pressure are 13.99 K and 20.27 K, respectively 30.
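To make the statement about the Debye-Waller factor concrete, the short sketch below evaluates the isotropic expression exp(−Q²⟨u²⟩/3) over a momentum-transfer range of roughly 0.3-2.4 Å⁻¹ (about what 5.1 Å incident neutrons reach) for representative mean-square displacements of 0.04-0.10 Å². The Q range and the interpretation of the displacement values as Å² are assumptions made for illustration, not numbers taken from the paper's analysis code.

```python
# Hedged sketch: isotropic Debye-Waller factor f_DW(Q) = exp(-Q^2 <u^2> / 3).
# Q range and <u^2> values are illustrative assumptions (see lead-in), chosen to
# show that the factor indeed stays close to 1 for small displacements.
import numpy as np

Q = np.linspace(0.3, 2.4, 50)            # momentum transfer, 1/Angstrom (assumed range)
for u2 in (0.04, 0.10):                  # mean-square displacement, Angstrom^2 (assumed)
    f_dw = np.exp(-(Q ** 2) * u2 / 3.0)
    print(f"<u^2> = {u2:4.2f} A^2 -> f_DW between {f_dw.min():.2f} and {f_dw.max():.2f}")
# Output: the factor remains above ~0.8 even at the largest Q, i.e. close to 1.
```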
In the frozen state the spectra of bulk hydrogen at T = 10 K show the elastic line only, centered around ω = 0 meV (Fig. 2a). Upon melting of the hydrogen, the elastic line is transformed into a broad Lorentzian due to the more diffusive motion of the molecules. At T > 20.27 K hydrogen is a gas, which leads to faster dynamics and a strong decrease of the signal within our energy/time window. H2 confined in the small cages exhibits, up to 50 K, no signal in addition to the intense elastic line (Fig. 2b). This indicates strongly restricted dynamic activity of H2 in the small cages, where H2 is practically "frozen" on the timescale of this study. Likewise, the Q-dependence of the elastic intensities of hydrogen in the small cages shows no contribution of diffusive motion and can be described by a mean square displacement of the center of mass due to rotational and vibrational contributions. The observed values of the mean square displacement range from 0.04 ± 0.07 Å at 10 K to 0.1 ± 0.07 Å at 50 K, which confirms the strong localization of hydrogen in the small cage. At higher temperatures the hydrogen molecules become more mobile and start to explore a small volume in the center of the cage, with radii ranging from 0.5 Å at 90 K to 0.9 Å at 200 K 31. Increasing the confinement size from 3.95 Å to 4.75 Å in average crystallographic radius leads to a strong increase of the hydrogen mobility in the large cage already at 10 K, compared to both the bulk solid and hydrogen confined in the small cage at the same temperature. As a result, in addition to the elastic line at ω = 0 we observe a strong quasielastic signal (QENS) at low frequencies already at T = 10 K (Fig. 2c). For the analysis of the data we have applied several models, including diffusion on the surface of and within the volume of a sphere, and jumps between two or more positions in a cage 9. Our results revealed that the observed signal corresponds best to jumps between different equilibrium sites located at the corners of a tetrahedron inside the large cage (Fig. 1): the hydrogen molecule rests on a site for the residence time τ_s and jumps toward another site placed at a distance l during a time interval much shorter than the residence time 32. Considering the occupation of the large cages by the experimentally observed average number of two molecules, the quasielastic signal for this motion can then be described 32 by a combination of two Lorentzians with widths proportional to 1/τ_s, on top of a constant (ω-independent) baseline B(Q). The geometrical arrangement of the equilibrium sites determines the specific form of the momentum-transfer (Q) dependence of the intensity of the elastic line, i.e. the EISF A_0(Q) (Fig. 3), which in our tetrahedral case can be described as A_0(Q) = (1/4)[1 + 3 j_0(Ql)]. Here j_0 is the spherical Bessel function, which displays a minimum at Ql = 4.5. The full quantitative analysis of the data in Fig. 3 also reveals evidence for the presence of a Q-independent elastic signal, i.e. a fraction K_imm of the H2 molecules that do not participate in the observed diffusive motion. Thus, in our model the amplitudes of the elastic and quasielastic terms are rescaled by the mobile fraction K_mob, and the elastic term acquires an additional Q-independent contribution K_imm. Using the set of five variable parameters at each temperature, i.e. the fractions of mobile and immobile particles K_mob and K_imm, respectively, the jump length l, the residence time τ_s and the spectral baseline B(Q), we were able to fit all spectra well in the (Q, ω) range covered experimentally, as represented by the dashed lines in Figs 2c and 3.
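The elastic fraction predicted by this four-site jump model is easy to explore numerically. The sketch below is an illustration under stated assumptions, not the fitting code used for Fig. 3: it implements the tetrahedral EISF given above together with the immobile fraction, using a jump length of about 2.9 Å (the value reported in the following paragraph) and example mobile/immobile fractions, and confirms that the minimum falls near Ql ≈ 4.5.

```python
# Illustrative sketch of the four-site (tetrahedral) jump EISF and the observed
# elastic fraction when only a share K_mob of the molecules takes part in the jumps.
# Jump length, Q range and the example fractions are assumptions for illustration.
import numpy as np
from scipy.special import spherical_jn

def eisf_tetrahedron(q: np.ndarray, l: float) -> np.ndarray:
    """EISF for jumps among four equidistant sites a distance l apart."""
    return 0.25 * (1.0 + 3.0 * spherical_jn(0, q * l))

def observed_elastic_fraction(q, l, k_mob, k_imm):
    """Measured elastic fraction including molecules that do not move on this timescale."""
    return k_imm + k_mob * eisf_tetrahedron(q, l)

q = np.linspace(0.2, 2.4, 400)         # 1/Angstrom, roughly the accessible range
l = 2.93                               # Angstrom, illustrative jump length
q_min = q[np.argmin(eisf_tetrahedron(q, l))]
print(q_min * l)                       # ~4.49, i.e. the minimum near Ql = 4.5
print(observed_elastic_fraction(q_min, l, k_mob=0.6, k_imm=0.4))  # example fractions
```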
In contrast, the application of other models did not lead to satisfying results. Particularly, the model for the jumps between two sites (dumb-bell) reproduced well the Q dependence of the EISF (Fig. 3), but was not able to provide consistent results for the temperature dependence of the spectral lineshape in the range studied (Fig. 2c). The determined parameters τ s and l for all temperatures are summarized in the Table 1. It is significant, that the jump length deduced from our data in Fig. 3 are in remarkable agreement within error (given in Table 1 as standard deviation) with the distances of 2.93 Å between the 4 tetrahedrally arranged equilibrium sites for H 2 (actually D 2 ) molecules found by the crystallographic study 5 . This study has also found that the H 2 molecules in the large cages can be both in localized or delocalized states and the fractions in each state is a function of temperature and pressure in equilibrium. This lends strong principal support for our observation of mobile and immobile fractions of H 2 atoms, even if the pressure, temperature and loading parameter domain explored there does not overlap with ours in this work. The distance between the equilibrium sites (i.e. the jump length l) is on the other hand significantly shorter than the H 2 -H 2 distance of 3.776 Å in solid hcp hydrogen crystal at ambient pressure 33 , indicating that this confinement leads to more compressed but also more mobile state than solid hydrogen. The obtained values of the residence time τ s fall in the range of those reported previously for hydrogen adsorbed on the carbon nanohorns 27 , however show much weaker temperature dependence. The observed difference in mobility between small and large cages can be understood as caused by the modulation of cage potentials as a function of the cage dimension. The localization of hydrogen in small cages indicates the existence of molecular traps of potential minima in the center of the cage, matching the molecule size. The trapping can explain the reduction of the sorption pressure which is required for the loading of hydrogen molecules inside the small cage. In the same time it helps to contain the hydrogen inside the cage and enhances in this way gas release temperature in binary clathrates. The increase of the cage dimension leads to a flatter potential that, in contrast, promotes intra-cage mobility. Indeed, the weak temperature dependence of residence time τ s indicates low values of activation energy required for molecules to move between four equilibrium positions. Furthermore, our findings are supported by previous theoretical calculations which revealed deep minima in the potential energy surfaces of small cages with a width of about 1.5-2 Å 25,34 and a flattening of potential in the large cages 34 with observed off-center minima at about the distance of 3 Å from each other. The glass type mobile-immobile dynamic heterogeneity 35 observed in a well-ordered crystalline matter and present at all temperatures is a clear signature of substantial randomness in the flatter potential landscape in the large cages. This randomness can be caused by spread in hydrogen bond lengths and angles in the host structure, as reported recently 36 . In addition, the random occupation of the four H 2 equilibrium sites in the large cages filled on average by two hydrogen molecules can lead as well to potential fluctuations and higher disorder. 
Random inhomogeneities in the potential landscape are widespread in porous systems; therefore the existence of dynamic heterogeneity is expected to be a common feature, which has to be considered when conceiving new materials. The representation of such dynamic inhomogeneity by two extremes, a mobile fraction with a given residence time and an immobile fraction with at least an order of magnitude longer residence time (which would remain hidden within experimental resolution) can just be the simplest quantitative interpretation. A dynamic heterogeneity in the form of broad distribution of residence times caused by the random variations in the structure would lead to very similar spectra, in particular in view of the temperature dependent flat background B(Q), which could contain contributions from long spectral tails resulting of a broad distributions of Lorentzian line widths in the spectrum. In summary, we found direct evidence of large difference in the microscopic dynamic behavior of molecular hydrogen confined inside cages of different dimensions in the nanoporous clathrate hydrates. In the small cages of clathrate hydrates with average crystallographic radius of 3.95 Å we observe a structural arrest of confined hydrogen that can play a substantial role in determining the functional properties such as reducing of the sorption pressure of hydrogen and enhancing of the gas release temperature. The moderate increase of the crystallographic confinement radius to 4.75 Å for the large cages leads, in contrast, to a formation of novel type of hydrogen state with a shorter H 2 -H 2 distance but at the same time substantially higher mobility at T = 10 K then the bulk hcp hydrogen at the same (ambient) pressure and the same temperature. Crystallographic evidence shows 5 that H 2 molecules in the large cages start to delocalize within the cages with increasing temperature at 70 K and to escape at ambient pressure from the high pressure loaded clathrate hydrate at 100 K. Our observation of significant diffusive mobility in the large cages at 10-50 K temperatures suggest that the macroscopic diffusion should rather be governed by the activation of the molecular transitions between large cages at higher temperatures. The H 2 molecules confined in the small cages, on the other hand, are trapped in the same temperature range at the center of the cages and might not be available for long range diffusion, independently of the activation of inter-cage jumps. This is an example for the confined mobility in the cages to influence the diffusion between the cages, which is responsible for the macroscopic material transport. Furthermore, in the large cage we observe strong glass-like dynamic heterogeneity which can be explained by a significant disorder of the potential landscape in crystalline clathrate network. Our direct observation of motion of H 2 molecules inside the cages of clathrate hydrates give direct evidence and a space-time characterization of the dynamic activity that constitutes or contributes to what phenomenologically is often referred to as "rattling". Our study reveals large quantitative and qualitative impact of the dimension and finer details of the confinement structure on functionally relevant dynamic behavior of the stored molecules. Similar effects can play a role in recently observed strong decrease of the ionic current and the non-linear dependence between the current and applied potential for smaller pores in ionic systems confined in porous carbon 37,38 . 
Confined ions can become localized in the potential wells as the pore diameter becomes comparable to the size of the solvated ion thus higher energy penalty is needed to extract ions from the pore. Methods Sample preparation and neutron spectroscopy experiments. The clathrates samples were prepared using fine, 99.8% deuterated ice powder. The preparation was done following the procedures described before 31,39 . For the synthesis of the binary tetrahydrofuran clathrates we added deuterated tetrahydrofuran (99.5%) to deuterated water in stoichiometric proportion (17:1 mol). The solution has been stirred in a thermal bath at T = 275 K for 48 hours until crystallization occurs. Prepared ice samples were ground to fine powder and loaded into a precooled cylindrical cell under cold nitrogen atmosphere and pressurized by hydrogen gas for 24 hours at temperature cycled in 270-277 K range. For the synthesis of pure hydrogen clathrates we have applied the pressure of 2000 bars, while for loading of TDF clathrates we used 200 bars. Afterwards, the samples have been cooled down slowly to 20 K by keeping the H 2 pressure at 200 bars for another day. Using the protection of cold nitrogen atmosphere at 1 bar pressure, the prepared samples have been loaded into aluminum flat cells, sealed and placed into the cold cryostat at T = 30-50K for neutron scattering investigations. The weight of the samples has been monitored before and after the experiment. By using deuterated hydrogen containing materials in the samples with the exception of the fully protonated H 2 gas filling we have achieved that the incoherent neutron scattering cross section was to > 80% dominated by the signal from the loaded H 2 gas. In addition, the spectra measured on the H 2 unloaded tetrahydrofuran clathrate were used as background for correcting the data for the signal from the D 2 O matrix and tetrahydrofuran. The neutron scattering experiments were done at the time-of-flight spectrometer NEAT at Helmholtz Zentrum Berlin 29 using two experimental configurations. The first one with incoming neutron wavelength λ I = 2 Å has been used to probe diffraction patterns, which monitored the formation of the clathrates. The second configuration with λ I = 5.1 Å and instrumental elastic resolution ranging between 90 and 110 μ eV has been used for investigation of the dynamics in the low energy range (− 2 to 5 meV neutron energy transfer) corresponding to picosecond time scale. The spectra, collected in the temperature range from 10-50 K, were corrected and evaluated using standard data treatment routines. In addition, for the data analysis we excluded detector areas where we observed Bragg reflections from the clathrate framework.
THE EFFECT OF USE VIDEO IN THE MODEL OF GUIDED INQUIRY LEARNING ON TRAVELING WAVES AND SOUND WAVES MATERIALS IN CLASS XI SMA NEGERI 7 SIJUNJUNG The many technological discoveries of the 21st century require the world of education to adapt quickly. The competence standards of the 2013 curriculum call for reinforcing a scientific approach in learning. In practice, however, the learning models demanded by the 2013 curriculum and 21st-century technology-based learning media are not yet fully implemented. The researchers therefore studied the use of video as a learning medium in guided inquiry learning to examine its impact on students. The type of research is a quasi-experiment with a randomized control group only design. The population of this research is all students of SMA Negeri 7 Sijunjung registered in 2020/2021. The samples are class XI MIPA 1 as the experimental class and XI MIPA 2 as the control class. The research instrument consists of a posttest and performance assessment sheets. The data were analyzed using descriptive analysis and a test of two-mean similarity at the 0.05 significance level for the students' knowledge and skill competences. The study concludes that the use of video in guided inquiry learning has a significant effect on traveling wave and sound wave materials in class XI of SMA Negeri 7 Sijunjung. This is supported by the data analysis of each competence, where the two-mean test gave th > ttable: 4.824 > 2.002 for knowledge competence and 4.098 > 2.002 for skill competence, both falling in the H0 rejection region, which means that the knowledge and skill competence results of the students differ between classes. I. INTRODUCTION Education is a necessity for managing, shaping, and improving human quality. Education today is characterized as 21st-century education, a century in which information is widespread and continually growing. (1) The graduate standards of the 2013 curriculum set learning targets that include the reinforcement and development of attitude, knowledge, and skill competences in every educational unit. Each competence is assessed differently. Attitude competence is evaluated through activities of appreciating and implementing. (2) Knowledge competence is assessed through the activities of remembering, understanding, applying, analyzing, evaluating, and creating. Skill competence is assessed through the activities of observing, questioning, experimenting, and presenting. According to the competence standards of the 2013 curriculum, it is necessary to strengthen the scientific approach applied in learning. The scientific approach encourages students to better understand, question, experiment, and present. (3) To achieve learning through a scientific approach, contributing factors in the learning process include the learning model and the learning media. The 2013 curriculum focuses on the use of three learning models based on Regulation No. 103; one of the recommended models is guided inquiry learning. Guided inquiry learning is a learning model that draws on a student's ability to search and investigate systematically, logically, and critically, so that the student can personally formulate his or her findings. Guided inquiry learning requires students to be more active and to participate in the learning process, since the inquiry is accompanied by a learning model aimed at having students find and solve problems.
(4) According to Brydon Lamb's concept, humans learn 83% through sight, 11% through hearing, 3.5% through smell, 1.5% through touch, and 1% through taste. (5) This suggests that learners understand a lesson better through sight. Based on this fact, the researchers were interested in using video as a learning medium to support the application of guided inquiry learning. Video presents material in a more engaging way than books, pictures, and audio media. This can be seen from the effectiveness of video use in terms of time, the speed with which messages are delivered, and the appeal of video. Video used in the teaching process has the benefit of showing objects that normally cannot be observed directly. Video can capture a process accurately and can be viewed over and over again. (6) Physics is a subject devoted to understanding concepts; it focuses on the process of building knowledge through discovery and presentation in a systematic way governed by specific rules. (7) Learning physics follows the nature of physics itself: students need to understand facts, theories, principles, laws, and procedures, and be able to apply them in their daily lives. (8) In fact, the technology-based learning and learning media that support the learning process in the field do not all meet the expectations of the 2013 curriculum. Many physics teachers continue with the same learning process despite the repeated changes of the curriculum. SMA Negeri 7 Sijunjung is one of the schools implementing the 2013 curriculum, but the curriculum's demands are not fully carried out. Scientific learning is not yet evident: the learning process still relies on lecture and discussion, and most students act only as listeners. The lecture method tends to lead students to memorize information without a thinking process; it simply transfers the teacher's ideas to the students without the students developing scientific comprehension. (9) In addition, the learning medium used is still the textbook, whereas the 2013 curriculum, especially for physics subjects, requires models and learning media that enable students to be active, innovative, creative, and independent through a scientific approach. According to the results of the midterm exam of the 2020/2021 academic year, only 22% of grade XI students reached the passing standard. Based on this situation, the researchers were interested in conducting research using video in guided inquiry learning to examine student learning results. II. METHOD This research investigates the effect of using videos in guided inquiry learning on traveling wave and sound wave materials in grade XI of SMA Negeri 7 Sijunjung. The type of research used was a quasi-experiment with a randomized control group only design. The research design is shown in Table 1. The research population is all students of SMA Negeri 7 Sijunjung enrolled in the school year. The samples of this research are XI MIPA 1 as the experimental class and XI MIPA 2 as the control class. The steps taken to determine the sample in this research were: 1) collecting the physics midterm data of the two MIPA classes; 2) analyzing the midterm scores by calculating the mean and the standard deviation; 3) performing the normality test, homogeneity test, and class mean similarity test; 4) assigning the control and experimental classes. Sugiono states that anything determined by the researcher to be studied, so that information about it is obtained, is a variable.
(10) The independent variable in this research is the use of videos in guided inquiry learning, and the dependent variables are the students' learning results in knowledge and skill competence. The control variables in this research are the teacher, who teaches both classes, the number of lesson hours used, the lesson material, and the number of test items. The research instrument consists of a posttest and performance assessment sheets. The data were analyzed using descriptive analysis and a test of two-mean similarity at the 0.05 significance level for the students' knowledge and skill competences. III. RESULTS AND DISCUSSION A. Description of data Knowledge Competence The assessment of the students' learning results in knowledge competence was obtained from a final test (posttest) consisting of 40 multiple-choice items covering the material. The 40 items used were selected from an initial pool of 50 posttest items. The statistical parameters can be seen in Table 2. Based on Table 2, the average score of students in the experimental class is higher than in the control class: 81.08 for the experimental class and 73.92 for the control class. The standard deviation of the experimental class was smaller than that of the control class, and its variance was also smaller, which means that the knowledge competence of the control class is more widely spread than that of the experimental class. Skill Competence The assessment of the students' learning results in skill competence was carried out during the learning activities, that is, during the experiments and student activities in the learning process. The performance assessment used the Melde experiment for traveling waves and an organ pipe experiment with a PhET simulation for sound waves. The statistical parameters can be seen in Table 3. Based on Table 3, the average score of students in the experimental class is higher than in the control class: 73.5 for the experimental class and 72 for the control class. The standard deviation of the experimental class was smaller than that of the control class, and its variance was also smaller, which means that the skill competence of the control class is more widely spread than that of the experimental class. a. Knowledge Competence The data analysis of the students' knowledge competence was carried out using a normality test, a homogeneity test, and a test of two-mean similarity. The test results are described as follows: 1) Normality test. The normality test was used to check whether the samples of the two classes were normally distributed. This research used the Lilliefors test. The test gave Lo and Lt at significance level (α) 0.05 for n = 30 and n = 29 in the experimental and control classes. Based on Table 4, Lo is 0.159 for the experimental class and 0.109 for the control class, with 30 student participants in the experimental class and 29 in the control class. Lt is 0.161 for the experimental class and 0.165 for the control class. Lo is smaller than Lt, which means the data of both the experimental and control classes are normally distributed. 2) Homogeneity test. The homogeneity test was conducted to find out whether the class samples came from a homogeneous population or not.
The homogeneity test was done by comparing Fh with Ftable at degrees of freedom 29 and 28 for the experimental and control classes. 3) Test of two-mean similarity. Based on the normality test and the homogeneity test, it was concluded that the posttest data of the experimental and control classes on traveling wave and sound wave materials are normally distributed and have homogeneous variance. The researchers therefore conducted a hypothesis test using the t-test. Based on Table 6, Th = 2.824 while Tt = 2.002. The criterion is to accept H0 if -Ttable < Th < Ttable; since 2.824 lies outside the interval -2.002 to 2.002, H0 was rejected and Ha was accepted, which means that there was a difference in the knowledge competence of the two samples because of the treatment given, namely the use of video in the guided inquiry learning model on traveling wave and sound wave materials in class XI of SMA Negeri 7 Sijunjung. b. Skill Competence The data analysis of the students' skill competence was carried out using a normality test, a homogeneity test, and a test of two-mean similarity. These data come from the direct assessment of the students made by two observers, namely the researcher and a physics teacher at SMA Negeri 7 Sijunjung, using the skill assessment rubric on the learners' activity sheet during the learning process. The test results are described as follows: 1) Normality test. The normality test was used to check whether the samples of the two classes were normally distributed. This research used the Lilliefors test. The test gave Lo and Lt at significance level (α) 0.05 for n = 30 and n = 29 in the experimental and control classes. Based on Table 7, Lo is 0.142 for the experimental class and 0.124 for the control class, with 30 student participants in the experimental class and 29 in the control class. Lt is 0.161 for the experimental class and 0.165 for the control class. Lo is smaller than Lt, which means the data of both the experimental and control classes are normally distributed. 2) Homogeneity test. The homogeneity test was conducted to find out whether the class samples came from a homogeneous population or not. The homogeneity test was done by comparing Fh with Ftable at degrees of freedom 29 and 28 for the experimental and control classes. 3) Test of two-mean similarity. Based on the normality and homogeneity tests performed on the students' skill assessment results, it was found that the data of both samples are normally distributed and have homogeneous variance for each of the test items. The researchers then conducted the hypothesis test of the research using the t-test. The test results are presented in Table 9. Based on Table 9, the skill competence results give Th = 3.464 while Tt = 2.002. The criterion is to accept H0 if -Ttable < Th < Ttable; since 3.464 lies outside the interval -2.002 to 2.002, H0 was rejected and Ha was accepted, which means that there was a difference in the skill competence of the two samples because of the treatment given, namely the use of video in the guided inquiry learning model on traveling wave and sound wave materials in class XI of SMA Negeri 7 Sijunjung. DISCUSSION The research was conducted with 8 meetings for each basic competence (KD), KD 3.7 for traveling waves and KD 3.8 for sound waves. The experimental class used video media in the guided inquiry learning model, while the control class used image media in the guided inquiry learning model.
The video used in the research was downloaded from YouTube and then edited and combined to fit the traveling wave and sound wave materials according to the 2013 curriculum standard. For the control class, image media adjusted to the traveling wave and sound wave materials were used to support guided inquiry learning. To further support the research, the researchers prepared an LKPD (learner's activity sheet) to guide the inquiry assessment. Knowledge Competence From the results of the data analysis, there is a difference in student learning in the knowledge aspect between students who use video media in the guided inquiry learning model and students who do not. According to a study conducted by Yolanda, learning activities using guided inquiry require learning media; the learning media make it easier for teachers to convey the learning message and easier for students to understand the material. (11) One type of learning media that is effective when combined with the guided inquiry model is video media. Based on the posttest of both the experimental and control classes, the class that used the learning video in the guided inquiry learning model scored higher than the control class that did not. Skill Competence The results of the data analysis show that video can be used as an alternative to support experimental activities that would otherwise face inhibiting factors. One of the benefits of using video as a learning medium for skill competence is that it can vividly depict physical processes. (12) For example, in KD 3.7 on traveling waves, the video media illustrate how waves look and move, so that learners can readily understand the process and shape of traveling waves and can determine their characteristics. For KD 3.8 on sound waves, the video media help demonstrate processes and types of sound waves that cannot be explained using pictures or textbooks. From the results of the data analysis, there is a difference in student learning in the skill aspect between those who use the video in the guided inquiry learning model and those who do not. This can be seen in the learning process, where students were more active. IV. CONCLUSION The use of video in the guided inquiry learning model had a significant impact on the increased competence of class XI students of SMA Negeri 7 Sijunjung. The knowledge competence average in the experimental class is 81.08 and in the control class is 73.92. The skill competence average in the experimental class is 84.70 and in the control class is 79.98. For the test of two-mean similarity in knowledge competence between the experimental and control classes, Th > Ttable (4.824 > 2.002), meaning there was a difference in learning results between the experimental and control classes. For the test of two-mean similarity in skill competence, Th > Ttable (4.098 > 2.002), meaning there was a difference in learning results between the experimental and control classes. Based on this, the use of video in the guided inquiry learning model on traveling waves and sound waves has a significant impact on improved student learning results in knowledge and skill competence.
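As an illustration of the two-mean similarity (independent t) test and the decision rule reported above, the following sketch can be used; the score arrays are hypothetical stand-ins, not the study's data, and only the group sizes (30 and 29) and the 0.05 significance level match the paper.

```python
# Illustrative sketch of the independent two-sample t-test decision rule
# (Th compared against Ttable). The posttest scores below are simulated,
# not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experiment = rng.normal(81.08, 8.0, size=30)   # hypothetical posttest scores
control = rng.normal(73.92, 9.0, size=29)

t_stat, p_value = stats.ttest_ind(experiment, control, equal_var=True)
df = len(experiment) + len(control) - 2                     # 57
t_table = stats.t.ppf(1 - 0.05 / 2, df)                     # ~2.002 (two-tailed)

print(f"th = {t_stat:.3f}, ttable = {t_table:.3f}, p = {p_value:.4f}")
if abs(t_stat) > t_table:
    print("H0 rejected: the two class means differ significantly.")
else:
    print("H0 accepted: no significant difference between the class means.")
```

The critical value t(0.975, df = 57) of about 2.002 reproduces the Ttable used in the comparisons above.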
A systematic review of symptom assessment scales in children with cancer Background The objective was to describe symptom assessment scales that have been used in children with cancer. Methods We conducted electronic searches of OVID Medline and EMBASE in order to identify all symptom assessment scales that have been used in pediatric cancer. Two reviewers abstracted information from each identified study. Data collected included study demographics and information related to the instrument and children enrolled. We also collected information about the purpose of instrument administration and whether treatment was altered as a result of this information. Results Fourteen studies were identified which evaluated eight different symptom assessment scales. Eight studies used child self-report and all studies included children on active treatment for cancer although 4 studies also included children following completion of treatment. The most common purpose of instrument administration was to measure the prevalence of symptom burden (n = 8). None of the 14 studies used the scale to screen for symptoms and none changed patient management on the basis of identified symptoms. Conclusions We failed to identify any symptom assessment scales that were used as a symptom screening tool. There is a need to develop such a tool for use in children with cancer. Background Cure rates for pediatric cancer are approaching 80% but the costs of this progress include a high prevalence of symptoms during treatment [1][2][3] and a high rate of chronic health conditions following completion of treatment [4]. It is important to identify and control symptoms in order to maximize quality of life (QoL) and reduce morbidity. Furthermore, there is some evidence that reduction in symptoms may improve future psychosocial functioning [5]. Within the adult oncology setting, screening of symptoms through patient self-report has been identified as an important priority [6][7][8][9]. Consequently, much effort has been focused on symptom screening and control. In particular, efforts by Cancer Care Ontario have culminated in the wide-spread use of a symptom screening tool based upon the Edmonton Symptom Assessment Scale (ESAS) [10]. The ESAS is a validated symptom screening tool which asks adult patients to rate the severity of nine common symptoms including pain, anxiety and nausea. In a satisfaction survey conducted in 2010 among 2,921 patients, 87% of respondents thought that the ESAS was an important tool for letting healthcare providers know how they feel [11]. However, no initiative to identify a common symptom screening tool has been undertaken in pediatric oncology. It is important to distinguish between QoL instruments and symptom assessment scales as these are closely intertwined but distinct. QoL is a multidimensional construct grounded in the World Health Organization's definition of health in which health is not merely the absence of disease, but rather, a state of complete physical, mental and social well-being [12]. Many QoL instruments include symptom assessment although their purpose is to measure the construct of QoL rather than the symptom specifically. In contrast, the purpose of symptom assessment scales is to identify and measure symptom burden. In order to identify an optimal symptom screening tool that may be used in children receiving cancer treatment, it would first be important to describe all symptom assessment scales that have been used in this population. 
This process would allow one to determine if any of these scales may be used as a symptom screening tool or if one could be adapted for this purpose. Consequently, the objective was to describe symptom assessment scales that have been used in children receiving cancer treatment. Data sources and searches We used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline for reporting observational studies [13] to develop a protocol for this systematic review. We conducted electronic searches of OVID Medline (1948 to December 19, 2011) and EMBASE (1980 to December 19, 2011). Appendix 1 illustrates the search strategy. Study selection We included studies that used a symptom assessment scale to measure multiple symptoms. Exclusion criteria were: (1) Not published as a full article (conference proceedings excluded); (2) Pediatric data not available; (3) Population not cancer; (4) Symptoms retrospectively reported for a period that did not include current symptoms (i.e. studies which used a recall period such as 1 week and 1 month were included while studies that only evaluated symptoms that occurred in the past and did not evaluate recent or current symptoms were excluded); (5) Purpose of the study was only to evaluate a translated version; (6) Not a study; (7) Duplicate publication; (8) Symptom assessment scale not appropriate because: a) only included psychological symptoms; b) included items that are not symptoms; or c) only measured a single symptom or (9) Not in English. One reviewer (LS) evaluated the titles and abstracts identified by the search strategy and any potentially relevant publication was retrieved in full. Two independent reviewers (MCE and LS) assessed for eligibility. Final inclusion into the review was by agreement of both reviewers. Agreement between reviewers was evaluated using the kappa statistic. Strength of agreement as evaluated by the kappa statistic was defined as slight (0.00-0.20), fair (0.21-0.40), moderate (0.41-0.60), substantial (0.61-0.80) or almost perfect (0.81-1.00) [14]. Data extraction, quality assessment and analytic approach Two reviewers (MCE and LS) extracted data from included trials using a standardized data collection form. Data collected included trial demographics (year of publication, country in which study was conducted, language in which the instrument was administered), name of the instrument, information related to instrument administration (how administered, proxy or self-report, number of times administered), information about the number and characteristics of children enrolled (age, on/off active treatment) and the five most common symptoms identified. We also collected information about the purpose of instrument administration, whether treatment was altered as a result of this information and whether there were difficulties with administration for studies in which child self-report was used. We defined screening for this review as whether the study specifically reported abnormal results to clinicians or altered treatment because of identified symptoms. We then described the details of the identified scales. Study quality was assessed using a modified version of an instrument previously developed to describe quality in studies of prognosis [15]. This instrument examines four potential sources of bias: study participation, study attrition, confounding variables and measurement of outcomes. Given that study attrition is less relevant in this setting, we excluded this item. 
Each element was rated as having low, medium or high risk of bias for each study. The analytic approach was purely descriptive and the data were not synthesized. Figure 1 illustrates the flow diagram of trial identification and selection. A total of 686 titles and abstracts were reviewed, and 34 full articles were retrieved. Of these, 14 satisfied pre-defined inclusion criteria. Reasons for excluding 20 articles are provided in Figure 1. The reviewers had perfect agreement on articles for inclusion (kappa = 1.00). The number of studies that illustrated low risk of bias was as follows: study participation (n = 6), confounding variables (n = 4) and measurement of outcomes (n = 7). Table 1 illustrates the characteristics of the included studies. Of the 14 studies, [3,[16][17][18][19][20][21][22][23][24][25][26][27][28] 6 were conducted in the United States [18][19][20]23,25,26] and 3 were conducted in the United Kingdom [16,24,28]. Twelve were conducted in English [3,[16][17][18][19][20][23][24][25][26][27][28] and 2 were conducted in other languages in addition to English (Spanish [18] and Swedish [3]). Instruments were administered in person only (n = 10), by telephone only (n = 1) or in multiple formats (n = 3). None of the 14 studies used the symptom assessment scale to screen for symptoms and no study changed patient management on the basis of symptoms identified. The most common purpose of instrument administration was to describe the degree of symptom burden in their population (n = 8). Results Of the 11 studies that described the most common symptoms, the most frequently cited symptoms appearing on the 5 most common lists were: fatigue (n = 9), nausea (n = 7), pain (n = 5), drowsiness (n = 4), and anorexia (n = 3). There were 6 studies that described the mean number of symptoms per patient in their cohort; this number ranged from 1.9 to 12.7. Three of the studies which used child self-report noted that children sometimes needed assistance or clarification of questions [19,24,26]. Table 2 illustrates the details of the eight identified instruments including the number of items, description of items for scales that included < 15 items, dimensions and scale types. Discussion We identified 14 studies that used eight different symptom assessment scales to measure symptoms in children with cancer. The most common use of these scales was to describe the prevalence of symptom burden. None were used as a symptom screening tool and none were used to influence patient management. Consequently, there is an absence of symptom screening tools which have been used in children with cancer. Measuring symptom severity in children is critical. Children undergoing cancer treatment suffer and may only seek help when symptoms become severe [16,30]. In one study in which children 13-18 years of age completed an electronic version of a symptom questionnaire, participants noted that self-reporting symptoms was reassuring, made them feel more in control, helped them to remember their symptoms and allowed them to see how symptoms changed over time [16]. Identifying a feasible and clinically useful symptom screening tool is important. Symptom screening instruments could be used by patients in routine clinical practice in order to identify problems and focus the families' and healthcare providers' attentions on symptom control. These instruments may also be used to determine symptom prevalence and thereby inform the prioritization of clinical patient services and/or research resources. 
In considering an ideal screening instrument, the scope of symptoms should include the most important symptoms to the patient. The instrument should take into account the perspective of the patient's family regarding symptom impact, be applicable to children of all ages and have adequate psychometric properties such as reliability and validity. Both parent-proxy versions and child self-report versions would be important to address the needs of children of different ages and cognitive abilities. In order to be feasible in clinical practice, a brief screening tool is likely to be more successful than lengthy assessment scales. Once a feasible and clinically useful screening tool is identified for pediatric cancer, a future step could be to identify, adapt or develop evidence-based guidelines for the management of each symptom included in the tool. Such a system could improve patient/family selfmanagement and improve the ability of healthcare professionals to standardize monitoring and care. Our study has important limitations. First, we only included studies published in the English language. The rationale for this decision is that our research plan is to first identify or adapt a symptom screening tool for use in English with later translation into other languages. Second, it is possible that there are symptom screening tools being used in practice that have not been evaluated in the peerreviewed literature. Another limitation of our study is the exclusion of scales which address psychosocial symptoms alone. A final limitation is that our review excluded single symptom scales. Although these scales are extremely important in clinical practice and research, they do not address our goal of identifying a scale which could be used as a symptom screening instrument or adapted for this purpose. A future goal will be to examine the eight symptom assessment scales identified in this review and determine if one of these could be used as a symptom screening tool or if one could be adapted for this purpose. Such a goal would likely be best accomplished using a consensus methodology among a multi-disciplinary group of experts in pediatric oncology supportive care. Conclusion In conclusion, we performed a systematic review of symptom assessment scales and identified eight instruments which have been used in children with cancer; none were used for the purpose of screening of symptoms or altered care. Identification or development of a symptom screening tool in pediatric oncology should be a priority.
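As a side note on the screening methodology, the inter-rater agreement statistic (kappa) reported in the Methods can be computed directly from the two reviewers' inclusion decisions. The sketch below uses hypothetical decisions, not the reviewers' actual data, together with scikit-learn's cohen_kappa_score and the interpretation bands quoted in the Methods.

```python
# Minimal sketch of inter-rater agreement (Cohen's kappa) for study inclusion
# decisions. The decision lists are hypothetical examples.
from sklearn.metrics import cohen_kappa_score

# 1 = include, 0 = exclude, one entry per screened full-text article.
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reviewer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)

# Interpretation bands as quoted in the Methods.
bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
         (0.80, "substantial"), (1.00, "almost perfect")]
label = next(name for upper, name in bands if kappa <= upper)
print(f"kappa = {kappa:.2f} ({label} agreement)")
```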
Association between blood lead levels and metabolic syndrome considering the effect of the thyroid-stimulating hormone based on the 2013 Korea National health and nutrition examination survey Imbalances in thyroid-stimulating hormone (TSH) levels are associated with metabolic syndrome (MetS), and the underlying mechanism is partly in alignment with that of lead exposure causing MetS. Many studies have reported the association between lead exposure and MetS, but no study has considered the possibility of TSH mediating lead's effect on MetS. Therefore, we aimed to examine the association between lead exposure and MetS considering TSH as a partial mediator. The data of 1,688 adults (age ≥19 years) from the Korea National Health and Nutrition Examination Survey in 2013 were analyzed. The prevalence of MetS in the Korean population was 21.9%, and the geometric mean of blood lead and serum TSH levels were 1.96 μg/dL and 2.17 μIU/mL, respectively. The associations between blood lead levels, serum TSH levels, and MetS were determined through a multiple logistic regression analysis. Blood lead levels were positively associated with high TSH levels (upper 25%) with an odds ratio (OR) and 95% confidence interval (CI) of 1.79 (1.24, 2.58) per doubled lead levels. The increase in blood lead and serum TSH levels both positively increased the odds of developing MetS. The OR of MetS per doubling of blood lead level was 1.53 (1.00, 2.35), and was not attenuated after adjusting for TSH levels. These findings suggest that higher levels of blood lead are positively associated with serum TSH levels and MetS. By exploring the role of TSH as a partial mediator between lead and MetS, we verified that lead exposure has an independent relationship with MetS, regardless of TSH levels. Introduction Thyroid hormones play a key role in maintaining most of the basic metabolic processes in our body. They are significantly involved in the functions of the nervous, reproductive, and cardiovascular systems in both children and adults [1]. One of the most important regulators of thyroid function is the thyroid-stimulating hormone (TSH), which controls the serum levels of thyroid hormones by a negative feedback system [2]. Through numerous studies, a significant link has been established between TSH and metabolic syndrome (MetS), which is a cluster of abdominal obesity, hypertriglyceridemia, low high-density lipoprotein cholesterol (HDL-C) levels, high blood pressure, and fasting glucose disorder [3][4][5]. The association between TSH and MetS is partially explained by the participation of thyroid hormones in lipid metabolism. The normal biosynthesis of cholesterol is disturbed by thyroid disorders, leading to changes in serum lipid concentration [6]. In addition, atherogenic lipid alterations following thyroid dysfunction can also affect the vasculature leading to elevated blood pressure [7]. There are several environmental risk factors that can affect the homeostasis of TSH levels and induce some clinical disorders [8]. Lead is also one of the hazardous elements considered as a thyroid-disrupting chemical. A study suggested that lead exposure can induce functional impairment of the pituitary-thyroid axis and provoke the alteration of TSH levels [9]. This was supported through several epidemiologic studies which demonstrated the effect of lead exposure on TSH levels. 
The results from those studies presented significant associations between environmental exposure to lead and altered levels of TSH, even at low, general environmental concentrations [9][10][11][12][13]. Some studies have argued that environmental exposure to lead is associated with the occurrence of MetS [14,15]. However, lead exposure can alter the levels of serum TSH, and an increase in TSH levels is significantly associated with the development of MetS. Moreover, lead and TSH share a common mechanism in the pathway causing MetS [16]. Exposure to lead disturbs systemic lipid metabolism and in turn induces adverse effects leading to MetS which overlaps with the effect of altered TSH levels contributing to the development of MetS [15,16]. Thus, the impact of lead on MetS could be an indirect effect, considering the hypothetical causal chain where exposure to lead affects the TSH levels, and altered TSH levels thereby triggers the development of MetS. In other words, the influence of lead on the development of MetS could be partially or completely mediated by TSH. To date, no study has considered these three factors simultaneously, possibly overlooking the effect of TSH on the association between lead and MetS. Consequently, if TSH acts as an intermediate factor, previous studies might have overestimated the effect of lead exposure on MetS. Therefore, we hypothesized that altered levels of TSHs increase the risk of MetS and mediate the effect of lead on MetS. Therefore, this study aimed to determine the associations between environmental lead exposure, TSHs, and MetS in a large representative sample of the general Korean population. We also aimed to verify our hypothesis by examining the role of TSHs as a partial mediator in the association between lead and MetS (Fig 1). Study population The Korea National Health and Nutrition Examination Survey (KNHANES) is an ongoing series of cross-sectional surveys, which has been conducted by the Korea Center for Disease Control and Prevention since 1998. It is designed to obtain information about the health and nutrition status of the non-institutionalized Korean citizens from a representative sample of the Korean population. The survey is comprised of three parts: health and behavior interview, health examination, and nutrition survey. Health interviews and health examinations are conducted at the mobile examination center, and nutrition surveys are performed through household interviews. Among the KNHANES data sets, the present study used the data from the 2013 KNHANES. The 2013 survey was the only year that included both blood lead and serum TSH measurements in the health examination section, in addition to the measurement or survey data required for the diagnosis of MetS. Therefore, we used the data from the 2013 KNHANES to examine the association between lead exposure and MetS considering the effect of TSH levels. A total of 8,018 participants were included in the 2013 survey. Of those participants, 2,355 had measurement data on blood metal and TSH levels. We excluded individuals with missing information related to the diagnosis of MetS (n = 38), those with missing information on any other covariates (n = 583), and those who had been or are currently undertreatment for thyroid disease or thyroid cancer (n = 46). As a result, 1,688 participants aged 19 years or older were included in the final analysis (Fig 2). 
Blood lead levels Heavy metal sampling was conducted in a subsample consisting of 2,400 individuals aged over 10 years, who were randomly selected from each survey unit according to sex and age. The blood lead levels were measured using graphite furnace atomic absorption spectrometry (model AAnalyst 600; PerkinElmer, Finland) at a central laboratory (NeoDin Medical Institute, Seoul, Korea). Internal quality assurance was achieved by ensuring quality control of all analytical equipment using the following standard reference materials: Whole Blood Metals Control (Bio-Rad, USA), Blood Metals Control (G-EQUAS, Germany), and pooled normal whole blood (self-manufactured). With regard to external quality control, the acceptable standards of precision and accuracy were met by passing the German External Quality Assessment Scheme, CDC Lead and Multielement Proficiency Program, and Korea Occupational Safety Thyroid-stimulating hormone levels The levels of serum TSH in a subsample of 2,400 individuals aged over 10 years were analyzed. Approximately 15 mL of blood was collected, and within 30 minutes the serum was separated and then transferred to the testing facility. Serum TSH levels were measured with an electrochemiluminescence immunoassay (Cobas8000 E-602; Roche, Germany) at the central laboratory (NeoDin Medical Institute, Seoul, Korea) within 24 hours after sampling. An E-TSH kit (Roche Diagnostics) was used for measuring the levels of TSH, and the reference range was 0.35-5.50 mIU/L. The results reported met specifications for accuracy, general chemistry, special immunology, and ligand established by the quality control and assurance program of the College of American Pathologists. Metabolic syndrome MetS was diagnosed according to the criteria of the revised National Cholesterol Education Programme Adult Treatment Panel Ⅲ (NCEP ATP Ⅲ) proposed in 2005 by the American Heart Association and National Heart, Lung and Blood Institute, with a modified standard for defining abdominal obesity based on the criteria suggested by the Korean Society of Obesity [17,18]. Based on the criteria of the revised NCEP ATP Ⅲ, MetS was defined as the presence of at least three of the following features: 1) abdominal obesity (waist circumference: ≥90 cm in men and ≥85 cm in women), 2) elevated triglycerides (serum triglyceride (TG) levels: >150 mg/dL) or receiving treatment for elevated triglycerides, 3) HDL-C levels less than 40 mg/dL in men and 50 mg/dL in women or receiving treatment for reduced HDL-C levels, 4) elevated blood pressure (systolic blood pressure: >130 mmHg or diastolic blood pressure: >85 mmHg) or receiving antihypertensive treatment, and 5) fasting glucose disorder (fasting glucose: >100 mg/dL) or receiving treatment after being diagnosed with diabetes. Covariates The potential confounders that had an effect on the association between lead exposure and MetS were sex, age, education levels, household income, exercise, smoking status, alcohol intake, aspartate aminotransferase (AST) levels, alanine aminotransferase (ALT) levels, serum creatinine levels, and blood mercury and cadmium levels. The level of education was classified into three categories according to the individual's highest level of education: less than high school graduation, high school graduate, and college or more. Household income was divided by quartiles.
People who responded that they do not perform moderate levels of exercise more than 5 days a week, 30 minutes per day, were classified into the little exercise group, while those who responded that they perform moderate exercise more than 5 days a week, 30 minutes per day, were classified into the moderate exercise group. People who responded that they perform strenuous exercise more than 3 days a week, 20 minutes per day, as well as those who answered positively to both questions were classified into the vigorous exercise group [19,20]. Smoking status and alcohol intake were divided into three groups: never, former, and current smoker or drinker, respectively [19]. In addition to the variables previously mentioned, creatinine-adjusted urinary iodine was also included as a covariate when the association with TSH was being considered. Statistical analysis In this study, we performed statistical analyses to determine the association between lead, TSH, and MetS and furthermore tested for mediation following the methods of Navas-Acien [21] and Agarwal [22]. Referring to their methods, the four step approach proposed by Baron and Kenny was used [23]. In our case, TSH was hypothesized to be the intervening variable causing a mediation effect on the association between lead and MetS. Thus, we first examined the association among the following variables: (a) blood lead and serum TSH levels, (b) serum TSH levels and MetS, and (c) blood lead levels and MetS (Fig 1). Moreover, to verify the possible mediating effect of serum TSH levels, the association between blood lead levels and MetS was examined after further adjustment for serum TSH levels (c'). The associations between the three variables were determined by obtaining the odds ratios (ORs) and their 95% confidence intervals (CI) using multiple logistic regression analysis. First, we examined the association between blood lead levels and high TSH levels, which was defined as the upper 25% of the sample's TSH level distribution. Blood lead levels were log-transformed before analysis due to their skewed distribution and examined as continuous variables. The participants were also subdivided into quintiles based on their blood lead levels; these variables were included in the logistic regression models as categorical variables. The model was adjusted for sex, age, education, income, smoking status, creatinine-adjusted urinary iodine level, body mass index (BMI), mercury level, and cadmium level. Second, the association between serum TSH levels and MetS was determined. Along with the prevalence of MetS, the correlations between the diagnostic components of MetS and serum TSH levels were determined. Serum TSH levels were log-transformed prior to analysis and categorized by quintiles. All models were adjusted for sex, age, education, income, smoking status, alcohol intake, exercise, AST level, ALT level, serum creatinine level, and creatinine-adjusted urinary iodine level. Finally, the association between blood lead and MetS was examined. The models were progressively adjusted for covariates and then further adjusted for serum TSH and creatinine-adjusted urinary creatinine to test for mediation effects. Ethics statement The Korea National Health and Nutrition Examination Survey (KNHANES) was approved by the institutional review board of Korea Centers for Disease Control and Prevention (KCDC), and the approval codes from 2013 are as follows: 2013-07CON-03-4C and 2013-12EXP-03-5C. We used secondary data provided from the KNHANES for this study. 
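To make the adjustment logic of the mediation check concrete, the following sketch outlines a Baron-Kenny-style comparison of logistic regression models, as described above. The simulated data, variable names, and the shortened covariate list are assumptions for illustration; they are not the KNHANES variables or the authors' code.

```python
# Sketch of the mediation check via nested logistic regressions:
# (c)  MetS ~ lead + covariates            (total effect)
# (c') MetS ~ lead + TSH + covariates      (effect after adjusting for TSH)
# The data-generating process and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1688
df = pd.DataFrame({
    "log2_lead": rng.normal(1.0, 0.5, n),     # log2 of blood lead (ug/dL)
    "sex": rng.integers(0, 2, n),
    "age": rng.normal(50, 15, n),
})
# Hypothetical mechanism: lead raises TSH, and both raise MetS risk.
df["ln_tsh"] = 0.3 * df["log2_lead"] + rng.normal(0.8, 0.4, n)
logit_p = -4 + 0.4 * df["log2_lead"] + 0.2 * df["ln_tsh"] + 0.05 * df["age"]
df["mets"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

def odds_ratio(formula, data, term):
    """Fit a logistic regression and return the OR of `term` with its 95% CI."""
    fit = smf.logit(formula, data=data).fit(disp=False)
    lo, hi = np.exp(fit.conf_int().loc[term])
    return np.exp(fit.params[term]), (lo, hi)

# (c)  total effect of lead on MetS (covariate-adjusted)
print("c :", odds_ratio("mets ~ log2_lead + sex + age", df, "log2_lead"))
# (a)  lead -> high TSH is tested analogously with high TSH (upper 25%) as outcome.
# (c') effect of lead after additionally adjusting for TSH.
print("c':", odds_ratio("mets ~ log2_lead + ln_tsh + sex + age", df, "log2_lead"))
```

Little attenuation of the lead odds ratio between model (c) and the TSH-adjusted model (c') is the pattern reported in the Results below and argues against mediation by TSH.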
Table 1 demonstrates the distribution of the basic characteristics of 1,688 study participants according to MetS status. The weighted prevalence of MetS in the Korean population was 21.9%. People with MetS were significantly older and had higher BMI levels. The proportion of people with lower education levels was higher in the MetS group, and people in the non-MetS group had higher household incomes. The MetS group showed higher proportions of individuals with little and moderate exercise levels. Blood levels of heavy metals and TSH were significantly higher in individuals with MetS. Results The distribution of blood lead and serum TSH levels according to the characteristics of the study participants is shown in Fig 3. The points indicate the geometric means, while the horizontal lines display the 95% confidence intervals of the geometric means. The weighted geometric means of blood lead and serum TSH levels were 1.96 μg/dL and 2.17 μIU/mL, respectively, and are represented by the dotted vertical line. Blood lead levels increased with age, were higher in men than in women, and increased with smoking status. Serum TSH levels were higher in women than in men and increased with higher BMI levels. The ORs for high TSH levels (upper 25%) based on log-transformed lead levels and lead quintiles are shown in Table 2. Lead levels were positively associated with high TSH levels after adjusting for the potential confounders. The OR (95% CI) for the occurrence of high TSH levels per one unit increase of log-transformed lead levels was 1.79 (1.24, 2.58). The ORs for high TSH levels also increased according to the lead quintiles compared to the lowest quintile, and there was a significant increase in the trend for the OR (p for trend = 0.028). Table 3 demonstrates the association between MetS risk and serum TSH levels. The prevalence of MetS increased with higher TSH levels. The OR (95% CI) for MetS per one unit increase in ln-TSH levels was 1.25 (1.03, 1.51). The ORs for possessing the clinical aspects related to each of the five diagnostic components of MetS were also evaluated according to the log-transformed serum TSH levels. Although the associations with TSH levels were not significant for all components, the ORs for having clinical disorders related to the diagnostic components of MetS indicated positive associations between the diagnostic components and TSH levels. The association between MetS risk and blood lead concentration is presented below. Discussion In this study, we investigated the association between environmental exposure to lead and MetS considering the effect of TSH, which we hypothesized to be an intermediate factor that mediates the influence of lead on MetS. By analyzing a large representative sample of Korean adults who participated in the 2013 KNHANES, significant associations between blood lead, serum TSH levels, and MetS were found in the general Korean population. The associations were evaluated by multiple logistic regression models, and the results showed that increased levels of blood lead are positively associated with elevated levels of serum TSH. Moreover, the risk of MetS was higher in individuals with higher blood lead and serum TSH levels.
After additional adjustment for serum TSH levels, the association between blood lead levels and MetS was evaluated, and the OR remained stable, which suggests that serum TSH does not influence the effect of lead on MetS development. The positive association between TSH levels and MetS has been established through numerous studies. It may in part be explained by the regulation of lipid metabolism by thyroid hormones. Overt hypothyroidism, which is characterized by increased TSH and decreased thyroxine (T4) levels, leads to an increase in serum cholesterol levels [24]. More specifically, thyroid hormones regulate the activity of some key enzymes in lipoprotein transport, which participate in the regulation of cholesterol contents. Therefore, altered levels of thyroid hormones can induce considerable changes in the synthesis of cholesterol [6]. This in turn affects serum lipid profiles, and the elevation of serum lipid concentrations leads to MetS. A study conducted in the Netherlands explored the association between thyroid dysfunction and MetS and identified a significant association between serum TSH levels and serum lipid levels even in the euthyroid range [24]. Another study based on a healthy Korean population showed significant positive correlations between serum TSH levels and the levels of total cholesterol, triglycerides, and low-density lipoprotein cholesterol. The results from this study showed that TSH concentrations were positively related to the prevalence of MetS, with almost a doubled risk of MetS in people with higher TSH levels, and confirmed the influence of TSH on serum lipid profiles leading to MetS [4]. The results of this study also indicated that lead exposure is a risk factor for MetS. However, the mechanisms underlying the association between lead exposure and MetS are partially similar to those linking TSH and MetS. Exposure to lead induces oxidative stress, which contributes to disturbances in systemic lipid metabolism [14,15,25,26]. Oxidative stress due to lead exposure can induce alterations in serum lipid contents by stimulating lipid peroxidation or by enhancing the susceptibility of lipids to peroxidation [27]. Subsequently, elevated levels of serum lipid due to lead exposure increase the likelihood of MetS, as shown for the effects of altered TSH levels on lipid profiles. Meanwhile, the increase in blood lead levels can induce an increase in serum TSH levels. This association was also confirmed by the findings of our study, which indicated the positive association between elevated blood lead concentrations and high serum TSH levels. These results correspond to those of previous studies, which reported the association between lead exposure and thyroid dysfunction. Although most of these studies were conducted in workers who are occupationally exposed to lead, they consistently revealed a positive association between blood lead and serum TSH levels [28,29], and this positive relation was also observed with relatively low concentrations of blood lead [30]. Based on these results, we considered the possibility of a mediation effect by TSH and suggested that previous studies that reported the association between lead exposure and MetS could have been overestimating the effect of lead if TSH does have a mediation effect.
However, after investigating the mediation effect of TSH on the relationship between lead and MetS, we have confirmed that TSH does not mediate the effect of lead exposure on MetS development. Hence, the effect of lead exposure on MetS can be considered as an independent effect, regardless of TSH levels. The independent effect of lead exposure on MetS may be explained by the fact that the occurrence of MetS caused by lead exposure is rather complicated and involves some other factors, which do not completely overlap with those in the process of MetS development due to imbalance of TSH levels. Oxidative stress induced by lead exposure plays an important role in several biological processes in vivo. It may cause an increase in the production of proinflammatory mediators and lipid peroxidation, suppress nitric oxide (NO) levels, and alter calcium homeostasis, which in turn increase the likelihood of clinical disorders in the MetS cluster [14,15]. Other factors influenced by lead exposure, other than those related to lipid profiles, also have significant associations with MetS. In particular, suppression of NO is related to the development of MetS components such as insulin resistance, endothelial dysfunction, hypertriglyceridemia, and chronic adipose tissue inflammation [31]. NO participates in many important physiological processes, including the regulation of vasodilation and regional blood flow and mitochondrial biogenesis and function. Reduction in the bioavailability of NO can cause endothelial dysfunction, and impaired activity of isoenzymes that play a role in NO formation is closely associated with insulin resistance [31,32]. In addition, calcium homeostasis is also correlated with the occurrence of MetS [33,34]. Abnormal serum calcium levels may affect insulin sensitivity and insulin release, which in turn leads to an increase in the risk of diabetes and MetS [33]. Based on these explanations, there are several more pathways through which lead can cause MetS, besides the common pathway shared with TSH level modifications. While dysregulation of lipid metabolism is the overlapping pathway of MetS causation by lead and TSH, the dysfunction of non-lipid factors that also link lead and MetS has not yet been proven to be related to TSH levels [35][36][37][38][39][40]. Therefore, the effect of lead on the occurrence of MetS can be regarded as an independent effect not mediated by TSHs. The importance of this study is that we revealed the significant associations between lead exposure and TSH levels and between lead exposure and MetS in a large representative sample of the general Korean population. Previous studies reporting the associations between two of the three variables mentioned above using nationwide data have mostly been conducted in the United States, and only a limited number of studies have been conducted in the general Asian population. Moreover, most studies on lead exposure in Korea have been conducted in industrial workers, and no study has reported the association between lead and TSH levels in the general Korean population. Hence, this was the first study to report the association between lead exposure and TSH levels in the general Korean population. Another remarkable strength of this study is that we noticed that TSH is closely related to both lead exposure and MetS and hypothesized the role of TSH as an intermediate factor.
By examining the effect of TSH in the relationship between lead and MetS, we verified that TSH does not have an influence on the effect of lead exposure on MetS development and thereby eliminated the possibility that previous studies on lead and MetS could have overestimated the effect of lead exposure on MetS development. There are some limitations to our study. First, as KNHANES data are cross-sectional survey data, the associations that we illustrated cannot represent a causal relationship. Second, we only utilized TSH as an indicator of thyroid function. KNHANES data also include the data on serum levels of fT4, which are also widely used as a measure of thyroid function, but we did not determine the effect of fT4 on the association between blood lead and MetS. However, TSH is independently affected by lead exposure regardless of other thyroid hormones [9,29,41], whereas the effect on other thyroid hormones is associated with the duration of exposure to lead [9]. Since KNHANES does not provide relevant data on the duration of exposure to environmental risk factors, it would be inappropriate to use fT4 as an indicator of thyroid function. Finally, blood lead mainly represents the degree of recent exposure as lead has a biological halflife of less than 30 days in the blood [27]. Therefore, blood levels of lead may not be a useful marker for determining the effect of long-term exposure to environmental lead on chronic diseases, such as MetS. Bone lead is a preferable biomarker of cumulative lead exposure instead of blood lead [42]. However, we used blood lead levels in our analysis because the data from KNHANES did not include data on bone lead levels. Yet, since the measurement error of exposure is likely to be non-differential, the actual association can be expected to be larger. Conclusion A significant association between blood lead levels and MetS was observed in the general Korean population. Moreover, serum TSH levels were both positively associated with blood lead concentrations and MetS. The association between lead exposure and MetS was not attenuated after adjustment for TSH levels, suggesting that TSH does not mediate the effect of lead exposure on MetS development. These findings may indicate that lead has an independent effect on the pathogenesis of MetS. Further studies are needed to quantify the associations between lead and TSH in general populations and to examine the mechanism of MetS due to lead exposure.
Gene amplifications cause high-level resistance against albicidin in gram-negative bacteria
Antibiotic resistance is a continuously increasing concern for public healthcare. Understanding resistance mechanisms and their emergence is crucial for the development of new antibiotics and their effective use. The peptide antibiotic albicidin is one such promising candidate that, as a gyrase poison, shows bactericidal activity against a wide range of gram-positive and gram-negative bacteria. Here, we report the discovery of a gene amplification–based mechanism that imparts an up to 1000-fold increase in resistance levels against albicidin. RNA sequencing and proteomics data show that this novel mechanism protects Salmonella Typhimurium and Escherichia coli by increasing the copy number of STM3175 (YgiV), a transcription regulator with a GyrI-like small-molecule binding domain that traps albicidin with high affinity. X-ray crystallography and molecular docking reveal a new conserved motif in the binding groove of the GyrI-like domain that can interact with aromatic building blocks of albicidin. Phylogenetic studies suggest that this resistance mechanism is ubiquitous in gram-negative bacteria, and our experiments confirm that STM3175 homologs can confer resistance in pathogens such as Vibrio vulnificus and Pseudomonas aeruginosa.
Introduction
Antimicrobial resistance (AMR) in pathogenic, commensal and food-borne bacteria remains a major health hazard for humans. According to a recent study, in 2019 almost 5 million human deaths were estimated to be associated with bacterial AMR, with a prognosis of an increase of up to 10 million deaths by 2050. It is widely accepted that inappropriate use of antimicrobials as therapeutics and feed additives in human and veterinary medicine, coupled with a lack of understanding of bacterial resistance mechanisms, accounts for the worldwide increase in AMR.
Remarkably, one of the remaining 5 resistant strains, with an intact tsx gene (strain S41), exhibited a more than 100-fold elevated MIC and was able to tolerate albicidin concentrations as high as 2 μg mL−1 (Fig 1A). Analysis of DNA sequencing data of strain S41 revealed a gene amplification resulting in 3 to 4 copies of an approximately 47-kb genomic region but no single nucleotide polymorphisms (SNPs) (Tables A and B in S1 Text). The GDAs caught our interest, and we decided to investigate this mechanism in more detail. We therefore repeated the evolution experiments with a mutant strain lacking the tsx gene (Δtsx) in order to avoid resistance effects resulting from mutations in the nucleoside transporter gene. After exposing the Δtsx strain stepwise to increasing albicidin concentrations from 0.125 μg mL−1 to 20 μg mL−1 (Fig 1A), similar genome alterations were observed as in the tsx-proficient WT strains that adapted to the antibiotic. Whole genome sequencing revealed that 5 out of 10 tested strains harbored GDAs ranging between 3 kb and 158 kb (strains T01, T04, T05, T10, and T12; Fig D(A) and Table B in S1 Text) with copy numbers varying between 3 and 15. In these strains, either no SNPs (2/5: T01 and T10) or SNPs in genes within the common segment of the GDA region (3/5: T04, T05, and T12 - qseB, stm3175, or topoisomerase IV subunit B - and additionally one hypothetical protein (1/5: T12)) were identified (Table C in S1 Text), suggesting that this particular GDA region is responsible for the significantly increased albicidin tolerance in S. Typhimurium: 80-fold compared to the input strain, or more than 1,000-fold compared to the WT strain (Fig 1A).
Fig 1. (A) … A structure of albicidin and azahistidine albicidin is included in the bar diagram. (B) Nanopore sequencing revealed 7 copies of the GDA region in the evolved tsx-deficient S. Typhimurium strain T12, including the genes between topoisomerase IV subunits A and B; the approximately 2,200 bp region common to all 6 evolved strains (S41, T01, T04, T05, T10, and T12) is highlighted in cyan and includes 3 genes: STM3175, ygiW, and partly qseB. (C) Arabinose-induced overexpression of STM3175 (cyan), but not YgiW (light green) or QseBC (olive), results in an elevated albicidin MIC in S. Typhimurium WT. (D) Arabinose-induced overexpression of the LBD (cyan), but not of the DBD (blue), of STM3175 revealed an increased MIC against albicidin in S. Typhimurium. (E) Crystal structure of STM3175 with the conserved AraC-type DBD and GyrI-like LBD shown in gray and cyan, respectively (please also see Fig M in S1 Text). Source data are provided as a source data file (S1 Data). DBD, DNA-binding domain; GDA, gene duplication-amplification; MIC, minimum inhibitory concentration; LBD, ligand-binding domain; WT, wild-type. https://doi.org/10.1371/journal.pbio.3002186.g001
As mentioned above, a hallmark of GDAs is their reversibility in the absence of the appropriate selection pressure. To test the persistence of the GDAs, we incubated the evolved strains (T01, T04, T05, T10, T12, and S41) as biological triplicates in 24-h passages without albicidin over a period of 6 days (T01, T04, T05, T10, T12, and S41) or 15 days (T12) (equivalent to approximately 600 and 1,500 generations, respectively). As shown in Fig B and Table D in S1 Text, we observed a reduction in gene duplications in most cases, which correspondingly resulted in a lower albicidin MIC (Table D in S1 Text). As heteroresistance is a complex phenomenon that plays out at the single-cell level, and in which compensation and biological fitness play a role, further experiments are needed to elucidate in detail the genesis of albicidin-mediated GDAs.
GDAs contain genes for the sensory histidine kinase system QseBC and the regulatory protein STM3175
Alignments of the GDAs revealed a conserved approximately 2,200 bp overlap between the amplified regions present in all 6 analyzed strains (S41, T01, T04, T05, T10, and T12) (Fig D in S1 Text). The overlapping region contains the 2 genes STM3175 and ygiW, which are transcribed polycistronically in an operon structure, and the N-terminal part of qseB (Fig 1B and Fig D(B) in S1 Text). Interestingly, STM3175 and ygiW are located directly upstream of the gene encoding DNA topoisomerase IV subunit A (Fig 1B and Fig D(B) in S1 Text). YgiW, a putative periplasmic protein, has been linked to the oxidative stress response in E. coli [23]. Little is known about the function of STM3175, a putative regulatory protein. QseB and the adjacently encoded QseC (Fig 1B) belong to a 2-component quorum sensing system consisting of a sensory histidine kinase (QseC) and its response regulator (QseB). To investigate whether the absence of these genes affects the albicidin tolerance of S. Typhimurium, we constructed knockout mutants of qseBC and STM3175-ygiW in the WT and Δtsx strains.
In MIC assays and corresponding colony-forming unit (CFU) counts, no detectable significant change in albicidin sensitivity was observed compared to the parent strains (Fig E in S1 Text), suggesting that these genes only affected albicidin tolerance in GDAs and higher copy numbers. GDAs of the regulator STM3175 afford albicidin resistance To investigate which of the amplified genes was responsible for the increased albicidin tolerance, MIC assays and corresponding CFU counts with STM3175, ygiW, and qseB were conducted under control of an arabinose-inducible promoter. Each gene was cloned into the lowcopy number plasmid pBAD30 and expressed in S. Typhimurium WT cells. After induction with arabinose, only overexpression of STM3175 imparted resistance, whereas overexpression of YgiW and QseB did not elicit albicidin tolerance (Fig 1C and Fig F(A-B) and Fig F(E) in S1 Text). Notably, the level of albicidin resistance increased with increasing arabinose concentration (Fig G in S1 Text). This suggested that the resistance mechanism solely depended on the increase of STM3175 concentration, which would be elevated by the increased copy number of STM3175 in the GDAs. Indeed, RNA-Seq analysis showed a clear increase in mRNA levels of the STM3175-ygiW operon as well as the other constituents of the GDA, including qseB/C but not the flanking topoisomerase IV subunits A and B (Fig H in S1 Text). Proteomic analyses further confirmed higher levels of STM3175, YgiW, and QseB/C in the mutant T12 compared to WT cells (Fig H and Table N in S1 Text). Closer inspection of its amino acid sequence revealed that STM3175 consists of 2 domains: an N-terminal AraC-like DNA-binding domain (DBD) and a C-terminal GyrI-like ligandbinding domain (LBD). In overexpression experiments, the LBD but not the DBD of STM3175 afforded resistance ( Fig 1D and Fig F(C-D) and F(F) in S1 Text). Interestingly, the full-length protein appeared to confer higher resistance than the LBD alone. Expression of the LBD in an STM3175 mutant strain showed the same resistance level as when expressed in WT cells with intact STM3175 (Table E in S1 Text). In agar diffusion assays, both the full-length protein and the LBD alone neutralized the effects of albicidin (Fig I in S1 Text). When a 2-fold excess of albicidin was added to the STM3175 or LBD proteins before spotting, bacterial growth was clearly reduced, albeit less severely than in the control sample without protein (Fig I(C) and I(E) in S1 Text), excluding enzymatic modification or degradation of albicidin as mechanism of action for STM3175. STM3175 structure The dual domain structure of a helix-turn-helix DNA-binding element combined with an effector binding domain is reminiscent of AlbA [20] (Fig J(A) in S1 Text), the MerR-like transcription factor that confers albicidin resistance in Klebsiella oxytoca. However, the affinity for albicidin and the dual domain structure appeared to be the only similarities. Not only is the sequence identity low (15%; Fig J(B) in S1 Text) but also the predicted secondary structure of the LBDs is markedly different. AlbA exclusively consists of helical elements, whereas STM3175 is composed of α-helices and β-sheets (Fig K in S1 Text). Circular dichroism (CD) spectroscopy of recombinantly expressed proteins confirmed the mixed content of α-helix and β-sheet in STM3175 with clearly reduced helical contributions when the DBD is not present (Fig L(A-C) in S1 Text). 
These results were in excellent agreement with our crystal structure of full-length STM3175 (PDB-ID: 7R3W) where the N-terminal domain consists of 7 helices that form 2 helix-turnhelix DNA-binding motifs typically found in AraC DBDs [24] (Fig 1E and Fig M in S1 Text). The DBD is connected to the C-terminal LBD via a short helical element and a short linker (approximately 10 aa), which was not resolved in the electron density. The LBD shows the characteristic GyrI-like fold of 2 SH2 motifs with a pseudo 2-fold symmetry forming a central groove that is clearly visible in the crystal structure despite the relatively low resolution of 3.6 Å ( Fig 1E). The dimensions of the groove (length approximately 30 Å, width approximately 10 Å, height approximately 12 Å) provide an ideal environment for accommodating compounds of linear architecture [25]. Despite presenting as a monomer in analytical gel filtration (Fig N in S1 Text), STM3175 forms domain-swapped dimers in the crystal, where the DBD of one polypeptide forms contacts with the LBD of a second molecule (Fig O(A-C) in S1 Text). The relative orientation of the N-and C-terminal domains of the 2 molecules is not identical, resulting in an asymmetrical dimer (Figs O(D) and O(E) in S1 Text). When generating homology models by RoseTTAFold [26] and Phyre2 [27], the E. coli transcription factor Rob (PDB-ID: 1d5y) [28] and GyrI (PDB-ID: 1jyha) [25] were identified as highest-ranking structural homologs (100% and 99.9% confidence, respectively; Fig P in S1 Text). While the structures of the N-and C-terminal domains in the crystal structure and homology model were very similar (RMSD <1.5 Å), the relative domain orientation differed significantly. The homology models showed a more compact structure, similar to that of Rob bound to DNA (Fig P(A) in S1 Text). This flexibility in the relative domain orientation is consistent with STM3175 structure models by AlphaFold [29,30] (model available in AlphaFold Protein Structure Database under UniProt ID: Q8ZM00), where the predicted aligned error plot also suggests lower confidence in interdomain accuracy of the prediction. Rob and GyrI, as well as a number of other GyrI-like domain containing proteins, are also found when submitting the STM3175 crystal structure to the DALI server [31] to identify structurally similar proteins in the PDB. However, in contrast to other GyrI homologs, STM3175 lacks the highly conserved Glu residue in the center of the binding groove, and the charges in and around the groove are not exclusively negative as in most GyrI-like domains [32]. In STM3175, the groove is mostly lined with hydrophobic residues and extends into a tunnel toward the C-terminus of the LBD (Fig M in S1 Text). STM3175 albicidin binding Since several Trp residues are located in and around the binding groove, we monitored the interaction with albicidin by tryptophan fluorescence quenching experiments. To obtain affinity constants, albicidin was titrated to STM3175 or STM3175-LBD, and the concentrationdependent decrease in the fluorescence signal was measured. Fitting of the quenching data suggests that STM3175 binds albicidin with sub-micromolar affinity (K d = 0.17 ± 0.01 μM; Fig 2A). The LBD alone binds with similar but somewhat lower affinity (K d = 0.35 ± 0.05 μM; Fig 2B), suggesting that the DBD is not or only little involved in albicidin binding. 
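As an illustration of how the quenching titrations described above can be turned into a dissociation constant, the sketch below fits a single-site binding model to fluorescence data with scipy. The model choice (1:1 stoichiometry with ligand depletion), the assumed protein concentration, and the file and variable names are assumptions for illustration; the authors' exact fitting procedure is not restated here.

```python
# Illustrative fit of a tryptophan-fluorescence quenching titration to a
# single-site (1:1) binding model that accounts for ligand depletion.
# Variable names, file name, and the model choice are assumptions.
import numpy as np
from scipy.optimize import curve_fit

P_TOTAL = 1.0  # protein concentration in the cuvette [uM] (assumed)

def quench_model(l_total, f0, f_inf, kd):
    """Observed fluorescence as a function of total albicidin concentration."""
    p, l = P_TOTAL, np.asarray(l_total, dtype=float)
    b = p + l + kd
    bound_fraction = (b - np.sqrt(b ** 2 - 4.0 * p * l)) / (2.0 * p)
    return f0 + (f_inf - f0) * bound_fraction

# albicidin_uM, fluorescence = np.loadtxt("titration.csv", delimiter=",", unpack=True)
# popt, pcov = curve_fit(quench_model, albicidin_uM, fluorescence,
#                        p0=(fluorescence.max(), fluorescence.min(), 0.5))
# kd, kd_err = popt[2], np.sqrt(np.diag(pcov))[2]
# print(f"Kd = {kd:.2f} +/- {kd_err:.2f} uM")
```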
Attempts to crystallize the albicidin-STM3175 complex were unsuccessful, but molecular docking of albicidin to the crystal structure of STM3175 suggests a binding mode where albicidin is positioned in the central groove of the LBD (Fig 2C). In the best-ranked models of the complex, the N-terminal half of albicidin occupies the binding domain groove, which is a well-described binding site for other ligands of GyrI-like proteins [32]. The C-terminal building blocks D, E, and F extend through the tunnel toward the C-terminus of the LBD (Fig 2C). Curiously, in the domain-swapped dimers found in the crystal, the exit of this tunnel is blocked by one of the helices of the DBD (Fig M(C) in S1 Text). However, considering the low resolution of the X-ray structure, the exact positioning of albicidin in the binding groove cannot be determined with certainty by docking studies, and in some of the lower-ranked models, we also found albicidin in the opposite orientation (Fig Q(A) in S1 Text). STM3175 homologs in E. coli, Vibrio, and Pseudomonas A BLAST search revealed homologs in various members of the Enterobacteriaceae family, including Klebsiella, Klyuvera, and Citrobacter. These homologs display the same transcription regulator di-domain structure with sequence identity ranging between 60% and 90%. The genomic context corresponds to that of STM3175 in Salmonella with topoisomerase IV subunit A, QseB, and QseC located in the direct vicinity (Fig R(A-B) in S1 Text). Moreover, variants of the protein lacking the DBD, similarly to GyrI, are present in other Enterobacteriaceae genera such as Escherichia and Shigella. The E. coli homolog is known as YgiV (approximately 50% sequence identity, similar genomic context), and a search through NCBI records revealed a number (approximately 1,500) of database entries of proteins with the same name. In the present results are the short variant (160 aa) and the di-domain protein (288 aa) with the majority of hits in E. coli (short variant) followed by Salmonella enterica and Klebsiella pneumonia strains (long variants). To investigate whether these homologs confer albicidin resistance and whether they are regulated by a similar GDA-based resistance mechanism, we conducted evolution experiments in E. coli. When exposed to increasing concentrations of albicidin, 11 out of 20 strains adapted to at least 8 μg mL −1 within 10 passages (Fig 1A). Nine strains harbored GDAs, again varying in length and copy number (Fig S(A) and Table B in S1 Text). An approximately 644-bp-long region was present in all GDAs, and, satisfyingly, it contained the E. coli ygiV gene (Fig S(B) in S1 Text). When the E. coli ygiV gene was expressed under control of an arabinose-inducible promoter in S. Typhimurium, we also observed elevated albicidin tolerance comparable to that conferred by the LBD of STM3175 alone (Fig T(A-B) in S1 Text). AlphaFold and Rosetta both suggest that E. coli YgiV has a fold identical to that of the LBD of STM3175 with a binding groove sandwiched between 2 helices (AlphaFold model available in AlphaFold Protein Structure Database under UniProt ID: Q46866). Notably, the conserved Glu residue that is located at the base of the binding groove in most GyrI-like domains is also not present. In docking studies with the homology model, we obtained similar results as for STM3175 (Fig Q(B) in S1 Text) but with a preferred orientation of the albicidin carboxylic acid toward the C-terminus of helix1 (in STM3175, this would be the end of the groove opposite of the DBD). 
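The homolog searches mentioned above follow a routine pattern: BLAST the STM3175 ligand-binding domain against a protein database and rank hits by percent identity. The sketch below shows one way to do this with Biopython's remote BLAST interface; the placeholder sequence, the chosen database, and the 40% identity cut-off are assumptions for illustration, not the parameters used in the study.

```python
# Sketch of a remote homolog search: blastp the STM3175 LBD against RefSeq
# proteins and report percent identity per hit. The sequence is a placeholder.
from Bio.Blast import NCBIWWW, NCBIXML

LBD_SEQUENCE = "MSTMLBDPLACEHOLDERSEQUENCE"   # hypothetical placeholder, not the real LBD

def search_homologs(seq: str, max_hits: int = 250):
    """Run a remote blastp search and yield (hit title, percent identity)."""
    handle = NCBIWWW.qblast("blastp", "refseq_protein", seq, hitlist_size=max_hits)
    record = NCBIXML.read(handle)
    for alignment in record.alignments:
        best = max(alignment.hsps, key=lambda h: h.score)
        yield alignment.title, 100.0 * best.identities / best.align_length

# for title, pct_id in search_homologs(LBD_SEQUENCE):
#     if pct_id >= 40:          # illustrative cut-off, roughly the identity range discussed
#         print(f"{pct_id:5.1f}%  {title}")
```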
We then sought to identify homologs from more distant bacterial relatives. A BLAST search using only the LBD of STM3175 identified single-and di-domain homologs in other Gammaproteobacteria, including Vibrio sp. and Pseudomonas sp. Because of their pathogenic potential, we decided to investigate whether the homologs in Vibrio vulnificus (YgiV-Vv; 44% identity to STM3175) and Pseudomonas aeruginosa (AraC-Pa-like; 48% identity) also conferred albicidin resistance. When the 2 di-domain homologs were expressed under control of an arabinose-inducible promoter in S. Typhimurium, we again observed elevated MICs, similar to those conferred by STM3175 (Fig T(C-D) in S1 Text). Based on these results, it is likely that single-and di-domain STM3175 homologs in various gram-negative species can increase albicidin tolerance if present in sufficient copy numbers. Most homologs (>90% of 250 BLAST search results), including YgiV-Vv and AraC-Pa, have a Phe in place of the Glu residue usually found on β-strand 7 that forms the base of the binding groove (Fig U in S1 Text). Together with a similarly conserved Tyr, this Phe residue (Phe265 and Tyr267 in STM3175) forms an ideal interaction motif for the aminobenzoic acid blocks of albicidin (Fig 2D). MIC assays with STM3175 variants where the 2 aromatic residues at the base of the binding groove, F265 and Y267, were mutated to alanines, confirmed the critical role of these 2 amino acids at the binding site. The albicidin tolerance was drastically reduced in S. Typhimurium when STM3175 variants F265E, F265A, or the double mutant F265A/ Y267A were expressed instead of the WT protein (Fig V in S1 Text). Specificity of the resistance mechanism To evaluate the specificity of this resistance mechanism, the 6 evolved S. Typhimurium strains (S41, T01, T04, T05, T10, and T12) were challenged with compounds from different antibiotic classes, such as fluoroquinolones, tetracyclines, β-lactams, or sulphonamides. With MICs comparable to those of the WT strain, none of the evolved strains showed resistance against any of the tested antibiotics (Tables F and G in S1 Text). Interestingly, although albicidin and fluoroquinolones both target DNA gyrase [34], cross resistance was not observed in either our evolved strains or fluoroquinolone-resistant (FQR) S. Typhimurium strains [35] that were tested against albicidin (Fig W in S1 Text). SbmC (GyrI) from E. coli, to which the LBD of STM3175 shows homology, protects cells from the gyrase poison microcin B17 [36]. However, overexpression of the SbmC homolog in Salmonella (71% sequence identity to the E. coli variant) showed only a minimal increase in albicidin tolerance (Fig X in S1 Text). Likewise, despite its homology to SbmC, evolved E. coli strains harboring GDAs of YgiV did not confer resistance against microcin B17 (Fig Y and Table H). Taken together, these results indicate that the STM3175-based resistance mechanism against albicidin is highly specific. Discussion In our evolution experiments, we observed the emergence of GDAs that allow S. Typhimurium and E. coli to tolerate albicidin concentrations significantly exceeding those conferred by mutations in the well-studied nucleoside transporter Tsx. This high-level resistance resulted from the presence of several copies of the transcription regulator STM3175 in the genomes of evolved strains. 
GDAs allow rapid transcription regulation independent of transcription factors, which enables bacteria to quickly adapt to growth limiting factors [37] such as heat [38], lack of nutrients [39], or heavy metals [40]. Compared to point mutations, spontaneous duplications occur without selective pressure and at significantly higher rates (10 −4 -10 −2 /cell/division) with further increases of the copy number at rates of 10 −2 /cell/division [3,41]. This intrinsic genetic instability allows amplification-mediated gene expression tuning and enables populations to quickly respond to changes in environmental conditions [37,41]. The presence of GDAs can complicate treatment of bacterial infections as it can result in heteroresistance [42], in which subpopulations are less susceptible to the treatment, ultimately leading to treatment failure. A number of different mechanisms are now known that can increase gene copy number in bacterial chromosomes. In principle, 2 main mechanisms are described: an amplification dependent or independent on the influence of the recombinase protein RecA [4]. We tested RecA dependence of STM3175 GDA formation by using a tsx/recA-deficient double mutant strain. Similar to the corresponding tsx-deficient parental strain, a significant increase in MIC upon evolution against increasing albicidin concentrations was detected. However, qPCR studies using STM3175-specific primers showed very clearly that the increase in MIC against albicidin was not accompanied by an increase in gene copy number of STM3175 (Table I in S1 Text). Thus, albicidin-induced establishment of STM3175 GDAs clearly follows RecAdependent mechanisms. RecA-dependent mechanisms of GDA formation usually indicate the presence of repetitive sequences at the ends of duplicated regions. However, we did not detect any such sequences using extensive in silico analyses. It is known that initiation of GDA events sometimes proceeds from RecA-independent duplications These duplications are then used as repetitive sequences for subsequent RecA-dependent gene multiplications [3,43,44]. Future experiments are required to show whether this mechanism also plays a role in the formation of STM3175 GDA events. Frequently, the duplications clearly serve to increase the cellular levels of a gene product that can directly counteract the induced stress, such as efflux pumps in case of antibiotics or heavy metals [40]. In our arabinose-inducible expression systems, the albicidin-neutralizing effect of STM3175 is clearly dosage dependent. Our adapted Salmonella and E. coli strains gain up to 15 copies of the gene and can tolerate more than a 100-fold higher albicidin concentrations However, GDAs are intrinsically unstable, and, in the absence of selective pressure, they are generally rapidly resolved to ameliorate the inherent fitness cost [3,45]. Investigations on the kinetics of STM3175 GDA reversal are a particularly interesting aspect. Our studies and those of others show that this is a highly individual process that occurs primarily at the singlecell level [46]. Advances in innovative sequencing technologies, such as MinION nanopore sequencing, will allow us in the future to delve deeper into the mechanisms of formation and reversion of GDAs. Ultimately, these insights will allow us not only to describe in detail kinetics of bacterial subpopulations but also to predict the emergence of successful (antibiotic resistance) clones. 
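The relative copy-number qPCR referred to above, in which the STM3175 signal is normalised to a housekeeping reference gene and to the unevolved parent strain, can be summarised in a few lines. The sketch below uses the common 2^-ΔΔCt approach with made-up Ct values; it is a generic illustration rather than the quantification model actually used in the study.

```python
# Generic 2^-ddCt estimate of relative STM3175 copy number from qPCR Ct values.
# The Ct numbers below are invented for illustration only.

def relative_copy_number(ct_target_sample: float, ct_ref_sample: float,
                         ct_target_parent: float, ct_ref_parent: float) -> float:
    """Fold change of target-gene copies in a sample relative to the parent strain."""
    d_ct_sample = ct_target_sample - ct_ref_sample   # target vs. reference, evolved clone
    d_ct_parent = ct_target_parent - ct_ref_parent   # target vs. reference, parent strain
    return 2.0 ** -(d_ct_sample - d_ct_parent)

# Example with hypothetical Ct values (evolved clone vs. unevolved parent):
fold = relative_copy_number(ct_target_sample=17.1, ct_ref_sample=20.0,
                            ct_target_parent=19.9, ct_ref_parent=20.0)
print(f"STM3175 copies relative to parent: ~{fold:.1f}x")   # ~7x for these numbers
```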
Under ongoing selective pressure, GDAs can provide a basis for the evolution of genes with new functions, as the additional gene copies could potentially evolve independently, acquiring new functions or specificities [40,47], e.g., to diversify or specialize an existing resistance mechanism or generation of new fusion proteins. In this study, it is conceivable that overexpression of STM3175 by GDA events paves the way for fixed, genetic albicidin resistance by subsequent mutations in the Tsx nucleoside channel, as shown for E. coli [22]. It appears unlikely that YgiV and STM3175 are resistance traits that evolved specifically to counteract albicidin. The lack of cross-resistance in our tests with numerous antibiotics and the high affinity for albicidin, however, suggest specificity of the mechanism. In this regard, it would be key to see how the ex vivo-determined albicidin binding affinities compare to intracellular albicidin levels. However, due to various challenges, e.g., hydrophobicity, we were not able yet to determine these with confidence. In the MIC assays, albicidin concentrations range from 0.0012 μM to 12 μM, which is in a similar order of magnitude as the dissociation constants we obtained from Trp fluorescence quenching experiments (0.17 μM for STM3175 and 0.35 μM for the LBD). However, we doubt that the STM3175 binding is sufficient to trap all albicidin molecules sufficiently in order to prevent inhibition of gyrase, rather that other additional mechanisms are involved, e.g., the transcriptional up-regulation of efflux systems or STM3175-mediated autoregulation. On the other hand, GyrI-like domain containing transcription factors like STM3175 have been implicated in polyspecific binding and multidrug resistance [40,47], and STM3175 might have the ability to protect cells from other stressor molecules. A role in DNA protection has been described for the homolog GyrI. Interestingly, the protein GyrI, which was originally discovered as a gyrase inhibitor, protects DNA gyrase from the peptide toxins microcin B17 and CcdB and offers partial protection from quinolones [32]. GyrI also imparts resistance against alkylating agents mitomycin C and N-methyl-Nnitro-N-nitrosoguanidine, which act independently of gyrase [48]. DNA gyrase inhibitors generally fall into 2 categories: those that inhibit the binding of ATP and interrupt supercoiling (e.g., aminocoumarins or cyclothialidines) and those that trap enzyme-DNA intermediates (e.g., quinolones, CcdB, and microcin B17). Albicidin belongs to the second category [3][4][5]; hence, a resistance mechanism comparable to that of microcin B17 seems reasonable. However, the molecular details behind the mechanism of action of albicidin have not yet been determined, and this similarity to microcin B17 and CcdB might provide further insights. Members of the GyrI superfamily are prevalent among bacteria, archaea, and eukaryotes [17]. PFAM lists GyrI-like family members as stand-alone domains or fused to other functional domains, such as DNA-binding or enzymatic domains. In our experiments, we observed higher resistance in overexpression experiments of full-length STM3175 compared to YgiV or STM3175 without DBD. 
Notably, this difference was not observed in agar diffusion experiments, which allows 2 possible explanations: first, full-length STM3175 can bind DNA and activate other defense mechanisms, such as an elevated expression of efflux pumps (as shown for the closely related transcription factor Rob) [49], or second, STM3175 can undergo autoregulation leading to a potentiation of the protein itself. It would be interesting to see in future experiments which promoter regions are recognized by the DBD and which genes are directly affected. A number of structures of proteins with GyrI-like domains are available in the PDB, but only a few of the deposited structures are of di-domain proteins: ROB [28], BmrR and EcmrR, 2 multidrug sensing regulators of the MerR family that consist of an N-terminal helix-turnhelix DBD with a dimerization motif and a GyrI-type LBD. The domain architecture of STM3175 resembles that of ROB, which also consists of an N-terminal AraC DBD fused to a GyrI-like domain [28]. Despite its relatively low resolution, the crystal structure of STM3175 agreed very well with predictions and available GyrI-family structures in the PDB. And while we were not able to obtain crystals of albicidin bound to STM3175, our models suggest a binding mode that is consistent with structural data of various drug-like compounds in complex with GyrI-like proteins [32]. In summary, we demonstrated that exposure of S. Typhimurium and E. coli to increasing concentrations of the gyrase poison albicidin results in rapid adaption via chromosomal duplication-amplification events. The affected region harbors the GyrI-like domain containing transcription regulator STM3175 (YgiV), which we identified as a critical factor involved in high-level resistance against albicidin. The protein binds albicidin with high affinity in an equimolar stoichiometry. We further showed that this resistance mechanism/gyrase protection mechanism is ubiquitous in Enterobacteriacea with STM3175 homologs conferring resistance in Escherichia, Vibrio, and Pseudomonas. Supporting information S1 Text. Fig A. Agar diffusion assays using natural albicidin. (A) Assay scheme illustrating the sample arrangement on agar plates. The negative and positive controls contain only protein (−) or albicidin (+) in buffer with 5% DMSO, respectively. (B) STM3175 with albicidin in a 1:1 molar ratio (in triplicates I-III). Fig B. Reduction in GDA copy number after albicidin removal. (A) Scheme of the experimental setup. The respective evolved strains harboring multiple GDAs were incubated in LB medium without albicidin supplementation. Bacteria were diluted in fresh growth medium every 24 h. After 6 days, bacteria were harvested and a qPCR using STM3175-specific primers were performed to determine the STM3175 copy number. (B) The fold change value represents the relative ratio of the GDAs of the evolved or backevolved strain (3 different clones) to the unevolved parent strain. Normalization done using the housekeeping gene trp . Fig (A) was created with BioRender. Source data are provided as a source data file (S1 Data). Fig C. MIC detection of evolved S. Typhimurium strains. Adaptation to albicidin in 14/90 strains after overnight incubation with the 4-fold albicidin wild type MIC (WT; ATCC 14028; black bar). Nine evolved strains (dark gray bars) showed mutations in the tsx gene and a comparable MIC to the tsx mutant (white bar, 0.25 μg mL −1 ). Five evolved strains showed 100% tsx gene identity to the WT. 
The strain S41 (red bar) has an increased MIC of 2 μg mL −1 , corresponding to an increase by almost 70-fold to the WT (0.00156 μg mL −1 ). This strain was investigated in further experiments. Data represent mean and standard deviations of 6 biological replicates that were performed in 3 technical replicates (WT, Δtsx, S41). The data of the other strains represent means and standard deviations of 2 biological replicates that were performed in 3 technical replicates. Strains without error bar have the same result in every single experiment. Source data are provided as a source data file (S1 Data). (A) Mapping of GDA strains after treatment with albicidin (increasing albicidin conc. from 0.0156 μg mL −1 to 8 μg mL −1 in 10 passages), which are different in copy number and size (Table B). (B) The common approximately 600-bp-long region (blue) includes the gene ygiV. Fig T. Comparison of noninduced and arabinose-induced expression systems of STM3175 homologs from E. coli, V. vulnificus, and P. aeruginosa in S. Typhimurium. (A, B) Arabinose induction results in elevated albicidin MIC in S. Typhimurium cells where the homolog YgiV-Ec from E. coli is expressed but not when YgiW-Ec is expressed. (C, D) Arabinoseinduced overexpression of the STM3175 homologs YgiV-Vv from V. vulnificus, its LBD (YgiV-LBD-Vv), and AraC-Pa from P. aeruginosa leads to increased albicidin MICs. The increased MIC for YgiV-Vv even without arabinose induction is due to its DBD, which is an AraC homolog and might results in autoregulation under albicidin treatment. Sensitivity of Ygiv-Vv to albicidin without arabinose induction is restored after cloning of only LBD of YgiV. The data represent means and standard deviations of 3 biological replicates that were performed in 3 technical replicates, except YgiW-Ec, which was performed in 2 biological and 3 technical replicates. Source data are provided as a source data file (S1 Data). Fig U. Alignments of STM3175 homologs. Shown are representatives from different genera of the top 100 homologs identified by BLAST of the LBD against the RefSeq database excluding Salmonella sp. Proteins investigated in this study are indicated in red. Amino acids with 100% consensus are shown in red, and the beta-strand forming the base of the binding groove is highlighted by a red box. Brombach (Institute of Microbiology and Epizootics, Freie Universität Berlin) for excellent technical assistance. We accessed beamlines of the BESSY II (Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung II) storage ring (Berlin, Germany) via the Joint Berlin MX-Laboratory sponsored by the Helmholtz Zentrum Berlin für Materialien und Energie, the Freie Universität Berlin, the Humboldt-Universität zu Berlin, the Max-Delbrück-Centrum, the Leibniz-Institut für Molekulare Pharmakologie and Charité-Universitätsmedizin Berlin. We are grateful to Elvenstar's Pukipon for the kind donation of cat whiskers. For mass spectrometry, we would like to acknowledge the assistance of the Core Facility BioSupraMol.
Microfabricated Modular Scale-Down Device for Regenerative Medicine Process Development The capacity of milli and micro litre bioreactors to accelerate process development has been successfully demonstrated in traditional biotechnology. However, for regenerative medicine present smaller scale culture methods cannot cope with the wide range of processing variables that need to be evaluated. Existing microfabricated culture devices, which could test different culture variables with a minimum amount of resources (e.g. expensive culture medium), are typically not designed with process development in mind. We present a novel, autoclavable, and microfabricated scale-down device designed for regenerative medicine process development. The microfabricated device contains a re-sealable culture chamber that facilitates use of standard culture protocols, creating a link with traditional small-scale culture devices for validation and scale-up studies. Further, the modular design can easily accommodate investigation of different culture substrate/extra-cellular matrix combinations. Inactivated mouse embryonic fibroblasts (iMEF) and human embryonic stem cell (hESC) colonies were successfully seeded on gelatine-coated tissue culture polystyrene (TC-PS) using standard static seeding protocols. The microfluidic chip included in the device offers precise and accurate control over the culture medium flow rate and resulting shear stresses in the device. Cells were cultured for two days with media perfused at 300 µl.h−1 resulting in a modelled shear stress of 1.1×10−4 Pa. Following perfusion, hESC colonies stained positively for different pluripotency markers and retained an undifferentiated morphology. An image processing algorithm was developed which permits quantification of co-cultured colony-forming cells from phase contrast microscope images. hESC colony sizes were quantified against the background of the feeder cells (iMEF) in less than 45 seconds for high-resolution images, which will permit real-time monitoring of culture progress in future experiments. The presented device is a first step to harness the advantages of microfluidics for regenerative medicine process development. Introduction Over the last ten years, bioreactor miniaturisation for traditional biotechnology has made significant progress. What began with a proof-of-concept study [1] is now a field of its own, broadly encompassing miniaturised stirred tank, microwell format-based and microfabricated bioreactors [2,3,4,5]. Favourable comparisons with larger scale bioreactors have been successfully demonstrated with bacterial, yeast and Chinese Hamster Ovary cells, and these mini-and micro-bioreactors have been operated in batch, fed-batch and chemostat mode. Automated, parallelised and instrumented, miniaturised bioreactors deliver quantitative data on the growth kinetics in real time, from culture volumes as small as 5 microlitres [6]. Several systems are now commercially available and could underpin the implementation of the Process Analytical Technologies and Quality by Design initiatives [7]; in short, bioreactor miniaturisation has changed the way early stage process development can be approached in traditional biotechnology. In traditional mammalian cell culture applications, cells are typically adapted to grow in suspension, either freely or attached to microcarriers. However, regenerative medicine presents the bioprocessing industry with a new production challenge, in which the cells themselves are the product. 
While some progress has been made towards the development of microcarrier-based expansion of human embryonic stem cells (hESC) [8], early clinical trials of stem cell medicines rely on more traditional adherent culture [9,10,11]. To deliver a range of potential clinical applications [10,12,13,14,15] it will be necessary to reliably, safely and efficiently produce high quality cells in adherent cultures [16,17,18]. To optimise the numerous biological, physical and chemical factors that synergistically combine to control stem cell fate [19], a large amount of process development is necessary. Consequently, due to the high cost of media components and the slow growth rate of stem cells, it is obvious that regenerative medicine process development will benefit from a similar technology drive towards miniaturisation. Present smaller scale culture methods limit stem cell process development. In culture flasks and dishes, the high cost of the growth factor-containing media constrains the number of experiments that can be performed. On the other hand, microwell plates, which operate with smaller amounts of media, are susceptible to well-to-well variability, medium evaporation and edge effects [20]. Additionally, all these devices typically lack instrumentation, giving a reduced understanding of the impacts of process variables. There are also problems with variations during manual processing, which can affect the phenotype of stem cells [21,22]. Microfabricated devices show potential to overcome these issues. A number of publications have clearly demonstrated that stem cell culture can be performed with fewer resources at a microfluidic scale [23] and different platform technology and parallelisation approaches have been reported [20,24,25]. Furthermore, instrumentation for on-line monitoring allows for automated and data-rich experimentation. Crucially, microfabricated devices will allow thorough investigation of the effect of perfusion culture during process development with minimum use of expensive media [26,27,28,29]. In larger reactors, perfusion cultures have shown improved expansion yields over static culture conditions for haematopoietic [30,31] and embryonic stem cells [32,33]. However, particular considerations must be made when designing a microfabricated bioreactor for regenerative medicine process development so that a link is maintained with conventional culture methods and production systems for the purposes of validation and scale up studies. Firstly, the hydrodynamic shear inherent in perfused systems can cause cell wash out of weakly adhering cells at flow rates as low as 0.05 ml/hr [34]. Furthermore, the effect of hydrodynamic shear may need to be decoupled from the effects of media replenishment and the removal of secreted factors. Secondly, dynamic seeding may result in nonuniform and poorly defined seeding densities, the presence of cells outside of the intended cell culture area, and damage to cells seeded in colonies (such as hESCs). Finally, the properties of the culture substrate and the extracellular matrix (ECM) affect cell adhesion, which in turn affects cell proliferation and cell differentiation. In current cell culture protocols, cell growth surfaces typically consist of a tissue culture polystyrene (TC-PS) culture substrate coated with an ECM. However, integration of TC-PS with microfabricated devices is difficult, since TC-PS is not compatible with conventional bonding and microfabrication techniques. 
In this contribution, we start to address the above issues by presenting a novel, autoclavable, microfabricated culture device, with a re-sealable culture chamber. This re-sealable culture chamber allows traditional static seeding in an otherwise fully assembled device. Additionally, the device reversibly seals with a TC-PS microscope slide (or any other standard-sized slide), allowing the use of traditional growth surfaces. Using computational fluid dynamics software, we analyse how hydrodynamic shear stress can be adjusted by recessing the cell culture area. We demonstrate the benefits of the device, by seeding feeder cells and hESC colonies in static conditions onto gelatine-coated TC-PS. We also demonstrate the use of low hydrodynamic shear stress perfusion in the culture of hESC colonies that maintain an undifferentiated morphology, and retain the expression of pluripotent markers under continuous perfusion culture. Finally, using a novel image processing algorithm, we show that hESC colonies can be detected against a background of feeder cells. In the future, this will allow real-time quantification of hESC colony sizes during cell culture.
Microfabricated Modular Scale-down Device
The microfabricated culture device (Figures 1 and 2) consisted of a lid made from polycarbonate (PC), two interconnects made from aluminium (Al), a top and bottom frame (PC), a gasket and a microfluidic chip made from poly(dimethylsiloxane) (PDMS), and a TC-PS slide (16004, Nunc, Denmark). The top frame included an opening to accommodate the lid as well as two recesses. The first positioned the microfluidic PDMS chip with respect to the top frame, and the second deeper recess accommodated the gasket. A set of bores in the top frame enabled the mounting of the two interconnects. The bottom frame had the same outer dimensions as the top frame and a recess dimensioned to hold the TC-PS slide. An opening in the centre was designed to bring objectives from an inverted microscope into close proximity with the TC-PS slide for cell culture imaging. The top and bottom frame were clamped together with five M3 hex screws distributed down each side of the frame. The central pair of screws also attached the lid when in use. All the screws were tightened to 2 N.cm, forming seals between the components by compression of the PDMS. To facilitate rapid set-up of cell culture experiments and achieve leak-free long-term operation, an easy and robust interconnection with the macro-world is required [35]. The cylindrically shaped interconnects (Figure 1(b)) contained a 1 mm diameter bore in their centre to link external tubing with the microfluidic chip. At the bottom, the interconnects formed a boss that compressed the microfluidic PDMS chip to form a seal. At the top, the bore was threaded to accept M6 Upchurch fittings and therefore permit simple connection with tubing for the provision and removal of media. The mean burst pressure of the culture device was 59 kPa with a standard deviation of 18 kPa (n = 36), and the lowest recorded burst pressure was 35 kPa. The pressure drop across the device at a flow rate of 500 ml.h−1 (3 orders of magnitude higher than the perfusion flow rate) was measured as 20 kPa. The lid was T-shaped with the upper 'horizontal' bar acting as a bed stop when the lower 'vertical' bar was pushed into the opening of the top frame. This defined the height of the culture chamber below (450 µm). The 'vertical' bar formed a press-fit with the gasket to seal the chamber.
The dimensions of the 'vertical' bar matched the footprint of the culture chamber of the microfluidic PDMS chip. The re-sealable lid provides a simple means to open and close the culture chamber. This enables operation of the device in a so-called 'open' configuration for cell seeding, and a 'closed' configuration for medium perfusion. Analysis of variance shows there is no statistically significant relationship between burst pressure and the number of times the lid is removed and reinserted for up to 30 repetitions (α = 0.05, p = 0.99, n = 3). The PDMS microfluidic chip controls the flow of culture medium in the device. The microfluidic chip was made of two PDMS layers, both containing a rectangular culture chamber measuring 4 mm in the direction of flow by 13 mm across the flow. The top layer (Figure 1(c)) contained the 200 µm deep flow channels connecting the inlet and outlet ports to the culture chamber. The flow is expanded from a narrow inlet prior to the culture chamber and condensed back to a narrow outlet after the chamber by 3 merging channels on each side. The top layer also included flow equalisation (or perfusion) barriers on each side of the culture chamber, each 200 µm wide and 1000 µm long. The apertures between the barriers had a rectangular cross-section (400 µm × 200 µm). The second layer ('spacer') elevated the first layer above the cell culture area by 120 µm (Figure 1(d)).
Modelling Velocity Fields and Hydrodynamic Shear
To evaluate the design, we analysed the velocity fields and shear stress produced at a flow rate of 300 µl.h−1. This flow rate corresponds to replacing 13.8 ml of media per day for each square centimetre of culture area, a rate 50 times higher than typical in hESC culture. It is therefore unlikely cells would ever be subjected to a higher shear stress. The uniformity of the velocity field in the culture device was investigated at various heights above the culture plane (i.e. above the TC-PS slide). At 15 µm above the cell culture plane, the average fluid velocity is approximately a factor of 10 lower than at 200 µm above the cell culture plane, which is in line with the inlet and outlet channels (Figure 3(a, b)). The microfluidic chip design produces a relatively even velocity field across the majority of the culture chamber (Figure 3(b, c)). An increased velocity at the boundaries of the culture chamber can be observed due to the larger gap between the flow restrictor and the boundary. This effect was deliberate and intended to remove air bubbles entrapped during closing or filling. Hydrodynamic shear stress was also calculated 15 µm above the cell culture plane for a flow rate of 300 µl.h−1. An average of 1.1×10−4 Pa and a standard deviation of 0.14×10−4 Pa were obtained from the model. The calculated value of 1.3×10−4 Pa, using an analytical solution for shear stress at the culture surface, supports the result obtained through finite element modelling.
Static Seeding and Perfusion Culture of hESC Colonies
To test the suitability of the device for hESC culture, we seeded culture devices, assembled from autoclaved parts, according to the protocol employed in our regenerative medicine laboratory (see Materials and Methods). As a control, we seeded three single-well dishes in parallel to the culture devices. In the culture devices and the control dishes, the inactivated mouse embryonic fibroblasts (iMEF) started to attach within 2 hours. After one day, the cells had attached and spread in both systems.
hESC colonies seeded onto the iMEF layer attached within 1 day. Colonies maintained an undifferentiated morphology comparable to the colonies in the control dishes (Figure 4(a,d)). A day after hESC seeding, the culture devices were closed and media was continuously perfused at 300 µl.h−1, resulting in a residence time of approximately 5 min. In the control dishes, the media was replaced once a day in line with standard manual cell culture practice. During perfusion, dissolved gases were supplied via the media, which had absorbed them from the incubator atmosphere through the inlet tubing. After 1 day, the cells within the colonies were small and tightly packed together; a characteristic morphology of undifferentiated hESCs (Figure 4(b,e)). After 2 days of continuous perfusion, hESC colonies maintained an undifferentiated morphology in both the culture device and the control dishes (Figure 4(c,f)). Representative higher magnification images are available in Supporting Information S1. Immunostaining was carried out to test for the expression of several pluripotency markers. The hESC colonies from one culture device were co-stained for Oct-4 and Tra-1-81, and the cells from a second device were stained for SSEA-3 (both were co-stained with DAPI). The immunostaining sequence of antibody incubations and buffer washes was performed in the device using the 'open' configuration, i.e. after removing the lid. As can be seen in Figure 5, the hESC colonies stained positively for Oct-4 (c), Tra-1-81 (d) and SSEA-3 (g) with the correct localisation (nuclear, surface and surface, respectively). The percentage of cells staining positive for Oct-4, in images of individual colonies, was 91% in the culture device and 94% in the control well, with standard deviations of 2% and 5% respectively (n = 3 colonies, approximately 1,500 cells total). In a repeat experiment, a culture device and a control dish were stained with Annexin V and propidium iodide (PI) to detect apoptotic and necrotic cells. The numbers of cells staining positive were very low (Supporting Information S2).
Rapid Quantification of hESC Colony Size
An image processing algorithm was developed which permitted the detection of hESC colonies co-cultured with iMEF feeder cells. In brief, the texture of the local neighbourhood of a pixel was characterised at four scales (corresponding to various levels of spatial coarseness) and a random forest statistical classifier [36] used this information to label the pixel as being either part of a hESC colony or of the background (which included the iMEF cells). The resulting binary images can be used as a basis for the computation of the confluency (ratio of hESC pixels to total number of pixels) or the area occupied by the cells. This approach was used to monitor the culture in the microfabricated culture device. Figure 6 shows tracking of a single colony in the culture device from 1 day after seeding to the end of the 2 day perfusion period. The number of colonies, the total area occupied and the mean colony area were computed based on phase contrast images acquired at various stages of the expansion (Table 1). Differences in colony size and area can be attributed to differences in the microenvironment of the cells, including the medium.
Performance of the algorithm was evaluated against 20 unseen phase contrast images from the microfabricated culture device which showed a uniform range of confluencies from 3% to 82% (Supporting Information S3).
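Before turning to its performance, here is a minimal sketch of the kind of multi-scale texture plus random-forest pixel classification described above. It is not the authors' implementation: the choice of local mean and local standard deviation as texture descriptors, the four Gaussian scales, and the forest size are assumptions made for illustration, and the training mask is expected to come from an expert annotation.

```python
# Illustrative multi-scale texture + random forest pixel classifier for
# separating hESC-colony pixels from background (iMEF) pixels in phase
# contrast images. Descriptor choice, scales, and forest size are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

SCALES = (1, 2, 4, 8)  # four spatial scales in pixels, fine to coarse texture

def texture_features(image: np.ndarray) -> np.ndarray:
    """Per-pixel feature stack: raw intensity plus local mean and std at each scale."""
    img = image.astype(np.float64)
    feats = [img]
    for s in SCALES:
        mean = gaussian_filter(img, s)
        sq_mean = gaussian_filter(img ** 2, s)
        std = np.sqrt(np.clip(sq_mean - mean ** 2, 0.0, None))
        feats.extend([mean, std])
    return np.stack(feats, axis=-1).reshape(-1, 1 + 2 * len(SCALES))

def train_classifier(image: np.ndarray, annotation: np.ndarray) -> RandomForestClassifier:
    """annotation: boolean mask (True = hESC colony) drawn by an expert."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(texture_features(image), annotation.ravel())
    return clf

def segment(clf: RandomForestClassifier, image: np.ndarray) -> np.ndarray:
    """Return a binary hESC-colony mask for a new phase contrast image."""
    pred = clf.predict(texture_features(image))
    return pred.reshape(image.shape).astype(bool)
```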
To characterise performance we report F-scores, which are a standard metric to express the correctness of the pixel classification (and indicate overlap). The mean F-score was 90% with a standard deviation of 7% (n = 20); the worst F-score obtained over the testing set was 71%. In addition, the error in confluency estimation was assessed by comparing the confluency computed from the expert annotation with that derived from the algorithm output. The accuracy (bias) for the confluency estimates was −0.6% with a 95% confidence interval of [−2.1%, +1.0%]. The precision of the estimates, as determined from the root mean square error (RMSE) and the estimate bias, was 3.2%. The algorithm takes up to 45 seconds for a high resolution image (1280 × 960). Additional metrics typically employed in image processing for pixel classification performance are reported in Supporting Information S4.
Discussion
We present a microfabricated adherent culture device that starts to address the requirements of regenerative medicine process development and demonstrate the potential of the device by culturing feeder-attached hESC colonies. The culture of feeder-attached hESC colonies is an appropriate model system for multiple reasons. hESCs are more difficult to culture than other common model systems such as Chinese Hamster Ovary cells, mouse embryonic stem cells, mouse embryonic fibroblasts and human foreskin fibroblasts. Furthermore, hESCs are a clinically relevant cell type, and co-culture techniques, which are inherently more complicated than monoculture, are common in regenerative medicine. Thus feeder-attached hESC culture is a more rigorous test than many other culture processes, and it is assumed a device suitable for feeder-attached hESC culture would be suitable for most other adherent cell cultures.
Design
Integration of TC-PS with microfluidic devices would normally be difficult, as TC-PS is not compatible with conventional air plasma or thermal bonding. Consequently, microfabricated devices for adherent cell culture make ubiquitous use of glass or poly(dimethylsiloxane) (PDMS) growth surfaces [37], neither of which is commonly employed in regenerative medicine [38]. Indeed, to introduce novel growth surfaces and ECMs to processes for medical application, or even to compare them accurately to existing materials, would require extensive testing and validation. In our device, we successfully demonstrated the integration of gelatine-coated TC-PS through compression of the PDMS components against smooth surfaces, resulting in an average burst pressure of 59 kPa. TC-PS is the most widely employed cell growth surface in stem cell biology, including T-flasks and Cell Factories, making later translation to larger production scales straightforward. With our modular design, materials other than TC-PS can easily be integrated as long as they have a smooth, flat surface and the dimensions of a standard microscope slide [39]. This makes a number of materials immediately available for investigation. As a result, this device could be employed to test growth surface candidates from microarray screening [40], under the defined culture conditions obtainable in the microfluidic chip. This is analogous to the scale-up train in traditional biotechnology, where 'hits' from high-throughput screening plates are first investigated in shaker cultures or small-scale bioreactors. The minimum size of the culture chamber is limited not by the methods of microfabrication but by the number of cells required for analysis.
We have designed our culture chamber to be as large as possible within the constraints of a microscope slide. The chamber was 13 mm wide and 4 mm in the direction of flow giving a culture area of 0.52 cm 2 (between 96-well and 48-well plates). This is sufficient for immunostaining or quantitative PCR. Further, the form factor of the culture chamber must also be considered. When investigating the effects of specific process variables it is important that these variables are uniform across the entire cell culture area. However, in long, narrow perfusion chambers, the consumption and secretion of soluble factors by cells near the inlet alter the conditions for the cells downstream. This effect is exacerbated at lower flow rates. Consequently, when defining the culture area, the width dimension of the culture chamber was maximised, within the limits of the slide's width, to minimise the length of the chamber. These dimensions are distinctly different from all other microfabricated devices for hESC culture [26,27,28,41]. To promote uniformity across the culture chamber further, the top layer of the microfluidic chip (Figure 1(c, d)) included flow dividers and rows of flow equalisation barriers on either side of the culture chamber. The efficacy of flow equalisation barriers at creating uniform flow velocity fields was previously demonstrated with slightly larger apertures [42], and with smaller rectangular apertures [43]. We demonstrate the effectiveness in our design through the generation of a relatively uniform velocity field (Figure 3(c)). The barriers thus minimise non-uniform cell growth patterns which can arise from variations in velocity fields and which are difficult to interpret [44]. Such growth patterns could be caused by spatial differences in shear stress or spatial differences in the exchange of soluble factors. Seeding Seeding density is a critical variable to both the expansion and differentiation of stem cell populations. Additionally, weakly adhering cells like hESC colonies typically require long incubation times (up to 2 days) to achieve secure attachment [45]. During this period, a culture medium overlay (typically a few millimetres) balances the oxygen and nutrient demands of the cells. Further, due to the low number and high value of some starting stem cell populations, a cell-efficient seeding method is required. Compared with dynamic seeding [46], static seeding gives more accurate control over starting cell density and distribution as it avoids cells settling and adhering in inlet and outlet channels. Additionally, the exposure to hydrodynamic shear stress occurring with flow-based, dynamic seeding methods is avoided. Minimising exposure to hydrodynamic shear is particularly crucial for the handling of embryonic stem cells, since shear stress during seeding can affect the phenotype [22] and could potentially dissociate multi-cellular hESC colonies. Finally, a device where standard seeding protocols can be adhered to reduces differences between scales and paves the way for robust and reproducible culture processes. We therefore sought to integrate a standard seeding method with our device to facilitate operation with a wide range of cells and seeding parameters. To address this goal our device includes a re-sealable lid. The lid provides a simple means of opening and closing the culture chamber. Thus, the device can be operated on both open and closed configuration during culture protocols (Figure 7). 
In the closed configuration, the height of the culture chamber is repeatably defined by the re-sealable lid. Additionally, the hard material of the lid does not deform during medium perfusion, ensuring reproducible fluid flow patterns. In the open configuration, the culture chamber is directly accessible with laboratory pipettes, facilitating pipette-based methods typically employed in laboratory scale stem cell maintenance, including static seeding, static cell recovery and immunostaining. A further advantage of our device is that, in open configuration, the depth of media in the culture chamber is similar to the depths used in T25-flasks or culture dishes. Thus, during cell settling and attachment, the cells experience a similar microenvironment to traditional culture systems, addressing our objective of maintaining a link to conventional culture methods for validation. Previously presented re-sealable systems required seeding before assembly [47], which is cumbersome and results in poorly defined culture areas, or limited the height of the culture chamber to the total thickness of the device [48], potentially leading to excessive media hold-up times during perfusion. We successfully seeded both inactivated mouse embryonic fibroblasts (iMEF) and colonies of human embryonic stem cells (hESC) utilising standard static seeding protocols. The iMEF cells started to attach within 2 hours and had attached and uniformly spread after one day. hESC colonies seeded onto the uniform iMEF layer attached within 1 day and maintained an undifferentiated morphology comparable to control dishes (Figure 4(a, d)). Furthermore, there was no statistically significant relationship between repeated removal and reinsertion of the lid and the burst pressure of the device up to 30 iterations. These results confirm the suitability of the re-sealable lid in facilitating static cell seeding. Perfusion Culture Previous reports indicated that shear stress is a critical parameter that can lead to cell dislodgement during medium perfusion [34,49], which we confirmed in our own experiments (data not shown). The hydrodynamic shear stress of 1.1×10⁻⁴ Pa achieved in our design at 300 µl·h⁻¹ is an order of magnitude below 5×10⁻³ Pa and three orders of magnitude below 1×10⁻¹ Pa, the critical values previously reported by Korin et al. [34] for hESCs and Toh et al. [29] for mESCs, respectively. Therefore, cell washout or significant shear impact are unlikely in our device. The low shear stress is primarily achieved through the large cross section of the culture chamber. However, shear stress on the culture plane is reduced further by recessing it below the inlet and outlet channels. The effectiveness of this technique has been previously demonstrated in straight channels with grooves [49], round wells [48] and rectangular chambers [42]. In our design, the PDMS 'spacer' layer (Figure 1(d)) elevates the main plane of medium flow above the cell growth surface. Since the thickness of the layer is determined by spin-coating parameters, the elevation can easily be changed. An example application is the optimisation of shear stress levels at a fixed flow rate or vice versa. Supporting our predictions from fluid dynamic modelling, we did not observe washout of hESC colonies at the relatively high flow rate of 300 µl·h⁻¹. We successfully demonstrated a 2-day, continuous perfusion culture of feeder-attached hESC colonies in a microfluidic device without washout of the colonies.
Both the low shear chip design and the use of a traditional substrate may have contributed to the continued adherence and growth of the cells in these conditions. While these results must be further verified with feeder cell densities that match more closely the densities from the control dishes, the results demonstrate the suitability of the microfabricated device as a culture system for hESCs. Further, the lack of infection after 3 days of culture, along with additional E. coli clearance studies (Supporting Information S5, S6), demonstrates the effectiveness of sterilisation by autoclave. Finally, the positive staining results, in combination with the morphology observations, are evidence supporting a maintained, undifferentiated hESC state during seeding and continuous perfusion. Monitoring Adherent cell cultures are by nature difficult to monitor: whereas suspension cultures can be characterised by sampling small culture volumes for offline analysis or by using indirect cell density measurements such as optical density, no standard approach is readily available for adherent systems. However, to accelerate regenerative medicine bioprocessing, there is clearly a need for a quantitative method for online characterisation of adherent cell cultures in general and that of co-cultures in particular. Such an online characterisation method will allow accurate and reproducible measurement of the effect of changes in experimental conditions (e.g. culture substrate and ECM used, medium formulation). To this effect, an image processing approach was developed to automate the detection and characterisation of hESC colonies co-cultured with iMEF feeder cells (Figure 6) without the addition of dyes or markers to the culture medium. Conventional microscopy image processing methods, which are based on the detection of local changes in intensity, are unable to distinguish between two cell populations as they present similar intensity profiles. Instead, our approach relies on the detection of differences in texture between hESC colonies and fibroblast cells. The random forest classifier is essentially a set of complex rules that in this case are used to label each of the pixels according to their texture features. Using information from the neighbourhood of a pixel at multiple scales is necessary for a robust characterisation of texture. The process mimicked how a human expert would distinguish the two cell types by evaluating multiple features in local regions of the image. The algorithm achieved a high pixel classification performance, which resulted in a low confluency estimation error. Our confluency estimates were shown to have no significant bias (mean = −0.6%, 95% CI = [−2.1%, +1.0%]) and a precision of 3.2%; together these show that the method produces estimates in good agreement with those of a human expert. Some of the discrepancies in detection results can be attributed to limitations of the current algorithm or to inadequate human annotations. Indeed, it is often challenging to classify pixels in ambiguous regions of the image such as colony borders or dense iMEF clusters. However, the random forest classifier was chosen to alleviate issues with ambiguous choices and the results reported herein demonstrated the versatility and the accuracy of the approach. Furthermore, we show the algorithm can be used to generate metrics of colony number and size. Combined with the low computational complexity of the algorithm, this makes the method suitable for on-line monitoring of hESC culture confluency.
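To make the agreement statistics quoted above concrete, the sketch below shows one way the bias, its 95% confidence interval and the RMSE-derived precision could be computed from paired per-image confluency values. This is a minimal illustration, not the authors' MATLAB analysis: the function name and example numbers are hypothetical, and reporting precision as the bias-corrected spread of the errors is an assumption about how the RMSE and the bias were combined.

```python
import numpy as np
from scipy import stats

def agreement_stats(expert, algorithm):
    """Bias, 95% CI of the bias, and a precision estimate for confluency values (%).

    `expert` and `algorithm` are per-image confluency estimates from the human
    annotation and from the classifier output, respectively.
    """
    diff = np.asarray(algorithm, dtype=float) - np.asarray(expert, dtype=float)
    bias = diff.mean()                                    # systematic error
    ci = stats.t.interval(0.95, len(diff) - 1,
                          loc=bias, scale=stats.sem(diff))
    rmse = np.sqrt(np.mean(diff ** 2))                    # total error
    precision = np.sqrt(max(rmse ** 2 - bias ** 2, 0.0))  # spread once bias is removed
    return bias, ci, precision

# hypothetical per-image confluency values (%) for a small test set
expert = np.array([35.2, 41.0, 28.7, 55.1, 60.3])
algo = np.array([34.5, 40.2, 29.9, 54.0, 61.1])
print(agreement_stats(expert, algo))
```

A bias whose confidence interval spans zero, together with a small precision value, is the numerical signature of the "good agreement with a human expert" described above.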
Processing the 18 4× images required to cover our culture chamber takes less than 15 min. As a subject for future studies, culture results with adult stem cells need to be obtained to address the full scope of regenerative medicine. Furthermore, the current study could be strengthened by the addition of further online and end-point measurements, including in vivo functional evaluation of the cells expanded in this device. This would enhance comparison with other more conventional methods of stem cell expansion. We are currently integrating further monitoring capabilities, such as the detection of bulk and peri-cellular dissolved oxygen concentrations, and building a fully automated parallelised platform that fits on a microscope stage. This will permit real-time data-rich experimentation for regenerative medicine process development. Burst Pressure Measurements To measure burst pressure a 10 ml plastic syringe was connected to one interconnect via tubing and a 3-way valve (98-2750, Harvard Apparatus, UK). Tubing connected to the other interconnect was blocked with a Luer lock plug. The third port of the 3-way valve was connected to a pressure sensor (40PC100G, Honeywell, USA) glued into a fitting (P-207, Upchurch Scientific, USA) with epoxy glue. A syringe drive was used to pump air into the device at 5 ml·min⁻¹ and the pressure was logged via a LabVIEW routine (LabVIEW 2011, National Instruments, USA) and data acquisition card (USB-6229BNC, National Instruments, USA). The burst pressure was taken as the highest recorded applied pressure for a given experiment. The burst pressure was recorded 3 times per assembly for 12 different assemblies. Additionally, for the last 3 assemblies, single burst pressure measurements were made following iterations of lid removal and reinsertion. Measurements were made following 1-10, 20 and 30 iterations. Lid replacement burst pressures were normalised against the initial burst pressure before applying analysis of variance to investigate a relationship between burst pressure and lid replacement. All device components used for burst pressure experiments had previously been autoclaved. Fluid Dynamic Modelling The Navier-Stokes equations were solved by using the finite element method (FEM) software package Comsol Multiphysics 3.5a (COMSOL, Cambridge, UK). A fully developed steady-state flow with a no-slip condition at the boundaries was assumed. Water at 37 °C was used as the working fluid, with interpolated values for density and dynamic viscosity of 993.2 kg·m⁻³ and 6.96×10⁻⁴ Pa·s, respectively [50]. The boundary conditions were set at the inlet to an average velocity calculated from the flow rate (300 µl·h⁻¹), and at the outlet to zero pressure. Due to the longitudinal symmetry of the microfluidic chip, only half of the chip was incorporated in the model to minimise computational time. Tetrahedral elements were employed to mesh the 3-D domains of the culture device (mesh sizes between 2.5 and 7.5 mm, 527,539 elements). The model was solved with the built-in linear system solver UMFPACK. Hydrodynamic shear stress was calculated from the simulated velocity profile using the equation τ_h = μ·γ̇, where τ_h is the shear stress at a height h from the surface, μ the dynamic viscosity and γ̇ the shear rate.
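For readers who want to reproduce the order of magnitude of the shear stress values discussed above, the following sketch evaluates τ = μ·γ̇ on a near-wall velocity profile and cross-checks it against the analytical parallel-plate estimate used for verification below. The viscosity, flow rate and chamber width are the values quoted in the text; the velocity profile and the chamber height are illustrative assumptions, since the height is not stated in this excerpt.

```python
import numpy as np

mu = 6.96e-4          # dynamic viscosity of water at 37 °C, Pa·s (from the text)
Q = 300e-9 / 3600.0   # 300 µl/h expressed in m³/s
w = 0.013             # chamber width, m (13 mm, from the text)
h = 0.5e-3            # chamber height, m (assumed value, not stated in this excerpt)

# hypothetical near-wall velocity profile exported from a CFD model:
# z is the height above the growth surface (m), v the streamwise velocity (m/s)
z = np.array([0.0, 10e-6, 20e-6, 30e-6, 40e-6])
v = np.array([0.0, 1.6e-6, 3.1e-6, 4.5e-6, 5.8e-6])

tau_profile = mu * np.gradient(v, z)    # tau_h = mu * dv/dz at each height
tau_plate = 6 * mu * Q / (w * h ** 2)   # analytical wall shear stress, parallel plates

print(f"shear stress at the surface ≈ {tau_profile[0]:.1e} Pa")
print(f"parallel-plate estimate     ≈ {tau_plate:.1e} Pa")   # ~1e-4 Pa, matching the order of magnitude reported
```

With the assumed height, both routes give a wall shear stress of roughly 1×10⁻⁴ Pa, consistent with the value reported in the Discussion.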
To verify and compare the calculated shear stress from the model, the analytical solution for the wall shear stress between infinite parallel plates was used: τ_w = 6μQ/(wh²), where τ_w is the shear stress at the wall, h the height of the culture chamber, w the width of the culture chamber, μ the dynamic viscosity and Q the volumetric flow rate. Ethics Statement Mouse embryonic fibroblasts (MEFs) were derived from mouse embryos, which were harvested at day 12.5-13.5 of pregnancy (E12.5-13.5) from a naturally mated CD-1 female mouse. The pregnant female and the embryos were humanely sacrificed following Schedule 1 of the Animals (Scientific Procedures) Act 1986, for which specific ethical approval and licence are not required according to UK regulations. Cell Culture Prior to each cell culture experiment, all parts of the culture device and all tubing and tools required for assembly were autoclaved. The culture device was then assembled with a sterile TC-PS slide in a laminar flow hood. For substrate coating and cell seeding, the lid was removed ('open' configuration, Figure 7(a)). The dish and culture device were transferred to the incubator (37 °C, 5% CO₂). For transfer between laminar flow hood and incubator, the culture device was placed in a large sterile glass Petri dish (2175553, Schott, USA). One day later the MEF medium was replaced with hESC medium, dissected hESC colonies were seeded in the culture device and the control dishes, and both were incubated (37 °C, 5% CO₂) for a further day. After 1 day of static culture, the medium in the culture chamber was aspirated and the culture device closed with the re-sealable lid ('closed' configuration, Figure 7(b)). Autoclavable tubing (R1230, Upchurch Scientific, USA) with Upchurch fittings (P207, Upchurch Scientific, USA) and gas-permeable silastic tubing (R3607, Tygon, USA) connected the syringe with the culture device (Figure 8). The two types of tubing were attached to each other via Luer adapters (F331 and P659, Upchurch Scientific, USA). The gas-permeable tubing was included to adjust gaseous tension levels in the medium before it entered the culture chamber. The culture device was manually primed with culture medium using a syringe, after which the syringe was placed on a syringe drive (Model100, KD Scientific, USA) and culture medium was perfused for 2 days at 300 µl·h⁻¹. The entire setup was placed in an incubator to maintain the culture temperature and atmospheric composition. Medium in the control dishes was exchanged every day. Cell Staining and Imaging Daily cell culture inspections and end-point assay imaging were performed with an inverted microscope (Eclipse TE2000-U, microscope camera DS-Fi1, Nikon, Japan). Cell staining in the culture device was performed in the open configuration. For apoptosis/necrosis staining, cells were washed once with DPBS then incubated for 5 min with Annexin V-FITC and propidium iodide (PI), each diluted 1:100 in binding buffer (K101-25, BioVision, USA). For immunostaining, hESC colonies were fixed with 4% (v/v) paraformaldehyde (PFA) in phosphate buffered saline (PBS) for 20 minutes then washed three times. All washing was with PBS supplemented with 10% (v/v) FBS to block nonspecific binding. Cells to be stained for the nuclear marker Oct-4 were permeabilised by incubating with 0.2% Triton X-100 for 15 min at room temperature before washing a further 3 times.
We incubated cells with primary monoclonal antibodies Oct-4 (SC-5279, Santa Cruz, USA) or SSEA-3 (ab109868, Abcam, UK), at a dilution of 1:200 in blocking solution, for one hour at room temperature. The cells were then washed three times and incubated with secondary antibodies that had an excitation wavelength of 488 nm (A11017, Invitrogen, USA and A21212, Invitrogen, USA respectively) for one hour at room temperature. Cells stained for Oct-4 were then washed three times and co-stained for Tra-1-81 by repeating the primary/secondary staining procedure above with a Tra-1-81 primary (ab16289, Abcam, UK) and a secondary with an excitation wavelength of 555 nm (A21426, Invitrogen, USA). Finally, the cells were washed with DPBS and stained with 4',6-diamidino-2-phenylindole (DAPI) (D1306, Invitrogen, Carlsbad, CA, USA). DAPI at a dilution of 1:200 was incubated with cells at room temperature for 10 minutes. Three experts counted cells staining positive for DAPI and Oct-4 in images of individual and partial colonies. Cells in images of three different colonies were counted in both the culture device and the control dish. The average counts across the three users were used to calculate the percentage of positive cells in each colony image. Automated hESC Colony Characterisation All image processing was done using MATLAB (version R2011a, MathWorks Inc., USA) and 1280×960 images taken using a 4× objective. Images were first converted to a double-precision floating point greyscale representation by computing a weighted average of the three image channels (weighted 0.290, 0.570 and 0.140 for red, green and blue components respectively). The basic image features (BIFs) of the image were computed from the responses of derivative-of-Gaussian filters according to the scheme of Crosier and Griffin [51]. Briefly, the BIF approach classifies each pixel of the image into one of seven classes based on approximate symmetry in its neighbourhood. The parameter σ defines the scale of the derivative-of-Gaussian filters employed. The readiness of the algorithm to ignore local structure and classify a pixel as 'flat' is controlled by the parameter ε. BIFs were computed at scales σ_base, 2σ_base, 4σ_base and 8σ_base (σ_base = 0.7), with ε kept constant at 0.11. For each scale, a local histogram of the counts of different BIFs appearing in a 25×25 pixel uniformly weighted window was built for each pixel of the image. The resulting feature vector for each pixel contained 28 elements (4 concatenated local BIF histograms of 7 bins each). A MATLAB implementation of the random forest classifier [52] was trained using 1.52×10⁷ pixels annotated by a human expert. Each pixel was labelled as either hESC or background (which also included fibroblast cells). The random forest consisted of 20 trees with 5 variables randomly sampled at each split. When processing an image, the features associated with each pixel were computed as outlined above and the random forest classifier was used to predict the class labels. The result was a binary image with hESC pixels equal to 1 and the rest to 0. Finally, small objects were discarded as detection noise (size <4000 pixels) and holes were filled (size <6000 pixels) using binary morphological operations. Detection performance was evaluated by comparing the output of the image processing algorithm to the results of a human expert.
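As a rough illustration of the pipeline just described, the sketch below trains a random forest on multi-scale texture features and applies the same morphological clean-up. It is not the authors' MATLAB implementation: the Crosier and Griffin BIF histograms are approximated here by generic Gaussian-derivative responses, the windowed histogramming step is omitted, and the function names are mine; only the scale progression, forest size, split parameter and object/hole size thresholds follow the values quoted in the text.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier
from skimage.morphology import remove_small_objects, remove_small_holes

SCALES = [0.7, 1.4, 2.8, 5.6]   # sigma_base, 2x, 4x, 8x, as in the text

def texture_features(gray):
    """Per-pixel multi-scale texture descriptors (a crude stand-in for local BIF
    histograms): smoothed intensity plus first- and second-order Gaussian-derivative
    responses at each scale."""
    feats = []
    for s in SCALES:
        feats.append(ndi.gaussian_filter(gray, s))
        feats.append(ndi.gaussian_gradient_magnitude(gray, s))
        feats.append(ndi.gaussian_laplace(gray, s))
    return np.stack(feats, axis=-1)            # shape: (H, W, 3 * len(SCALES))

def train(images, masks):
    """`images` are greyscale arrays, `masks` binary expert annotations (1 = hESC)."""
    n_feat = 3 * len(SCALES)
    X = np.concatenate([texture_features(im).reshape(-1, n_feat) for im in images])
    y = np.concatenate([m.reshape(-1) for m in masks])
    clf = RandomForestClassifier(n_estimators=20, max_features=5, n_jobs=-1)
    return clf.fit(X, y)

def segment(clf, image):
    h, w = image.shape
    n_feat = 3 * len(SCALES)
    pred = clf.predict(texture_features(image).reshape(-1, n_feat)).reshape(h, w)
    pred = remove_small_objects(pred.astype(bool), min_size=4000)    # drop detection noise
    pred = remove_small_holes(pred, area_threshold=6000)             # fill holes in colonies
    return pred
```

In the published method, the 12 per-pixel responses above would be replaced by the 28-element local BIF histograms computed over 25×25 pixel windows.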
The testing set included 20 representative images (cropped to 500×500 pixels each) of typical hESC cultures in the microfabricated culture device at different time points. These images were independent of those used for training. The F-score was computed as F = 2TP/(2TP + FP + FN), where TP was the number of true positives, FP the number of false positives, and FN the number of false negatives. The confluency was computed as the ratio of the number of pixels set to 1 (hESC pixels) to the total number of pixels. The area of detected colonies was computed by multiplying the number of pixels set to 1 by a calibration factor relating pixels to physical area (for a 4× lens, at a resolution of 1280×960 pixels, 1 pixel was equal to 2.86 µm²). See Supporting Information S4 for the full set of pixel classification metrics. Supporting Information Supporting Information S1 Representative higher magnification phase contrast images of hESC colonies in the culture device. Phase contrast images of hESC colonies after (a) 1 day of static culture and (b) 1 and (c) 2 days of perfused culture in the microfabricated culture device. All images were taken with a 10× objective, scale bar is 200 µm. (TIF) Supporting Information S2 Images from viability staining of hESC colonies following perfusion culture. Images of a hESC colony after 2 days perfused culture in the microfabricated culture device. Supporting Information S7 Fabrication process of a mould and a microfluidic chip. (1) A sheet of Dural was machined with a micromilling machine to create a mould (2). (3) PDMS was cast into the mould and then degassed. A PC sheet was placed on top of the mould to clamp the mould. Concurrently, a silanised silicon wafer was spin coated with PDMS to form a membrane. The PDMS-coated wafer and the clamped mould were then cured for 1 hour at 80 °C in an oven. (4) The microfluidic manifold layer was released from the mould and the culture chamber body was cut out. (5) The microfluidic manifold layer and the PDMS membrane were exposed to an air plasma and immediately brought into contact for bonding. (6) The membrane at the bottom of the culture chamber body was cut out and the microfluidic chip was cut to shape and released from the wafer. Schematic representation is not to scale. (TIF) Supporting Information S8 Scanning Electron Microscopy images of the mould for the microfluidic chip. The negative flow equalisation barriers were milled with a 200 µm end mill (a). Burrs were not observed at the edges of the mould, for example at the edges of the flow equalisation barriers (b). (TIF)
2017-04-19T03:09:14.796Z
2012-12-19T00:00:00.000
{ "year": 2012, "sha1": "fe2312cc8dd5175ce920e7a8d870450f0e9e06ad", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0052246&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fe2312cc8dd5175ce920e7a8d870450f0e9e06ad", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
3666035
pes2o/s2orc
v3-fos-license
Gene and Chromosomal Copy Number Variations as an Adaptive Mechanism Towards a Parasitic Lifestyle in Trypanosomatids Trypanosomatids are a group of kinetoplastid parasites including some of great public health importance, causing debilitating, lifelong diseases that affect more than 24 million people worldwide. Among the trypanosomatids, Trypanosoma cruzi, Trypanosoma brucei and species from the Leishmania genus are the most well studied parasites, due to their high prevalence in human infections. These parasites have an extreme genomic and phenotypic variability, with a massive expansion in the copy number of species-specific multigene families involved in host-parasite interactions that mediate cellular invasion and immune evasion processes. As most trypanosomatids are heteroxenous, and therefore their lifecycles involve the transition between different hosts, these parasites have developed several strategies to ensure a rapid adaptation to changing environments. Among these strategies, a rapid shift in the repertoire of expressed genes, genetic variability and genome plasticity are key mechanisms. Trypanosomatid genomes are organized into large directional gene clusters that are transcribed polycistronically, where genes derived from the same polycistron may have very distinct mRNA levels. This particular mode of transcription implies that the control of gene expression operates mainly at the post-transcriptional level. In this sense, gene duplications/losses have already been associated with changes in mRNA levels in these parasites. Gene duplications also allow the generation of sequence variability, as the newly formed copy can diverge without loss of function of the original copy. Recently, aneuploidies have been shown to occur in several Leishmania species and T. cruzi strains. Although aneuploidies are usually associated with debilitating phenotypes in superior eukaryotes, recent data show that they can also provide increased fitness in stress conditions and generate drug resistance in unicellular eukaryotes. In this review, we will focus on gene and chromosomal copy number variations and their relevance to the evolution of trypanosomatid parasites. INTRODUCTION Trypanosomatids are widespread parasites that infect a wide range of vertebrates, invertebrates and plants [1][2][3][4][5][6]. Within the Trypanosomatidae family, Trypanosoma cruzi, Trypanosoma brucei and different species of the Leishmania genus stand out due to their importance in public health, being respectively the etiological agents of Chagas disease, African trypanosomiasis and leishmaniasis, infecting around 20 million people worldwide [7]. As these parasites have a life cycle that alternates between vertebrate and invertebrate hosts, their distinct life stages are adapted to survive in each of these different environments. In the mammalian host, T. cruzi is able to invade any nucleated cell, while Leishmania is restricted to phagocytic cells and T. brucei does not invade cells, surviving and proliferating in the host bloodstream. The Leishmania genus is further divided into three distinct subgenera: Leishmania, Viannia and Sauroleishmania. The Leishmania subgenus comprises Old and New World species that are associated with visceral or cutaneous forms of leishmaniasis, and develop in the midgut and foregut of the sand fly vector [30]. The Viannia subgenus comprises New World species that only cause cutaneous forms of the disease and develop in the vector hindgut [30].
Finally, Sauroleishmania is a subgenus composed of species that infect lizards and are incapable of replicating within mammalian macrophages [31][32][33]. T. brucei is divided into three subspecies: T. b. brucei, T. b. gambiense and T. b. rhodesiense. Approximately 90% of the human infections in 24 countries of central and west Africa are caused by T. b. gambiense, while T. b. rhodesiense is responsible for acute cases in east and southern Africa and T. b. brucei is usually associated with animal infections, although sporadic human infections have been reported [34]. The T. cruzi taxon is further divided into six Discrete Typing Units (DTUs), named TcI-TcVI, where TcV and TcVI are hybrids resulting from the fusion of parental strains from the TcII and TcIII DTUs [35][36][37][38]. Since 2005, the sequencing of trypanosomatid genomes has provided important datasets for comparative analysis, shedding light on our understanding of mechanisms associated with parasitism and genome evolution within this family [12,[39][40][41][42][43][44][45]. Even though they diverged around 200 to 500 million years ago, trypanosomatid species contain large syntenic genomic regions mostly composed of housekeeping genes, suggesting that there is a strong selective pressure to keep the gene order in these parasites [12]. These syntenic regions are interspersed with clusters of species-specific multigene families related to the parasitic lifestyle, structural RNAs and retroelements, which vary greatly in size and content among species and even within species [12,18,[41][42][43]. While these species-specific clusters are enriched in subtelomeric regions in T. brucei, they are distributed in internal chromosomes and subtelomeric regions in T. cruzi CL Brener and Leishmania [12]. The advent of next-generation sequencing technologies allowed several trypanosomatid genomes to be sequenced at high depth of coverage at low cost, providing insights into copy number variation events in these parasites [26,27,29,[46][47][48]. In fact, for several trypanosomatid genes there is a correlation between copy number and expression rates [11,49,50], which is especially interesting in these organisms that mostly regulate their gene expression post-transcriptionally. Chromosomal Copy Number Variation (CCNV) is a new level of copy number variation, in which a whole chromosome is duplicated or lost. This phenomenon has already been described in several Leishmania species and T. cruzi DTUs [26][27][28][29][46], as well as in yeast and mammalian cells [51][52][53][54][55][56], and could be related to a rapid adaptation to new environments and stress conditions. In this review, we delve into gene and chromosomal copy number variations in trypanosomatids and their biological implications. The trypanosomatid lifecycle involves the transition between different hosts and a series of morphological changes in response to the distinct pressures within each host. Recent studies have suggested that, in response to these transitions, these parasites have developed adaptive mechanisms such as changes in mRNA levels due to gene dosage. These mechanisms exploit unique characteristics related to the genome organization of these parasites. So far no canonical RNA polymerase II promoter has been identified in these organisms, and their genes are transcribed as long polycistronic units [11,74,75]. Genes derived from the same polycistron may not be functionally related and may have very distinct mRNA levels.
The polycistrons are processed into mature monocistronic mRNAs by the addition of a 39 nt spliced leader RNA to the 5' end of the transcript and polyadenylation at its 3′ end [17,76,77]. These features imply that trypanosomatids mainly rely on post-transcriptional mechanisms to control gene expression. These mechanisms include mRNA processing and stability, translation efficiency, and protein stability [20,[78][79][80]. Genomic plasticity in trypanosomatids in response to changes in environmental conditions can affect whole chromosomes or be restricted to specific genomic regions, and includes aneuploidy, gene amplification or deletion [58,62]. This phenomenon also occurs in fungi and cancer cells, where changes in gene copy number can modulate drug susceptibility, virulence and proliferation [56,81]. In Leishmania, it has been documented that increased gene copy number modulates gene expression in response to environmental changes within its hosts as well as after exposure to stressors, such as drug selection [26]. In trypanosomatids, the evolution of parasitism appears to have been shaped by a series of gene losses and gains [61,62,82,83]. Among the genes that were likely lost during the origin of trypanosomatids are diverse enzymes involved in polysaccharide degradation, such as β-glucosidases or glucoamylases, that are required for processing of bacterial prey [83]. In addition, this process of reduction affected several pathways of catabolism, macromolecular degradation and transport. Adaptation to a parasitic lifestyle in trypanosomatids also included the gain of novel functions that could allow survival inside a host. This is revealed by the abundance of gene families that are unique or specific to each of these parasites [83]. Many of these gene families encode surface proteins, which are the main interface between the host and the parasite, suggesting that the divergence of trypanosomatid genomes was driven at least partially by the evolution of multigene families [82,83]. In Leishmania parasites, these multigenic families include amastins, GP63, cysteine peptidases and surface antigen proteins [82,83]. Amastins are poorly functionally characterized surface glycoproteins that are preferentially expressed in the intracellular amastigote stage. Phylogenetic analyses have revealed that this family consists of four subfamilies termed α, β, γ and δ amastins and that an expansion of the latter appears to have been key for the evolution of parasitism in Leishmania [84]. The composition of the amastin family varies between species, with a larger δ-amastin repertoire in L. (Viannia) braziliensis, L. (Leishmania) mexicana and L. (Leishmania) major than in the other species [82]. These differences in amastin content in Leishmania genomes may be a reflection of the distinct environments to which different human-pathogenic Leishmania species have adapted. Another important cell surface family that participates in parasite invasion and immune evasion is the zinc-metallopeptidase GP63. This multigene family has a wide substrate specificity and is preferentially expressed in Leishmania promastigotes [85,86]. Among the diverse functions of this family are midgut attachment and protection of parasites against digestive enzymes inside the sand fly host [85,87], as well as, in the vertebrate host, inactivation of complement-mediated lysis by cleavage of C3b and inhibition of the production of antimicrobial agents through the interaction with JAK kinases [88][89][90].
The GP63 family appears to be organized into three distinct subfamilies that have undergone expansion events during the evolution of trypanosomatids, with clear differences in the number and structure of these genes between species [82,91]. For instance, GP63 proteins of L. (Sauroleishmania) tarentolae are smaller, lack extracellular regions, present a smaller catalytic domain and appear to have limited protease activity compared with other species [82,92]. This limited GP63 functionality in this species is in accordance with its lack of intracellular development and its different vector and vertebrate host (lizard). In contrast, the GP63 repertoire of the Viannia species is greater than that of the other two subgenera and is composed of tandem gene arrays [82]. The presence of these genes in a head-to-tail fashion may correspond to an adaptive mechanism to rapidly increase gene dosage. Peptidases are key parasite components that mediate differentiation and immune evasion during the initial steps of infection through the interaction with proteoglycans at the host cell surface [93]. These peptidases play immunomodulatory roles, suppressing the Th1-type response and delaying parasite clearance. Although trypanosomatids appear to have lost various ancestral cathepsin genes, the remaining cathepsin-L family appears to have been key for these parasites, given the series of expansions it underwent during parasite evolution [83]. Among the trypanosomatids, T. cruzi presents the largest expansion of multigene families, many of them encoding surface proteins, which accounts for its larger genome size (~55 Mb) compared to the genomes of L. major (~33 Mb) and T. brucei (~26 Mb) [12]. As these surface proteins are directly related to host-parasite interactions, the increased variability of T. cruzi multigene families could be a consequence of the broader range of mammalian hosts that this parasite can infect, as well as its unique ability among the Tritryps to actively infect any nucleated host cell [94][95][96][97]. This vast number of host cells/species exposes T. cruzi to different evolutionary pressures, which could be a driving force for diversification [98]. In fact, a significantly larger fraction of T. cruzi protein-coding genes shows evidence of positive selection for diversification when compared to Leishmania genes [98]. The content of multigene families also varies among T. cruzi DTUs, where the expansion of these families accounts for the 5.9 Mb larger size of the genome of the CL Brener (TcVI) strain when compared to the Sylvio X10/1 (TcI) strain [42,43,99]. Among the most studied T. cruzi multigene families are the trans-sialidases, MASPs and mucins, which respectively have approximately 1500, 1400 and 850 genes in the hybrid CL Brener genome [41,64]. Trans-sialidases catalyze the transfer of sialic acid from host glycoconjugates to β-galactopyranose acceptors in the mucin proteins, generating a negatively charged coat that confers protection against the alternative complement pathway and antibody opsonization [99][100][101][102][103]. This family is also involved in cellular adhesion and invasion of epithelial [104], neural and glial cells [105] and stimulates anti-apoptotic signals in these infected cells [106,107]. Mucin glycoproteins cover the whole parasite outer membrane with 2×10⁶ copies [101,108], protecting the parasite from both the insect and mammalian hosts' defense mechanisms and acting in cellular adhesion/invasion processes [99,101].
MASPs are characterized by conserved N- and C-terminal regions, which respectively encode a signal peptide and a GPI anchor addition site, flanking a hypervariable central region [41,64]. As these flanking regions are cleaved from the mature protein, only the hypervariable region is exposed at the parasite's surface [41,64]. Although the biological function of this family is still unknown, there is evidence that it could act in cellular adhesion/invasion processes [64,109,110], or in immune evasion strategies, as the expressed members of this family vary in a given population [111] and after successive passages in mice [110]. Interestingly, the parasite T. rangeli, which is closely related to T. cruzi but not able to establish a productive infection in mammals, has a massive reduction in the number of members of these three multigene families [45], supporting the importance of these families for T. cruzi mammalian cell infection. In contrast to what is found in T. cruzi CL Brener [112] and Leishmania major [12,40], T. brucei multigene family clusters are enriched in subtelomeric regions [12,39]. These clusters are mainly composed of Variant Surface Glycoproteins (VSGs) and Expression Site Associated Genes (ESAGs), which are involved in T. brucei antigenic variation [12,39]. VSGs are highly antigenic surface proteins, which contain N-terminal hypervariable domains that are exposed extracellularly and conserved C-terminal domains that are buried in the parasite's coat [113,114]. The T. b. brucei genome has ~800 different VSGs, where 7% of these genes are functional, 9% are atypical genes with folding limitations, 18% are gene fragments and 66% are full-length pseudogenes (with frameshifts or in-frame stop codons) [12,39]. T. brucei exploits this pseudogene sequence arsenal to increase VSG variability by creating functional chimeric genes through gene conversion [115][116][117][118]. As this parasite develops exclusively extracellularly during the mammalian host infection stage, it is constantly exposed to the host's humoral immune response. To evade antibody- and complement-mediated lysis, T. brucei relies on cell-specific monoallelic expression of VSGs, a process called antigenic variation. In a population, each T. brucei cell covers its entire surface with ~10⁷ copies of a single VSG, which shield other invariant antigens present on the parasite surface [39,115,119]. As the majority of the population expresses the same variant, a strong immune response focused on a specific VSG is established and greatly reduces the parasitemia. However, a subset of the population expresses a different variant, which is not promptly recognized by the host's immune system, and therefore expands in the population. This process generates successive waves of parasitemia and clearance as novel antigenic determinants spread in the parasite population [39,115,119]. VSG sequences also vary among T. brucei subspecies. A comparison of the gene sequences of T. b. brucei VSGs with the unassembled reads from T. b. gambiense revealed that these subspecies share ~43% similarity in the VSG repertoire, which suggests that there is a strong positive selection for diversification, or at least a relaxation of purifying selection, in T. brucei genes encoding surface proteins [48]. The massive expansions seen in multigene families in trypanosomatids constitute important evidence of the specialization these parasites have undergone to survive, as they have to deal with different stressors within their hosts. While T.
cruzi and Leishmania use this arsenal of proteins to invade and survive inside the host cell, the largest T. brucei multigene families are involved in antigenic variation to evade the humoral response and survive in the host bloodstream. The continuous selective pressure imposed by the immune system, together with the adaptation to different hosts, provided an ideal environment for microevolution, challenging the ability of the host to control the infection and of the parasite to ensure its survival. CHROMOSOMAL COPY NUMBER VARIATION IN TRYPANOSOMATIDS Aneuploidy, the presence of an abnormal number of chromosomes in a cell, is usually lethal or results in severe abnormalities in superior multicellular eukaryotes, greatly reducing organism fitness [120]. Among its most common examples are the trisomy of chromosome 21 in Down syndrome, aneuploidies of the sex chromosomes (such as the triple X, Klinefelter and XYY syndromes) and tumorigenesis [121][122][123]. However, some unicellular organisms such as Saccharomyces cerevisiae and Candida albicans appear to rely on aneuploidy as one of the mechanisms to allow rapid adaptation to changing environments, suggesting that the variation in chromosome number could also have a positive fitness effect in stress conditions and drug resistance [53][54][55]. Extensive aneuploidies have been described in eukaryotes such as mouse hepatocytes [52], yeast, carcinomatous lung cells [51,[53][54][55], trypanosomatid parasites of the Leishmania genus [26,27,59,91,[124][125][126] and in several Trypanosoma cruzi strains [28,29]. As cytogenetic analyses are hampered in trypanosomatids by the lack of chromosome condensation during mitosis, there are currently two complementary and alternative methodologies to estimate CCNV in these parasites: Fluorescence in situ hybridization (FISH) [26,124], and Whole Genome Sequencing (WGS) followed by Read Depth Coverage (RDC) and allele frequency analyses [26,27,29]. While FISH allows the simultaneous identification of aneuploidies in each cell in a given population, it is laborious and usually restricted to a few chromosomes [124,127]. On the other hand, WGS followed by RDC analyses provides a population-level snapshot of all the chromosomes, as well as the simultaneous comparison of CCNV, genotype and allele frequency. However, it lacks the single-cell resolution provided by FISH [27,127]. Both techniques were employed in parasites from the Leishmania genus, showing that variations in ploidy appear to be widespread among and within species [27], strains [26,27] and even within the same parasite population [59,124,125]. Based on the read depth coverage and allele frequency, the chromosomal copy number variation among L. major (strains Friedlin and LV39), L. mexicana (strains U1103, M379), L. infantum (JPCM5), L. donovani (strains BPK206/0 and LV9) and L. braziliensis (M2904) was estimated, showing extensive variation in chromosomal gain/loss within Leishmania species [27]. In fact, with the exception of the two L. major strains (Friedlin and LV39), all Leishmania isolates presented a different pattern of aneuploidies, suggesting that chromosomal duplication/loss is a common phenomenon in these parasites. Interestingly, chromosome 31 was consistently polysomic in all the evaluated Leishmania strains [26,27,46,91]. Gene ontology revealed that this chromosome is enriched with genes involved in iron metabolism, electron carrier activity and calcium-dependent cysteine-type endopeptidase activity [91].
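As a side note on the read depth coverage approach described above, the sketch below illustrates the general idea behind estimating per-chromosome somy from normalized coverage. It is a deliberately simplified illustration, not the SCoPE methodology or any published pipeline: it assumes coverage has already been restricted to unique, single-copy regions, uses the genome-wide median as the diploid baseline, and the function name and toy data are hypothetical.

```python
import numpy as np

def estimate_somies(coverage_by_chr, base_ploidy=2):
    """Rough per-chromosome somy estimate from read depth coverage.

    `coverage_by_chr` maps a chromosome name to an array of per-bin read depths
    restricted to unique, single-copy regions. The somy is the chromosome's median
    depth scaled by the genome-wide median, assuming most chromosomes sit at
    `base_ploidy`.
    """
    genome_median = np.median(np.concatenate(list(coverage_by_chr.values())))
    return {c: base_ploidy * np.median(d) / genome_median
            for c, d in coverage_by_chr.items()}

# toy example: one chromosome covered ~1.5x deeper than the rest of a diploid genome
cov = {
    "chr30": np.random.poisson(40, 5000),
    "chr31": np.random.poisson(60, 5000),   # elevated coverage -> somy ~3
    "chr32": np.random.poisson(40, 5000),
}
print(estimate_somies(cov))   # expect roughly {chr30: ~2, chr31: ~3, chr32: ~2}
```

Real analyses additionally correct for GC bias, mappability and mixed (mosaic) populations, which is why intermediate somy values are often observed.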
Iron-sulfur proteins are crucial for parasite survival, as they mediate oxidation-reduction reactions during mitochondrial electron transport and are also involved in the synthesis of amino acids, biotin and lipoic acid [128]. The expansion of these genes could be one of the driving forces for chromosome 31 amplification in Leishmania [91]. RNA-seq analysis in L. mexicana showed that chromosome 30, which is also polysomic in this species and corresponds to chromosome 31 in other Leishmania species, is enriched for amastigote-upregulated genes. A total of 79 (19%) genes had at least a >2-fold increase in mRNA levels in amastigotes compared to promastigotes, including amastins, amino acid and other transporters, as well as hypothetical proteins [129]. Several of these genes are known to be involved in survival strategies in the mammalian host, such as amino acid transporters, tryparedoxin, ABC transporters and aquaglyceroporin, suggesting that the duplication of this chromosome could favor parasite adaptation to vertebrate infection [129]. Leishmania chromosome 31 is also syntenic to a duplicated region in the T. brucei genome, which comprises the right end of both chromosomes 4 and 8 [62]. Although there is widespread gene loss in these duplicated regions of T. brucei chromosomes 4 and 8, approximately 47% of the duplicated genes are retained in both chromosomes as paralogous loci [62]. Among these retained paralogs, there is an over-representation of genes encoding secreted and surface proteins, suggesting a bias towards important factors at the host-parasite interface [62]. To date, the strain L. braziliensis M2904 is the only predominantly triploid trypanosomatid [27], while all the other evaluated strains are usually diploid with one or several duplications or deletions of chromosomes [27,46,91]. This suggests that, even though aneuploidies are widespread in Leishmania, the overall ploidy of the population tends towards diploidy. Downing and co-workers showed that the aneuploidy pattern also diverges in 17 L. donovani strains that were recently isolated from patients in Nepal and India [26]. Interestingly, all the 17 Leishmania isolates had a different karyotype, while presenting only 0.011% variation in their nucleotide sequences. This implies that CCNV events occur frequently during parasite evolution [26,59]. Moreover, the identification of aneuploidies in natural populations reinforces the notion that changes in chromosomal numbers are not caused by long-term maintenance of the parasites in culture [59]. Some of the CCNVs of Leishmania and T. cruzi strains estimated based on RDC showed intermediate ploidy values, for instance between 2 and 3, which may be a result of a mixed cell population with disomic and trisomic chromosomes [27,29,59,91]. In fact, FISH analysis of seven chromosomes in the L. major strains Friedlin, LEM62 and LEM265 showed that each of these chromosomes was observed in at least two states (monosomic, disomic or trisomic) within a given population [124]. The evaluation of aneuploidy based on FISH analysis of chromosomes in clones and sub-clones from L. major Friedlin, LEM62 and LEM265 revealed that they present a chromosomal ploidy pattern similar to that of their parental strains for some, but not all, chromosomes [124]. Moreover, the aneuploidy pattern varied among single cells in the same population, a phenomenon that was named mosaic aneuploidy [124,125]. After its observation in different clones and strains from L.
major [124], the occurrence of mosaic aneuploidy was also described in several Leishmania species, such as L. infantum, L. donovani, L. amazonensis and L. tropica, showing that this phenomenon is widespread in the genus [126]. To explain the mechanism behind the generation of mosaic aneuploidy, a model based on mis-segregation or stochastic replication of chromosomes was proposed [124][125][126]. In this model, there is an asymmetric allotment of chromosomes during mitosis, resulting from replication defects that generate aberrant copy numbers of a given chromosome [59,124]. Chromosomal segregation errors have yielded asymmetric even chromosome numbers (3+1 or 4+0) in SMC3 (a central component of the complex that holds sister chromatids together during mitosis) knockdown Trypanosoma brucei [130]. However, the total chromosome number in asymmetric sets in Leishmania was usually odd (1+2, 2+3 or 3+4) [124,125], supporting replication defects as the major driving force generating Leishmania aneuploidies (Fig. 1). The proportion of different somies observed in interphase and mitotic cells in Leishmania suggests that duplication or loss of chromosomes due to a failure in the replicative process is a common phenomenon in Leishmania [124,125]. In fact, it is estimated that only 10% of the cells in a Leishmania population would have the same genotype [124,125]. Among the Tritryps, CCNV was also identified in T. cruzi, a parasite closely related to Leishmania, based on tiling arrays [28] and WGS followed by RDC and allele frequency analyses [29]. Due to the presence of large gaps, as well as the extensive content of repetitive sequences in the 41 T. cruzi CL Brener putative chromosomes, the estimation of chromosomal gain/loss in T. cruzi based on RDC was performed by normalizing the sequence coverage using single-copy genes as markers for unique genomic sequences, as proposed in the SCoPE methodology [29]. As in Leishmania, the aneuploidy pattern in T. cruzi also varies among and within DTUs, as seen by variations in TcI, TcII and TcIII strains [29]. Also mirroring Leishmania species, chromosome 31 was the only one consistently duplicated in all the T. cruzi strains [29]. Gene ontology analysis revealed, however, that this T. cruzi chromosome harbors a different set of genes than Leishmania chromosome 31. In T. cruzi, this chromosome is enriched in genes related to glycoprotein biosynthesis and glycosylation processes, especially UDP-GlcNAc-dependent glycosyl-transferase, an enzyme related to the initial steps of mucin glycosylation [29,101]. As mentioned before, mucins are heavily glycosylated proteins that are present in ~2×10⁶ copies covering the whole surface of the parasite. These proteins protect the parasite from both the vector and mammalian immune systems, as well as acting as anchorage points for the invasion of specific cells and tissues by trypomastigotes [99,101]. The amplification of chromosome 31 could be related to the need to glycosylate this large set of mucins, and also highlights the importance of glycosylation for T. cruzi biology. Although aneuploidies are frequent events in T. cruzi strains and Leishmania species, it is still unknown whether the mechanism behind the generation of chromosomal duplication/loss is similar in both parasites. Interestingly, T. brucei appears to be strictly diploid [47]. Whole genome sequencing and RDC estimations of 85 T. b. gambiense group 1 field isolates from East and West Africa revealed no evidence of aneuploidies [47].
Similar results were obtained for the other two T. brucei subspecies, T. b. brucei and T. b. rhodesiense (Almeida et al., unpublished data). This absence of aneuploidies in T. brucei is not a consequence of sexual reproduction, as T. b. gambiense group 1 appears to be strictly clonal [47]. One of the restrictions to aneuploidy in T. brucei could be chromosome size. The T. brucei genome is divided into 11 megabase-sized chromosomes, which vary from 1 to 6 Mbp, while the T. cruzi and Leishmania genomes are distributed into ~34-47 chromosomes, with sizes varying from 0.3 to 3 Mbp [39,41,112,[131][132][133]. Changes in the copy number of specific chromosomes in T. cruzi and Leishmania would therefore alter the dosage of a restricted set of genes, avoiding the detrimental consequences of large-scale dosage alterations [59]. This suggests that aneuploidies would be better supported in organisms that have their genome divided into a large number of small chromosomes; however, the evaluation of CCNV in a broader set of species is necessary to confirm this hypothesis. Also, T. brucei appears to have a similar number of origins of replication (~170) per haploid genome as L. major, L. mexicana and L. donovani (~150-180) [134]. They also share a similar DNA replication speed, which is 2.4-2.6 kb/min for Leishmania and 1.9 kb/min for T. brucei, in the range of the highest replication fork speed ever reported: 1.9 kb/min in transformed JEDD lymphoblastoid cells [134,135]. Therefore, as there are no significant differences in chromosomal replication speed and the number of origins of replication between Leishmania and T. brucei, mosaic aneuploidy in Leishmania is probably caused by lax regulation of DNA replication, rather than by unique replication parameters [134]. As DNA replication and transcription share profound functional overlaps in trypanosomatids [136], and as they both frequently start at strand switch regions [136][137][138], a clash between the transcription and replication machineries could also be relevant to the generation of aneuploidies in Leishmania and T. cruzi. In fact, although Leishmania appears to have more than one DNA replication starting site [134], there could be a preferential origin per chromosome, as suggested by genome-wide mapping analysis [138]. If this is the case, a clash between the transcription and replication machinery at the preferential starting site could result in a failure to duplicate a given chromosome. For this reason, an extra copy of a chromosome could be important to ensure that at least one copy is duplicated, avoiding detrimental haploidies. Genetic exchange events have been documented during the evolution of the parasites Leishmania and T. cruzi [139][140][141][142][143][144][145][146]. As these parasites present an aneuploid number of chromosomes, one of the mechanisms that could be involved in their genetic exchange events is a parasexual cycle, similar to what is observed in Candida albicans [53]. According to this model, the fusion of parental aneuploid cells is followed by karyogamy, resulting in a polyploid progeny that undergoes reductional mitotic divisions and genome erosion, generating different degrees of chromosomal duplications/losses [125,147]. This assumption is further supported by the subtetraploidy found in T. cruzi experimental hybrids [148,149], as well as by their ~70% higher DNA content when compared to the parental strains [150].
Although usually detrimental in the majority of superior eukaryotes [121][122][123], the CCNV observed in Leishmania species and T. cruzi DTUs could provide several evolutionary advantages to these parasites (Fig. 2). A correlation between antibiotic resistance and CCNV, based on transcriptional profiling by microarrays, Southern blotting and comparative genomic hybridization, was suggested in L. major and L. infantum [49,50,58]. Interestingly, these chromosomes reverted to disomy in the absence of drug pressure, corroborating the extensive genomic plasticity of these parasites [49,50]. On the other hand, RDC analysis found no clear link between aneuploidy and drug resistance in clinical isolates of L. donovani [26]. Mis-segregation and stochastic replication of chromosomes may also alter gene dosages in a single generation, allowing the parasite to quickly adapt to new environments as well as to the transition between the mammalian and invertebrate hosts [59]. Similarly, a mosaic aneuploidy state could generate a pool of phenotypes in the same population, providing increased adaptability, as distinct combinations of CCNV could be advantageous in different environments. If polysomy is stable over long evolutionary periods, it could also provide a selective advantage for the generation of novel functions in the duplicated chromosomes, as the ancestral gene function would be maintained in the original copy of the chromosome. Homologous recombination could then shuffle genes in aneuploid chromosomes, generating a variety of different phenotypes. As seen in yeast, deleterious mutations found in monosomic chromosomes could be rapidly eliminated [151,152], while beneficial mutations would be retained or even expanded to triploidy and tetraploidy states [124,125]. Thus, the presence of mosaic aneuploidy provides the parasite with the means to rapidly adapt to different environments and stress conditions, as well as to combine the advantages of monosomies and polysomies in the same population. The presence of aneuploidies in Leishmania and T. cruzi, but not in T. brucei, suggests that the latter has lost the ability to maintain an aberrant copy number of chromosomes. The evaluation of CCNV in other pathogenic and non-pathogenic protozoan species, such as Crithidia, Leptomonas, Endotrypanum, T. rangeli and T. cruzi marinkellei, could provide a better view of the distribution and frequency of aneuploidies, as well as their correlation with chromosome size, further elucidating the mechanisms behind their evolution in trypanosomatids. The simultaneous evaluation of the ploidy of all chromosomes in several cells of a cloned population by Single-Cell Genome Sequencing (SCGS) could provide an accurate estimation of the frequency of CCNV events. Moreover, the evaluation of CCNV by SCGS in populations exposed to different stressors could identify agents that trigger aneuploidy. Finally, the combination of CCNV and RNA-seq analysis, with DNA and RNA extracted from the same population, could provide insights into the implications of aneuploidies for gene expression. CONCLUSION Over the last decade, advances in the determination of the biological relevance of gene and chromosomal copy number variations in trypanosomatids have been achieved, contributing to a better understanding of the driving forces behind the evolution of parasitism in this group of organisms.
These parasites have undergone a massive expansion of species-specific multigene families related to their adaptation to invade and survive in an intracellular host environment or to endure the hosts' humoral response in the bloodstream. The absence of orthologs for several of these large gene families in the highly syntenic genomes of T. brucei, T. cruzi and the Leishmania genus denotes the involvement of CNV in species-specific adaptations that occurred during trypanosomatid evolution. Moreover, the substantial reduction of these families in the non-pathogenic T. rangeli further reinforces the importance of the expansion of multigene families for the establishment of a productive infection in the mammalian host. CCNV is well tolerated and widespread in Leishmania and T. cruzi, allowing variations in gene dosage in a single replication cycle and providing rapid adaptation to different environmental stressors. Different patterns of aneuploidies are found among strains from the same species/DTUs and even within the same population, reinforcing the notion that CCNV-generating events occur frequently in these parasites. Interestingly, there is no evidence of aneuploidies in T. brucei, which could be a consequence of its large chromosome sizes when compared with the fragmented chromosome architecture of T. cruzi and Leishmania. However, whether T. brucei lost the ability to sustain aneuploidies, or whether Leishmania and T. cruzi independently developed this feature, is still unknown. The assessment of aneuploidies in other parasites such as T. rangeli, T. cruzi marinkellei, Crithidia, Leptomonas and Endotrypanum could shed light on the occurrence of CCNV during trypanosomatid evolution and its implications for the biology of these parasites. CONSENT FOR PUBLICATION Not applicable. CONFLICT OF INTEREST The authors declare no conflict of interest, financial or otherwise.
2018-04-03T01:06:56.285Z
2018-01-12T00:00:00.000
{ "year": 2018, "sha1": "fcd641e277f04f2fa0b45797f3fff419622b4016", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5814966?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "fcd641e277f04f2fa0b45797f3fff419622b4016", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
189305219
pes2o/s2orc
v3-fos-license
Module of physics bases on process image for learning of momentum and impulses in senior high school The necessity of this research was aimed to develop momentum and impulse module form based on process image which valid, practice, and effective at physics learning in senior high school to understand the students. The aim of developing module is one of instructional material can help the student thinking systematic about the physics concepts so that it can learn by students independently. Design of the study is research and development with stages Define, Design, Develop, and Disseminate. The limited test was conducted to 10 students and the test of class level was conducted on class XMIPA7 SMAN 2 Jember in 2018. The results validation of the product, through the average of three experts’ judgment stated that the product was valid. The practicality of product showed that students gave positive responses and learning can be very good implemented. The effectiveness of the products is known from an increased understanding the topic of momentum and impulses by using a normalized gain. Expectations of the research are produced appropriate (valid, practice, and effective) students module based on process image for learning of momentum and impulse topic on students grade ten at senior high schools. Introduction Physics consists of the concepts. The concepts basically categorizing something into presenting non verbal, so the concepts usually abstract so that the mental image needed. Physics concepts has a character appropriate with physical and mathematical logic so that both of them have an individual character. The character of that knowledge was not easy for student so that the students must have basic knowledge. Basic knowledge can formed by new experience to the environment daily based on Siregar (Sutarto, 1999). For students, physics module can be assumption that help their process on learning physics systematically about physics concepts, so that it can be learned by self. For teacher, module form can make physics learning easily for planning and implementation on learning because at module form there are include indicators, learning goals, material, and evaluation of learning goals. Physics module will valuable if the students easily on learning by it self. Learning by module can accelerate students capability for their studies and finishing completely base competencies on learning among students. So that module must be describe some base competencies will be achieve by students, serve with a good language, interest-learning was needed. Instructional material there were a package of material which systematically for helping students learning. Instructional material divided by five group it is instructional material by printed, for example student module, handout, a book, module, brochure, leaflet, chart, etc.; instructional material on audio visual for example film/ video and VCD; Instructional material audio form, for example cassette, radio, CD; Visual, for example image, picture, models ; Multimedia, for example CD inter- (Hamdani, 2011). Printed material can be showed by some formed. 
If the printed material systematically, so that material will give some advantages there are statement from Steffen Peter Ballstaedt (Majid, 2012) it is printed media usually showing list components so that make easier for teacher to teach their students; The cost was cheap; Printed media use full ad flexible; More creativity for each students; use full everywhere; can motivated the reader to act some activities like write notes or sketch. Module is one form of teaching materials that are packed in a complete and systematic, in it contains a set of planned learning experience and is designed to help students specific learning objectives (Daryanto, 2013). The module serves as an independent learning tool, enabling a learner who has a high speed in learning to moreing, and complete illustrations. The views about urgency of classifying methods or another learning media for solving the weakness of duration on learning was short and another media for supporting teaching and learning of physics suitable with process product by nature of physics. Student center learning was needed for learning physics used module. Process image was suitable for physics module because of this character it is two dimension (include the paper), so that developing process image for helping the implementation of teaching and learning physics and student center quickly complete one or more basic competencies compared to the other participants. Thus, the module should describe the basic competencies to be achieved by the learners, presented using a good language, interesting and equipped with illustrations. A module will be meaningful if learners can easily use it. Learning by module allows a learner who has a high speed in learning will be faster to complete one or more Basic Competencies (KD) compared to other learners. Thus, the module must describe the KD that will be achieved by learners and presented using a good lan-guage, interesting, and equipped with a clear illustration (I Ketut Mahardika, 2012). According to Indriyanti (2010) the advantages gained from learning with the application of modules are as fol-lows: Increase the motivation of students, because every time doing a lesson that is clearly defined and in accordance with the ability; After the evaluation, teachers and students know correctly, the modules that have been successful and on the module which they have not been successful; Students achieve results according to their abilities; The subject matter is more evenly distributed in one semester, and Education is more efficient, because the subject matter is organized according to the academic level. Tompkin (on Akbar, 2013) Identifies the module composing steps as follows: (1) prewriting by limiting topics, formulating goals, defining the form of writing, determining who the reader is, choosing materials, and organizing ideas; (2) drafting-pouring ideas related to the topic of writing by letting in advance technical and mechanical matters; (3) revising -reviewing the text by focus-ing on the contents of the text by adding, moving, deleting and re-writing; (4) editing -editing spell-related writing, word choice, sentence structure, and others with improved formatting; (5) pub-lishing-publish writing to obtain reader response, revision, editing, and publishing. Images can be developed as a medium to support the implementa-tion of learning, known as the image media. 
Images include visual media that can: 1) facilitate understanding of a complex or complex subject matter, 2) present an interesting elaboration of the structure or organization of a thing, thus strengthening the memory, and 3) foster student interest and clarify the relationship between the content Learning with the real world (Sadiman, et.al .: 1996). The image media among others have advantages: 1) Its concrete, more realistic picture shows the subject matter compared to the verbal media alone; 2) Images can overcome boundaries of space and time; 3) The picture media can overcome the limitations of observation; 4) Can clarify a problem, in any field; And 5) Cheap price, easy to get / held, and usefull. (Purwanto & Alim: 1997). The process image are identified with the chart, with the notion of a series of images that can visualize a basic fact or idea in a logi-cal, orderly way, and help the reader to understand quickly, to show relationships, comparisons, relative numbers, developments, processes, classifications, and Organization (Sudjana, 1996; Ha-malik, 1989; Arsyat 1997). Based on the definition of the image, the understanding of the process, and added with the understand-ing of the image of the process, the process image can be inter-preted as a series of images of objects (objects, events, or phe-nomena), the images in the series between each other always look no relative difference in Things (status, position, form, or combi-nation) which as a whole describes a coherent stage and a unified whole. Research Methods The first was Define, at this define stage, the activities carried out include: Analysis Media model implemented in KBM; Conducting an analysis of the learning models that have been implemented in High School; Learning Model Analysis needed for the implemen-tation of SCL-based and product-based school learning; Perform characteristic analysis of teaching materials model based on SCL and students able to produce the product. The second was Design. Review the theories of the experts relevant to the model to be developed (expert against the model); Validate initial pattern de-sign (prototype) to perfect the design so it is ready to apply; Pre-liminary study to define the developed model; Undertake studies related to the model to be developed and preliminary studies to identify material characteristics; Design the initial pattern of the image process module (prototype) that is tailored to the material characteristics and test is limited to the school; The initial pattern design of the process-image process module is based on relevant theories; Gather theories that support the design of the developed model; Prepare a design to review the model (module-process image); Assess the model (process-image module) based on con-formity with the designed design. The third was Develop. Creating an planning of learning (RPP) with a model design (modulepro-cess image); Constructing a customized RPP with the model (pro-cess image module); Implementation test on physics subjects; Testing (process-image module) through the action research cycle includes: plan, action, observe, and reflection to see the design consistency (prototype); Teaching Materials Products; Prepare the teaching materials so that the model guidebook (process module-image). The fourth was Disseminate. Testing module-image process that aims to see the consistency of products produced through action research cycles include: plan, action, observe, and reflec-tion through deployment to school. The overall design as follows. 
Validity is a reference to declare an instrument can measure what should be measured. Validation of image-based module process on collision material is a module that has been through the valida-tion phase by several experts and has been declared categorized as valid. This research uses a modified R and D design from 4D model (Thiagarajan, et al., 1974) as in Figure 1. This research uses a modified R and D design from 4D model (Thiagarajan, et al., 1974) to 3D ie; Define, Design, Development as in Figure 1. Validity of logic or validation of experts is a validation performed after the instrument concerned has been designed and prepared properly. Logical validation can be achieved if the instrument is compiled in accordance with existing provisions. Thus logical validation is obtained after the completed instrument is completed. According Thiagarajan et. Al (1974: 28) expert validation is still divided into two namely instructional validation and technical validation. Here are some aspects that are in each of the instruc-tional validation and technical validation Empiric validity or development test is the validation obtained after tested. Empirical validity can not be obtained simply by the preparation of instruments under the provisions only, but must be proven through experience in the form of field trials. An instru-ment is said to have empirical validation when it has been tested from experience (Arikunto, 2011 :66). The population of this study is a sample of students of class XI in Senior High School 2 Jember. The sampling technique used is random sampling. The sample is analyse needs using the consider-ation that the resource person is the one who knows best about the information that the researcher expects. Activities undertaken to analyse the resource is to make observations about the needs, characteristics of students, academic value and provide tests to measure the ability of students' procedural knowledge skills. Based on the observation result, the sample used for valid imageprocessing test of SCL learning and the effect of the process-image module implementation on the students' skills skill is class X in Senior High School. Data types, data collection techniques, data collection instruments, and data analysis techniques are presented in Table 1. competence which are ex-pected. Internal/logical validation indicates the extent to which the student module science based on process image are structured on the basis of relevant theory and existing provisions. Assessment guidelines and complete engineering techniques are included on the validation sheet. The data is loaded in a table of eligibility scores and a description of suggestions. Assessment includes: (a) content feasibility, (b) presentation components, (c) language, (d) image feasibility. Furthermore, the description of the suggestion is summarized and summarized and described narratively as the basis for revision of each component of the student module sci-ence based on process image that has been developed and devel-oped. The results of the validation student module science based on process image were analysis with the following calculations and criteria: Student module science based on process image is con-sidered valid if at least meet the criteria "enough valid" so it is worth using. The trial I (limited) at the development stage was con-ducted in 10 students of grade X SMAN 2 Jember in 2018. 
Trial I of the student physics module based on process images was analyzed from observation data on the implementation of learning recorded by two observers; the mean value was then analyzed to determine the assessment result using the following equation (Arikunto, 2012): P = (ΣA / ΣN) × 100%, where P is the percentage of learning activity, ΣA is the sum of the aspect scores performed, and ΣN is the total score of all observed aspects. The implementation of learning using the process-image-based student module is determined by comparing the results obtained with the criteria in Table 3. Trial II (class scale) was conducted with 34 students of class XMIPA7 of SMAN 2 Jember in 2018 using a one-group pretest-posttest design, i.e., the research was conducted in a single class group by examining the differences between the preliminary and final test results. The analysis of concept comprehension aims to determine the students' level of understanding of the momentum and impulse concepts as a basis for determining mastery of learning. The concept comprehension test is structured to determine the effectiveness of the process-image-based student module for the momentum and impulse topic. Improvement in conceptual understanding is measured from the pretest and posttest scores and analyzed through the normalized gain (N-gain), which is used to assess achievement criteria before and after learning (adapted from Hake, 1998), with the N-gain achievement criteria given in Table 4. N-gain is used to avoid the tendency that students with a small pretest score obtain a large actual gain, while students with a large pretest score obtain a small actual gain (Hake, 1998). The effectiveness of the process-image-based student module is also analyzed from the responses of the product users, both teachers and students. The assessment of the module by users covers the indicators: (1) writing approach, (2) language, (3) clarity of sentences, (4) implementation, and (5) physical appearance (pictures/graphs). Scoring uses a five-point Likert scale: 5 = very good, 4 = good, 3 = good enough, 2 = less good, and 1 = not good. For the completed instruments, the average score is then computed according to the following relation: average user response score (R) = (sum of the scores given by users for each aspect) / (maximum assessment score for that aspect). Result of Research The student physics module based on process images is valid and feasible for use in learning, based on the assessment of two expert validators (science education lecturers), covering the components of content feasibility, presentation, language, and graphics. These four components determine the quality and feasibility of a student module (Hartono, et al., 2013). The validation results are given in Table 6. Based on the validation results, both validators rated every assessment aspect with the criterion "very valid". As a whole, it can therefore be said that the developed process-image-based student module can be used for learning the momentum and impulse concepts in senior high school.
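To make the scoring pipeline above concrete, the short Python sketch below computes the percentage of learning activity and Hake's normalized gain. It is a minimal illustration only: the function and variable names are hypothetical, and the N-gain thresholds (0.7 and 0.3 for the "high" and "medium" categories) are the commonly used Hake (1998) cut-offs, assumed rather than taken from the module's Table 4.

```python
def learning_activity_percentage(aspect_scores, max_total_score):
    """P = (sum of aspect scores performed / total possible score) * 100%."""
    return 100.0 * sum(aspect_scores) / max_total_score

def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    """Commonly used Hake cut-offs (assumed): high >= 0.7, medium >= 0.3, else low."""
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "medium"
    return "low"

# Example: one student scoring 40 on the pretest and 82 on the posttest.
g = normalized_gain(pre=40, post=82)
print(round(g, 2), gain_category(g))  # prints: 0.7 high
```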
But from the validator there is a little revision that must be fixed, namely; (1) cover images to suit the concept and the work itself, (2) on one page please can use the colors that are not contrast with each other, (3) the image of the process pre-sented to be able to use different colors, so that mentally (4) the font used between headings, subheadings, and contents to be dis-tinguished, (5) note the description of an object or process of events to consider the scale used, so as not to cause a double per-ception or Confusion of students, and (6) provide sufficient space for writing / drawing results of student discussions and answers. Result of trial I conducted to know the implementation of learning using student module science based process image. Trial I conducted on 10 students grade X SMAN 2 Jember. Based on the observation sheet of the implementation of the experi-mental results I found that the student module science based pro-cess image momentum and impulse material process can be used for learning in senior high school, but still need improvement between that is; (1) ambiguous sentences, (2) the need for wider space or space for drawing, and (3) reducing the competency test by not reducing the material / concept essence given to the stu-dents, since the time provided 3 x 45 minutes sufficient. These results are also supported by interviews conducted on the students. Students stated that they were happy with the science lesson using student module science based on process images because they can know the process of momentum formation of objects through the image in the student module, so that they can understand the con-cept more clearly and clearly. One student stated that he enjoys learning by using student module science based on process imag-es because they do not need to do lab work in the laboratory to understand the concept of momentum and impulse, but they declare that the 3 hour lesson used to solve the student module is less, because they require precision and accuracy in drawing, especially In describing the concept to avoid misconceptions. Learning outcomes can be seen in table 7. Based on the result of trial I, it can be seen that the percentage of learning using student module science based on process drawings has increased from first learning using student module 1, student module 2, student module 3, and student module 4. Mean percent-age of learning activity is 82,89%, this means criteria 'very good'. So that the student module science based on process image can be continued for Test II with little improvement. Trial II student module science based on process image is imple-mented on the students of class XMIPA SMAN 2 Jember, amounting to 34 students by using one group pretest-posttest de-sign. The effectiveness of based process image is analyzed based on the result of concept test before and after learning. The average of pretest, posttest, and normalized gain results in the comprehen-sion of the concept of momentum and impulse material obtained will be compared with table 4. Learning is said to be effective if N-gain is minimal in the medium criterion. In addition, learning is also said to be effective if eligible mastery of learning outcomes. Students are said to be thorough learning when reaching the min-imum criterion value of completeness (KKM) is set, that is 75. Classical completeness is achieved if the number of students who are able to reach KKM at least 85% of the number of students (Mulyasa, 2007: 99, Depdiknas, 2008. 
At the time this article was written, the experimental effectiveness trial was still under way, so the results of trial II cannot yet be presented. Conclusions The student physics module based on process images for the momentum and impulse concepts is rated 'very valid' and can be used for learning in senior high school; this is based on an average assessment of 4.34 from the two validators. The module has practicality based on the percentage of learning activity of 82.89% in the first trial. This is supported by student interviews stating that they enjoyed learning physics with the process-image-based module because they could follow the process of momentum formation through the images in the module: the vector nature of momentum and impulse cannot be observed directly, so the process images let them understand the concept more concretely and clearly. The effectiveness of the process-image-based physics module will be determined from the improvement in understanding of the momentum and impulse concepts using the average N-gain in trial II.
2019-06-13T13:23:31.775Z
2019-04-09T00:00:00.000
{ "year": 2019, "sha1": "d6e483cb99b37ea92fd689980dcff9073b54a051", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/243/1/012139", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bd09ca08edaf4716a3382a4aef2ec60c19de8b0a", "s2fieldsofstudy": [ "Physics", "Education" ], "extfieldsofstudy": [ "Physics" ] }
198966414
pes2o/s2orc
v3-fos-license
Tigray Orthohantavirus Infects Two Related Rodent Species Adapted to Different Elevations in Ethiopia Abstract Orthohantaviruses are RNA viruses, some members of which are known to cause severe zoonotic diseases in humans. Orthohantaviruses are hosted by rodents, soricomorphs (shrews and moles), and bats. Only two orthohantaviruses associated with murid rodents are known in Africa: Sangassou orthohantavirus (SANGV) in two species of African wood mice (Hylomyscus), and Tigray orthohantavirus (TIGV) in the Ethiopian white-footed rat (Stenocephalemys albipes). In this article, we report evidence that, like SANGV, two strains of TIGV occur in two genetically related rodent species, S. albipes and S. sp. A, occupying different elevational zones on the same mountain. Investigating the other members of the genus Stenocephalemys for TIGV could reveal the real diversity of TIGV in the genus. Cases of genetically related orthohantaviruses occurring in genetically related mammalian host species have been well reported (Milholland et al. 2018). However, few such cases have been reported for African orthohantaviruses in rodents and shrews. The description of the sister lineage to the Sangassou orthohantavirus from Hylomyscus endorobae in Kenya, in a rodent host phylogenetically related to Hylomyscus simus in Guinea, represents a case of virus lineage divergence separated by a large geographical distance (Tesikova et al. 2017). Kang et al. (2014) described two new hantaviruses from the Geata mouse shrew (Myosorex geata) and the Kilimanjaro mouse shrew (Myosorex zinki) captured in Tanzania. In this study, we provide additional evidence for such a pattern in an African orthohantavirus: two strains of TIGV are reported to occur in two genetically related rodent species occupying different elevational zones in the highest mountains in Ethiopia. Materials and Methods We obtained 94 dry blood spots on calibrated prepunched filter paper (LDA 22, Ploufragan, France) from two species of Stenocephalemys rats sampled from four elevations within Simien Mountains National Park (SMNP) in Ethiopia, between September and November 2015, during a small mammal biodiversity survey by the Field Museum of Natural History, Chicago (USA) and Mekelle University (Ethiopia). [Figure 1 caption: Phylogenetic tree estimated from the Bayesian analysis of the complete (when available) L coding part of representative orthohantaviruses and the new Tigray sequences (347 nucleotides long) using the GTR+I+G model of evolution. Since tree topologies were very similar between PhyML and MrBayes, only the MrBayes tree is shown. Analyzed viruses with GenBank accession numbers are given in Supplementary Data S2. Numbers above branches represent Bayesian posterior probability/ML bootstrap support (1000 replicates). The scale bar represents the number of nucleotide substitutions per site. Names of viruses are given in full, followed by the host genus and the ISO code of the country. Sequences from this study are in bold.] Sequences were edited in Geneious 8.0.5 and aligned with the full coding part of L segment sequences of representative orthohantaviruses. Phylogenetic analyses were performed by the maximum likelihood (ML) approach in PhyML 3.1 (Guindon et al. 2010) and Bayesian inference was implemented in MrBayes 3.2.2 (Ronquist et al. 2012) using the GTR+I+G substitution model. For the ML tree, support was evaluated by 1000 bootstrap replicates.
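For readers who want to set up a comparable Bayesian run, the snippet below writes a MrBayes command block matching the settings described (GTR+I+G, two independent runs of 1,000,000 generations, sampling every 500 generations). It is a hedged sketch: the alignment file name, outgroup label and burn-in fraction are placeholders, not values taken from the study.

```python
# Minimal sketch: generate a MrBayes batch file with settings analogous to
# those described in the text. File and taxon names are hypothetical.
mrbayes_block = """begin mrbayes;
    execute tigv_L_alignment.nex;          [hypothetical alignment file]
    outgroup Thottapalayam_virus;          [placeholder outgroup label]
    lset nst=6 rates=invgamma;             [GTR+I+G substitution model]
    mcmc nruns=2 ngen=1000000 samplefreq=500;
    sump burninfrac=0.25;                  [assumed burn-in fraction]
    sumt burninfrac=0.25;
end;
"""

with open("run_tigv.nex", "w") as handle:
    handle.write(mrbayes_block)
```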
In MrBayes, we used the default priors for all parameters and two independent runs were conducted with 1,000,000 generations per run; trees and parameters were sampled every 500 generations. Bayesian posterior probabilities were used to assess branch support. Thottapalayam and Uluguru viruses were used as outgroups. Trees were visualized and annotated in FigTree, version 1.4.1 (http://tree.bio.ed.ac.uk/software/figtree/). Genetic p-distances were estimated in MEGA7 (Kumar et al. 2016). Host identification was performed by genotyping of nuclear and mitochondrial markers as described in Bryja et al. (2018). Interestingly, positive individuals were found at all four sampling elevations. Bryja et al. (2018) showed the existence of six genetically supported species within the genus Stenocephalemys. Geographically, S. albipes is the most widespread species in Ethiopia, whereas S. sp. A is reported only from the highest elevations on four isolated mountain tops of the northwestern Ethiopian highlands. In the SMNP, only two species, likely evolved by ecological speciation (Bryja et al. 2018), were genetically identified: S. albipes, occurring from the montane forest to the ecotone extending into the Erica shrub at the lower elevations, and S. sp. A, in the belt of Erica shrub and the Afroalpine meadow at higher elevations. The ML and Bayesian analyses, using representative sequences from other orthohantaviruses, the four TIGV sequences already available in GenBank and the six sequences from this study, produced trees with similar topologies (one exception being a soricomorpha-borne orthohantavirus clade; see Fig. 1). The sequences grouped by host species, with the TIGV clade from S. sp. A being sister to the TIGV clades from S. albipes from Tigray and the SMNP. The number of nucleotide and amino acid differences per site, averaged over all sequence pairs between the two TIGV clades from S. albipes, is 19.04% ± 1.94% and 4.82% ± 1.95%, respectively. The divergence between the TIGV clades from S. albipes and S. sp. A is 22.32% ± 1.88% and 7.65% ± 2.34% at the nucleotide and amino acid levels, respectively. Finally, the divergence between the S. albipes and S. sp. A clades from the SMNP is 23.12% ± 2.15% and 8.46% ± 2.64% at the nucleotide and amino acid levels, respectively. Therefore, these orthohantaviruses seem to be two strains of TIGV carried by two genetically related rodent species occurring on the same mountain but at different elevations. Conclusion We report the occurrence of TIGV orthohantavirus in two sister host species that evolved by ecological speciation at different elevational zones in the Simien Mountains. It should be noted that both allopatric (i.e., Hylomyscus) and ecological (i.e., Stenocephalemys) speciation of rodents can be accompanied by codivergence of their orthohantaviruses. Investigating whether the other members of the genus Stenocephalemys could also carry TIGV would be an area of future interest to understand how the virus is maintained in multiple species. Eventually, TIGV could be used as a model candidate virus to investigate host-virus codivergence scenarios.
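Since the divergence figures above are p-distances computed in MEGA7, a minimal sketch of the underlying computation is given below. It is an illustrative stand-in for MEGA7, not the software actually used, and the example sequences are invented; real analyses used 347-nucleotide L-segment fragments.

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions with gaps or ambiguous characters."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    compared = differing = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a in "ACGT" and b in "ACGT":
            compared += 1
            if a != b:
                differing += 1
    return differing / compared if compared else float("nan")

def mean_pairwise_p_distance(group_a, group_b):
    """Average p-distance over all sequence pairs between two clades."""
    pairs = [(a, b) for a in group_a for b in group_b]
    return sum(p_distance(a, b) for a, b in pairs) / len(pairs)

# Toy example with made-up 12-nt fragments.
clade1 = ["ATGGCGTACGTT", "ATGGCGTACGTA"]
clade2 = ["ATGACGTTCGTT"]
print(round(100 * mean_pairwise_p_distance(clade1, clade2), 2), "%")  # 20.83 %
```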
2019-07-30T13:04:48.768Z
2019-11-27T00:00:00.000
{ "year": 2019, "sha1": "a9fc861f7983e256b2600429db2c68e1d910e5c0", "oa_license": "CCBY", "oa_url": "https://www.liebertpub.com/doi/pdf/10.1089/vbz.2019.2452", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c7688b7f40f016a37ef875acb942dc29d5298091", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
17573702
pes2o/s2orc
v3-fos-license
Gluino Annihilations and Neutralino Dark Matter We consider supersymmetric scenarios, compatible with all cosmological and phenomenological requirements, where the lightest SUSY particles (LSPs) are the neutralino and a quasi degenerate gluino. We study the neutralino relic abundance, focusing on gluino (co-)annihilation effects. In case the neutralino is bino-like, in a wide mass window the relic abundance is naturally driven in the correct range for the LSP to be the main cold dark matter constituent. We show that the gluino is the strongest possible coannihilating partner of a bino-like neutralino in the general MSSM. Moreover, contrary to other coannihilation scenarios, gluino pair annihilations always dominate over coannihilation processes, even at relatively large gluino-neutralino mass splittings. Finally, we present prospects for neutralino dark matter detection in the outlined framework. Introduction Supersymmetric models with a stable lightest supersymmetric particle (LSP) have been shown to be strongly constrained by requiring that the thermal relic abundance of the LSP falls within the upper bound on the dark matter density provided by cosmological observations [1]. The leading supersymmetric particle candidate for dark matter is generally considered to be the lightest neutralino [2]. Demanding that the relic abundance of neutralinos falls in the correct range amounts to setting upper bounds on the lightest neutralino mass, and thus on the lowest mass scale of supersymmetric particles. In this respect, at the dawn of the LHC era, it is of the utmost importance to explore the largest possible range of low-energy realizations of supersymmetry (SUSY) compatible with a supersymmetric dark matter scenario. The relic abundance of neutralinos is set by two features of the SUSY spectrum: first, the composition of the neutralino in terms of its gauge eigenstates (bino B, wino W and Higgsino H), and, second, the possible presence of peculiar mass spectrum realizations (as the approximate mass degeneracy of other SUSY particles, giving rise to coannihilation effects [3], or the occurrence of resonant neutralino annihilations through s-channel heavy Higgs-exchange [4]). In particular, if the neutralino is wino-(or higgsino-) like, the effect of a large annihilation cross section and the presence of chargino (and of next-to-lightest neutralino) coannihilations drive the relic density to relatively low values. On the other hand, a bino-like LSP tends to have an excessive relic abundance. Higgsinos or winos may be the main dark matter constituents only if one invokes either non-thermal production [5] or modifications to the standard cosmological scenario [6], or if the lightest neutralino is heavy, killing, in this latter case, any hope of detecting SUSY at the LHC 1 [7,8]. As regards a bino-like LSP, as it is the case in many popular frameworks, e.g. in most of the minimal supergravity (mSUGRA) parameter space, the over-production of relic neutralinos must be compensated, outside a strongly constrained low-masses bulk region, by the mentioned coannihilation or resonance effects. Since the discovery of coannihilation processes [3], many dedicated studies analyzed a rather wide plethora of coannihilating partners: the lightest stau [9] (e.g. 
in mSUGRA at low m 0 and A 0 ), the lightest stop [10] (again in mSUGRA, at large A 0 ), the lightest chargino [7,11] (when the LSP is wino-or higgsino-like, chargino coannihilations are always present; this takes place, for instance, in the focus point region of mSUGRA, or in models with non-universal gaugino masses), the next-to-lightest neutralino (e.g. for higgsino-like LSP), and the lightest sneutrinos [12,13] and bottom squarks [13]. In the present note we address the possibility that the coannihilating partner of the LSP is the gluino (GC, Gluino Coannihilation, Model). The low-energy condition for having gluino coannihilation processes is that m χ ≃ m 3 ≡ m g . Being a strongly interacting particle, we expect in particular gluino-gluino annihilations to be greatly effective in reducing the LSP relic abundance. In view of what we outlined above, if the LSP is to be the main dark matter component, this scenario will be of particular interest in case the lightest neutralino is bino-like: gluino coannihilations will then provide, depending on the bino-gluino mass splitting, the required relic density suppression mechanism to obtain the correct dark matter thermal relic abundance. In particular, due to the very large gluino-gluino annihilation cross section, and to the presence of coannihilation processes which couple neutralino and gluino freeze-out, the net effect of neutralino relic abundance suppression is mainly driven by the gluino effective annihilattion cross section, even for relatively large gluino-neutralino mass splittings. We emphasise that this feature is peculiar of the Gluino Coannihilation scenario, since for any other coannihilating partner, for sufficinetly large mass splittings, the coannihilation amplitude dominates over the coannihilating partner pair annihilations. In case the lightest neutralino is higgsino or wino-like, gluino coannihilations will only be helpful at very large masses (m χ ∼ 1 ÷ 2 TeV). We will show that, though featuring larger annihilation cross sections, the resulting relic abundance of winos and higgsinos which coannihilate with a quasi degenerate gluino is larger than that of coannihilating binos, due to the presence of additional effective degrees of freedom brought by charginos, and by the next-to-lightest neutralino in the case of higgsinos. The main results we will present in the remainder of the paper are: 1. The gluino is a phenomenologically perfectly viable coannihilating partner; the required mass spectrum is found for instance in superstring inspired models [14] or in gauge mediated SUSY breaking scenarios [15]. 2. Gluino coannihilations are the strongest possible bino coannihilation processes in the general MSSM. 3. The mass range for a bino-like lightest neutralino, in the presence of gluino coannihilations, extends in the multi-TeV region; in the light of point 2., the maximal bino mass in presence of a single coannihilating partner is precisely found in the gluino coannihilation region we investigate in the present note. In what follows we will detail and motivate these results, and we will further comment about dark matter detection perspectives for the model under scrutiny. The Model In the conventional mSUGRA model, gaugino masses at low energies (m i ) are proportional to the corresponding α i obeying: This relation is a consequence of the assumed gaugino mass universality at high energies (M i = M 1/2 at M GU T ), and, if valid, implies that the gluino is the heaviest gaugino. 
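The display equation referenced as Eq. (1) did not survive text extraction. From the surrounding sentences (low-energy gaugino masses proportional to the corresponding gauge couplings, as a consequence of gaugino mass universality at M_GUT), it presumably has the standard one-loop form below; this is a reconstruction from context, not a quotation of the original.

```latex
% Presumed form of Eq. (1), assuming gaugino mass unification at the GUT scale:
\frac{m_1}{\alpha_1} \simeq \frac{m_2}{\alpha_2} \simeq \frac{m_3}{\alpha_3}
\simeq \frac{M_{1/2}}{\alpha_{\rm GUT}}\,,
% i.e. m_1 : m_2 : m_3 \approx \alpha_1 : \alpha_2 : \alpha_3 at low energy,
% which makes the gluino (mass m_3) the heaviest gaugino.
```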
However, there are plenty of well motivated models which do not satisfy Eq. (1) (see, for instance, [16]). In particular, we are interested in low energy realizations of the MSSM with gluino-neutralino quasi-degeneracy and, therefore, with gluinos lighter than what expected from Eq. (1). The high energy setup and some phenomenological implications of such models, as well as that of related scenarios with a gluino LSP, have been considered previously in the literature [14,15,17,18]. One of these is the so-called O-II superstring inspired model, in the limit in which supersymmetry breaking is dominated by the universal size modulus. Gaugino masses, which are determined by the standard RGE coefficients and by the Green-Schwartz parameter, arise at one-loop and in the preferred range of the model typically yield either a (heavy) gluino LSP, or neutralino-gluino quasi degeneracy [14]. plane, compatible with the WMAP estimate of the cold dark matter content of the Universe. The red shaded region at negative values of (m 3 − m 1 )/m 1 is not allowed because the neutralino is no longer the LSP: a heavy gluino LSP is ruled out by anomalously heavy isotopes searches [23]. Another example emerges in the context of Gauge Meditated SUSY Breaking (GMSB). In some GUT models, as a result of the doublet-triplet splitting mechanism and due to the mixing between the Higgs and the messengers, gluino masses are suppressed [15]. Notice that in this model a smooth change in the parameter B, the ratio of the doublets and triplets masses, easily leads from a gluino LSP to a neutralino-gluino quasi degeneracy. In the present paper, we will focus on gluino (co)-annihilations processes as an effective mechanism that suppresses the neutralino relic density. It is known that bino-like neutralinos tend to produce a relic abundance well above the WMAP preferred range, whereas wino-and higgsino-like neutralinos have the opposite behavior. Hence, relic density suppression mechanisms, such as gluino coannihilations, are particularly interesting for the case of bino-like neutralinos and we will devote most of this study to that situation. The Gluino Coannihilation (GC) model we propose is defined as any realization of the minimal supersymmetric extension of the standard model (MSSM) satisfying the conditions where m χ and m 3 are respectively the neutralino and the gluino masses at low energy, and m susy stands for all other SUSY particle masses. Let us mention that in previous studies gluino-photino processes were considered within low gaugino mass models [19,20]. There, however, the coannihilating partner was not the gluino, but rather the R 0 gluon-gluino hadronic bound state. Furthermore, since gaugino masses were radiatively induced in the absence of dimension-3 SUSY breaking operators, the mass range of the models was limited to the few GeV's region [20]. Therefore, the whole phenomenological setup was largely different from the one we describe here. In what follows we will study the parameter space of the GC Model using as parameters m χ and the ratio m 3 − m χ m χ , arbitrarily setting m susy = a · m χ , with the numerical factor a > 1, in order to single out the specific features of gluino coannihilations. For definiteness we fixed all the (flavor diagonal) scalar soft breaking masses ms = a · m χ , with a = 3. 
Let us remark that any other free parameter of the MSSM, as the sign of µ, tan β, the scalar trilinear couplings A i and possible phases are largely irrelevant to the following analysis: in this respect we fixed µ > 0, tan β = 30 (when not otherwise specified), A i = 0, and any imaginary phase to zero. The numerical study is performed through the most recent versions of the packages micrOMEGAs [21] and DarkSUSY [22], as respectively pertains the relic density computations and the dark matter detection rates 2 . We do not include here the perturbative cross sections for gluino-gluino and neutralino-gluino processes, which can be found elsewhere in the literature (see e.g. [17]). Changing the parameter a slightly affects the computation of the relic density, since it varies the masses of the SUSY particles exchanged in the treelevel (co-)annihilation processes, but it leaves our analysis and our conclusions absolutely unchanged. In fig. 1 we show, in the m 1 , m 3 − m 1 m 1 plane, the parameter space of the GC scenario for a bino-like neutralino (m 1 ≈ m χ ). The region shaded in green corresponds to a value of the relic density compatible with the WMAP result Ω CDM h 2 = 0.1126 +0.0161 −0.0181 [1]. Below the green strip the relic density is over-suppressed. We show in this region isolevel curves corresponding to Ωh 2 = 0.01 , 0.001. As expected, along the allowed region, the larger the neutralino mass, the smaller the mass splitting which ensures the needed relic density suppression. Notice that the neutralino, which is bino-like, can be as heavy as m χ ∼ 3.3 TeV without entering in conflict with the constraint on the relic abundance. We recall that in mSUGRA models the upper bound on the mass of a bino-like neutralino is found to be m χ 600 GeV [24]. Let us mention that for all parameter space points we considered, direct SUSY particle searches and indirect accelerator limits on rare processes put weaker bounds than that coming from cosmology. Fig. 2 shows the relic density as a function of the mass splitting between a (bino-like) neutralino and the gluino ∆m g ≡ (m g −m χ )/m χ for different values of the neutralino mass. As ∆m g increases, Ω χ h 2 approaches its asymptotic value in the absence of coannihilations. In particular, a neutralino with a mass m χ = 200 GeV requires a gluino with a mass splitting of about 16% (m g ≈ 232 GeV) in order to obtain a relic density within the WMAP range. If the neutralino mass is m χ = 1600 GeV, the required splitting falls to about 5% (m g ≈ 1680 GeV). Finally, a 3 TeV neutralino needs a nearly complete gluino degeneracy in order to fulfill the upper bound on the relic abundance, as also emerging from fig. 1. Figure 3: Relative contributions to the relic density of χχ annihilations, χg coannihilations and gg annihilations as a function of the mass splitting between the gluino and a bino-like neutralino. Gluino (Co-)annihilations When the gluino is quasi-degenerate with the neutralino there are three sets of processes that contribute to the evaluation of the neutralino relic density: a) The usual neutralinoneutralino (χχ) annihilations. b) The neutralino-gluino (χ g) coannihilations. c) The gluino-gluino ( g g) annihilations. In fig. 3 we show the relative contribution to the effective cross section which determines Ωh 2 of these three channels as a function of the gluino mass splitting ∆m g in the case of a bino-like lightest neutralino 3 . The rest of the spectrum is taken to be decoupled (a = 3). As seen in fig. 
3, the g g process dominates at small mass differences, whereas the χχ process dominates at larger ones. The transition between these two regimes takes place at ∆m g ≈ 23%. Remarkably, the χ g coannihilations play only a minor role and never contribute more than 1.5%, as shown in the blown up region. This fact is a very peculiar feature of gluino coannihilations. For all other possible coannihilating partners in the MSSM there is always a region, at moderate mass splittings (∆m ≈ 10-20%), where coannihilations (in the strict sense) are the dominant processes. Table 1 shows the different final states of χ g coannihilations and of g g annihilations, as well as their relative importance. χ g coannihilations are tan β dependent, and are investigated for tan β = 50 in (a) and tan β = 5 in (b). Notice that the tt channel, due to the large top Yukawa coupling, always gives the largest contribution. The bb channel, on the other hand, is very sensitive to the value of tan β, approaching the tt contribution at large tan β. As expected, the results for the first and second generations are identical. Let us stress that, in view of the gluino-gluino dominance shown in fig. 3, the inclusion of quark Yukawa couplings is largely irrelevant in the present scenario. Since g g annihilations are driven by strong interactions, they do not depend on tan β. In (c), the possible final states for the g g annihilations are shown. Notice that the purely gluonic g-g final state gives the lion's share of the effective annihilation cross section. The other final states are quark-antiquark pairs, and all of them give the same contribution. The fact that the annihilation cross section of a gluino is by far larger than that of a neutralino holds true not only if the neutralino is bino-like, but also if it is wino or higgsino-like. In this respect, we now turn to the comparison of the relic abundance of higgsinos, winos and binos which coannihilate with gluinos. We focus for clarity on the fully degenerate mass case (m χ = m g ). We plot in fig. 4 the relic abundances for the cases of bino, wino and higgsino-gluino coannihilations. We also plot the relic density of a gluino LSP, Ω g h 2 . All other relevant SUSY masses are set to 5 times the LSP mass (this maximizes the gluino cross section, suppressing the negatively interfering t and u channel squark exchanges), and tan β is set to 30, though, clearly, the gluino cross section does not depend on it. Due to the gluino dominance in the effective cross section, the relic abundance of binos turns out to be the most suppressed one, as shown in fig. 4. This depends on a suppression factor originating from the effective degrees of freedom which enter in the number density computation, and which, due to the neutralino and chargino mass matrix structure, depend, in their turn, on the dominant gauge component of the lightest neutralino. We emphasise that the following discussion relies on the results found in the previous section, namely on the dominance of gluino annihilation processes over coannihilations in a wide range of mass splittings, and on the presence of inter-conversion processes between the two species, which is mandatory to enable gluino annihilations to drive the neutralino relic abundance to low values affecting the neutralino freeze-out effective (co-)annihilation cross section. The computation of the degrees of freedom suppression factor goes like this: the LSP relic density scales as the inverse of the thermally averaged effective (co-)annihilation gluino. 
The rest of the spectrum is set to be 5 times larger than the lightest neutralino mass, and tan β = 30. We also depict the relic density of a gluino LSP, and the cold dark matter range favored by WMAP. The relic abundance of the neutralino is driven by that of the gluino, modulo an overall factor which depends on the effective degrees of freedom carried by the neutralino (see the text). cross section In its turn, where A is the annihilation rate per unit volume at a given temperature, and n eq is the equilibrium number density, which, to a very good approximation, follows a Maxwell-Boltzmann distribution [11]. The annihilation rate scales with additional degrees of freedom, in presence of coannihilation processes, as where the sum is extended over all annihilation and coannihilation channels, g i and g j are the degrees of freedom of the given (co-)annihilating partner, and g 1 are the LSP degrees of freedom. Comparing the pure gluino with the neutralino-gluino coannihilation case, since, as shown above, the annihilation rate gives a first factor Further, the equilibrium number density reads where K 2 is the modified Bessel function of the second kind of order 2; since m χ ≃ m g , n eq gives a second factor n χ− g eq n g− g eq = g χ g χ + g g . Combining both contributions, we obtain the resulting neutralino relic density in terms of that of the gluino Ω g h 2 (this result holds actually in the generic case of a quasi degenerate coannihilating partner whose annihilation cross section is much larger than that of the LSP): In the case of the bino, since g g = 16 and g χ = 2 one gets a net increase factor equal to 1.27. The stated result is easily generalized to the case of other coannihilating partners P i besides the gluino, again featuring an annihilation cross section much smaller than that of the gluino, and explains why further coannihilating partners actually rise, in this case, the final relic density: For instance, in the case of the higgsino one has 6 additional degrees of freedom from the next-to-lightest neutralino and from the lightest chargino, while in that of the wino there are 4 further chargino degrees of freedom. This translates into a relic density which is respectively 2.25 and 1.89 larger than that of a pure gluino. Remarkably, the numerical results nicely agree with the stated predictions (see fig. 4). We emphasize that in the present computations we neglected non-perturbative effects in the gluino-gluino scattering cross section [25,15,17]: the evaluation of the effects of multiple gluon exchanges between interacting gluinos has in fact been shown to be highly model-dependent [17]. We must however warn the reader that the mentioned nonperturbative effects could enhance the gluino annihilation cross section by even orders of magnitude, and that therefore the relic density may be much smaller than what we show. In this respect, our results must be effectively regarded as conservative upper bounds on the final gluino relic density (and therefore on the coannihilating neutralino relic abundance as well). The same applies for the comparison of the efficiency of coannihilation effects we carry out in next section, as well as for the determination of an upper bound on the neutralino mass: when taken into account, non-perturbative contributions may considerably enlarge the cosmologically allowed mass ranges. 
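Several display equations in the degrees-of-freedom argument above were lost in extraction. The block below is a hedged reconstruction assembled from the definitions that did survive; its main consistency check is that it reproduces the quoted enhancement factors of 1.27 (bino), 1.89 (wino) and 2.25 (higgsino). The notation may differ from that of the original paper.

```latex
% Effective thermally averaged cross section and annihilation rate:
\langle \sigma_{\rm eff} v \rangle = \frac{A}{n_{\rm eq}^{2}}, \qquad
A = \sum_{ij} \langle \sigma_{ij} v \rangle\, n_i^{\rm eq} n_j^{\rm eq}
  \simeq \langle \sigma_{\tilde g \tilde g} v \rangle\, \big(n_{\tilde g}^{\rm eq}\big)^{2},
% since gluino pair annihilation dominates.  With Maxwell-Boltzmann statistics,
n_i^{\rm eq} \propto g_i\, m_i^{2}\, T\, K_2\!\left(\frac{m_i}{T}\right),
% so for m_\chi \simeq m_{\tilde g} the degrees of freedom simply add, giving
\Omega_\chi h^{2} \simeq
\left(\frac{g_\chi + g_{\tilde g} + \sum_i g_{P_i}}{g_{\tilde g}}\right)^{2}
\Omega_{\tilde g} h^{2}.
% Checks: bino, g_\chi = 2, g_{\tilde g} = 16: (18/16)^2 \approx 1.27;
% wino (+4 chargino d.o.f.): (22/16)^2 \approx 1.89;
% higgsino (+6 d.o.f. from chargino and next-to-lightest neutralino): (24/16)^2 = 2.25.
```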
Comparing the Efficiency of Coannihilations in the MSSM In this section we show that the gluino is the strongest possible coannihilating partner of a bino-like neutralino within the MSSM. The choice of a bino is motivated on the one hand by the fact that only in this case there are no automatic chargino coannihilations; on the other hand binos require relic density suppression mechanisms in order to produce a sufficiently small thermal relic abundance, and constitute therefore the physically more interesting case. In order to single out the coannihilation effects, we define ∆Ω as the difference between the relic density obtained without taking into account coannihilation processes and the overall relic density (∆Ω = Ω without coan. − Ω) [11]. We show in fig. 5 a plot of ∆Ω/Ω as a function of ∆m for all possible coannihilating particles in the MSSM 4 : g,ũ R ,d R , Q L ,ẽ R ,Ẽ L and χ ±,0 . Again with the purpose of focusing on particular coannihilating channels, we fixed all masses of non-coannihilating super-partners to be three times the neutralino mass (a = 3) 5 . The figure shows as expected that coannihilation effects are Boltzmann-suppressed by a factor which scales as ∼ e −∆m/T . It is clearly seen in the plot that gluino coannihilations are always the most effective ones, even neglecting nonperturbative contributions. For instance, if ∆m = 5%, neglecting coannihilations would amount to an error in the computation of the relic density of about 10% for sleptons, 100% for squarks and χ ±,0 , and 1000% for the gluino. The largest possible mass of a bino-like LSP in presence of single-partner coannihilations is therefore set, in the general MSSM, by the gluino-gluino effective annihilation cross section 6 . Dark Matter Detection and Accelerator Searches The detection of dark matter through direct scattering on nucleons or through the detection of neutralino annihilation products from the Sun, the center of the Earth or from the center of the Galactic Halo is a rich and rapidly evolving field of research [2]. Analyzing models which could provide the correct dark matter content, as it is the present case, forces therefore one to draw some conclusions on dark matter detection perspectives. The GC scenario does not provide any significant enhancement neither in direct nor in indirect detection, since a light gluino does not affect the relevant interaction cross sections. Henceforth, our results mainly coincide with that of a SUSY model with a purely bino, wino or higgsino-like LSP and a heavy SUSY particle spectrum, with the additional possibility of having a heavy LSP, even in the multi-TeV region, thanks to gluino coannihilations. Since both direct and indirect detection rates are typically suppressed with growing LSP masses (unless peculiar cancellation mechanisms apply [26]), we expect that dark matter detection in the GC scenario will not offer particularly rich perspectives. We plot in fig. 6 the neutralino-proton spin-independent cross section for the purely degenerate case m χ = m g in the three cases of bino, wino and higgsino-like LSP. Since the gluino does not enter in the game, the possible gluino mass splitting would not affect our results, and the only relevant physical variable is the neutralino mass. The resulting scattering cross section is compared in the figure against the current and planned experimental sensitivities, computed with a standard iso-thermal profile for the dark matter halo [27]. 
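Stepping back to the coannihilation-efficiency comparison above, the Boltzmann suppression of ΔΩ/Ω with growing Δm can be made tangible with a small Python sketch of the relative equilibrium-abundance weight of a coannihilating partner in the standard non-relativistic approximation. The freeze-out value x_f ≈ 25 and the weight formula are textbook assumptions, not numbers taken from the paper, so the output is only indicative of the trend shown in fig. 5.

```python
import math

def coann_weight(g_partner, g_lsp, delta, x_f=25.0):
    """Relative equilibrium-abundance weight of a coannihilating partner,
    (g_i / g_chi) * (1 + Delta)^{3/2} * exp(-x_f * Delta), in the standard
    non-relativistic (Maxwell-Boltzmann) approximation.
    delta = (m_i - m_chi) / m_chi; x_f = m_chi / T at freeze-out (assumed ~25)."""
    return (g_partner / g_lsp) * (1.0 + delta) ** 1.5 * math.exp(-x_f * delta)

# Illustrative only: Boltzmann suppression of a gluino (16 d.o.f.) relative to a
# bino LSP (2 d.o.f.) for a few mass splittings.
for delta in (0.05, 0.10, 0.20, 0.30):
    print(f"delta = {delta:.2f}  ->  relative weight = {coann_weight(16, 2, delta):.3f}")
```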
Only future experiments will be able to probe the low-mass range of GC models, whose scattering cross section lies orders of magnitude below the current experimental limits. As expected, a larger higgsino or wino content yields a larger scattering cross section, as the strongest neutralino-nucleon interaction channels are those with a (light and heavy CP-even) Higgs exchange . As regards indirect dark matter detection, the gluino mass degeneracy is not expected to yield any enhancement in the detection perspectives. In fact, we checked that in one of the most promising indirect channels, the detection of neutrinos from the decay of muons produced in neutralino annihilations captured in the center of the Sun or of the Earth, the flux typically lies, even in the low neutralino mass range, well below the planned sensitivity of future neutrino telescopes [28]. Concerning the discovery prospects of a light, coannihilating, gluino at accelerators, early detailed analysis may be found e.g. in [17,18]. In particular, the case of wino LSP has been discussed in Ref. [18]. In case the LSP is a bino, the only open channel for gluino detection would be into jets plus missing transverse energy. Particle detection at future hadronic accelerators would then be more problematic than in standard scenarios with universal gaugino masses. The LHC reach for generic GC models would then strongly depend on the specific experimental cuts adopted. A detailed assessment of the LHC SUSY discovery potential for the outlined framework, though of great interest, would however lie beyond the scope of the present analysis.
2014-10-01T00:00:00.000Z
2004-02-19T00:00:00.000
{ "year": 2004, "sha1": "be0a6ff73309aafd0f7a90ce85fbc71d05b4d59f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0402208", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "be0a6ff73309aafd0f7a90ce85fbc71d05b4d59f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259030129
pes2o/s2orc
v3-fos-license
Pixelated Photonic Crystals Photonic crystals can prevent or allow light of certain frequencies to propagate in distinct directions in anomalous and useful ways for use as waveguides, laser cavities, and topological light propagation. However, there exist limited approaches for fundamental reconfiguration of photonic crystals, such as changing the unit cell to various and on-demand geometries and symmetries. This work introduces the concept of pixelated 2D photonic crystals where the variability of the dielectric profile is achieved by a pixelated matrix of the material. Specifically, the cross sections of dielectric cylindrical pillars distributed in a photonic crystal lattice are replaced with pixelated circles using different resolutions and the corresponding band diagrams are calculated. The comparison to the band diagrams of the original structure shows that the original, and today typically used, cylindrical design can be well approximated by as few as 5 x 5 square switchable pixels while retaining less than 1% change in the photonic band structure. Experimental realization of switchable pixelation is proposed based on the liquid crystal display (LCD) technology with high-birefringence materials. More generally, the demonstrated approach to reconfigurable 2D photonic crystals based on switchable pixels can enable realization of diverse fundamentally reconfigurable advanced optical materials. DOI: 10.1002/adpr.202300082 For a suitable choice of the column radius and the unit cell size for a given dielectric contrast, full bandgaps are observed for in-plane (k_z = 0) propagation of transverse magnetic (TM) modes with E = (0, 0, E_z) and H = (H_x, H_y, 0). Here, we will analyze the effects of rough pixelation of the columns on the band structure and bandgaps of the photonic crystal. The unit cell of such a photonic crystal has the shape of a square with a cylindrical column placed in the middle, as shown in Figure 1a. In order to analyze the effects of pixelation, we approximate the cross section of the column by an array of square pixels, with the diameter D of the pixelated circle ranging from 1 to 16 pixels, as shown in Figure 1a.
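The pixelated approximation of the circular cross section can be sketched in a few lines of Python. This is a simplified stand-in: it includes a pixel whenever its center lies inside the circle of diameter D, whereas the paper selects the pixel pattern by minimizing the polar second moment under the square's mirror symmetries, so the two constructions need not produce identical arrays.

```python
import numpy as np

def pixelated_circle(D):
    """Return a D x D boolean array approximating a disk of diameter D pixels.
    Simplified criterion (assumption): a pixel is 'full' if its center lies
    within the circle; the paper instead minimizes the polar second moment."""
    idx = np.arange(D) + 0.5            # pixel centers
    x, y = np.meshgrid(idx, idx)
    r2 = (x - D / 2) ** 2 + (y - D / 2) ** 2
    return r2 <= (D / 2) ** 2

for D in (1, 2, 5):
    array = pixelated_circle(D)
    N = int(array.sum())                # number of 'full' (dielectric) pixels
    R_pixels = np.sqrt(N / np.pi)       # equal-area radius of the reference cylinder
    print(f"D = {D}: N = {N}, equal-area radius = {R_pixels:.2f} pixels")
```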
"Full" pixels in black represent the dielectric material with ε ¼ 12 and "empty" pixels in white the surrounding, here vacuum with ε ¼ 1. The best approximations of the circular cross section of a cylinder for each D are determined by minimizing the polar second moment of the array, as described in ref. [30] with the constraint to respect the mirror symmetries of the square. The arrays used in calculations are shown in Figure 1b. In order to ensure symmetrical placing of the pixelated array in the middle of the unit cell, the lattice constant A in the units of pixels is determined by the following formula which allows to place D=2 empty pixels for even D and ðD þ 1Þ=2 empty pixels for odd D symmetrically on each side of the array (see Figure 1a). Band structures of pixelated approximations are compared to the band structures of the cylindrical columns with the same area-i.e., same amount of dielectric material in the unit cell. Radius of cylindrical column, measured in the units of the lattice constant A, is determined as where N is the number of full pixels in the array. Figure 1c shows how R depends on D for selected pixelated arrays and converges www.advancedsciencenews.com www.adpr-journal.com toward R ¼ 0.25A. The even-odd effect, showing in larger R for even D, occurs due to additional pixel being added to the surrounding for odd D in Equation (1), to ensure that the unit cell preserves symmetries of a square (see Figure 1a). Alternatively, a pixel could be subtracted, leading to the opposite effect, but same conclusions. Numerical calculations were performed by using MIT Photonic Bands (MPB) software package. [31] The resolution in calculations was 200 grid points per unit cell in each direction and was slightly adjusted for each D so that each pixel in the array consisted of the integer number of grid points and no additional interpolation was needed. Band diagrams for photonic crystals consisting of pixelated and circular columns for D ¼ 1 and D ¼ 5 are shown in Figure 2. Surprisingly, already for D ¼ 1 (Figure 2a), where circle is approximated by a square, a good agreement in terms of band diagrams is achieved. In fact, there are no distinguishable differences between the two for the first four bands, which lay beneath the second bandgap, marked with blue shades. To quantify that, we plot the relative difference between the frequencies of the bands in both diagrams (ω ðpixÞ À ω ðcircÞ Þ=ω ðcircÞ . The results show that the relative difference in frequency is in the range of 1% for the bands below the second bandgap and even below 0.5% for the first three bands, regardless of the direction of propagation. The difference gets gradually larger for higher bands. Similar results are obtained for D ¼ 5, where relative differences for first four bands stay in the range of 0.5%; however also matching of higher bands is improved, namely, relative difference for bands 5 and 6 in this case also drops below 1%. Importantly, for D ¼ 5, only three bands lay beneath the second gap, compared to four for D ¼ 1. The reason is considerably smaller amount of dielectric material in the unit cell for D ¼ 1, which can also be seen from RðDÞ plot in Figure 1c. For every D > 1, the number of bands below the second gap is 3. Next, we analyze the positions and sizes of first two full bandgaps in the system. The results are shown in Figure 3 and 4. Position of bandgap is determined by its central (midgap) frequency, which is calculated as the average of the upper and lower limit frequency. 
Positions of bandgaps for different values of D are shown in Figure 3a. Once again, we observe that D = 1 is a standout due to the lower amount of dielectric material in the unit cell. Similarly, the reason why for odd D the bandgaps systematically occur at slightly higher frequencies than for even D is that the radius of the associated cylindrical column is systematically smaller for odd D, as shown in Figure 1c. In Figure 3b, the corresponding relative differences in bandgap position are shown. Sizes of bandgaps are shown in Figure 4a in terms of gap-midgap ratios. Again, we can notice that the sizes of the bandgaps directly reflect the radius of the cylinders. Relative differences of bandgap sizes are shown in Figure 4b. First, we notice that the relative difference in bandgap size is considerably larger than the relative difference in bandgap position. For the first bandgap, the reason is that the frequency of the first band at point M, which determines the lower end of the gap, is in general larger for the pixelated case than for the cylindrical one. Oppositely, the frequency of the second band at point X, which determines the upper end of the gap, is lower for the pixelated case. The gap is therefore effectively shrunken. For the second gap, the first reason is simply the lower accuracy for higher bands, and also that the gap is relatively thinner, meaning that the same absolute difference in band frequencies leads to larger relative changes in size. In the calculations presented so far, the dielectric contrast between the pillar and the surrounding was rather high (Δε = 11) compared to the contrast between the ordinary (n_o) and extraordinary (n_e) refractive index, i.e., the birefringence Δn = n_e − n_o = √ε_e − √ε_o, found in LCs. Values of the birefringence of state-of-the-art LC materials used for technological applications are usually in the range of 0.2 to 0.3. When a lower contrast between "empty" and "full" pixels is used, the gaps between the bands in the band structure essentially shrink, and only partial bandgaps (existing for a range of wavevectors) are observed instead of full ones. Additionally, if the effect of "empty" and "full" is to be achieved by reorienting the direction of the optical axis of a birefringent LC, sharp boundaries between the regions are desired. To experimentally realize a 2D reconfigurable pixelated photonic crystal by using LCs, we suggest the use of dielectric shield walls, as reported in ref. [32]. Using such a setup allows for driving the voltage and reorienting the LC in each individual pixel separately, without leakage of the electromagnetic field. The size of individually driven pixels separated by dielectric walls can be as small as 1 μm. [32] To demonstrate the optical features of such a system, we calculated the band diagram using the refractive indices of 5CB LC (n_o² = ε_o = 2.25, n_e² = ε_e = 3.24), which are comparable to the refractive indices of LCs in the THz regime [33], and dielectric walls with ε_wall = 3. For the demonstration, we selected a pixelated unit cell with D = 2, which is shown in Figure 5a. The thickness of the walls is 0.2D, which corresponds to a wall thickness of 200 nm and a pixel size of 1 μm, as reported in ref. [32]. Consequently, the size of the unit cell is A = 4 μm. Note that we assume a sharp change of the refractive index between pixels, whereas experimentally the actual realized profile of such a change could lead to scattering or coupling of TE and TM modes and would need to be optimized for an actual application. The corresponding band diagram is shown in Figure 5b.
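Given a table of band frequencies along the k-path (for example, as exported from an MPB run), the midgap frequency and gap-midgap ratio used above can be evaluated directly. The helper below assumes the frequencies are stored as a (num_bands, num_k) array and that band n is 1-indexed; it is a generic sketch, not part of the authors' workflow.

```python
import numpy as np

def band_gap(freqs, n):
    """Midgap frequency and gap-midgap ratio of the gap above band n.

    freqs: array of shape (num_bands, num_k) with band frequencies along the k-path;
    n must be smaller than num_bands.  Returns (midgap, ratio); ratio is 0 if the
    bands overlap and no full gap exists.
    """
    lower = freqs[n - 1].max()         # top of band n over all k-points
    upper = freqs[n].min()             # bottom of band n+1 over all k-points
    midgap = 0.5 * (lower + upper)     # average of the two gap edges
    ratio = max(upper - lower, 0.0) / midgap
    return midgap, ratio
```

For instance, band_gap(freqs, 1) gives the first gap and band_gap(freqs, 2) the second, from which plots like Figures 3 and 4 can be assembled.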
A partial bandgap with a gap-to-midgap ratio of 0.06 occurs at the M point and has a midgap frequency of 0.423 c/A, corresponding to a wavelength of approximately 10 μm. Figure 5c,d shows how the band diagram would change for a different distribution of "empty" and "full" pixels within the unit cell. In this particular case, nontrivial photonic effects occur for propagation with the k-vector pointing along the diagonal of the unit cell (the M-Γ high-symmetry points on the band diagram). While the presence of the partial bandgap confirms that LCs can in fact be used to create 2D pixelated photonic crystals with nontrivial optical properties, full bandgaps are usually desired. Larger bandgaps can be achieved by increasing the birefringence of the LCs. Additionally, higher dielectric contrast is also needed for realizing full 3D photonic bandgaps. [1] High-birefringence LCs, for example with Δn up to 0.7, have already been reported. [34][35][36][37] Birefringence in the THz regime can also be improved by using LC-nanoparticle composites. [38,39] With the increased interest in the THz spectral range for 6G wireless communication, [40] also in combination with LCs for the design of active devices, [41] we expect that new and better materials will emerge in the future. By using high-birefringence materials or composite materials, self-assembled 2D LC structures with smaller dimensions [42,43] could serve as photonic crystals as well, possibly leading to the manipulation of shorter light wavelengths. It is also worth mentioning that only a very simple photonic crystal unit cell has been used in this work as a proof of concept. Therefore, we expect that larger and full bandgaps could also be achieved by further optimizing the configuration of "empty" and "full" pixels or by including insertions (i.e., static pixels) made of materials with higher refractive indices.
Conclusion
To conclude, we have shown that the pixelation of the dielectric features within the unit cell of photonic crystals does not have a significant impact on the band structure. Already a low-resolution pixelation which only roughly matches the original structure turns out to be a good approximation for the frequencies of the lowest photonic bands. This shows that pixelated unit cells could be used to simplify the design of photonic crystals. Additionally, the bandgaps are still present in the system even if the contrast in dielectric constants between the pillar and its surrounding is small, namely, comparable to the difference between the ordinary and extraordinary refractive indices in some of the soft birefringent materials. Using those in combination with the pixelated design could lead toward fully reconfigurable photonic crystals.
2023-06-03T15:07:40.510Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "75333ea6568c16e9a883aac83123e249baf449ec", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adpr.202300082", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "f7e968215bd509a12d3e87cd5451ecb5bf5fea0e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
259242171
pes2o/s2orc
v3-fos-license
Two-Stream Network One-Class Classification Model for Defect Inspections Defect inspection is important to ensure consistent quality and efficiency in industrial manufacturing. Recently, machine vision systems integrating artificial intelligence (AI)-based inspection algorithms have exhibited promising performance in various applications, but practically, they often suffer from data imbalance. This paper proposes a defect inspection method using a one-class classification (OCC) model to deal with imbalanced datasets. A two-stream network architecture consisting of global and local feature extractor networks is presented, which can alleviate the representation collapse problem of OCC. By combining an object-oriented invariant feature vector with a training-data-oriented local feature vector, the proposed two-stream network model prevents the decision boundary from collapsing to the training dataset and obtains an appropriate decision boundary. The performance of the proposed model is demonstrated in the practical application of automotive-airbag bracket-welding defect inspection. The effects of the classification layer and two-stream network architecture on the overall inspection accuracy were clarified by using image samples collected in a controlled laboratory environment and from a production site. The results are compared with those of a previous classification model, demonstrating that the proposed model can improve the accuracy, precision, and F1 score by up to 8.19%, 10.74%, and 4.02%, respectively. Introduction Defect inspection is important in the manufacturing industry and is required to ensure consistent product quality and improve the costs and efficiency of the entire manufacturing process. Human visual inspection, however, is time-consuming, labor-intensive, and prone to human errors. In contrast, machine vision inspection using cameras, optics, and inspection software enables fast and robust low-cost inspection. Therefore, it has been increasingly adopted in various manufacturing industries [1][2][3][4][5]. For decades, numerous studies on machine vision inspection have been conducted [6][7][8][9][10], but traditional inspection techniques still face challenges in dealing with variations in environmental conditions and part appearance. Recently, inspection algorithms integrating artificial intelligence (AI) techniques have shown promise and improved the accuracy and robustness of defect inspection. These algorithms have been employed in various manufacturing industries, including textile [11], fabric [8][9][10], and steel surface [4,12]. Defect inspection using deep learning algorithms achieved enhanced accuracy and robustness by learning features from the large training dataset. A number of prominent architectures and pre-trained models, such as AlexNet [13], VGGNet [14], ResNet [15], and MobileNet [16], have emerged, and these are accompanied by various techniques to enhance inspection performance. Wei et al. achieved an inspection accuracy of 98.5% using convolutional neural network (CNN)-based algorithms with image preprocessing, such as noise reduction and binarization, to detect defective products in the textile industry [17]. Yang et al. used the you only look once (YOLO) v5 object detection algorithm to detect and identify welding defects on steel pipes. The proposed model achieved an accuracy of 97.8%, demonstrating its potential for real-time welding defect detection [18]. Kim et al. 
presented a skip-connected convolutional autoencoder for advanced printed circuit board (PCB) inspection. The proposed unsupervised autoencoder model delivered promising performance, with a detection rate of up to 98% in 3900 defect and non-defect images [19]. Tang et al. proposed a skip autoencoder to improve the accuracy of anomaly detection and address labeling issues. Leveraging a pre-trained feature extractor and skip connections, the proposed method achieved better performance, showing a maximum area under the curve (AUC) of 0.98 [20]. Upadhyay et al. developed a U-Netbased deep learning framework to detect engine defects. They applied a hybrid motion deblurring method for image sharpening and denoising, combined with a customized generative adversarial network (GAN) model, to remove the blur effect based on classic computer vision techniques. The deep learning framework achieved precisions and recalls of over 90% [21]. Yoon Jong-Pil et al. presented a defect classification approach based on a convolutional variational autoencoder (CVAE) and deep CNN for metal surface defect inspection. The proposed conditional CVAE achieved a maximum completion of 0.9969 [22]. Although AI-based inspection provides superior performance compared to traditional methods, several limitations remain in applying this approach to practical situations. One major challenge is the performance degradation caused by data imbalance. AI-based inspection requires a large training dataset. However, practically, the collected data often suffer from class imbalance, where certain classes have considerably fewer samples than others. In defect inspection, collecting sufficient defective samples is difficult because the defect rate is quite low (usually under 1-5%) in general manufacturing processes. To address this issue, various methods have been proposed, including data augmentation [23][24][25], synthetization [19,26], and an adjustment of the weight or loss function of the network [27]. Wang et al. proposed a novel loss function called 'mean false error' together with its improved version called 'mean squared false error' for deep network training using imbalanced datasets [28]. Mao et al. improved data imbalance by extending the training dataset using a GAN model and achieved up to 86.8% accuracy [29]. One-class classification (OCC), which identifies objects belonging to a specific class given only positive samples of that class, is attracting attention as a solution to this problem [30][31][32][33][34][35][36][37][38][39][40]. Unlike general machine-learning-based classification algorithms, the OCC model aims to learn a classification boundary that separates the target class from other classes in the input space. OCC can thus be utilized effectively to solve data imbalance problems, as it does not require negative samples and can be trained only using positive samples. Shin et al. proposed a one-class support vector machine (SVM) model to detect mechanical defects in electronic devices, achieving up to 93.9% accuracy compared to the multilayer perceptron method [31]. Ruff et al. proposed a deep support vector data description that extracts the similarity between patterns of general categories and new data. The proposed method achieved up to 99.7% average AUCs on MNIST and CIFAR-10 [34]. Lee et al. proposed a one-class deep-learning-based fault-detection module for imbalanced industrial time-series data. 
Using four different networks, i.e., MLP, ResNet, LSTM, and ResNet-LSTM, for prediction, they achieved an excellent fault prediction accuracy of 96% [36]. Goyal et al. developed a deep robust one-class classification (DROCC) to help address the representation collapse problem. The DROCC achieved an average accuracy of 74.2% using the CIFAR-10 dataset [37]. The representation collapse problem is a major issue in OCC, and it can arise when the diversity of the training data is insufficient, or the data follow a repetitive pattern. In such cases, the decision boundary is fitted too tightly to the training dataset, leading to a decrease in the generalization performance for new data. In practical applications, the environmental conditions for collecting training and test samples may not be the same, which can lead to false positive errors, resulting in overall performance degradation. In this paper, we propose a two-stream network OCC model for defect detection that attempts to address the representation collapse problem, which has been a critical issue when applying the OCC model to practical applications. The proposed two-stream network model alleviates the representation collapse problem by introducing two feature extractor networks, i.e., global and local feature extractor networks. The global feature extractor network, which is designed to learn a general feature of the target class, can extract a feature vector that is not affected by variations in environmental conditions. The local feature extractor network is designed to capture features specific to the training dataset, and it extracts the target class-oriented feature vectors. Two feature vectors output from each network are merged and passed through the following classification layer for the final decision. Three types of classification layers, i.e., a one-dimensional (1D) convolution layer, a fully connected layer, and an SVM layer, were tested for the target class classification to determine the optimal classification layer. The proposed two-stream OCC model was verified by using an image dataset obtained from the practical application of automotive airbag bracket inspection. The main contributions of this paper are as follows: • A two-stream network architecture composed of global and local feature extractor networks is proposed to resolve the representation collapse problem of the OCC model. • The classification performances of three types of classification layers, i.e., 1D convolution, fully connected, and SVM layers, are described to elucidate the type that yields the optimal classification performance. • The performance of the proposed OCC model is verified using the practical application of automotive airbag bracket inspection. Two-Stream Network OCC Model OCC involves training a model using data from a single class and capturing its feature vectors. Although OCC is effective in capturing the distribution of given target data, its ability to recognize new data with different characteristic distributions may be diminished. To address this limitation, which is called representation collapse, this paper proposes a two-stream network OCC model. The main idea is to introduce a global feature extractor network to alleviate the issue of decision boundary collapse relative to the training data. 
By merging a global feature vector representing object-oriented general characteristics with a class-oriented local feature vector, the two-stream network model prevents the decision boundary from being overfitted to the training data and balances both features to identify an appropriate decision boundary. Figure 1 shows the two-stream network OCC model proposed in this paper. It consists of two types of feature extractor networks, i.e., global and local feature extractor networks. The global feature extractor network is designed to capture all characteristics of the inspection objects, such as geometrical and topological characteristics. Generally, the global feature is an object-oriented characteristic, and it can be consistently extracted regardless of variations in environmental conditions. The local feature extractor is responsible for extracting the target class-oriented characteristics from the training datasets. The local feature describes the surface characteristics of inspection objects, such as colors and textures. Unlike the global feature, the local feature presented in the image can be influenced by environmental conditions. The two feature vectors obtained from each feature extractor network are merged as a single feature vector and passed through the classification layer.
The global feature extractor network is implemented using an Inception V3 network model that consists of a deep neural network architecture including 94 convolution layers and 20,861,480 parameters. It includes three inception modules, which are composed of multiple parallel paths with different filter sizes, to create a rich set of features that capture different aspects of the input image. The inception modules and auxiliary classifiers in the global feature extractor network alleviate the overfitting problem and improve the consistency of feature extraction. The details of the global feature extractor network are presented in Table 1. The global feature extractor network is pre-trained using the ImageNet dataset separately from the other parts of the entire two-stream network. In the entire model training process, the weights of the global feature extractor network are fixed at the pre-trained values to prevent the feature vector from being biased relative to the training dataset. The global feature vector F_g extracted from the global feature extractor network can be expressed as F_g = K_g(I), where I represents the image, K_g denotes the global feature extractor network, and D is the dimension of the global feature vector.
The local feature extractor network is composed of four convolution layers and three max-pooling layers including 3,796,480 parameters, as presented in Table 2. A simple CNN structure is used for the local feature extractor network to capture the features specific to the target dataset. The local feature extractor network captures the target data-oriented local feature vector F_l, which can be determined as F_l = K_l(I), where I represents the image and K_l denotes the local feature extractor network. The dimension of the local feature vector is identical to that of the global feature vector. The global and local feature vectors are merged into a single unified feature vector F_u, which is passed through the classification layer to determine the final decision of the defect inspection.
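The architecture just described can be sketched in a few dozen lines of PyTorch. The sketch below is an interpretation, not the authors' code: the feature dimension, the local-branch channel widths, the linear projection of the Inception feature to that dimension, and the use of concatenation for merging are all assumptions (the paper specifies these via Tables 1-3 and parameter counts not reproduced here); inputs are assumed to be resized to 299 × 299 with grayscale frames replicated to three channels; and the 1D-convolution head (one of the three heads tested below) only returns a raw score, since the loss target and decision threshold for the one-class training are not detailed enough here to reproduce. Requires torchvision ≥ 0.13 for the weights API.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamOCC(nn.Module):
    """Sketch of the two-stream one-class model: a frozen, ImageNet-pretrained
    global branch plus a trainable local CNN branch, merged by concatenation
    and scored by a 1D-convolution classification head."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # Global branch: Inception V3 pretrained on ImageNet, weights frozen.
        backbone = models.inception_v3(
            weights=models.Inception_V3_Weights.DEFAULT, aux_logits=True)
        backbone.fc = nn.Identity()            # expose the 2048-d pooled feature
        for p in backbone.parameters():
            p.requires_grad = False
        self.global_net = backbone
        self.global_proj = nn.Linear(2048, feat_dim)   # assumed projection to D dims

        # Local branch: small CNN (four conv + three max-pool layers in the paper;
        # the channel widths here are illustrative).
        self.local_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, feat_dim),
        )

        # Classification head: 1D convolution over the merged feature vector.
        self.head = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        self.global_net.eval()                 # keep the frozen branch in eval mode
        with torch.no_grad():
            f_g = self.global_net(x)           # object-oriented global feature
        f_g = self.global_proj(f_g)
        f_l = self.local_net(x)                # dataset-oriented local feature
        f_u = torch.cat([f_g, f_l], dim=1)     # merged feature vector F_u (assumed concat)
        return self.head(f_u.unsqueeze(1))     # (batch, 1) anomaly score
```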
Three types of classification layers, including a 1D convolution layer, a fully connected layer, and an SVM layer, were implemented to validate the effect of the classification layer on the overall inspection performance and to identify the optimal classification layer. The details of each classification layer are presented in Table 3.
Model Verification
The two-stream network OCC model was verified using the image samples collected by the practical vision inspection system of an automotive airbag bracket. The airbag bracket was manufactured using projection welding, joining a nut on a bracket plate. Faults may have occurred in the welding procedure, resulting in several types of defects such as nut omissions, axial twisting, and surface abnormalities, as shown in Figure 2.
These types of defects should be detected by the vision inspection system, and this study verifies the performance of the proposed two-stream network OCC model by evaluating the inspection accuracy using positive and negative airbag bracket samples.
Data Collection
The image datasets for training and performance evaluation were collected in two different environments, i.e., a laboratory and a production site. The vision system, including the camera, lens, lighting, and kinematic configuration, was set identically in both environments, as shown in Figure 3a,b. An area scan monocamera (acA2440-20gm, Basler, Ahrensburg, Germany) with a resolution of 2448 × 2048 (5 MP) and a 16 mm lens (MVL-KF1628M, HIKROBOT, Zhejiang, China) was used as the vision system. The working distance between the lens and the airbag bracket was set to 10.0 cm. A total of 870 images of airbag bracket samples, including 696 positive and 174 negative images, were collected in the laboratory setup, and 136 images, including 122 positive and 14 negative images, were captured in the production site setup.
Subsequently, 80% of the images collected in the laboratory were used to train the two-stream network model, and the remaining 20% were used for model verification. The images collected on the production site were used only for model verification. Figures 4 and 5 show the airbag bracket image samples collected in the laboratory and on the production site, respectively.
Training
The region of interest (ROI) for the airbag bracket's inspection can be defined as the rectangle centered at the bracket's center that tightly encloses the nut region. The ROI was cropped in raw image samples and resized to 750 × 750 for model training and verification. To enlarge the training dataset, several variations were applied to the raw images: the center of the cropped region was randomly set within 100 pixels of the center of the bracket to reflect possible variations in the bracket position, and each image was rotated by 90°, 180°, and 270° and flipped. A total of 3470 image samples were used for training. Figure 6 shows the dataset enlargement procedure applied for model training. The Adam optimizer and Huber loss function were used for training, and the maximum number of epochs was set to 100.
Evaluation Metrics
The performance of the proposed two-stream network model was evaluated by four metrics: accuracy, precision, recall, and F1 score. These metrics are determined as Accuracy = (TP + TN)/(TP + TN + FP + FN), Precision = TP/(TP + FP), Recall = TP/(TP + FN), and F1 score = 2 × Precision × Recall/(Precision + Recall), where TP, TN, FP, and FN represent the numbers of true positive, true negative, false positive, and false negative predictions, respectively.
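For reference, the four metrics can be evaluated directly from confusion-matrix counts. The counts in the example below are not taken from the paper's tables; they are inferred so that they reproduce the reported 1D-convolution results on the laboratory test split (accuracy 0.8966, precision 0.9236, recall 0.9500, F1 0.9366, with TN = 23 of 34), so treat them as a consistency check rather than published data.

```python
def inspection_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Counts inferred from the reported laboratory-split results of the 1D-conv head.
print(inspection_metrics(tp=133, tn=23, fp=11, fn=7))
```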
Results
The performance of the proposed two-stream network model was evaluated from three perspectives: the effect of the classification layer, the effect of the two-stream network architecture, and performance in comparison with those of previous methods. In the performance evaluation, the two-stream model was trained only with the datasets gathered in the laboratory, and it was tested using two datasets gathered in the laboratory and on the production site.
Performance Evaluation in Terms of the Classification Layer
The proposed two-stream network OCC model was implemented with three types of classification layers: 1D convolution, fully connected, and SVM layers. Figure 7 and Table 4 present the experimental results of the two-stream network model according to the selected classification layer, as evaluated using the laboratory dataset. In total, 140 positive and 34 negative images collected in the laboratory were used in this experiment. The confusion matrices in Figure 7 demonstrate that the SVM and 1D convolution layers achieved the best performance in classifying the TP (136/140) and TN (23/34) labels, respectively. The 1D convolution layer showed the best accuracy, precision, and F1 score of 0.8966, 0.9236, and 0.9366, respectively, whereas the SVM layer yielded the highest recall of 0.9714, as shown in Table 4.
Figure 8 shows the experimental results, as evaluated by using the production site dataset. In total, 122 positive and 14 negative images collected on the production site were used in this experiment. The confusion matrices in Figure 8 demonstrate that the fully connected and 1D convolution layers achieved the best performance in classifying the TP (119/122) and TN (14/14) labels, respectively. The 1D convolution layer showed the best accuracy, precision, and F1 score of 0.9706, 1.0000, and 0.9833, respectively, whereas the fully connected layer achieved the highest recall of 0.9754, as shown in Table 4.
Performance Evaluation of the Two-Stream Network Model
The performance of the two-stream network model was compared with those of models without one of the global and local feature extractor networks in this experiment. The 1D convolution layer was used for the classification layer in this experiment. Table 5 presents a comparison of the performance of the two-stream network model with those of the single-stream networks. The performance evaluation was conducted for both the laboratory and production site datasets. The global feature extractor network model exhibited the lowest performance for both datasets, with an accuracy of 0.8621, a precision of 0.8580, and an F1 score of 0.9205 for the laboratory dataset, and an accuracy of 0.8971, a precision of 0.9030, and an F1 score of 0.9453 for the production site dataset. The 2S-1DOC model exhibited the highest performance for both datasets, with an accuracy, precision, and F1 score of 0.8966, 0.9236, and 0.9366, respectively, for the laboratory dataset, and an accuracy, precision, and F1 score of 0.9706, 1.0000, and 0.9833, respectively, for the production site dataset. Figure 9 presents the t-distributed stochastic neighbor embedding (t-SNE) plots of the feature vectors output from the local, global, and two-stream networks. The t-SNE plot visualizes the similarity between feature vectors by mapping high-dimensional feature vectors to a lower-dimensional (2D) space. The feature vectors of the two-stream network, which combines the characteristics of the local and global feature extractor networks, clearly distinguish the true and false samples with a single decision boundary.
Table 6 compares the performance of the two-stream network model and previous image classification models. Six representative classification models, InceptionV3 [41], ResNet101V2 [14], Xception [42], MobileNetV2 [15], VGG-16 [13], and PaDiM [43], were tested for the performance comparison. The two-stream network model presented the highest accuracy and precision of 0.8966 and 0.9236, respectively. However, ResNet101V2, Xception, MobileNetV2, and VGG-16 yielded the highest recall result of 1.000, and PaDiM showed the highest F1 score of 0.9388. The InceptionV3 model presented the lowest accuracy, precision, and F1 scores of 0.8621, 0.8580, and 0.9205, respectively. Figure 10 presents the t-SNE plots of the feature vectors of the two-stream network model and previous models. As shown in the figure, the proposed two-stream network most clearly distinguished the true and false samples compared to previous models. Table 7 presents a comparison of the results obtained using the proposed and previous models on the production site dataset. The two-stream network model presents the highest accuracy, precision, and F1 scores of 0.9706, 1.0000, and 0.9833, respectively.
Discussion
In the manufacturing sector, defect inspection using AI technology has been extensively studied to optimize labor costs and process automation. However, due to the difficulty of collecting data in the field and data imbalances, OCC has recently attracted attention for various applications. OCC is efficient in applications where data are imbalanced, but it has a critical limitation in that the learned features can collapse onto the training data, resulting in false-positive errors. To overcome this limitation, we developed a two-stream network OCC model consisting of local and global feature extractor networks followed by a classification layer. The performance of the proposed model was validated using a practical example of automotive-airbag bracket-welding defect inspection. The image datasets of the airbag bracket collected in two different environments, i.e., a laboratory and a production site, were used for the training and validation of the proposed model. For the dataset collected in the laboratory, our model achieved results of 0.8966, 0.9236, 0.9500, and 0.9366 for the accuracy, precision, recall, and F1 score, respectively. For the production site dataset, the model achieved results of 0.9706, 1.0000, 0.9672, and 0.9833 for the accuracy, precision, recall, and F1 score, respectively. The inspection performance of the entire model could be affected not only by the performance of the feature extraction layers but also by that of the classification layer. Three types of classification layers, 1D convolution, fully connected, and SVM layers, were tested to investigate the effect of the classification layer and to identify the optimal classification layer presenting the best inspection performance. The 1D convolution layer showed the best accuracy, precision, and F1 score for both the laboratory and production site datasets. The fully connected layer yielded slightly better performance than the 1D convolution layer only in terms of recall. In the performance comparison between the laboratory and production site datasets, the SVM layer exhibited a decrease in the accuracy, precision, recall, and F1 score by 9.44%, 0.24%, 8.87%, and 4.58%, respectively, for the production site dataset compared with the laboratory dataset. By contrast, the 1D convolution layer showed an increase of 8.37% in accuracy, 8.54% in precision, 1.77% in recall, and 5.01% in the F1 score for the production site dataset compared to the laboratory dataset. These results indicate that classification by the 1D convolution layer is more appropriate for alleviating representation collapse than that by the other layers. Compared with the single-stream network model, the two-stream network model showed an increase of up to 7.35% in accuracy, 9.70% in precision, and 3.80% in F1 score, proving that the two-stream model achieved a better performance than the existing single-stream model. In addition, the proposed two-stream model exhibited performance improvements in the production site dataset's results compared with the laboratory dataset results, with an increase in accuracy of 8.25%, precision of 8.27%, recall of 1.81%, and F1 score of 4.99%, demonstrating that the proposed model maintains the inspection performance for datasets gathered under different environmental conditions than the training datasets. This finding proves that the two-stream network architecture contributes to reducing the performance degradation caused by representation collapse.
The effect of the two-stream network on performance improvement is clearly presented by the t-SNE plots shown in Figure 9. In Figure 9a, the feature vectors produced by the global feature extractor network provide a rough classification of the true and false samples, and there is some overlap observed among certain portions of the samples. The lack of a distinct decision boundary can be attributed to the global feature extractor network's emphasis on capturing general features. In contrast, the feature vector generated by the local feature extractor network depicted in Figure 9b exhibits clear differentiation between true and false samples. Nevertheless, determining a single decision boundary is challenging as false samples are divided into two separate clusters. By combining the characteristics of the global and local feature extractor networks, the feature vector generated by the two-stream network depicted in Figure 9c effectively discriminates between true and false samples using a single decision boundary. The comparison between the proposed two-stream network model and the previous model confirmed its enhanced classification performance. In the performance comparison with the previous model, the proposed two-stream model showed the best performance for most performance indices, including the accuracy, precision, and F1 score for production site datasets. The improvements in accuracy, precision, and F1 score were up to 65.01%, 10.74%, and 40.05%, respectively. The PaDiM method demonstrated proficient classification performance within the laboratory dataset. However, its performance significantly deteriorated when applied to the production site's dataset, which has distinct environmental conditions compared to the training dataset. To understand the rationale behind the performance improvement in the proposed two-stream network model, we examined the t-SNE plots presented in Figure 10. The feature vectors of the previous model did not exhibit clear classification boundaries for true and false samples. In contrast, the feature vectors generated by the proposed model provided the most distinct differentiation between true and false samples. The significance of this enhancement in classification features lies in its ability to alleviate the inherent bias toward true samples, which frequently possess larger datasets in comparison to false samples. The biased predictions of previous models toward true samples had a detrimental impact on precision performance, resulting in its degradation. The two-stream network OCC model proposed in this study exhibited high classification performance with respect to both the laboratory and production site datasets. However, the validation was not sufficient for verifying the classification performance of negative samples because not enough defective samples were collected at the production site. In future studies, sufficient negative samples must be collected, and the performance of the proposed model should be further validated with those samples. Conclusions In this paper, we proposed a two-stream network OCC model to resolve the representation collapse problem of OCC models. The performance of the proposed model was validated in terms of the classification layer and network architecture, and comparisons were carried out using previous methods that implement image samples collected in the practical example of airbag bracket inspection. 
The performance results clearly indicated that the proposed model effectively addressed the representation collapse problem, resulting in enhanced inspection accuracy in comparison to existing classification models. Moreover, the classification performance of the proposed two-stream model exhibited an impressive improvement of up to 10% compared to previous classification models. This performance improvement can be accomplished using the novel two-stream network, which seamlessly integrates both general and data-specific features. The practical applications of defect inspection can greatly benefit from the implementation of the two-stream network model presented in this paper. Its incorporation is poised to make valuable contributions toward enhancing performances in vision inspection tasks.
2023-06-25T05:08:24.943Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "fa575f5ac9217c6879786643ff1e741b170c2831", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "fa575f5ac9217c6879786643ff1e741b170c2831", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
35774621
pes2o/s2orc
v3-fos-license
Many-Body Approximations in the sd-Shell Sandbox
A new theoretical approach is presented that combines the Hartree-Fock variational scheme with the exact solution of the pairing problem in the finite orbital space. Using this formulation in the sd-space as an example, we show that the exact pairing significantly improves the results for the ground state energy.
I. INTRODUCTION
The classical Hartree-Fock (HF) approximation is a prototype of the modern approach to the quantum many-body problem related to the energy density functional [1,2]. When applied to complex nuclei, the density functional theory may provide a universal description across the nuclear chart. The pairing interaction that is present in nuclei as well as in fermionic condensed matter systems is usually included in the Hartree-Fock-Bogoliubov (HFB) form [3]. The well known deficiencies of the HFB approach for mesoscopic systems follow mainly from its non-conservation of particle number. As a result, unphysical features are introduced into the dynamics, the superfluid phase transition appears too sharp, and the correlational energy produced by pairing might be severely underestimated. As was shown earlier [4,5], the pairing part of the problem can be solved numerically quite easily with the help of the seniority representation in a spherical basis, and its exact solution significantly improves the results. It was also sketched in [4] how other parts of the interaction can be incorporated into the exact pairing method in an approximate way that resembles the HF approach. This can be done in an iterative fashion: the exact pairing solution using the starting single-particle basis determines the actual occupation numbers; these (in general, fractional) occupancies self-consistently determine, in the HF spirit, an improved single-particle basis where we again solve the pairing problem, and so on until convergence. In this way both mean-field features, deformation and pairing, are accounted for. The main purpose of the current work is to further develop this Hartree-Fock plus pairing correlation (HFP) method, which essentially is an intermediate step from the HF approach towards the full shell-model (SM) description. On one hand, we want to keep the simplicity and modest computer demands that are inherent properties of the mean-field approach. On the other hand, we take into consideration pairing and other physical effects beyond the simple HF, or mean field in general. We check our approach for the sd-shell nuclei, where the SM with large-scale diagonalization works perfectly [6] and can serve as a searchlight illuminating the correct direction of motion. The success of this attempt will allow the extension of the approach to heavy nuclei, where the catastrophic growth of dimensions makes the complete shell model solution unrealistic.
II. OUTLINE OF THE METHOD
As in most mean-field approaches, we formulate this method as a variational one. As in the shell model, we assume a general form of the two-body Hamiltonian, Eq. (1), that includes the single-particle term t and the (antisymmetrized) two-body interaction V. The variational wave function |Ψ⟩ will be defined below. The wave function and all properties of the system follow from the minimization of the expectation value ⟨Ψ|Ĥ|Ψ⟩. For our test of the method, we will take for V the USDB interaction from the sd-shell model [6]. This allows us to compare the results obtained using our approximate method with the exact shell model calculations in the same single-particle model space.
The ground state wave function |Ψ⟩ for a fixed particle number N can be presented as a superposition of basis states, where each basis state |d⟩ is a Slater determinant which for N fermions can be written in the usual way. The single-particle states φ_ν can be found with the help of the variational principle, as is usually done in the HF method. The approach is actually defined by the selection of the space D spanned by the determinants |d⟩. If we choose only one Slater determinant as our variational wave function (3), we come to the standard HF approximation. If the manifold D includes all possible configurations, then we get the exact shell-model solution. In this article our choice is determined by the pairing phenomenon, which smears the Fermi surface and converts the Fermi-gas ground state into a superposition of Slater determinants. In the case of a spherical system with the pairing forces taken as the J = 0, T = 1 part of the two-body interaction (1), we have the seniority s as a good quantum number. For an even number of particles, the ground state has s = 0, while for an odd number s = 1. In this simple case we can construct the basis of Slater determinants |d⟩ by occupying single-particle levels |jm⟩ in pairs, Eq. (5), where ã†_{jm} = (−1)^{j−m} a†_{j,−m} is the creation operator for the time-conjugate single-particle state with respect to a†_{jm}. Here we omit all quantum numbers except the total angular momentum j and its projection m. The presence of other types of interaction in general breaks spherical symmetry and brings in a deformed mean field. In the case of a deformed nucleus, even if we had only the J = 0 part of the two-body interaction (1) in the spherical representation, we would have to take into consideration a broader class of pairs arising as a result of the splitting and mixing of the original spherical states by deformation. Here we limit ourselves to the case of axially symmetric deformation, when the single-particle orbitals |νm⟩ are still characterized by the angular momentum projection m along with other quantum numbers ν. According to the Kramers theorem, the orbitals |νm⟩ and |ν, −m⟩ are degenerate. However, the pairs may also be formed by states m and −m belonging to different sets of the remaining quantum numbers. Thus, for our basis Slater determinants |d⟩ we assume the form of Eq. (6). We construct the variational wave function (3) as a superposition of the Slater determinants (6) for a given particle number. Using such a form we hope to correctly account for pairing correlations in the deformed case while at the same time crucially reducing the dimension of the space D in comparison with the full shell model. Actually, prescriptions (5,6) are valid only for an even number of particles. For an odd particle number, we use the same Eq. (6) but add one additional creation operator that corresponds to the odd particle. The odd particle can be placed in any empty single-particle state, and the states are divided into classes with a definite value of the angular momentum projection. In the current application of our method we make a simplifying approximation, treating protons and neutrons separately. It means that, though we consider the full two-body interaction (1) including the T_z = 0 part, the variational function (3) is constructed as the product of proton and neutron parts. Clearly, we are losing here the proton-neutron correlations, although their mutual contributions to the mean field are fully accounted for.
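The pair construction just described can be made concrete with a small enumeration script. The sketch below builds the (m, −m) pair candidates for the sd-shell single-particle states of one nucleon species, allowing the two partners to belong to different orbitals, and lists all fully paired determinants for a given particle number. This is only an illustration of the combinatorics: the actual HFP basis is built from pairs of deformed HF orbitals and may carry further restrictions, so the counts printed here are not meant to reproduce the basis sizes quoted in the paper.

```python
from itertools import combinations

# Single-particle orbitals |nu, m> of the sd shell (one species), labelled by
# (orbital, 2m).  Kramers partners have opposite projection m.
SD_ORBITALS = [(orb, m2)
               for orb, j2 in (("d5/2", 5), ("s1/2", 1), ("d3/2", 3))
               for m2 in range(-j2, j2 + 1, 2)]

def pair_candidates(orbitals):
    """All (m, -m) pairs, allowing the partners to come from different orbitals
    nu, nu' (the broader class of pairs needed once deformation mixes the levels)."""
    plus = [s for s in orbitals if s[1] > 0]
    minus = [s for s in orbitals if s[1] < 0]
    return [(p, q) for p in plus for q in minus if p[1] == -q[1]]

def paired_determinants(orbitals, n_particles):
    """Enumerate fully paired basis determinants built from n_particles/2 pairs,
    with no single-particle state occupied twice (Pauli principle)."""
    dets = []
    for combo in combinations(pair_candidates(orbitals), n_particles // 2):
        used = [s for pair in combo for s in pair]
        if len(set(used)) == len(used):
            dets.append(combo)
    return dets

print(len(paired_determinants(SD_ORBITALS, 4)))   # pair configurations for 4 particles
```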
The variation over amplitudes C d with the additional normalization condition of the wave function, Ψ|Ψ = 1, leads us to the usual set of equations, The matrix elements d|Ĥ|d ′ are calculated for the determinants built on a given single-particle basis, and equations (7) are solved numerically. The mean-field basis is found from the self-consistent HF equations: where is the single-particle HF Hamiltonian, ǫ ν are the singleparticle energies, and ρ is the density matrix selfconsistently determined by The mean field potential is given by its matrix elements, In this conventional mean field formulation, the potential (11) contains the direct and exchange contributions. The pairing effects, with strict particle number conservation, are contained in the superposition of the Slater determinants (2) used instead of the single HF determinant. The whole construction can be further improved by using the exact but more complicated variational approach relating the single-particle basis to the full set of the coefficients of the superposition (2). Such a possibility will be considered in the future. The HFP scheme of solution is the following: • Start with the spherical single-particle basis |κm • Choose in this basis the initial diagonal density matrix ρ corresponding to occupation numbers specific for prolate or oblate shapes (pairs with small or large |m|, respectively) • Solve the HF variational equation (8) and get the single-particle spectrum (φ ν , ǫ ν ), in general corresponding to a deformed field • Construct the "paired" class of many-body basis wave functions according to eq. (6) and calculate the matrix elements of the Hamiltonian H • Solve the variational equation (7) and obtain the ground state wave function • Calculate the next-step density matrix (10) • Repeat the procedure starting from the step three until convergence The converged results will certainly be a local minimum of eq. (2). Exploration of different starting choices in step 2 is needed to find a global minimum. In our study here we start with a spherical single-particle basis (because the USDB interaction is so defined) but in principle any convenient axial basis could be used. In the end, the ground state energy can be found as the Hamiltonian expectation value over the resulting ground state wave function |Ψ , III. RESULTS We performed calculations of ground state energies for all nuclei within the sd-shell region, from 17 O to 39 Ca. Our results [7] are summarized in Figs. 1-5. In Figs. 1 and 2 we show the energy gain from HFP compared to HF, Typical values are one to a few MeV. One observes the well-known odd-even staggering that is characteristic of pairing. In the conventional HFB approach the pairing correlation is zero for many of these nuclei, including cases such as 24 O where the spherical shell gap is too large, and cases such as 20 Ne, 24 Mg and 28 Si where the deformed shell gap is too large to support BCS type pairing. The HFP method gives some correlation energy for all sd-shell nuclei for which there are at least two active particles ( 9 < N < 19 or 9 < Z < 19 ). In the practical solution of the equations we find that many sdshell nuclei have two or three energy minima. To have some confidence that we have found the lowest energy solution we start with several initial values of the density matrix including those that are prolate and oblate deformed, spherical and random. In Fig. 1 are 839 states. The HFP method requires 92 determinants for protons and 92 for neutrons. 
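(For reference, the self-consistency relations quoted as Eqs. (7)-(11) in Sec. II, together with the ground-state energy expression, were garbled in extraction; in the standard notation suggested by the surrounding text they read

\sum_{d'} \langle d|\hat H|d'\rangle\, C_{d'} = E\, C_{d} , \qquad h[\rho]\, \phi_{\nu} = \epsilon_{\nu}\, \phi_{\nu} , \qquad h_{12} = t_{12} + \Gamma_{12}[\rho] ,

\rho_{12} = \langle \Psi| a^{\dagger}_{2} a_{1} |\Psi\rangle , \qquad \Gamma_{13}[\rho] = \sum_{24} V_{12;34}\, \rho_{42} , \qquad E_{\rm g.s.} = \langle \Psi|\hat H|\Psi\rangle ,

and the energy gain plotted in Figs. 1 and 2 is the difference \Delta E_{\rm pair} = E_{\rm HF} - E_{\rm HFP}; the exact labeling in the original may differ.)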
The difference is clearly peaked at the N = Z line that can be explained by the proton-neutron pairing being not accounted for in the calculations. The HFP solution is very close to the exact solution around the edges of the sd shell (see Fig. 4). These nuclei are spherical and the HFP method is equivalent to the spherical exact-pairing method discussed in [4,5]. The largest deviation from exact is for nuclei near the middle of the sd shell. There are still pairing contributions for deformed nuclei, but the result is different from the naive expectation of just adding "spherical" contributions. For example, as shown in Fig. 2, the correlation energy is only about 400 keV for the deformed 20 Ne, compared to a total of about 3.4 MeV that would be obtained just from adding the 1.7 MeV correlation energies obtained for two neutrons and two protons in a spherical basis (e.g. 18 O and 18 Ne). Finally in Fig. 5 we show the intrinsic quadrupole moment obtained for the lowest energy solutions for all sd-shell nuclei. One observes the well known region of strongly prolate nuclei near 24 Mg. 28 Si is the most strongly oblately deformed, and there is an island of weak oblate deformation around 31 Si. It would be interesting to use our sd-shell sandbox to clarify the general question of why most nuclei are prolate deformed [8], by exploring the HFP results with different (but realistic) Hamiltonians. Proton Number Neutron Number IV. CONCLUSION Obviously, the HFP is still far from adequate away from semi-magic nuclei. The angular momentum nonconservation is certainly a significant deficiency of the wave function, that when repaired will introduce additional correlation energy. There are a number of ways that rotational correlation energies can be calculated, and we are optimistic that HFP wave functions can be used as a better starting point. Besides rotational energies, proton-neutron pairing effects are omitted in our wave functions. As known [9], such pairing correlations are quite strong close to the N = Z line; they should be included at the next stage of development. Effective Hamiltonians for the HFB solution are explored in [10]. Finally, some improvement may follow from including the non-axial configurations with the pairing between more general time-conjugate orbitals (most probably, the mean field in 24 Mg is triaxial [11], [12]).
2008-06-20T23:16:03.000Z
2008-06-20T00:00:00.000
{ "year": 2008, "sha1": "8b352ced35fcc5bed326e902ff772c0fababb850", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0806.3488", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6cf10fe950091b13918da0879307fe8e38928d63", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
239422708
pes2o/s2orc
v3-fos-license
Medical management of extra-ocular muscle cysticercosis- a clinical study Introduction: To study clinical diagnosis, results of investigations and role of medical management and their outcome in extra-ocular cysticercosis. Our study also emphasizes on the role of topical cyclosporine eye drops in the management of treatment related severe inflammatory response in extra-ocular cysticercosis. Methods: A total of 10 patients with extraocular cysticercosis were recruited for the study from our OPD, blood investigations, ultra-sonography for both eyes and whole abdomen were done. Computed tomography (NCCT) were done to rule out neurocysticercosis and orbital cysticercosis. Results: The commonest clinical presentation was cyst in the medial rectus muscle with it being the most common presenting manifestation. Conclusion: Extra-ocular cysticercosis can be managed with medical treatment. Oral Albendazole, topical and systemic steroids were given as a part of medical treatment, topical cyclosporin was added to the patients with more severe inflammatory response due to dying cysticercus. Introduction Cysticercosis is a serious problem in developing countries of Latin America, Asia and Africa, especially in areas of poverty and poor hygiene. Taenia solium is a member of Phylum Platyhelminthes, class Cestoda Order Cyclophyllidea and family Taeniidae [1]. In our study, we studied the patients with extra-ocular muscle cysticercosis, its presenting symptoms, most commonly involving extra-ocular muscles after ruling out any possibility of intra-ocular cysticercosis. All the patients were treated with medical therapy. The study also highlights the role of topical cyclosporine in the management of inflammatory response occurring due to dying cysticercus. Medical management of extra-ocular muscle cysticercosis is a good alternative to surgical management as it prevents surgical damage to the muscle involved, post-operative discomfort and long term post-operative medication. Ocular Cysticercosis-Clinical Presentation-Ocular involvement is usually unilateral but bilateral involvement may occur in cases of disseminated cysticercosis [1]. Left eye is more commonly involved in comparison to the right, possibly because larva may be preferentially routed to the left internal carotid artery which directly originates from the aorta. Parasite reaches the posterior segment through the posterior ciliary artery. Intraocular cysticercosis can involve either the anterior or the posterior segment. While the anterior segment cysticercosis is rarely seen. Intra-vitreal cysts-Various modalities have been described in the surgical management of intra-vitreal cysticercosis such as diathermy, photocoagulation and cryo-therapy. Surgical removal of the cyst can be through either the trans-retinal or trans-scleral route. Lid Cysticercosis-Involvement of the eyelids present as a subcutaneous, painless, mobile mass with varying degrees of mechanical ptosis [2]. Subconjuctival cysticercosis- Conjunctival involvement is usually in the form of a painless or painful yellowish, nodular subconjunctival mass with surrounding conjunctival congestion. Rarely subconjunctival abscess or granuloma may occur. Extraocular myo-cysticercosis-Cysticercosis of extraocular muscle usually presents as recurrent pain, redness, proptosis, ocular motility restriction, diplopia and ptosis. One or more extra-ocular muscles may be simultaneously involved. 
Optic Nerve-Optic nerve compression by the cyst may cause decreased vision, disc oedema and painful ocular motility [3]. Lacrimal Gland-Lacrimal gland cysticercus may cause chronic dacryo-adenitis and enlargement of the gland. Lacrimal canalicular obstruction due to adnexal cysticercus has also been reported [4]. Subretinal Cysticercosis-Patients with ocular cysticercosis may be asymptomatic or suffer mild to severe vision loss. Painless vision loss secondary to a parasitic infection may be presumed to be due to sub-retinal cysticercosis [5]. Posterior Segment Cysticercosis-In the posterior segment of the eye, vitreous cysts are more common than retinal or subretinal cysts, and the infero-temporal subretinal cyst is most frequently encountered. The macular region being the thinnest and most vascularised, the larva lodges itself in the subretinal space, from where it perforates and enters the vitreous cavity [6]. In this process, the parasite can cause a retinal detachment or macular hole, or incite an inflammatory response. As the cyst develops, it causes atrophic changes of the overlying retinal pigment epithelium. Sometimes it may cause exudative retinal detachment and focal chorioretinitis. The central retinal artery is the most likely route for cysticercosis involving the optic nerve head. A dying cysticercus cyst can incite a severe inflammatory response, due to leakage of toxin from micro-perforations in the cyst wall. An inflammatory reaction can be present even with a living parasite, and more so with vitreous cysts than subretinal cysts. Complications of intraocular cysticercosis include severe inflammation (vitreous exudates, organised membranes in the vitreous), retinal detachment, complicated cataract, hypotony and phthisis. Involvement of the posterior segment is common [7]. Materials and Methods Type of Study-It was a prospective study comprising 10 patients with extra-ocular cysticercosis who visited the Out-Patient Department of Ophthalmology. Place of Study-It was conducted in the Department of Ophthalmology at Shri Ram Murti Smarak Institute of Medical Sciences, Bareilly, Uttar Pradesh. Sample Collection and Duration of Study-It was a randomised clinical study of ten patients who presented to the Ophthalmology OPD from January 2017 to June 2018 and had cysticercosis involving the extra-ocular muscles only. Ethical committee clearance was obtained. Informed consent was taken from all the participants. Inclusion Criteria-Patients having cysticercosis involving the extra-ocular muscle only. Exclusion Criteria-Patients with neurocysticercosis, patients with intraocular cysticercosis, patients with systemic cysticercosis and patients with conjunctival cysticercosis. Sampling Methods-During the first visit, information recorded for each case included: age, sex, occupation, detailed history regarding symptoms and duration of onset, course of the disease, eye involvement, visual status after the onset of symptoms and at presentation, previous investigations and treatment. Detailed ophthalmic examination was performed on all the patients. Torch light examination and slit lamp bio-microscopy were done for anterior segment evaluation and indirect ophthalmoscopy for the posterior segment. General physical and neurological examination was also conducted. Depending on the location of the cyst, the relevant clinical tests performed were: diplopia charting, ptosis work-up and Hertel's exophthalmometry. 
All patients underwent ultrasonography (USG A and B scan, eye and orbit) and computed tomography (CT) with 2 mm sections of the head and orbit, in axial and coronal cuts. Typically, A-scan ultrasonography shows high amplitude spikes corresponding to the cyst wall and scolex. Results Out of 10 patients, nine had resolved extra-ocular cysticercosis with this medical management given for 4 weeks; with the use of topical cyclosporine, the inflammatory response was better controlled when it was given in addition to the routine medical regimen. For one patient the treatment was prolonged to 6 weeks, after which the cyst resolved. A male preponderance was found in comparison to females, with redness being the most common presenting symptom, followed by lid swelling and pain; the levator palpebrae superioris and medial rectus were the most commonly involved muscles. Male patients were more commonly affected by extra-ocular cysticercosis. Younger individuals, less than 10 years of age, were more commonly affected; poor awareness of hygiene in this age group may be a related reason. Red eye was the most common symptom amongst the majority of patients. The incubation period may vary from months to years. The ocular manifestations can be devastating as the cysticercus increases in size, leading to blindness in 3-5 years. Death of the parasite causes marked release of toxic products, leading to a profound inflammatory reaction and destruction of the eye. Appropriate sanitation and personal hygiene are important in the control of fecal contamination of water and food. Humans become infected when they ingest raw or undercooked pork that contains viable cysticerci. The cysticercus larva is semi-transparent, opalescent white and elongate, oval in shape, and may reach a length of 0.6 to 1.8 cm. Human cysticercosis occurs when T. solium eggs are ingested via faecal-oral transmission from a tapeworm-infected host. The human then becomes an accidental intermediate host. These oncospheres (primary larvae) penetrate the intestinal mucosa and enter the circulatory system. Haematogenous spread to neural, muscular, and ocular tissues occurs. Within these tissues, the oncospheres develop into secondary larvae (cysticerci). The host inflammatory response to cysticerci depends on the parasite's ability to evade host immunity [11]. Ocular involvement is usually unilateral, but bilateral involvement may occur in cases of disseminated cysticercosis [12]. The left eye is more commonly involved in comparison to the right, possibly because the larva may be preferentially routed to the left internal carotid artery, which directly originates from the aorta; however, this has not been substantially proven [13]. The medial side of the eye has been more commonly involved than the lateral side on account of the anatomic course of the ophthalmic artery, which after giving off the lacrimal branch runs on the medial side of the orbit before dividing into the terminal branches. Infestation of the ocular adnexa is probably through the anterior ciliary artery. The parasite reaches the posterior segment through the posterior ciliary artery [14]. Extraocular myocysticercosis: cysticercosis of an extraocular muscle usually presents as recurrent pain, redness, proptosis, ocular motility restriction, diplopia, and ptosis. One or more extraocular muscles may be simultaneously involved, although a propensity for involvement of the superior muscle complex and the lateral rectus muscle has been reported [15]. T. 
solium releases three to six proglottids/day, bearing 30,000 to 70,000 eggs per proglottid into the intestine. The adult worm may live in the small intestine for as long as 25 years without symptoms (taeniasis) and pass gravid proglottids intermittently with the faeces [16]. The cysticercus larvae is semitransparent, opalescent white, and elongate oval in shape and may reach a length of 0.6 to 1.8 cm.4 Human cysticercosis occurs when T. solium eggs are ingested via faecal-oral transmission from a tapeworm infected host. The human then becomes an accidental intermediate host. These oncospheres (primary larvae) penetrate the intestinal mucosa and enter the circulatory system. The cysts are usually multiple and may be deposited in any tissue, the eye, orbit, and nervous system being most frequently affected. The embryo forms a globular translucent cyst which causes a foreign body reaction, and if it is ruptured a suppurative inflammation occurs that may destroy the eye. Original Research Article Tropical Journal of Ophthalmology and Otolaryngology Available online at: www.medresearch.in 89|P a g e Diagnosis- The diagnosis of myocysticercosis is based on clinical, serologic, and radiological findings. The clinical findings may occasionally be non-specific and hence, non diagnostic. Serological tests used for the specific diagnosis of cysticercosis are indirect hemagglutination, indirect immuno-fluorescence, and immuno-electrophoresis such as ELISA specific serology. The serology in myocysticercosis may show false positive reports. Thus, imaging studies are the most helpful in establishing the diagnosis of cysticercosis. High resolution Ultrasonography (USG), computed tomography (NCCT) and Magnetic Resonance Imaging (MRI) help in detection of the orbital cyst. Diagnosis of infection with adult T. solium is made by stool examination and finding the eggs of proglottids of the worm. Though stool examination for the adult worm may be performed in cases of suspected myocysticercosis infections, it is not essential that all patients with myocysticercosis have the adult worm in their intestines except in those cases, which are acquired by auto-infection. B-scan ocular ultrasonography reveals a well-defined cystic lesion with clear contents and a hyperechoic area suggestive of a scolex [17]. Typically, A-scan USG shows high amplitude spikes corresponding to the cyst wall and scolex. The scolex shows a high amplitude spike due to presence of calcareous corpuscles [18]. Ocular ultrasonography is a useful tool for diagnosis and monitoring of the cyst during treatment. CT scanning of the orbits is a reliable technique to help establish a diagnosis of ocular cysticercosis. The characteristic feature is a hypodense mass with a central hyperdensity suggestive of the scolex. Usually, a solitary cyst with wall enhancement is observed. Adjacent soft-tissue inflammation may be present. The scolex may not be visible if the cyst is dead or ruptured and has surrounding inflammation. Concurrent neurocysticercosis may be present and should be excluded. MRI reveals a hypointense cystic lesion and hyperintense scolex within the extraocular muscle. Treatment-Albendazole is the larvicidal drug used in the treatment of cysticercosis in human. Once the diagnosis of orbital cysticerocosis is confirmed, it is of utmost importance to rule out intra-ocular and central nervous system involvement. Albendazole is a well tolerated broad spectrum cysticidal drug and destroys approximately 85% of cysts with a single course. 
Dying cysticercus releases its toxin and incites severe inflammatory reaction leading to vitritis and may lead to blindness. Hence it is mandatory to check for intraocular involvement of cysticercus cyst. Cure rates range from 60 to 85% in the usual dosing with most series showing albendazole 70-90% yielding slightly higher cure rates. Albendazole is converted to its active metabolite, albendazole sulphoxide, in the liver. It is usually given at 15mg/kg/day with a maximum of 400 mg/bid with repeated dosings as clinically warranted. Absorption of albendazole is increased with fatty foods. Treatment may increase inflammation as the cysts involutes, leading to worsening clinical states. Thus, concomitant administration of corticosteroids is recommended to avert an inflammatory response that usually occurs 2-5 days after initiation of therapy. Eye drop cyclosporine in the patient at the time of inflammatory reaction because of the dying cysticercus was also used. Orbital cysts are best treated conservatively with a 4 week regimen of oral albendazole (15mg/kg/day) in conjunction with oral steroids 1.5mg/kg/day in a tapering dose over a 1 month period. In our study, 10 patients with extra-ocular muscles with cysticercosis were included, majority of which were below 10 years and the most common symptoms was redness followed by pain and lid swelling. To patients were also having diplopia and extra-ocular muscle movement restriction. Steinmetz et al.1989 suggested that anti-helminthic drugs like albendazole or praziquantel reduce the number of cysts and the frequency of seizures in neurocysticercosis. [19] However early excision of intra-ocular cysticercosis is the treatment of choice as been quoted by Gemolotto et al in 1955 [20]. Natarajan et al.1999 quoted that if there is co-infection with intra-ocular and intra-cranial cysticercus, the complete intra-ocular cyst must be removed completely by surgery by first , followed by cysticidal drugs and corticosteroids. Anti-helminthic therapy is contraindicated in ocular cysticercosis because lysis and degeneration of intra-ocular cyst may induce intraocular inflammatory response and result in visual loss [21]. The vitreous hemorrhage, a well-known complication of surgery during cyst excision was quoted by Sharma et al.2003 [22]. Cysticercosis can be prevented through practicing good hygiene measures, such as washing hands frequently, washing raw vegetables and fruits well before consumption to prevent faecal-oral transmission and Tropical Journal of Ophthalmology and Otolaryngology Available online at: www.medresearch.in 90|P a g e avoiding consumption of raw or undercooked pork and other meat [23]. Conclusion Extra-ocular cysticercosis can be managed with medical treatment provided. Dilated fundus with B/E USG Bscan is mandatory to rule out any intra-ocular cysticercosis to avoid any vision threatening complication of medical management. Depending upon the response, medical management can be given for 4-6 weeks in addition to topical cyclosporine being added to control the inflammatory response of the dying cysticercus in few patients. Medical management also avoids unnecessary surgical damage to the extra-ocular muscle, severe reaction due to cyst rupture at the time of surgery, long postoperative discomfort and post surgical topical medication. In our study, topical cyclosporin had given wonderful anti-inflammatory response when added to topical steroids. 
Thus, medical management of extra-ocular cysticercosis can be a better option than surgical intervention. Medical Management of Extra-ocular Cysticercosis-A Preferable Alternative: medical management plays an important role in curing patients of extra-ocular cysticercosis, and the addition of topical cyclosporine offers a better way to control the inflammatory reaction, giving it an advantage over prolonged use of steroids.
2020-04-23T09:08:11.367Z
2018-12-31T00:00:00.000
{ "year": 2018, "sha1": "48687d6e332b7ab19579c2b62b332bba992a8dd7", "oa_license": "CCBY", "oa_url": "https://opthalmology.medresearch.in/index.php/jooo/article/download/28/47", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "3b9ae36bcbc59651f15ca9d56a1024540cb1acea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
235608525
pes2o/s2orc
v3-fos-license
A Machine Learning Approach to Differentiate Two Specific Breast Cancer Subtypes Using Androgen Receptor Pathway Genes Triple-negative breast cancer is a heterogeneous disease with different molecular and histological subtypes. The Androgen receptor is expressed in a portion of triple-negative breast cancer cases and the activation of the androgen receptor pathway is thought to be a molecular subtyping signature as well as a therapeutic target for triple-negative breast cancer. Thus, identification of the androgen receptor pathway status is important for both molecular characterization andclinical management. In this study, we investigate the expression of the androgen receptor pathway in metaplastic breast cancer and luminal androgen receptor subtypes of triple-negative breast cancer and found that the androgen receptor pathway was downregulated in metaplastic breast cancer compared to luminal androgen receptor subtype. Using random forest, we found that the two subtypes of breast cancer can be molecularly classified with the gene expression of the androgen receptor pathway. Introduction Breast cancer is a heterogeneous disease with different molecular features and prognoses. Among them, triple-negative breast cancer (TNBC) which is defined by the lack of expression in estrogen receptor (ER), progesterone receptor (PR) and human epidermal receptor 2 (HER2) by immunohistochemical staining has the most limited therapy choice and worst clinical outcome. TNBC can be further classified into subtypes according to histological morphology as well as molecular features. The histological subtypes of TNBC are composed of the commonest invasive ductal carcinoma of no special type (IDC-NST) and other special subtypes including metaplastic breast cancer, adenoid cystic carcinoma, medullary carcinoma and secretory carcinoma. Studies have shown that TNBC of special types as a single group has a worse prognosis than TNBC-NST, 1 indicating the prognostic value of histological subtyping. Metaplastic breast cancer (MBC) was a special subtype of breast cancer accounting for less than 1% of all invasive breast cancer, characterized by the presence of metaplastic components in cancer tissue which is most commonly squamous carcinoma, followed by chondroid and sarcoma components. Most MBC were triple-negative, 2 and study has shown that MBC has a worse prognosis in all clinical stages after treatment compared to other TNBC. 3 Due to the limited cases of MBC, our understanding of their molecular characteristics remains largely unrevealed. Molecularly, TNBC can also be classified into various subtypes by different algorithms using gene expression data. [4][5][6][7] Though all of the currently applied subtyping algorithms could distinguish a consistent molecular subtype in TNBC which was the luminal androgen receptor (LAR) subtype. LAR accounted for 15%-20% of all TNBC and was characterized by the high expression of the AR gene and enrichment in hormonally regulated pathways. LAR subtype had a relatively low proliferation rate, decreased relapse-free survival and similar distant metastasis-free survival compared with other subtypes 4,5 and can potentially benefit from anti-AR molecule enzalutamide. 8 Since immunohistochemical stain for AR in TNBC showed that 38%-55% of TNBC has positive AR expression, 8,9 using AR as a surrogate marker of LAR subtype would reveal low specificity. 
Recent studies reported the percentage of AR-positive expression cases in MBC to be 0%, 10 8.7% 11 and 11% 12 respectively which was significantly lower than that in TNBC-NST, indicating the lack of luminal differentiation in MBC. Genomic mutation characterization of MBC revealed that it harbored a mutation rate of 57% in PI3K/AKT/mTOR pathway 13 which was much higher than the 4% in AR-negative TNBC but closer to 40% in AR-positive TNBC. 14 Thus, whether the low expression of AR in MBC also indicated the downregulation of the AR pathway and the exact molecular difference between the MBC and LAR group remains unknown. In this study, we analyzed and compared the expression of AR pathway genes in MBC and LAR using data from TCGA. A machine learning approach was used to differentiate MBC and LAR with AR pathway genes. Clinicopathological Characteristics of the Studied Cohort A total of 38 cases of LAR and 14 cases of MBC were selected in the TCGA database. The clinicopathological characteristics including age at diagnosis, ethnicity, tumor stage, tumor size and lymph node status were analyzed with no significant difference detected between the two groups ( Table 1). Androgen Receptor Pathway Genes Were Differentially Expressed in MBC and LAR A total of 166 genes were identified as the representative genes in the androgen receptor pathway using the Pathway Commons database (Version 12). 15 In addition, recent research has identified another hormonal receptor gene G-protein coupled estrogen receptor (GPER), which was encoded by GPER1. GPER can be activated by hormonal estradiol. Unlike ERalpha and ERbeta which are mostly known to be nuclear receptors, GPER has a seven-transmembrane domain and many studies have confirmed its membrane localization. It was found to be expressed strongly in triple-negative breast cancer and patients younger than 49-years-old. 16 The expression of GPER has reversely correlated with the expression of androgen receptor in TNBC and at the molecular level AR has a repressed regulation on GPER by binding to the promoter of AR genomic region. 17,18 Thus, GPER1 was also included in our analysis as one of the AR pathway genes. The mRNA expression of genes in the AR pathway was analyzed and compared in the 2 groups. Differentially expressed genes were identified and summarized in Table 2. In total, 32 out of the 167 genes have been found to be differentially expressed between MBC and LAR, including RUNX2, AR and GPER1 ( Figure 1). The top 5 genes with the highest significance were RUNX2, SPDEF, FOXA1, DDC and AR. Except for DDC which was a metabolic enzyme, the other 4 genes were all transcription factors that have previously been shown to act intimately with one another. 19,20 Among them, RUNX2 was the only upregulated gene in MBC and it was reported to inhibit the effect of AR as a transcription factor by promoting the dissociation of AR from the targeted genes. 21 The SPDEF was downstream of AR, whose expression was induced by AR. 22 FOXA1 was the pioneer gene in the AR pathway and acted by loosening the AR-binding DNA region to facilitate the binding of AR. 23 Classification of MBC and LAR Using Random Forest The above results suggested that MBC and LAR were differently regulated in the AR pathway. Next, we try to directly differentiate the two groups using gene expression data of the AR pathway. Whereas, using the expression data of a single gene was unable to classify the two groups at 100% efficacy as shown in Figure 1. 
The machine learning approach was reported to be able to achieve good predictive performance for sample classification using gene expression data. 24 Thus, we further tried to look at the effect of androgen receptor pathway genes on classifying the MBC and LAR groups via the random forest algorithm. Random forest is an algorithm for classification developed in 2001 that uses an ensemble of classification trees 25 and it was widely used in the classification using microarray data. In this task, the expression of the 167 AR pathway genes was used as continuous variables to classify the sample as either MBC or LAR (Figure 2A). The prediction accuracy using the random forest algorithm was 100% ( Table 3). Genes that contributed to the classification most were listed in Figure 2B and C. The contribution was measured by Mean Decrease Accuracy or Mean Decrease Gini. RUNX2, FKBP4 and UXT were ranked as the top 3 genes by both Mean Decrease Accuracy or Mean Decrease Gini. Interestingly, the UXT gene was not listed in the DEGs between MBC and LAR, Model visualization was performed by displaying decision tree with the most and least nodes ( Figure 3). In the simplest decision tree generated by the random forest algorithm which has three nodes, RUNX2 which has the most significant differential expression between MBC and LAR was used as the root node and no other internal node was used. In the model construction, a 5-fold cross-validation was also performed for 100 times to avoid overfitting. Average crossvalidation error and standard deviation were plotted in Figure 4. It was found that when the number of variables was in the range of 5 to 21, the error of cross-validation reached the minimum value. Discussion AR was expressed in a proportion of TNBC and the activation of AR was thought to be a signature for the LAR subtype of TNBC which can be used as a therapeutic target. Thus, identification of the AR pathway status in TNBC cases was important for both molecular characterization and clinical management. In this study, we showed that the AR pathway was differently regulated in MBC and LAR of TNBC. Moreover, through the random forest, the 2 groups of TNBC can be classified using the expression of AR pathway genes with an accuracy rate of 100%. Although currently, MBC shared the same therapeutic choice with TNBC-NST, The obvious downregulation of the AR pathway in MBC compared to LAR may contribute to its histologic differentiation and aggressive behavior. Also, our research suggests that another hormonal receptor GPER was upregulated in MBC compared with LAR, possibly due to the suppression of the AR pathway. Meanwhile, it also indicated that MBC can possibly be activated by estrogen even though it lacks the expression of ER, PR and AR. Recent studies revealed that MBC has more tumor-infiltrating a The columns of the table are the gene name, the gene id, the estimated contrast, the expression mean over both groups, contrast t-value, contrast P-value and the estimated log-odds probability ratio (B) that the gene is differentially expressed. lymphocytes and showed higher PD-L1 expression in both tumor cells and stromal lymphocytes. 26 Thus, whether MBC has similarity with the immunomodulatory subtype still need to be elucidated. The more sophisticated classification of TNBC would enable us to have a better understanding of its molecular mechanism and promote the development of precision medicine. This study was limited by the small sample size used due to the rarity of MBC. 
Moreover, MBC was considered as a single group in our study although the included MBC cases had different metaplastic components. TCGA Data Acquisition and Cohort Selection TCGA RNA sequencing level 3 normalized data were downloaded from the TCGA Data Portal and imported into R (Version 4.0.3) using the TCGAbiolinks (Version 2.16.4) functions GDCquery, GDCdownload and GDCprepare for further analysis. 27 Among cases having immunostaining data of ER, PR and HER2, 122 TNBC cases were selected; among them, there were 14 cases of MBC. Samples molecularly classified as LAR were identified in a previous article using the Lehmann classifier and were used in this study. 28 In total, there are 38 cases of the LAR subtype of TNBC. Analysis of Differentially Expressed Genes The gene list selected in the analysis of the AR pathway was searched in the Pathway Commons database. The Fragments Per Kilobase of transcript per Million mapped reads Upper Quartile (FPKM-UQ) RNA-seq data were log2-transformed before further processing. The FPKM-UQ normalization was implemented at the GDC on gene-level read counts produced by HTSeq and is based on a modified version of the FPKM normalization method. 29 The log2-transformed FPKM-UQ data were analyzed using limma. 30 Random Forest Analysis The log2-transformed FPKM-UQ data of DEGs in the MBC and LAR samples were imported into the randomForest function of the randomForest package (Version 4.6-14). 31 The randomForest function implements Breiman's random forest algorithm for classification; the algorithm yields an ensemble that can achieve both low bias and low variance and effectively avoids overfitting. The MDSplot function was implemented for the multi-dimensional scaling plot of the proximity matrix from randomForest. The number of trees (ntree) was set to 500 by default. Each tree was grown independently, and the final prediction was yielded by the mean value. 70% of the dataset was taken for training and the rest for testing by default. Authors' Note Taobo Hu and Guiyang Zhao contributed equally to this article. Our study did not require ethical board approval because all data used in the manuscript were publicly accessible and were downloaded from a public database. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was supported by the National Natural Science Foundation of China. Figure 1. AR pathway genes were differentially expressed in MBC and LAR. AR was highly expressed in the LAR group while its expression in MBC was low (left panel). The membrane-bound estrogen receptor GPER1 showed a higher expression in MBC than in LAR (middle panel). As the gene with the most significant expression difference, RUNX2 was upregulated in MBC and downregulated in LAR (right panel). Figure 2. Classifying MBC and LAR using the random forest algorithm. Clustering of MBC samples (blue) and LAR samples (red) using 167 AR pathway genes (A). Genes that contributed most to the classification are listed using 2 different parameters (B and C). Figure 3. Visualization of 2 representative trees, with the maximum and minimum numbers of nodes, generated by random forest. The tree with the maximum number of nodes used the SPDEF gene expression value as the root node and the expression of 9 other genes as internal nodes, making the total number of nodes 21. Each root and internal node performed a two-class split determined by the expression value of the specific gene in that node; the cutoff value for the binary split in each node was calculated automatically (A). The tree with the minimum number of nodes used the expression of the RUNX2 gene as the single root and internal node, generating 2 leaf nodes.
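For illustration only, the random-forest workflow described under "Random Forest Analysis" above (which was carried out with the R randomForest package) could be approximated in Python/scikit-learn as sketched below; the input file and its column layout are hypothetical placeholders, not part of the original study.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical input: rows = samples, columns = log2(FPKM-UQ) values of the 167 AR pathway
# genes, plus a 'subtype' column labelling each sample as 'MBC' or 'LAR'.
expr = pd.read_csv("ar_pathway_log2_fpkm_uq.csv", index_col=0)
X = expr.drop(columns="subtype")
y = expr["subtype"]

# 70/30 train/test split and 500 trees, mirroring the settings described in the text.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("test-set accuracy:", rf.score(X_test, y_test))

# Repeated 5-fold cross-validation to gauge the variability of the classification error.
cv_errors = [1.0 - cross_val_score(RandomForestClassifier(n_estimators=500, random_state=i),
                                   X, y, cv=5).mean() for i in range(10)]
print("mean CV error: %.3f +/- %.3f" % (np.mean(cv_errors), np.std(cv_errors)))

# Gini-based variable importance (scikit-learn's analogue of MeanDecreaseGini).
importance = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head(10))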
2021-06-24T06:16:43.713Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "97b5a4319662d2529fabd5b2bab6e484b65a899f", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/15330338211027900", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd0fc73cd90f1338ede7c43e80764795d71663c1", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
55228542
pes2o/s2orc
v3-fos-license
Nonparametric Model for Business Performance Evaluation in Forestry Determination of efficiency has become increasingly important in many areas of human activity. Approach to this problem is particularly interesting when there are no clear success parameters, and when the efficiency of using several different resources/inputs is measured for achieving several different outputs. In such measurements, we are always interested in determining the degree of efficiency of individual organisations, institutions, associations, etc. in relation to others acting under similar conditions. In doing so, the compared objects are presented through data on used resources/inputs and data on achieved outputs. Introduction Determination of efficiency has become increasingly important in many areas of human activity. Approach to this problem is particularly interesting when there are no clear success parameters, and when the efficiency of using several different resources/inputs is measured for achieving several different outputs. In such measurements, we are always interested in determining the degree of efficiency of individual organisations, institutions, associations, etc. in relation to others acting under similar conditions. In doing so, the compared objects are presented through data on used resources/inputs and data on achieved outputs. In forestry, the determination of efficiency of forestry companies is extremely complex because of multiple goals of forest management. The principle of sustainable development represents the management and use of forests and forest land in the way to preserve their biological diversity, productivity, regeneration capability, vitality and potential in order to enable forests to fulfil now and in future their key economic, ecological and social functions. The above stated makes the conditions of forest management increasingly demanding and imposes the necessity of continuous analyses of all relevant business performance indicators. In the last few decades, forest management has been focused on multifunctional use and general benefits of forests. Due to multiple benefits and advantages offered by forests, as well as the non-market nature of a part of these outputs, the measurement of performance in forestry is highly demanding. In such conditions, it is pretty difficult to apply conventional economic methods, such as cost-benefit analysis, internal rate of return and others for determining business success. The right evaluation method must be selected in order to determine whether the resources are used efficiently. Taking into consideration multiple inputs and multiple outputs of forest management, in this paper Data Envelopment Analysis (DEA) was applied for determining the performance level of forest management units. DEA represents a methodology suitable for the efficiency analysis of numerous production units, but is not traditionally used in forestry. Although it was first applied in the forestry sector in 1986 [1], the number of papers based on measuring the performance by non-parametric techniques, such as DEA, is still very limited in forestry literature. The basic idea is to determine the performance through the efficiency level of individual DMUs 1 based on the relationship between a complex input and a complex output. Data Envelopment Analysis, as the technique for measuring productivity and efficiency, is widely applied in many areas. It was used, for example, for making comparisons between organisations [2], companies [3], regions and countries [4]. 
For determining business performance it was applied in banking [5], agriculture [6], wood industry [7], schooling [8], etc. In DEA bibliography [9] there are approximately 3,200 published DEA papers. However, in the area of management of renewable natural resources, it is still not sufficiently present. In forestry literature there is only a limited number of DEA papers [10 -13], and it yet has to be introduced and accepted in forestry as a management tool at a strategic and operating level of decision making. So, this paper assesses the efficiency of basic organizational units in the Croatian forestry, forest offices, by applying Data Envelopment Analysis (DEA), a nonparametric methodology for measuring relative efficiency of comparable decision making units with more inputs and outputs. The relative efficiency of compared forest offices is calculated in the paper with the most frequently used DEA models -CCR and BCC model. According to the aquired data, conducted calculations and analysis, the results of global technical efficiency (obtained by CCR model), local pure technical efficiency (obtained by BCC model) and scale efficiency were determined. The results also included the calculation of efficiency frontier, frequency of efficient units in reference set of inefficient units, determination of sources and values of inefficiencies, influence of the forest offices' structural characteristics on their efficiency and the average efficiency of forest offices grouped with respect to the forest administrations and regions they belong to. The research reveals DEA as a powerful multi criteria decision making tool and a possible, very valuable support in forest management. Efficiency and the possibility to measure relative efficiency In the business analysis some indicators are calculated which represent the basis for evaluation and comparison of business performance (indicators of liquidity, profitability, cost-effectiveness, etc.). However, these indicators in the calculations take into account only some of the accounting issues, and so represent partial performance indicators. At the same time, multicriteria analysis of these partial indicators can't identify the best-performing unit, because it is unlikely that one of the units has all the observed simple indicators the best i.e. better than the other compared units. If we want to calculate an indicator of business performance which will reflect efficiency of the organizational unit we take into consideration the ratio of output and input. If we want to calculate a measure of efficiency that will consider more inputs and more outputs, it is necessary to make a selection of inputs and outputs that will be taken into the calculation, and it is necessary to join a certain weight to inputs and outputs in order to define a single measure of efficiency for each organizational unit. Absolute measure of efficiency can be determined when we have explicitly defined relationship between inputs and outputs, or when we know the association that for every combination of inputs joins a specific set of possible outputs. If this relationship is known then, from the relation between really achieved and theoretically achievable outputs of each individual unit, it is possible to determine their absolute efficiency. 
The concept of relative efficiency is used when it is not possible to define theoretically possible level of efficiency, and so the certain units are compared with those units whose business performance, given the state of manufacturing technology, is the best. DEA methodology does not require the pre-determined weight of inputs and outputs, and does not require any knowledge of the explicit links between inputs and outputs. Based on the known empirical data about the level of inputs and outputs for each unit DEA calculates its relative efficiency compared to other units. Observed unit reaches 100% relative efficiency (rating 1) if and only if compared with the other units it doesn't show inefficiency in the use of any inputs or outputs. Specifically, for a unit is said to be relatively efficient if: 1. it can not increase any of its outputs withouta. an increase of its inputs, or, b. reducing some of its remaining outputs 2. it can not reduce any of its inputs withouta. reducing some of its outputs, or, b. increasing some of its remaining inputs. Generally about Data Envelopment Analysis The story about DEA begins with the doctoral dissertation of Edward Rhodes, who has tried to evaluate the curriculum of public schools in Texas in the United States. At that time, it was a challenge to assess the relative efficiency of schools with multiple inputs and outputs, and without the usual information on prices and costs. As a result, the formulation of CCR model 2 was developed and the first DEA paper was published in the European Journal of Operational Research in year 1978 [14]. DEA was originally developed as a tool to measure the effectiveness of organizations working on non-profit basis (public schools and hospitals, military establishments), where it is not Such a number of articles shows the great importance and interest for the DEA methodology and its applications. The reasons for the rapid growth probably lie in the fact that DEA is an interdisciplinary applicable methodology, which is also suitable in cases where other approaches do not provide satisfactory results because of complex or unfamiliar nature of relationship between multiple inputs and outputs. So, in recent years, data envelopment analysis has become a central technique in the analysis of productivity and efficiency. Mathematical and statistical basics of DEA Data envelopment analysisis is a deterministic, non-parametric methodology for determining the relative efficiency of comparable decision making units considering their similar work technology and performance of similar tasks. Decision making units (DMUs) represent any production or non-production units that have the same types of inputs and outputs, and they differentiate one from another according to the level of available resources and the level of activity within the transformation process (inputs to outputs). DEA determines the relative efficiency of analysed units by constructing the empirical efficiency frontier i.e. frontier or margin of production possibilities (this term is used although the analysis may consider unproductive sectors) based on the information about used inputs and achieved outputs of all units included in the analysis. The most successful units (best practice units), the ones that determine the efficiency frontier, gain the grade '1', and the degree of technical inefficiency of all other units is calculated based on the distance of their input-output ratio in relation to the efficiency frontier ( Figure 1). 
For each unit included in the analysis a particular problem of linear programming is solved and its maximum efficiency, regarding other units in the reference set, is determined. The relative efficiency of the unit is calculated as the ratio of weighted sum of outputs and weighted sum of inputs. Weight of outputs and inputs for each unit is determined so to make its measure of efficiency maximum possible, with the limitation that the result of the relative efficiency can not be over one ('1'). A model defined in such a way maximizes the result of the relative efficiency of each unit provided that the gained set of weights must be feasible and attainable for any other unit in the observed group. This means that DEA defines the best possible achievable efficiency frontier i.e. production possibilities, and sets the maximum output for each unit at a given level of its inputs. DEA is based on the extreme values and each DMU is compared only with the best units. The basic assumption is that if a particular unit with X inputs (resources) can produce Y outputs (products), the other units should be able to do the same if they are working efficiently. The center of the analysis lies in finding the 'best' virtual production unit for each actual/real unit. If the virtual unit is better than the original, whether it achieves more outputs with the same inputs or it achieves the same outputs with fewer inputs, then the real unit is inefficient. In the next section a simple example will be given to explain the theoretical basis on which the Data envelopment analysis stands. First, numerical example of mathematical assumptions and procedures necessary in different DEA models will be presented. And then the graphical representation of the same example will describe the concept of Data envelopment analysis. Simple numerical example A simple numerical example might help show what is going on. Assume that there are three baseball players (DMUs), A, B, and C, with the following batting statistics. Player A is a good contact hitter, player C is a long ball hitter and player B is somewhere in between. Now, as a DEA analyst, we combine parts of different players. First let us analyze player A. Clearly no combination of players B and C can produce 40 singles with the constraint of only 100 at-bats. Therefore player A is efficient at hitting singles and receives an efficiency of 1.0. Now we move on to analyze player B. Suppose we try a 50-50 mixture of players A and C. This means that lambda=(0.5, 0.5). The virtual output vector is now, lambda Y = (0.5 * 40 + 0.5 * 10, 0.5 * 0 + 0.5 * 20) = (25, 10) Note that X = 100 = X(0) where X(0) is the input(s) for the DMU being analyzed. Since lambdaY > Y(0) = (20, 5), then there is room to scale down the inputs, X and produce a virtual output vector at least equal to or greater than the original output. This scaling down factor would allow us to put an upper bound on the efficiency of that player's efficiency. The 50-50 ratio of A and C may not necessarily be the optimal virtual producer. The efficiency, theta, can then be found by solving the corresponding linear program. It can be seen by inspection that player C is efficient because no combination of players A and B can produce his total of 20 home runs in only 100 at bats. Player C is fulfilling the role of hitting home runs more efficiently than any other player just as player A is hitting singles more efficiently than anyone else. Player C is probably taking a big swing while player A is slapping out singles. 
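As an illustration (not part of the original source [16]), the input-oriented envelopment LP for player B can be solved directly, for example with SciPy, and yields an efficiency of about 0.69, consistent with the value read off the figure in the graphical example that follows.

import numpy as np
from scipy.optimize import linprog

# Data from the example: each player has 100 at-bats (the single input);
# outputs are (singles, home runs).
X = np.array([100.0, 100.0, 100.0])            # inputs of players A, B, C
Y = np.array([[40.0, 0.0],                     # player A
              [20.0, 5.0],                     # player B
              [10.0, 20.0]])                   # player C

def ccr_input_efficiency(k):
    # Envelopment form for unit k:  min theta
    #   s.t.  sum_j lambda_j * x_j <= theta * x_k,   sum_j lambda_j * y_j >= y_k,   lambda >= 0
    n, m = Y.shape
    c = np.r_[1.0, np.zeros(n)]                         # variables: [theta, lambda_1..lambda_n]
    A_ub = np.vstack([np.r_[-X[k], X],                  # input constraint
                      np.c_[np.zeros(m), -Y.T]])        # one row per output
    b_ub = np.r_[0.0, -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] + [(0, None)] * n)
    return res.fun, res.x[1:]

theta, lam = ccr_input_efficiency(1)                    # evaluate player B
print(f"efficiency of player B = {theta:.4f}, lambdas (A, B, C) = {np.round(lam, 4)}")
# -> efficiency ~ 0.6875, with lambda_A ~ 0.44 and lambda_C = 0.25 forming the virtual player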
Player B would have been more productive if he had spent half his time swinging for the fences like player C and half his time slapping out singles like player A. Since player B was not that productive, he must not be as skilled as either player A or player C and his efficiency score would be below 1.0 to reflect this. This example can be made more complicated by looking at unequal values of inputs instead of the constant 100 at-bats, by making it a multiple input problem, or by adding more data points but the basic principles still hold. (Source: [16]). Graphical example If it is assumed that convex combinations of players are allowed, then the line segment connecting players A and C shows the possibilities of virtual outputs that can be formed from these two players. Similar segments can be drawn between A and B along with B and C. Since the segment AC lies beyond the segments AB and BC, this means that a convex combination of A and C will create the most outputs for a given set of inputs. This line is called the efficiency frontier. The efficiency frontier defines the maximum combinations of outputs that can be produced for a given set of inputs. The segment connecting point C to the HR axis is drawn because of disposability of output. It is assumed that if player C can hit 20 home runs and 10 singles, he could also hit 20 home runs without any singles. We have no knowledge though of whether avoiding singles altogether would allow him to raise his home run total so we must assume that it remains constant. Since player B lies below the efficiency frontier, he is inefficient. His efficiency can be determined by comparing him to a virtual player formed from player A and player C. The virtual player, called V, is approximately 64% of player C and 36% of player A. (This can be determined by an application of the lever law. Pull out a ruler and measure the lengths of AV, CV, and AC. The percentage of player C is then AV/AC and the percentage of player A is CV/AC.) The efficiency of player B is then calculated by finding the fraction of inputs that player V would need to produce as many outputs as player B. This is easily calculated by looking at the line from the origin, O, to V. The efficiency of player B is OB/OV which is approximately 68%. This figure also shows that players A and C are efficient since they lie on the efficiency frontier. In other words, any virtual player formed for analyzing players A and C will lie on players A and C respectively. Therefore since the efficiency is calculated as the ratio of OA/OV or OC/OV, players A and C will have efficiency scores equal to 1.0. The graphical method is useful in this simple two dimensional example but gets much harder in higher dimensions. The normal method of evaluating the efficiency of player B is by using an LP formulation of DEA (Source: [16]). To conclude this section, DEA models are linear programming methods that calculate the efficiency frontier of a set of DMUs and evaluate the relative efficiency of each unit, therby allowing a distinction to be made between efficient and inefficient DMUs. Those identified as "best practice units" (i.e., those determining the frontier) are given a rating of one, whereas the degree of inefficiency of the rest is calculated on the basis of the Euclidian distance of their input-output ratio from the frontier [17]. Compared to regression or stochastic frontier analysis methods, DEA shows several advantages. 
First, DEA allows handling multiple inputs and outputs (with different units) in a noncomplex way. Second, DEA does not require any initial assumption about a specific functional form linking inputs and outputs. While a typical statistical approach (regression analysis) is based on average values, DEA is an extreme point method and compares each producer with only the «best» producers. Efficiency is determined relatively with respect to other production units in the observed group. DEA approach in evaluation of forestry units' performance Since DEA was introduced by Charnes, Cooper and Rhodes [14] several analytical models have been developed depending on the assumptions underlying the approach. For instance, the orientation of the analysis toward inputs or outputs, the existance of constant or variable (increasing or decreasing) returns to scale and the possibility of controlling inputs. According to Farrell [18], technical efficiency represents the ability of a DMU to produce maximum output given a set of inputs and technology (output oriented) or, alternatively, to achieve maximum feasible reductions in input quantities while maintaining its current levels of outputs (input oriented). In this study, output oriented DEA seems more appropriate, given it is more reasonable to argue that forest area, growing stock and other inputs should not be decresed. Instead, the goal of forest sector should be increased outputs of forest management, and improved general state of forests. Given the selected orientation and the diversity of units characterizing our example, we first applied CCR model proposed by Charnes et al. [14]. This model assumes constant returns to scale. Following Cooper et al. [19], we begin by the commonly used measure of efficiency (output/input ratio) and we try to find out the correponding weights by using linear programming in order to maximize the ratio. To determine the efficiency of n units Where u 0 is the variable allowing identification of the nature of the returns to scale. This model does not predetermine if the value of this variable is positive (increasing returns) or is negative (decreasing returns). The formulation of the output oriented models can be derived directly from models described in (1) and (2), see [19]. In this study, two measures of efficiency are applied -technical and scale efficiency (SE). Measurement of allocative efficiency requires data on production costs which were not available in our data set. For computing the applied models, DEA Excel Solver software was used. Sample selection and data description State forests in the Republic of Croatia (RC) are mostly managed by the company Croatian forests Ltd -they account for approximately 80% of the total forest-covered area or 1,991,537 ha. The company Croatian forests consists of: headquarters in Zagreb, 16 regional forest administrations (FA) and a total of 169 forest offices (FO). In the current three-layer organisation of the Croatian forestry, forest office is the organisational unit in which the basic tasks of forestry activities are carried out and most income and direct costs of forest management are incurred in. The efficiency analysis of selected forest offices is carried out based on the information adopted from the Croatian forests' ltd yearly reports. Additional applications and more robust data may provide additional insights for the evaluation of forest management. The research includes 48 forest offices. 
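The formulations referred to above as models (1) and (2) were lost in extraction. In the standard notation (a reconstruction, not necessarily the authors' exact wording), the CCR ratio model for unit k with inputs x_{ij} and outputs y_{rj} is

\max_{u,v}\; h_{k} = \frac{\sum_{r=1}^{s} u_{r} y_{rk}}{\sum_{i=1}^{m} v_{i} x_{ik}}
\quad\text{s.t.}\quad
\frac{\sum_{r} u_{r} y_{rj}}{\sum_{i} v_{i} x_{ij}} \le 1,\; j = 1,\dots,n, \qquad u_{r}, v_{i} \ge \varepsilon ,

and, after the Charnes-Cooper linearization, the BCC (variable returns to scale) multiplier form adds the free variable u_{0}:

\max_{u,v,u_{0}}\; \sum_{r} u_{r} y_{rk} + u_{0}
\quad\text{s.t.}\quad
\sum_{i} v_{i} x_{ik} = 1, \qquad
\sum_{r} u_{r} y_{rj} - \sum_{i} v_{i} x_{ij} + u_{0} \le 0,\; j = 1,\dots,n .

The output-oriented variants used in this study follow from these formulations in the standard way described in [19].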
The selected forest offices are representatives of the four main regions in Croatian forestry: lowland flood-prone forests (I), hilly forests of the central part (II), mountainous forests (III) and karst/Mediterranean forests (IV). Each region is represented by two forest administrations, i.e. by six forest offices from each forest administration. The sample of organisational units (Figure 3) and the data involved in this research (yearly values of selected inputs and outputs) are shown in Table 1. Inputs and outputs were selected so as to reflect the business activities of the investigated decision making units - forest offices as the basic organisational units of Croatian forestry, which perform the basic professional and technical operations in forest management (regeneration and silviculture of forests, wood harvesting) in a certain part of the forest economic area of RC, and where most income is achieved and direct costs incurred from the core business activity of forest management. According to the Forest Act, along with conventional production of wood, forest management must also provide additional outputs. They are related to silviculture, protection and use of forests and forest land for construction and maintenance of forest infrastructure, all in accordance with general European criteria for ensuring sustainable forest management. Also, the goal of Croatian forests Ltd and its administrations and offices is business profitability. Most income comes from sold wood, and hence the segment related to maintaining and enhancing the production function of forests (increment of growing stock) becomes increasingly important. Accordingly, the inputs and outputs considered in this example are the four inputs and four outputs listed in Table 1. There are 48 forest offices evaluated in this model. For the basic DEA models, the number of offices (units under consideration) should be a minimum of 3 to 5 times the total number of input and output factors. Thus, we have limited the total number of inputs and outputs to eight factors. Table 2 presents the descriptive statistics of the variables used in the analysis. A wide variation in both inputs and outputs is noticeable. Input use is in some cases twenty times larger than that of other offices, while variation in the output variables is even higher. Such variation in the level of input and output implies that there are big differences between the conditions under which individual forest offices operate. These differences are not unexpected, since the sample involves all representative areas managed by Croatian forests. However, it may also be a sign of poor management of resources in individual forest offices. Technical and scale efficiency Technical and scale efficiency were determined individually for each forest office. Results obtained by the application of the output-oriented DEA are given in Table 3. The average CCR efficiency of the investigated forest offices is 0.829, which means that an average (assumed) forest office should use only 82.9% of the currently used quantity of inputs and produce the same quantity of the currently produced outputs, if it wishes to do business at the efficiency frontier. In other words, this average organisational unit, if it wishes to do business efficiently, should produce 20.6% more output with the same input level. According to the BCC model, the average efficiency is 0.904. This means that an average forest office should use only 90.4% of the current input and produce the same quantity of output, if it wishes to be efficient.
In other words, to be BCC efficient it should produce 10.6% more outputs with the same inputs. In spite of a relatively high mean efficiency (83 or 90%) and regardless of the model used (CCR or BCC), the lowest level of relative efficiency ranges between 0.407 (CCR) and 0.524 (BCC). This implies, firstly, that individual units can reduce the level of used input by up to 59.3% or 47.6% without affecting the output level, and secondly that there are significant differences in production and business activities between the analysed units. According to the CCR model, 15 forest offices are relatively efficient (31%), while a total of 24 units (50%) are rated '1' according to the BCC model. Incompatibility between CCR and BCC efficiency is most conspicuous for forest offices with extremely low values of one or more input variables. According to the model with variable returns (BCC), the efficiency of such units is much higher than according to the model with constant returns (CCR). This may indicate the influence of size or volume of activities of the observed units on the level of their efficiency, but it can also mean that the BCC model with the selected input and output variables cannot make a proper distinction between efficient and inefficient units. Such results may, however, also be useful if additional models of decision making are applied. The results of the DEA analysis may then be used as a first filter of inefficient units. The survey of DEA results is given in Table 4. The interpretation of scale efficiency scores allows for some interesting remarks. Scale efficiency shows how close or far the size of the observed unit is from its optimal size. An efficiency of 100% indicates that the size and volume of activities are well balanced. Values lower than 100% mean that the level of technical efficiency is at least partly under the influence of the size or volume of activities of the observed unit. The scale efficiency of 0.919 means that the analysed forest offices would increase their relative efficiency on average by 8% if they adapted their size or volume of activities to the optimal value. Sixteen units (33%) are relatively efficient in terms of scale. Almost all of them (15) are also efficient according to the CCR model (Table 3). Forest offices that are efficient only according to the BCC model (Table 3) do not show the same efficiency level when scale efficiency is determined. This indicates their inadequate size or inadequate volume of activities expressed by the main parameters of their production and business performance. These are mostly the units with low values of one or more input and output variables - Karst/Mediterranean forest offices with low growing stock, number of employees, annual cut, etc. Sources and values of inefficiency By selecting output-oriented models, the projection course of inefficient units onto the efficiency frontier was determined. By comparing empirical and projected data, it is possible to identify the sources of inefficiency as well as their value. The lower the percentage of projected input values in empirical input values, the higher, on average, is the source of inefficiency caused by this input. The higher the percentage of projected output values in empirical output values, the higher is the source of inefficiency caused by this output. It can be concluded from the above table that the second and third outputs - annual cut and investments - affect the inefficiency of forest offices most seriously.
These are followed by forest regeneration activities and achieved income, with a somewhat lower impact on the inefficiency of forest offices. In the period concerned, the observed units should have produced on average 25.64% more than the produced quantity of output O1, 168.04% more than the produced quantity of the second output O2, 119.45% more than output O3 and 67.61% more than the produced quantity of output O4. Similarly, they should have used 85.48% of the used quantity of the first input I1, 93.47% of the used quantity of the second input I2, 96.60% of the third input I3 and 96.94% of the used quantity of input I4. Then they would be CCR-efficient. For achieving BCC efficiency, it was necessary to produce on average 18.68% more than the produced quantity of the first output O1, 58.94% more than the second output O2, 107.23% more than output O3 and 56.03% more than output O4. With such an average increase of output, the observed forest offices would do business efficiently according to the BCC model. It should be noted that the projected values are achievable, because some forest offices involved in the analysis achieved them successfully. Structural characteristics and efficiency of forest offices Forest offices differ among themselves in a series of structural characteristics, and hence professional and technical operations are carried out in different conditions with respect to surface area, number of employees, means of work, growing stock, etc. Differences between the basic structural characteristics of the analysed forest offices are shown in Tables 1 and 2. Based on the efficiency results of forest offices grouped according to the values of their basic structural characteristics - surface area, growing stock and number of employees - it has been determined to what extent the given environment affects the efficiency of specific units. The average efficiency with respect to surface area was determined as the arithmetic mean of the efficiency of forest offices that belong to a certain surface area class (Figure 4). The highest levels of efficiency according to all three models were recorded for forest offices that manage a surface area ranging between 10 and 15,000 hectares (the average efficiency is 0.969 according to the CCR model, 0.977 according to the BCC model and 0.991 according to the SE model). The lowest levels of efficiency were determined for the group of forest offices with a surface area from 5 to 10,000 hectares. The volume of the managed growing stock was taken as the second criterion for grouping the analysed units. Forest offices are divided into classes with respect to the growing stock expressed in m³ per hectare, and the average efficiency of individual classes is presented in Figure 5. Forest offices that manage the lowest growing stock volume (less than 100 m³/ha) also have the lowest average relative efficiency according to the CCR and SE models (0.676 and 0.689, respectively). According to these models, the highest levels of efficiency are recorded for forest offices with growing stock between 200 and 300 m³/ha and over 300 m³/ha: 0.890 (CCR) and 0.984 (SE) for group III (200-300 m³/ha), and 0.824 (CCR) and 0.980 (SE) for group IV (> 300 m³/ha). Only one forest office manages growing stock exceeding 400 m³/ha; it was not separated into a special class but was included in group IV.
According to the BCC model, the average efficiency of all groups is assessed as relatively high. The highest average efficiency of forest offices with low growing stocks in the Karst and Mediterranean areas is the effect of increasing returns to scale, where it is considered that little increase of input (growing stock, etc.) would result in more than proportional increase of output (income, allowable cut, etc.). This assumption may be considered wrong for the said forest offices, if bad structure and poor quality of growing stock in the Karst and Mediterranean area are taken into account. The observed forest offices employ 2,007 workers. Their number ranges from a minimum of 8 workers to a maximum of 100 workers per forest office. The number of workers in individual forest offices is mainly connected with the quantity and volume of production tasks. The average efficiency of forest offices with respect to the number of employees is presented in Figure 6. It can be seen that the highest level of CCR and SE efficiency is achieved by Forest offices with the highest number of employees (group IV and V).
For forest offices with 61 to 80 employees, the determined BCC, CCR and scale efficiency values are 0.914, 0.920 and 0.992, respectively. In the group with more than 80 employees there are only two forest offices, and their efficiency is approximately 0.985 regardless of the applied model. Relative efficiency of forest administrations and regions The sample of forest offices included in the analysis comes from eight Forest administrations. The six Forest offices from each selected Forest administration account for 35% (FA Split) to 67% (FA Nova Gradiška and Buzet) of the total number of offices that make up the individual Forest administrations. The efficiency level of individual Forest administrations is calculated as the weighted arithmetic mean of the relative efficiency of the pertaining Forest offices (Figure 7), with the surface areas of the Forest offices taken as weights. For the success assessment of Forest administrations, besides their average efficiency, it is also important to take into account the number of Forest offices that define the efficiency frontier. Discussion and conclusions In this very dynamic period of natural resource management, forest experts face the challenges of professional and responsible management of forests and forest land, while at the same time observing the protection requirements of their ecological, social and economic functions, as well as the challenges of profitable management of forestry companies. Managers therefore need different models for converting accounting and financial data into useful information. In this paper the models of Data Envelopment Analysis were applied for the assessment and comparison of organisational units in Croatian forestry. In applying these models, a number of variables can be taken into consideration, so as to obtain a more comprehensive indicator for evaluating the business activities of organisational units in forestry. Organizational units in forestry, besides final 'products' (volume of harvested wood, length of constructed forest roads, renewed forest areas, etc.), provide through forest management a range of services and beneficial functions that forests offer to users. Because of that, the efficiency of forestry units is more difficult to assess than the efficiency of ordinary production units that deal with simple commodity production. Specifically, it is difficult to quantify the amount of resources (inputs) that are needed to 'produce' a certain amount of such services and general goods. It is also difficult to quantify the amount of these outputs. Thus, a common feature of organizational units in forestry is that a part of their output consists of services and general benefits, most of which are difficult to express materially. Business analysis in forestry requires that such 'intangible' outputs are replaced, as far as possible, by other more easily accessible and measurable substitute variables. Comprehensive business analysis also imposes the need to use multiple methodologies and models, which together can give a more integral description of production and business results and provide better performance indicators. In this paper, Data envelopment analysis is presented and used for the evaluation and comparison of forestry organizational units' performance, i.e. the efficiency of Forest offices.
DEA is a methodology that considers multiple variables at the same time, so that it can provide a more comprehensive measure of business conduct in forestry. As a technique for measuring productivity and efficiency, DEA has been widely used in many areas. However, in the field of natural resource management it is still underrepresented. In the forestry literature there is only a limited number of papers that determine efficiency by nonparametric techniques such as DEA. This and other non-traditional methods have yet to be introduced and accepted in forestry as a management tool at both the strategic and operational levels of planning and decision-making. Through comparisons by DEA methods it is possible to determine the greatest achievements that are objectively feasible for the most important natural and financial business segments and the total business results, but also to identify the resources whose use, taking into account the objective circumstances, is not efficient enough. In addition, this approach allows the detection of possible improvements in the business, but also of the sources of failure in business management. Based on the business performance evaluation presented in this paper, it is considered that the application of DEA in forestry could be, as in many other business systems, a very strong support for planning and decision-making. As for the disadvantages and limitations of DEA, one of the major drawbacks of the DEA method is its low discrimination between efficient and inefficient units in the upper range of efficiency. Specifically, the number of units rated as efficient increases with the number of input and output variables. Even a number of decision making units considerably larger than the number of variables (n >> m + t) is not always sufficient for a 'harsher', i.e. more severe, distinction of efficiency. The reason for this partly lies in the flexibility of the method and the described way of determining the weights of inputs and outputs. In order to overcome this problem, several different models have been developed, such as the "Cone-Ratio Method", the "Assurance Region Method" and "Proportion-based Weights" [19]. Another limitation is the overall complexity of the method. Since the standard formulation of the DEA model solves a separate linear program for each compared unit, extensive comparisons can be computationally intensive. Therefore, the model can seem quite complex and less attractive. Furthermore, the DEA method is good at estimating "relative" efficiency, but it says little about absolute efficiency. In other words, the analysis shows how efficient a particular decision making unit is in comparison to other units, but not how successful the DMU is compared to the "theoretical maximum". One of the main disadvantages of the DEA method is its sensitivity to extreme observations and random errors. The basic assumption is that there are no random errors and that all deviations from the efficiency frontier represent inefficiency. The advantage of the DEA methodology over traditional techniques (i.e. multiple regression, stochastic frontier) lies in the comparison of units with multiple inputs and outputs, which can be expressed in different units of measure. Furthermore, the selected inputs and outputs are assumed to be related; however, it is not necessary to know the explicit form of this relationship.
DEA enables a direct comparison of a DMU with other units, or with a combination of units, with similar work/production technologies and similar tasks. Using the best units as reference values (benchmarks), DEA indicates to inefficient units what changes in their resources are needed in order to improve their business performance. In this paper, the relative efficiency of organisational units of 'Croatian Forests' Ltd is calculated based on CCR and BCC output-oriented DEA models. The shares of projected values of inputs and outputs in the empirical values have been determined, as well as the sources and amounts of inefficiency. The scale efficiency of Forest offices has also been determined, together with the effect of structural characteristics on the relative efficiency of forest offices and the average efficiency of Forest administrations and geographic regions. On average, the global technical efficiency obtained by the CCR model amounts to 0.829, the local pure technical efficiency obtained by the BCC model is 0.904, and the scale efficiency is 0.919. A higher level of efficiency is, on average, achieved by forest offices with an area from 10 to 15,000 hectares and with growing stock from 200 to 300 m³/ha. A relatively higher efficiency is achieved by units in the continental regions. The analysis of the amounts and causes of inefficiency shows that inefficiency is most significantly affected by outputs O2 and O3 (allowable cut and investments). DEA solutions and relative efficiency results like the ones in the presented research can be interesting to forestry experts, managers and researchers due to three properties of this method: • characterisation of each organisational unit by a single result of relative efficiency; • improvements proposed by the model to inefficient units are based on the achieved results of units that manage their business efficiently; • in dealing with such problems, DEA offers an alternative and indirect approach to specifying abstract statistical models and to decision making based on residual analysis or on the analysis of coefficients (parameters). In this way, DEA can become a new management tool in forestry for the analysis of business efficiency, enabling a new approach to organisation and data analysis, cost-benefit analysis, estimation of the frontier, and learning from the most successful units. Undoubtedly, additional research is required to generalise the evidence provided in this study, in particular regarding the explanation of the underlying differences in the use of particular inputs and the production of certain outputs that could improve the efficiency of forest management units. Nevertheless, some interesting insights regarding the performance of the forest management units in Croatia may have been provided. It is also considered that, through the development and application of Data envelopment analysis and other models of multi-criteria decision making, it is possible to enrich forestry science and practice with an approach that should provide easier analysis, planning and prediction in forest management.
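As a closing note on the three figures just quoted: for CCR and BCC scores computed with the same orientation, scale efficiency is conventionally obtained for each unit as the ratio of its CCR score to its BCC score (e.g. Cooper et al. [19]). Assuming that conventional definition was used here, the per-unit calculation is as simple as the following sketch; the unit names and scores are purely illustrative, not the study data.

```python
# Scale efficiency from CCR and BCC scores (conventional definition, same
# orientation).  The unit names and scores below are illustrative only.
ccr = {"FO-1": 0.83, "FO-2": 1.00, "FO-3": 0.41}
bcc = {"FO-1": 0.90, "FO-2": 1.00, "FO-3": 0.52}

for unit in ccr:
    se = ccr[unit] / bcc[unit]          # scale efficiency = CCR / BCC
    print(f"{unit}: scale efficiency = {se:.3f}")
```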
2018-12-11T23:50:55.799Z
2014-02-12T00:00:00.000
{ "year": 2014, "sha1": "9ad1ca39166d2e98699964a8a10b812f2c6e6a76", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/45959", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "4d89a38f47e5348c7a61d9a3f865c3e388d1b43b", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Business" ] }
14070758
pes2o/s2orc
v3-fos-license
Routine Magnetic Resonance Imaging at Term-Equivalent Age Detects Brain Injury in 25% of a Contemporary Cohort of Very Preterm Infants Introduction In recent years, significant investigation has been undertaken by means of magnetic resonance imaging (MRI) in an attempt to identify preterm infants at risk for adverse outcome. The primary objective is to provide a comprehensive characterization of cerebral injury detected by conventional MRI at term-equivalent age in an unselected, consecutive, contemporary cohort of preterm infants born <32 gestational weeks. Secondly, this study aims to identify risk factors for the different injury types in this population. Methods Data for all preterm infants born <32 gestational weeks and admitted to Innsbruck Medical University Hospital were prospectively collected (October 2010 to December 2015). Cerebral MRI was evaluated retrospectively using a validated scoring system that incorporates intraventricular haemorrhage (IVH), white matter disease (WMD) and cerebellar haemorrhage (CBH). Results 300 infants were included in the study. MRI showed 24.7% of all infants to have some form of brain injury. The most common injury type was IVH (16.0%). WMD and CBH were seen in 10.0% and 8.0% of infants, respectively. The prevalence of common neonatal risk factors was greater within the group of infants with CBH. In particular, indicators for respiratory disease were observed more often: longer ventilation duration, more frequent need for supplemental oxygen at day 28, higher rates of hydrocortisone treatment. Catecholamine treatment was the only neonatal risk factor that was overrepresented in infants with WMD. Discussion Cerebral MRI at term-equivalent age, as an addition to cranial ultrasound, detected brain injury in 25% of preterm survivors. The diagnosis of IVH was already made by neonatal ultrasound in most cases. In contrast, only a minority of the CBH and none of the non-cystic WMD had been detected prior to MRI. Decreasing gestational age and neonatal complications associated with immaturity have been identified as risk factors for CBH, whereas WMD was found in relatively mature infants with circulatory disturbances. Introduction In European countries 1.1% to 1.6% of live births are very preterm. [1] Depending on gestational age, up to 49% of preterm infants exhibit a psychomotor or mental delay at toddler age. [2] This delay does not even out during the following years; in particular, cognitive and behavioural deficits persist or first become evident at school age and beyond. [3,4] Despite numerous investigations aimed at the early identification of infants at risk, prediction of long-term outcomes for individual infants is still challenging. The use of magnetic resonance imaging (MRI) has provided an additional means of depicting the wide spectrum of preterm brain injury. This is of major importance since a key paper by Woodward et al. found that abnormal findings on MRI at term-equivalent age are significantly better at predicting adverse outcome than are abnormalities that can be detected by ultrasound. [5] Since then, several MRI evaluation scales have been developed to quantify the severity of brain abnormality and predict neurodevelopmental outcome of preterm infants. [5-7] However, these scoring systems usually account for cerebral white and grey matter, but disregard cerebellar injury and may thereby underestimate the full extent of injury. [5-7] Furthermore, several parameters in these scorings are considerably subjective.
We thus chose to employ a recently developed simple scoring system that incorporates all major injury types and differentiates between intraventricular haemorrhage (IVH), white matter disease (WMD, including non-cystic and cystic WMD) and cerebellar haemorrhage (CBH). [8] There is evidence of associations between MRI findings and neurodevelopmental outcomes, but due to the above-mentioned reservations reported relationships are quite variable. [5,9,10] Another limitation of these studies is the fact that MRI was performed in a research setting and not as part of routine care. Thus, data on the absolute frequency of brain injury in unselected preterm populations are scarce and the analysis of associations with long-term outcome is hindered. As a first step to a thorough work-up of this topic the present study was designed to provide a comprehensive characterization of cerebral injury detected by routine MRI at term-equivalent age in an unselected, consecutive, contemporary cohort of preterm infants born at <32 gestational weeks. Secondly, this study aims to identify risk factors for the various injury types in this population. 355 infants were eligible for MRI at term-equivalent age. Of these children 24 (6.8%) infants were transferred out of Tyrol prior to term-equivalent age, two (0.5%) infants were too unstable and 23 (6.5%) of all parents did not consent to participate. Accordingly, 306 (86.2%) infants were scanned at term-equivalent age; for six (1.7%) infants it was not possible to obtain high-quality MR images. Thus, the final study population consisted of 300 (84.5%) infants. Cerebral MRI at term-equivalent age is part of our routine follow-up program for all preterm infants born at <32 gestational weeks. All infants were scanned without sedation during postprandial sleep as described in a previous paper. [11] All caregivers gave written informed consent to the performance of the MRI. The study was approved by the ethics committee of the Medical University of Innsbruck (study No. AN2013-0086 333/4.2). Patient characteristics Neonatal data was collected during the hospital stay as described previously. [12] Cranial ultrasound examinations were routinely performed during the initial hospital stay. All images were evaluated regarding the diagnoses of IVH, WMD and CBH in daily interdisciplinary meetings (neonatology, paediatric radiology). Magnetic resonance image acquisition All images were acquired with a 3.0 Tesla Siemens Magnetom Verio (Siemens, Erlangen, Germany) at the local Department of Neuroradiology. The MRI protocol included the following imaging sequences: axial T2-weighted TSE images covering the whole head (TE 99 ms, TR 4590 ms, FOV 15 x 11 cm, matrix: 147 x 256, slice thickness 3 mm, no gap); 3D MP-RAGE T1-weighted images covering the whole head (TE 4.54 ms, TR 1770 ms, TI 1000 ms, flip angle 9 degrees, FOV 20 x 15 cm, matrix 144 x 192, slice thickness: 1.0 mm, gap 0.5 mm). From 2012 susceptibility weighted imaging (SWI) was included in the routine protocol (available for 181 of 300 infants (60.3%)): axial SWI images covering the whole head including brain and skull (TE 20 ms, TR 27 ms, FOV 20 x 15 cm, matrix 182 x 256, slice thickness 2.0 mm, gap 0.4 mm). The use of SWI did not increase the rate of detection of cerebellar or intraventricular haemorrhages as compared to infants in whom SWI was not employed. Magnetic resonance image evaluation Cerebral injury was graded according to a scoring system previously published by Kidokoro et al. 
[8] Kidokoro's current brain injury assessment covers three common injury patterns in preterm infants (IVH, WMD and CBH). [8] All injury types were graded from grade 1 to grade 4 according to the degree of severity. High-grade injury (grade 3 or 4) in any category was defined as severe injury. IVH grade 1 was defined as the presence of hemosiderin deposits or posthaemorrhagic cysts within the thalamo-caudal notches. IVH grade 2 was defined as the presence of hemosiderin deposits outside the region of the thalamo-caudal notches along the ventricular wall without ventricular dilatation. IVH grade 3 was defined as ventricular dilatation >97th percentile with evidence of previous ventricular haemorrhage. IVH grade 4 was defined as the presence of parenchymal haemorrhagic lesions or posthaemorrhagic cystic encephalomalacia. WMD grades 1 and 2 were defined by the presence of small punctate lesions (≤3 mm in individual size) in periventricular white matter on either or both of the T1/T2-weighted images. WMD grade 2 was differentiated from grade 1 by the presence of lesions in bilateral corticospinal tracts or by ≥3 lesions per hemisphere. WMD grade 3 was defined as the presence of extensive lesions along the wall of the lateral ventricles with high signal on T1-weighted images. WMD grade 4 was defined as the presence of cystic lesions in periventricular white matter. CBH grade 1 consisted of unilateral small punctate lesions (≤3 mm in size). CBH grade 2 consisted of bilateral small punctate lesions (≤3 mm in size). CBH grade 3 consisted of an extensive unilateral lesion (>3 mm in size). CBH grade 4 was defined as bilateral extensive lesions (>3 mm in size). All MR images were evaluated by two operators (V.N., T.D.) blinded to the clinical data. Consensus was reached upon discussion. Statistical analysis Data analysis was performed using SPSS software, version 22.0 for Windows (IBM Corp., Armonk, NY, USA). Data distribution was tested using the Kolmogorov-Smirnov test. Depending on the distribution of data, Student's t test or the Mann-Whitney U test was employed for comparison of two groups. Comparison of categorical data was made with the chi-square or Fisher's exact test. Patient characteristics The maternal and neonatal characteristics of the 300 study participants are shown in Table 1. MRI was performed at a mean gestational age of 40.6 ±0.7 weeks. Frequency of brain injury Of the total cohort of 300 infants, 74 (24.7%) showed some form of brain injury on MRI at term-equivalent age. Of all infants, 19 (6.3%) were diagnosed with any form of severe injury and 24 (8.0%) with more than one type of injury. Detailed results are presented in Table 2. The diagnosis of IVH was already made by neonatal ultrasound in most cases (46 (95.8%) of 48 infants). In contrast, WMD and CBH were detected by neonatal ultrasound in only 8.0% to 10.0% of all cases (3 of 30 infants with WMD, 2 of 24 infants with CBH). WMD diagnosed by neonatal ultrasound was the cystic form in all three cases. Injury patterns IVH was the most frequent injury type observed (16.0% of all infants) and was an isolated finding in 62.5% of all cases. Similarly, WMD was an isolated finding in two-thirds (66.7%) of all cases. In contrast, CBH was frequently associated with an additional supratentorial injury (65.0%). Patterns of injury are visualised in Fig 1.
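As a minimal illustration of how categorical these grading rules are, the sketch below encodes the CBH grades and the severe-injury flag described above. The size and bilaterality features are hypothetical stand-ins for the radiological reading and are not part of any published scoring software.

```python
# Minimal sketch of the CBH grading rules and the severe-injury flag described
# above; the input features are illustrative stand-ins for the MRI reading.
from dataclasses import dataclass

@dataclass
class CerebellarFindings:
    max_lesion_mm: float   # largest cerebellar haemorrhagic lesion
    bilateral: bool        # lesions present in both hemispheres

def cbh_grade(f: CerebellarFindings) -> int:
    """Grade cerebellar haemorrhage (CBH) 0-4: punctate lesions are <=3 mm,
    extensive lesions are >3 mm; bilateral involvement raises the grade."""
    if f.max_lesion_mm <= 0:
        return 0                                  # no haemorrhage
    extensive = f.max_lesion_mm > 3.0
    if extensive:
        return 4 if f.bilateral else 3            # extensive bilateral / unilateral
    return 2 if f.bilateral else 1                # punctate bilateral / unilateral

def severe_injury(ivh: int, wmd: int, cbh: int) -> bool:
    """High-grade (grade 3 or 4) injury in any category counts as severe."""
    return max(ivh, wmd, cbh) >= 3

# example: unilateral 2 mm cerebellar lesion, IVH grade 2, no WMD
g = cbh_grade(CerebellarFindings(max_lesion_mm=2.0, bilateral=False))
print(g, severe_injury(ivh=2, wmd=0, cbh=g))      # -> 1 False
```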
Neonatal risk factors for brain injury The rate of brain injury detected by MRI at term-equivalent age was higher in the group of more immature and sicker infants (Table 3). Separate analysis of infants with severe injury showed similar associations and additionally revealed the need for supplemental oxygen at day 28 and at a postmenstrual age of 36 weeks, late-onset sepsis, patent ductus arteriosus, necrotising enterocolitis and parenteral nutrition for ≥14 days as neonatal risk factors for severe brain injury (data not shown). We found that the prevalence of all common neonatal risk factors was greater within the group of infants with CBH than in infants without brain injury (Table 4). This pattern was not observed for infants with WMD. Mean gestational age did not differ significantly between infants with WMD and infants without brain injury (Table 4). The rate of infants with birthweight <1000 g was 4-fold lower in infants with WMD. The only neonatal risk factor that was overrepresented within the group of infants with WMD was catecholamine treatment (p = 0.018). IVH is usually detected by neonatal ultrasound and there is already an established set of neonatal risk factors for IVH. For completeness, risk factors for IVH are listed in Table 4. Discussion Our comprehensive assessment of a large and unselected cohort of very preterm infants proved that cerebral MRI at term-equivalent age, as an addition to cranial ultrasound, detects brain injury in 25% of preterm survivors and especially facilitates the detection of WMD and CBH. MRI thereby contributes to specifying the type and frequency of injury patterns in this population. Analysis of neonatal characteristics revealed that the most immature infants with a complicated neonatal history are at greatest risk for injury of the cerebellum. In contrast to this, infants with WMD turned out not to be part of this recognized group of most vulnerable infants. The only identifiable risk factor for this condition was the need for catecholamine treatment. Several authors report good agreement between ultrasound and MRI regarding demonstration of IVH and cysts. [13] However, it has been shown that ultrasound lacks sensitivity in the detection of (punctate) cerebellar lesions and non-cystic WMD. [5,14] This is in accordance with our own experience and confirmed by the results of the present study. The diagnosis of IVH was already made by ultrasound in most cases, whereas CBH was diagnosed in only a fraction of all cases and non-cystic WMD was not detected by ultrasound at all. As yet, there is no other publication that has used exactly Kidokoro's definition of WMD. However, there are several reports that provide information on the incidence of punctate WM lesions that form the imaging correlate for Kidokoro's WMD grades 1-3. [6,14-17] The reported incidences range from 20% to 30%. [6,14-17] There is some evidence that the total lesion burden of punctate WM lesions is better demonstrated in an early MRI scan (approx. three weeks after preterm birth) and there is a decrease in intensity and amount until term age. [6,16,18] Thus, in our study, the performance of all MR scans during a narrow time window around 40 weeks postmenstrual age, i.e. 8 to 16 weeks after birth, might have led to underestimation of the total load of non-cystic WM injury.
Due to increased survival of highest-risk infants and better imaging modalities cerebellar injury is now reported more often, but established information about the frequency of CBH from either imaging modality in large cohort studies is still limited. Kidokoro found 10% of all infants to have a CBH and 2.2% a severe CBH. [8] This is in accordance with our own results. Using ultrasound, different working groups report an incidence that ranges up to 15%. [19] Studies using MRI report rates, depending on gestational age of the population included, as high as 20%, predominantly due to the detection of low-grade (punctate) lesions. [20] Furthermore, it has been shown that especially extremely immature infants are at high risk for developing concurrent IVH and CBH. [19] Also in our cohort we found additional supratentorial brain injury in about two-thirds of infants with CBH, with concurrent IVH appearing most frequently. This phenomenon may be explained by similarities in the pathogenesis of CBH and IVH, which are discussed below. Speaking generally, the rate of brain injury was higher in the group of more immature infants with consequently higher rates of neonatal complications. Analysis of neonatal risk factors for the individual injury types revealed that CBH affected the most immature and sickest infants. Especially parameters for circulatory disturbance and respiratory disease were more common among infants with CBH. Similar observations have been made by other working groups that did extensive research on cerebellar injury in neonates. They proposed circulatory factors and severe respiratory problems to play a role in the onset of CBH. [19,20] Whether these factors co-occur with CBH or whether and to what extent they are implicated in the pathogenesis of CBH has not yet been fully elucidated. However, it seems that the (multifactorial) pathogenesis of CBH in the preterm infant has similarities to that of IVH. [21] This assumption is also supported by our own analyses that revealed an overlapping of many clinical risk factors for CBH and IVH. Germinal matrices are present also in the cerebellum and analogous to the supratentorial germinal matrices these sites are especially vulnerable to circulatory disturbances, which are common in sick preterm neonates. [21] Interestingly, this pattern of immaturity and related neonatal disease was not present in infants with WMD. Infants with WMD did not suffer more often from any neonatal disease with the exception of a higher rate of catecholamine treatment. The pathogenesis of non-cystic white matter injury is not yet completely understood, but the current evidence suggests both haemorrhagic and hypoxic-ischaemic processes to be involved in the pathogenesis of this lesion type. [17,22] The role of circulatory factors and concurrent hypoxia-ischaemia is supported by a pathology study in infants with non-cystic WMD that showed diffuse gliosis suggestive of hypoxia-ischaemia. [23] This study was performed in term neonates with congenital heart disease, another group of infants in whom exactly this injury pattern is frequently found, and corroborates the hypothesis that altered cerebral perfusion may play a major role in the pathogenesis of non-cystic white matter injury. Need for catecholamine treatment may be regarded as a surrogate for severe neonatal diseases, e.g. arterial hypotension in the wake of sepsis. 
However, this assumption is reflected neither by our own data nor by the study by Kidokoro et al.[8] One reason may be the fact that there is a loss of efficient cerebral autoregulation during dopamine supply in preterm infants. [24] Additionally, it has been shown that preterm neonates treated for arterial hypotension with inotropic drugs, despite treatment, spent more time with a blood pressure below their gestational age than did agematched controls that did not receive any blood pressure support. [25] The main strength of this study is that the study cohort represents a consecutive, contemporary population seen at a well-equipped tertiary centre for neonatal care. A high percentage (84.5%) of the eligible population underwent MRI at term-equivalent age. Thus, the nature and frequency of cerebral findings may be regarded as representative for other European centres with comparable resources and concepts of care. Univariate analysis was chosen as an explorative approach to evaluate clinical risk factors for brain injury in our population. Adjustment for multiple testing was not considered to be reasonable since especially immaturity and parameters concerning respiratory disease are mutually dependent. A limitation at that point is that outcome data are not yet available for the total cohort. This data will be provided in the future since our cohort continues to be followed to school age and possibly beyond. This study provides comprehensive data on frequency and patterns of brain abnormalities detected by conventional MRI at term-equivalent age in a contemporary cohort of preterm infants born at <32 weeks. There was good agreement between neonatal cranial ultrasound and MRI in the diagnosis of IVH, but MRI proved superior in the detection of CBH and noncystic white matter injury. Decreasing gestational age and neonatal complications involved with immaturity have been identified as risk factors for CBH, whereas white matter injury was found in relatively mature infants and was associated with a more frequent need for catecholamine supply suggestive for circulatory disturbances. Current evidence indicates an association between these early MRI findings and subsequent neurodevelopmental outcome. However, to date comprehensive assessment of the effect of especially isolated subtle brain injury and delayed maturation in otherwise "uncomplicated" and "healthy" preterm infants has been confined to the second year of life. Thus, after MRI at term-equivalent age has been implemented as routine examination in many centres, standardised neuropsychological follow-up of large cohorts of preterm infants until adulthood is absolutely essential to uncover potential associations between MRI findings and subtle cognitive deficits or behavioural problems that may first manifest themselves at later ages.
2018-04-03T01:26:00.240Z
2017-01-03T00:00:00.000
{ "year": 2017, "sha1": "2d09e3eab4f5ac0729c1ef4ac7dbdc4e4eb60186", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0169442&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2d09e3eab4f5ac0729c1ef4ac7dbdc4e4eb60186", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265129698
pes2o/s2orc
v3-fos-license
Maleic acid and malonic acid reduced the pathogenicity of Sclerotinia sclerotiorum by inhibiting mycelial growth, sclerotia formation and virulence factors Sclerotinia sclerotiorum is a necrotrophic plant pathogenic fungus with a broad distribution and host range. Bioactive compounds derived from plant extracts have been proven to be effective in controlling S. sclerotiorum. In this study, the mycelial growth of S. sclerotiorum was effectively inhibited by maleic acid, malonic acid, and their combination at a concentration of 2 mg/mL, with respective inhibition rates of 32.5%, 9.98%, and 67.6%. The treatment of detached leaves with the two acids resulted in a decrease in lesion diameters. Interestingly, maleic acid and malonic acid decreased the number of sclerotia while simultaneously increasing their weight. The two acids also disrupted the cell structure of sclerotia, leading to sheet-like electron-thin regions. On a molecular level, maleic acid reduced oxalic acid secretion, upregulated the expression of Ss-Odc2 and downregulated CWDE10, Ss-Bi1 and Ss-Ggt1. In contrast, malonic acid downregulated CWDE2 and Ss-Odc1. These findings verified that maleic acid and malonic acid could effectively inhibit S. sclerotiorum, providing promising evidence for the development of an environmentally friendly biocontrol agent. Supplementary Information The online version contains supplementary material available at 10.1007/s44154-023-00122-0. Introduction Sclerotinia sclerotiorum, a cosmopolitan necrotrophic pathogen, is a saprophytic and parasitic fungus that infects more than 400 dicotyledons such as sunflower, soybean, canola and oilseed rape (Chen et al. 2022; Kim et al. 2011; Shahoveisi et al. 2022). Sclerotinia stem rot (SSR) caused by S. sclerotiorum occurs in many areas, resulting in severe yield losses of oilseed rape in China, Canada, the United States and other regions (Bolton et al. 2006; Hu et al. 2019). SSR reduces the annual output of oilseed rape by 10%-30%, and even 80% in extreme cases, which seriously endangers agricultural production and causes economic losses (Hu et al. 2017; Qin et al. 2011). Since SSR is a soil-borne disease, the formation of sclerotia in soil plays a significant role in the pathogenic process (Cheng et al. 2019). There are two different ways for S. sclerotiorum to infect the host plant: the main one is via hyphae formed directly from germinating sclerotia, and the other is via germinated ascospores (Ding et al. 2021). Pathogenic factors are responsible for the successful infection of S. sclerotiorum. Research has shown that S. sclerotiorum releases oxalic acid (OA) to aid its colonization of oilseed rape (Ghosh et al. 2016; Fujinami et al. 2022). In the early stage of infection, high concentrations of OA create a reducing environment that can inhibit the oxidative burst of plants and facilitate fungal invasion (Kim et al. 2011). In contrast, low concentrations of OA induce resistance in plants. Therefore, the sclerotial formation and OA secretion of S. sclerotiorum are vital for the pathogenic process. Utilizing chemical pesticides has long been an effective method for preventing and controlling S. sclerotiorum (Liu et al. 2018, 2021; Oliveira et al. 2013a, b). However, the current issues of pesticide reduction and fungicide-resistant strains of S. sclerotiorum have received considerable attention (Sun et al. 2018; Besil et al. 2018; Zhou et al.
2014a, b). Although numerous pieces of research focus on alternative methods such as agricultural practice, biological methods and breeding disease-resistant cultivars (Alvarez et al. 2012; Grandini et al. 2022; Zhang et al. 2020), these methods are not always available and effective. Botanical pesticides are an emerging component of modern pesticide development (Coman et al. 2013; Zhao et al. 2022; Ngegba et al. 2022). Recently, secondary metabolites, such as organic acids, alkaloids and phytosterols, have been used as the main active ingredients of new botanical pesticides, which are biodegradable, economical and environmentally friendly (Luo et al. 2021; Li et al. 2022; Chen et al. 2011a, b). The application of plant extracts as the main active compounds of pesticides to control fungal diseases has a promising prospect. In our previous studies, we found that dissolved organic matter derived from oilseed rape straw supplemented with selenium (Se) in soil (RSDOM Se) inhibited the mycelial growth of S. sclerotiorum (Jia et al. 2019, 2020; Cheng et al. 2020). Among the eight metabolites upregulated in RSDOM Se, maleic acid and malonic acid inhibited the mycelial growth of S. sclerotiorum effectively (Jia et al. 2019). However, there was no report on the effects of the two acids on morphological and physiological characteristics, and the relevant pathogenic gene regulation in S. sclerotiorum was unknown. To further elucidate the potential inhibitory effects of the two acids on S. sclerotiorum, experiments were conducted: (1) to examine the impacts of maleic acid and malonic acid on the antifungal sensitivity, mycelial growth, pathogenicity of mycelia on detached leaves, sclerotial formation and subcellular structure of sclerotia of S. sclerotiorum, and (2) to quantify oxalic acid (OA) secretion in mycelia and assess the expression of relevant pathogenic genes. Effect of maleic acid and malonic acid on the growth of S. sclerotiorum In this study, we determined the sensitivity of S. sclerotiorum to maleic acid and malonic acid (Fig. S1), with the half-maximal effective concentrations (EC50) for maleic acid and malonic acid determined to be 2.6 mg/mL and 7.0 mg/mL, respectively. The following experiments used the effective concentration of 2 mg/mL, which exhibited lower toxicity. As shown in Table 1, the mycelial growth of S. sclerotiorum was significantly inhibited by maleic acid as well as by malonic acid. The inhibition ratios of the three treatments, namely maleic acid (32.5%), malonic acid (9.98%) and maleic acid + malonic acid (67.6%), were determined in comparison to the control. Additionally, the combination of maleic acid and malonic acid effectively reduced the lesion diameters on detached leaves of oilseed rape. The inhibition ratios were 6.22% for maleic acid, 12.44% for malonic acid, and 20.73% for maleic acid + malonic acid, when compared with the control. Inhibitory effect on sclerotial formation The sclerotial formation was examined (Fig. 1). The results indicated that sclerotial formation was inhibited by malonic acid, leading to a decrease in the number of sclerotia. However, an increase was observed in their weight (Fig. 1). (Table 1: lesion diameters of S. sclerotiorum determined after 48 h of incubation on PDA media with maleic acid and/or malonic acid, and lesion diameters determined after 36 h of incubation on detached leaves. Data were analyzed by one-way ANOVA and shown as means ± standard error (SE). The concentrations of maleic acid and malonic acid were 2 mg/mL. Different letters indicate statistically significant differences (p < 0.05).)
Compared with the control, the weight of sclerotia in the treatments of malonic acid and the two-acid combination increased by 40% and 58%, respectively. The reduction ratios of sclerotial numbers with the two treatments were 17% for malonic acid and 46% for maleic acid + malonic acid, in comparison to the control. However, the treatment with maleic acid increased both the number and the weight of sclerotia, although these changes were not statistically significant. Effect of the two acids on the ultrastructure of sclerotia The internal structure of sclerotia was observed using TEM. Both acids negatively affected sclerotia compared to the control. In normal sclerotial cells, the cytoplasm exhibits uniformity, the organelles are distinctly visible, and the electron density within the cytoplasm is consistently distributed (Fig. 2A). After acid treatment, the matrix was sparse and exhibited uneven electron density. The integrity of the cell membrane was compromised, leading to the emergence of multiple patchy regions with reduced electron density within the cell (Fig. 2B, C, and D). In addition, the cell wall became thinner after acid treatment (Fig. 2B, C, and D). Overall, the cellular structure remained largely intact, with only a small amount of localized damage observed. Analysis of OA secretion and acid production in mycelia The OA secretion in mycelia under the different treatments is shown in Fig. 3. The corresponding standard curve is shown in Fig. S2; its R² value reached 0.9993. Compared with the control, maleic acid significantly reduced OA secretion, whereas the malonic acid treatment and the maleic acid + malonic acid treatment significantly increased OA secretion. The decrease in OA secretion with maleic acid was 45%, and the increases for the treatments of malonic acid and maleic acid + malonic acid were 42% and 46%, respectively. The pH values of maleic acid, malonic acid and their combination in PDB were 2.53, 2.24 and 2.12, respectively. The low pH of the two acids was related to the lower pathogenicity of S. sclerotiorum. qRT-PCR verification of the target gene expression levels Two oxalate decarboxylase (OxDC) genes (Ss-Odc1, Ss-Odc2), two cell wall degrading enzyme genes (CWDE2, CWDE10) and two genes related to virulence (Ss-Bi1, Ss-Ggt1) were evaluated by qRT-PCR. The treatments of malonic acid and the two-acid combination significantly decreased the relative expression level of Ss-Odc1, and maleic acid upregulated the expression level of Ss-Odc2, as shown in Fig. 4A and B. The treatments of maleic acid and the two-acid combination significantly lowered the expression of CWDE10, with corresponding ratios of 36% and 32%, while malonic acid significantly downregulated the expression of CWDE2 (Fig. 4C and D). As for Ss-Bi1, maleic acid decreased its expression by 29% compared with the control. In addition, the expression of Ss-Ggt1 declined in the treatments of maleic acid and the two-acid combination, with the maleic acid treatment resulting in a 75% decrease (Fig. 4E and F).
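The oxalic acid concentrations reported above are read off a linear standard curve of the kind shown in Fig. S2 (R² = 0.9993). The following is a minimal sketch of such a calibration with made-up calibration points; the actual assay readings and units are not restated in the text, so everything below is illustrative only.

```python
# Minimal sketch of reading concentrations off a linear standard curve of the
# kind shown in Fig. S2; calibration points below are made up for illustration.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # oxalic acid, mg/mL
std_signal = np.array([0.02, 0.26, 0.51, 1.02, 2.01])   # e.g. absorbance

slope, intercept = np.polyfit(std_conc, std_signal, 1)  # least-squares line
pred = slope * std_conc + intercept
r2 = 1 - np.sum((std_signal - pred) ** 2) / np.sum((std_signal - std_signal.mean()) ** 2)

def to_concentration(signal):
    """Invert the calibration line to estimate concentration from a reading."""
    return (signal - intercept) / slope

print(f"R^2 = {r2:.4f}, sample = {to_concentration(0.74):.2f} mg/mL")
```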
Discussion Long-term use of traditional pesticides has been found to be detrimental to the environment, human health and the progress of ecologically sustainable development (Zhou et al. 2014a, b; Sahni et al. 2016). To reduce the usage of conventional fungicides, alternative methods deserve more attention. In our previous study, it was shown that RSDOM Se can inhibit the mycelial growth of S. sclerotiorum. Maleic acid and malonic acid, which were among the upregulated metabolites of RSDOM Se, showed a significant inhibitory effect on mycelial growth (Jia et al. 2019). Maleic acid is an important intermediate in chemical industries (Ayoub et al. 2022). It is usually utilized as an acidic catalyst in the food processing industry, due to its non-toxic nature and edibility (Zhang et al. 2022a, b). Malonic acid is a common component of many products and processes in the pharmaceutical and cosmetic industries (Gu et al. 2022). Studies have shown that malonic acid and maleic anhydride or related compounds have definite antibacterial effects (Chen et al. 2011a, b; Kuwaki et al. 2002). Based on the previous findings, this study provided evidence that maleic acid and malonic acid inhibit the growth of S. sclerotiorum in vitro (Fig. 5). Maleic acid and malonic acid reduced the pathogenicity of S. sclerotiorum The activities of fungicides on various plant pathogenic fungi follow the principle of hormesis, described as high-dose inhibition and low-dose stimulation (Zhang et al. 2019). To ensure effective inhibition, the EC50 values of the two acids on S. sclerotiorum were determined. The EC50 values of maleic acid and malonic acid were 2.6 and 7.0 mg/mL, respectively. Yeon et al. found that maleic acid exhibited antifungal activity against a diverse range of fungi and oomycetes, with the minimum inhibitory concentration ranging from 312.5 to about 2,500 μg/mL (Yeon et al. 2021). In addition, previous studies showed that malonic acid at a concentration of 2 mg/L had a significant inhibitory effect on S. sclerotiorum (Jia et al. 2019). Therefore, the same concentration of 2 mg/mL was selected for this study. Generally, all our designated concentrations stayed within the stimulation phase, and the inhibitory effect of maleic acid was better than that of malonic acid (Fig. S1). The two acids significantly inhibited the mycelial growth of S. sclerotiorum and reduced the lesion diameters on the detached leaves (Table 1). The inhibitory effect of the combined application of the two acids surpassed that of a single acid treatment. Therefore, it is recommended to utilize a combination of the two acids for the control of S. sclerotiorum. Possible inhibitory evidence regarding the two acids on S. sclerotiorum Further possible inhibitory evidence of maleic acid and malonic acid on S. sclerotiorum was also investigated; it might involve the following processes: (1) The two acids inhibited sclerotia formation The sclerotial numbers were significantly reduced in the presence of a combination of the two acids, whereas the presence of maleic acid alone resulted in only a slight reduction or no change (Fig. 1). The reduced number of sclerotia of S. sclerotiorum suggested that sclerotia formation was inhibited, corroborating the findings reported by Cheng et al. (2019) and Zhang et al. (2022a, b). Reducing the number of pathogens can effectively mitigate the prevalence of soil-borne diseases (Chen et al. 2011a, b). It is noteworthy that while maleic acid increased both the weight and number of sclerotia (Fig. 1), it significantly inhibited the mycelial growth and the incidence of disease (Table 1), which may be attributed to the reduction of virulence (Fig. 4).
4).Host-induced gene silencing (HIGS) enhances sclerotiorum, but it did decrease the quantity of sclerotia and increase the weight of individual sclerotia.Interestingly, only the strain with CsGPA1-silenced exhibited reduced virulence (Zhu et al. 2021).Additionally, a study showed a positive correlation between sclerotinia virulence and colony diameter, but no correlation was found between virulence and the number, size, or weight of sclerotia.(Rather et al. 2022).Consequently, the relationship between the sclerotia formation and virulence of S. sclerotiorum needs to be further investigated.(2) Maleic acid reduced OA production of S. sclerotiorum The synthesis and secretion of OA at high concentrations by S. sclerotiorum is a primary determinant for successful plant infection (Hou et al. 2019).In this study, maleic acid significantly curtailed OA secretion, while malonic acid and the combined treatment of two acids enhanced OA secretion (Fig. 3).OA is a key pathogenic factor of S. sclerotiorum, which secretes a large amount of OA during early plant infection to suppress the production of plant reactive oxygen species and promote the colonization and expansion of pathogenic bacteria (Cessna et al. 2000).Decreasing OA production in S. sclerotiorum could elevate the pH of surrounding environment, thereby diminishing its pathogenicity (Derbyshire et al. 2021).Interestingly, despite the increased OA secretion by S. sclerotiorum, malonic acid alone and the combined treatment of two acids exhibited a positive inhibitory effect.One study found that an activating mutation of the S. sclerotiorum pac1 gene increased oxalic acid production at low pH but decreased virulence (Kim et al., 2007).Therefore, the reduction of virulence of S. sclerotiorum induced by maleic acid and malonic acid might be related not only to OA content but also to the pH change caused by it.Another study showed that the growth of S. sclerotiorum was affected by pH.Oxalic acid, citric acid, glutaric acid and tartaric acid inhibited sclerotia formation at pH 1.72, 2, 2.43 and 1.96 respectively, and mycelial growth at pH 1.56, 1.88, 2.3 and 1.9 respectively (Atallah et al. 2020).The pH of 2 mg/mL maleic acid, malonic acid and their combination in PDB were 2.53, 2.24 and 2.12 respectively.Therefore, the addition of maleic acid and malonic acid subjected S. sclerotiorum to a highly acidic environment, which inhibited its growth. (3) The two acids regulated pathogenic gene expressions of S. sclerotiorum To better understand the potential mechanisms, we evaluated the molecular level associated with OA production, activities of cell wall degradation enzymes (CWDEs) and virulence of S. sclerotiorum.Ss-Odc1 and Ss-Odc2 are two putative oxalate decarboxylase (OxDC) genes.The transcript of Ss-Odc1 exhibited significant accumulation in different stages of compound appressorium development and plant colonization.In contrast, the Ss-odc2 transcript was only significantly accumulated only during the middle and late stages of the compound.Evidence indicates that the expressions of Odc1 and Odc2 reduced the accumulation of OA, which was not induced by the low pH of the hyphae or exogenous OA Fig. 5 The evidence of inhibition of in vitro growth of S. sclerotiorum by maleic acid and malonic acid (Liang et al. 2015).In this study, maleic acid upregulated the gene expression of Odc2, while malonic acid showed no positive effects on the expression of Odc1, Odc2 (Fig. 4A, B), aligning with the determination of OA secretion (Fig. 
3).During the fungal infection in plants, an increased level of cell wall degrading enzymes (CWDEs) enhances the fungal pathogens to colonize plants and cause infection (Kubicek et al. 2014, Sun et al. 2023).S. sclerotiorum can produce multiple CWDEs that facilitate host penetration, enhance host tissue maceration, and degrade host cell walls (Oliveira et al. 2013a, b).CWDE2 (cellulase family protein) and CWDE10 (pectinesterase A) are two kinds of cell wall-degrading enzyme genes (Xu et al. 2015).In this study, maleic acid and malonic acid reduced the virulence of S. sclerotiorum by down-regulating CWDE10 and CWDE2 respectively (Fig. 4C, D).Interestingly, some studies reported no relations between the gene expression of CWDEs and the pathogenicity of S. sclerotiorum (Anees et al. 2010).It may be that increased CWDE transcripts do not necessarily lead to increased virulence in unfavorable environments, such as high pH, where enzyme activity may not be optimal (Favaron et al 2004).Ss-Ggt1, a γ-glutamyl transpeptidase, regulates the ROS antioxidant system (Li et al. 2012).As for Ss-Bi1, it encodes a putative Bax-inhibitor protein that is vital in the hyphal stress response and full virulence of S. sclerotiorum, influencing the pathogenicity in an oxalic acid-independent manner (Yu et al. 2015).The declining gene expression might indicate gene silencing so that Bax expression is inhibited and PCD (Programmed Cell Death) could not be activated to enhance plant resistance to pathogens (Shlezinger et al. 2011).However, results of this study showed that only maleic acid facilitate plant resistance against S. sclerotiorum through downregulating Ggt1 and Bi1 (Fig. 4E, F). (4) Role of Maleic Acid in the TCA Cycle Enhances Plant Resistance In our previous study, we found the application of RSDOM Se exhibited a significant antifungal effect on S. sclerotiorum.According to the analysis of differential metabolites and up-regulated KEGG (Kyoto Encyclopedia of Genes and Genomes) metabolic pathways, the inhibitory effect of RSDOMSe might be associated with the upregulation of not only maleic acid and malonic acid but also metabolic pathways related to maleic acid (Jia et al. 2019).Succinic acid and fumaric acid, two main components of the tricarboxylic acid (TCA) cycle, were identified as two key metabolites that were up-regulated with RSDOMSe treatment (Fig. 6).Some studies have shown that succinic acid had the potential to participate in the host's immune regulation as a signal molecule (Jiang et al. 2023;Wei et al. 2023).Meanwhile, the TCA cycle not only contributes to the maintenance of energy metabolism homeostasis but also promotes the synthesis of non-essential amino acids such as aspartic acid, which can help plants absorb nutrients and maintain metabolic stability (Yang et al. 2021). Conclusions The combination of maleic acid and malonic acid, derived from oilseed rape straw, could effectively control S. sclerotiorum.This control is achieved by inhibiting mycelial growth, damaging the subcellular structure of sclerotial, reducing oxalic acid secretion and regulating the expression of pathogenic genes.Malonic acid was effective in inhibiting the mycelial growth and sclerotia formation of S. sclerotiorum.Maleic acid, on the other hand, reduced the pathogenicity of S. 
sclerotiorum by decreasing OA secretion and reducing the expression of virulence-related genes such as Ss-Bi1 and Ss-Ggt1.In addition, the detached leaf experiments showed that the combination of the two acids could effectively reduce the infection of S. sclerotiorum in oilseed rape.This study suggested that maleic acid and malonic acid had potential as safe ecological inhibitors for S. sclerotiorum, which provided a theoretical reference for the subsequent development of green and environmentally friendly pesticides. Pathogen and chemicals S. sclerotiorum (JZJL-13) used in this study was obtained from the Key Laboratory of Crop Disease Monitoring and Safety Control, Huazhong Agricultural University.Fungal strains were cultured on potato-dextrose-agar (PDA) medium (200 g potato, 20 g dextrose, and 15 g agar in 1 L water), and the corresponding liquid medium was potato-dextrose-broth (PDB) medium.Sclerotia were activated at first, and mycelial plugs cut with the same radius were placed into a new PDA and incubated at 23 °C for 48 h to obtain new mycelia of S. sclerotiorum.Maleic acid (ID: 392248) and malonic acid (ID:844) used in this study were purchased from Aladdin Reagent limited-liability company in Shanghai. Antifungal activity assay To estimate the activity of S. sclerotiorum responding to the two acids, the half-maximal effective concentrations (EC 50 ) were determined according to Jia et al. (2019).Different gradient concentrations of maleic acid (2, 4, 6, 8, 10 mg/mL) and malonic acid (0.8, 1, 1.6, 2.4, 3.2 mg/ mL) were set to measure the mycelial growth of S. sclerotiorum.The prepared mycelial plugs (6 mm in diameter) of 2-day-old colonies in PDA media were transferred to PDA media with thegradient concentrations of maleic acid and malonic acid.Culturing S. sclerotiorum on PDA with no acid addition was the control treatment.The colony diameters of mycelial agar in the petri dish were determined after incubation in darkness at 23 °C for 48 h.According to Cheng et al. (2019), the inhibition ratio was defined as follows: "d control " was the mycelial colony diameter of S. sclerotiorum in the PDA medium, and "d treated " was the colony diameter of S. sclerotiorum in the PDA medium with maleic acid or malonic acid.Each treatment was repeated four times. The "logit" method was utilized to proceed with nonlinear data fitting.The values in the X-axis refer to the gradient concentrations of the acid, and the values in the Y-axis refer inhibition ratios of the acid (Sebaugh 2011).Based on the results of EC 50 and low phytotoxicity, an equal concentration of 2 mg/mL was selected for the following study.The fresh mycelial agar was placed on the center of the PDA medium with four treatments: the control, 2 mg/mL of maleic acid, 2 mg/mL of malonic acid, 2 mg/mL of maleic acid + 2 mg/mL of malonic acid (the same as below).Each treatment was preformed with four replicates. Estimation of pathogenicity on detached leaves of oilseed rape The oilseed rape selected in this experiment was Brassica napus L. 
cultivar Zhongshuang No.9 from the Oil Crops Research Institute, Chinese Academy of Agricultural Sciences.Detached leaves of oilseed rape were picked from the eco-agriculture base (30°28′26''N, 114°2′15''E), Huazhong Agricultural University, Wuhan, China.Mycelial plugs (6 mm in diameter) with different treatments were inoculated onto the detached oilseed leaves with wounds pretreated with a sterile knife, and the diameters of wounds on the leaves were the same size as the prepared mycelial plugs.The colony diameters of the detached leaves were measured by cross method 36 h later to examine the pathogenicity.Each treatment was repeated four times. Sclerotial formation determination To estimate the effect of maleic acid and malonic acid on sclerotial formation, the numbers and weight of S. sclerotiorum in treatments of the two acids were determined.Similarly, mycelial plugs were transferred to fresh PDA media with different treatments.Each petri dish was incubated at 23 °C in darkness for 15 d.Then, the number of sclerotia on each PDA plate was recorded, and the sclerotia were collected and weighed.Each treatment was repeated four times. Transmission electron microscopy (TEM) analysis To study the subcellular effect that maleic acid and malonic acid exerted on S. sclerotiorum, TEM observation was considered a priority to observe the ultrastructure of sclerotia, and the operational process was based on Cheng et al. (2019).After collecting sclerotia from the PDA medium with different treatments, sclerotia were fixed in a solution of 2.5% glutaraldehyde in 100 mM phosphate buffer (pH = 7.2) at 4 °C for 4 h.After that, phosphate buffer was used to rinse samples for 4 h.Next, two-hour required for the rinsed samples immersed in 1% osmium tetroxide with the same buffer at 4 °C.Then, the samples were dehydrated in graded acetone series for 4 h, completely immersing them in a mixed solution with graded acetone and resin for 4 d.Ultimately, a Leica Ultracut UCT ultramicrotome with a diamond knife was utilized to obtain ultra-thin Sects.(50 nm) of the samples.The samples were finally observed by an electron microscope (TEM, H-7650, Hitachi, Japan). Oxalic acid secretion and acid production determination The OA secretion of S. sclerotiorum in the PDB media was determined according to Jia et al. (2019).The 2-dayold mycelial agars were transferred to PDB media with different treatments and were cultured in the dark at 23℃ for 72 h.Each PDB medium had 5 mycelial agars.Afterwards, the PDB solution was centrifuged (10,000 × g, 15 min) to obtain the supernatant.Subsequently, the determination of OA content followed the colorimetric method.0.4 mL supernatant was moved to a colorimetric tube with 0.1 mL 0.5 mg/mL Fe 3+ standard solutions (FeCl 3 ), 1 mL KCl-HCl solution (3.7 g/L KCl and 5.4 g/L HCl, pH 2.0) and 0.06 mL 0.5% sulfosalicylic acid (w/v).After 20 min, the absorbance at 510 nm was read from a UV-5200 ultraviolet spectrophotometer.The acid of the liquid was determined by the Seven2Go pH meter S2-Std-Kit (Cheng et al. 2019).The pH in the PDB medium was measured to investigate the change in acid production in mycelium due to treatments.Each treatment was repeated four times. RNA isolation and quantitative real-time PCR (qRT-PCR) analysis The determination of the relevant gene expression levels was based on Xu et al. 
(2020).This experiment included two main steps: acquisition of mycelial samples and specific determination of the gene expression process.To obtain mycelium samples, mycelial plugs were inoculated onto sterilized cellophane disks on PDA plates for 48 h at 23 °C.After that, the mycelia on the cellophane were collected and ground with high-throughput tissue grinding machines (Jingxin Corporation, Shanghai). The determination process was mainly divided into three parts, including extraction of RNA, reverse transcription of RNA, and quantitative PCR detection.Mycelial RNA was extracted according to NI-Sclerotinia sclerotiorum RNA Reagent (Newbio Industry, Tianjin, China), and RNA samples were reversely transcribed by EasyScript One-Step gDNA Removal and cDNA Synthesis Super-Mix (TransGen Biotech, Beijing) to obtain cDNA.Quantitative PCR detection was performed using the ABI Q6 Flex system (Applied Biosystems, USA).Target primer sequences were listed in Table S1 (Supplementary).The reference gene, β-tublin, was used to normalize the transcript levels of target genes.Each qRT-PCR was repeated three times and each biological replicate had two technical replicates.The 2 − ΔΔCT method was applied for determining the expression of target genes. Statistical analysis All data analyses were performed with SPSS software version 22.0.Data preprocessing included the test of Normality test and homogeneity of variance.After that, one-way analysis of variance (ANOVA) was adopted for a series of experiments including antifungal sensitivity assay, estimation of pathogenicity on detached leaves of oilseed rape, sclerotial formation determination, OA secretion determination, RNA isolation, and quantitative real-time PCR (qRT-PCR) analysis.Duncan's test was to compare the means of the treatments.When p < 0.05, the result was considered significant. Fig. 1 Fig. 1 Effects of maleic acid and malonic acid (2 mg/mL) on the number and weight of sclerotia.Data for each column were the per number and weight of sclerotia in one PDA plate.Data were analyzed by one-way ANOVA and shown as mean ± standard error (SE).Different letters indicated statistically significant differences among the different treatments (p < 0.05) by Duncan's tests Fig. 2 Fig.2Effects of maleic acid and malonic acid on ultrastructural changes of sclerotia.Representative TEM images of sclerotia sections selected from four specimens in each treatment: A The control; B 2 mg/mL maleic acid; More particles were formed in sclerotia and different contents reduced.C 2 mg/mL malonic acid; Fewer and bigger particles were formed and also the contents degraded.D 2 mg/mL maleic acid + 2 mg/mL malonic acid.(I: bar = 2 μm; II: bar = 1 μm).The cell wall became thinner in treatments of the acids, compared with the control.Yellow circles were to mark the changes of contents in sclerotia.The thickness of the cell wall was indicated via yellow arrows Fig. 3 Fig.3Effect of maleic acid and malonic acid on OA secretion of S. sclerotiorum.Data were analyzed by one-way ANOVA and shown as mean value ± standard error (SE).The values with the same letter were not significantly different at p < 0.05 according to Duncan's tests Fig. 4 Fig. 4Relative expression levels of six target genes of S. sclerotiorum.S. 
sclerotiorum was incubated for 48 h in PDA medium containing different treatments, and mycelia was collected for qRT-PCR analysis.The concentrations of maleic acid, as well as malonic acid, were 2 mg/mL.Data were analyzed by one-way ANOVA and shown as mean value ± standard error (SE).Bars with different letters are significantly different (p < 0.05) Fig. 6 Fig. 6 Role of Maleic Acid in the TCA Cycle Enhances Plant Resistance.As revealed by the up-regulated KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway, several metabolic pathways contribute to enhancing plant resistance, with maleic acid participating in some of them such as the TCA cycle
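As a brief, hedged illustration of the logit-based dose-response fitting described in the antifungal activity assay above, the sketch below regresses the logit of the inhibition ratio on the logarithm of concentration and solves for the concentration giving 50% inhibition. The concentration and inhibition values shown are hypothetical placeholders, not measurements from this study, and the exact fitting routine used originally may differ in detail.

```python
import numpy as np

def ec50_logit(concentrations, inhibition_ratios):
    """Estimate EC50 by linear regression of logit(inhibition) on log10(concentration)."""
    x = np.log10(np.asarray(concentrations, dtype=float))
    p = np.clip(np.asarray(inhibition_ratios, dtype=float), 1e-6, 1 - 1e-6)
    y = np.log(p / (1.0 - p))               # logit transform of the inhibition ratio
    slope, intercept = np.polyfit(x, y, 1)  # straight-line fit on the transformed scale
    return 10 ** (-intercept / slope)       # concentration at logit = 0, i.e. 50% inhibition

# Hypothetical dose-response data (mg/mL, proportion of growth inhibited)
print(ec50_logit([2, 4, 6, 8, 10], [0.42, 0.58, 0.68, 0.76, 0.82]))
```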
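Likewise, the 2^−ΔΔCT calculation named in the qRT-PCR methods can be written out explicitly. This is a minimal sketch; the CT values below are hypothetical placeholders, with β-tubulin serving as the reference gene as in the methods.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCT method.

    dCT  = CT(target) - CT(reference gene) within each sample,
    ddCT = dCT(treated) - dCT(control),
    fold change = 2 ** (-ddCT).
    """
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical CT values for one target gene and the beta-tubulin reference
print(fold_change_ddct(24.0, 19.9, 23.0, 19.9))  # ddCT = 1.0 -> fold change = 0.5 (2-fold down)
```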
2023-11-13T14:37:25.060Z
2023-11-13T00:00:00.000
{ "year": 2023, "sha1": "0924936ba1af405e91ddef91dcceca520b334f0d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "0924936ba1af405e91ddef91dcceca520b334f0d", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
255805702
pes2o/s2orc
v3-fos-license
Genome-wide analyses of the mung bean NAC gene family reveals orthologs, co-expression networking and expression profiling under abiotic and biotic stresses Mung bean is a short-duration and essential food crop owing to its cash prominence in Asia. Mung bean seeds are rich in protein, fiber, antioxidants, and phytonutrients. The NAC transcription factors (TFs) family is a large plant-specific family, participating in tissue development regulation and abiotic and biotic stresses. In this study, we perform genome-wide comparisons of VrNAC with their homologs from Arabidopsis. We identified 81 NAC transcription factors (TFs) in mung bean genome and named as per their chromosome location. A phylogenetic analysis revealed that VrNACs are broadly distributed in nine groups. Moreover, we identified 20 conserved motifs across the VrNACs highlighting their roles in different biological process. Based on the gene structure of the putative VrNAC and segmental duplication events might be playing a vital role in the expansion of mung bean genome. A comparative phylogenetic analysis of mung bean NAC together with homologs from Arabidopsis allowed us to classify NAC genes into 13 groups, each containing several orthologs and paralogs. Gene ontology (GO) analysis categorized the VrNACs into biological process, cellular components and molecular functions, explaining the functions in different plant physiology processes. A gene co-expression network analysis identified 173 genes involved in the transcriptional network of putative VrNAC genes. We also investigated how miRNAs potentially target VrNACs and shape their interactions with proteins. VrNAC1.4 (Vradi01g03390.1) was targeted by the Vra-miR165 family, including 9 miRNAs. Vra-miR165 contributes to leaf development and drought tolerance. We also performed qRT-PCR on 22 randomly selected VrNAC genes to assess their expression patterns in the NM-98 genotype, widely known for being tolerant to drought and bacterial leaf spot disease. This genome-wide investigation of VrNACs provides a unique resource for further detailed investigations aimed at predicting orthologs functions and what role the play under abiotic and biotic stress, with the ultimate aim to improve mung bean production under diverse environmental conditions. Background Plants produce edible organic substances from simple inorganic molecules that can help feed the growing human and livestock populations. However, plants are subject to environmental extremes such as light, temperature, nutrients deficiency, and biological challenges, e.g., pests and pathogens [1] that can severely impact yield and quality of plant products. The evolutionary arm-race between plants and environmental challenges has enabled plants to adapt stressful conditions allowing them to persist under various environmental conditions. Crop plants are facing a great range of environmental stresses, many of which are likely to occur concurrently. In order to overcome such stresses, plants encode a wide range of stress-responsive genes, most of which are known from detailed work in model plants such as Arabidopsis and rice [2]. Transcription factors (TFs) are genes that act as pivotal regulators of plant responses to abiotic stresses such as cold, drought, and salt [3]. The TFs are interacting with cis-elements in the promoter region of different stress-related genes, and TFs have been found to be acting as molecular switches for the transcription of their target genes. 
Approximately, ~ 7% of a typical plant genome encrypts putative TFs and these habitually belong to large gene families [1]. One such large family of transcription factors is the NAC family of TFs. These genes are characterized by the presence of a NAC domain and the first example of such gene was first reported in petunia [4]. The NAC domain is comprised of an N-terminal DNA-binding domain, a nuclear localization signal (NLS), and a C-terminal transcriptional activation domain (AD). The N-terminal region consists of a160 amino acid long conserved DNA domain, which can be further subdivided into five subdomains (A-E) [5] which are arranged in the order A > C > D > B > E. The subdomains A and C have been revealed to be important for protein structure stabilization [5]. The highly variable C-terminal region interacts with other transcription factors and likely plays role in different developmental functions [6]. NAC TFs have numerous functions in plants, i.e., the formation of plant shoots apical meristems [7], nutrient transfer, control of the cell cycle in the senescence process [8], plant stress response [7], regulation of plant innate immunity [9], and hormone signalling [10]. For example, OsNAC4 is a key positive regulator of programmed cell death in plants [11] and overexpression of OsNAC6 resulted in increased tolerance to blast disease in rice [12]. A number of NAC proteins may positively regulate plant defence responses by activating pathogenesis-related (PR) genes, inducing genes involved in mediating the hypersensitive response (HR) and programmed cell death at the infection site in resistant plant species [13]. HvNAC6 was found to be induced under resistance conditions in barley in response to Pseudomonas syringae, Botrytis cinerea and Alternaria brassicola infections [14,15]. Likewise, understanding the complex mechanism of drought and salinity tolerance is imperative for future agriculture production and, interestingly, numerous NAC genes have been identified to be involved in plant response to drought and salinity stress. In transgenic rice, OsNAC2 and OsNAC6 and OsNAC10 genes have been shown to enhance drought and salinity tolerance [16]. Several genes are induced by drought and salt stresses in tolerant cultivars across a range of species, including TaNAC69 and TaNAC6 in wheat [17,18] and CarNAC3 in chickpea [19]. NAC genes have been identified from 166 species, e.g., there are 105 NAC genes in Arabidopsis [20], 151 in rice [21], 142 in grapevine [22], 163 in Populus [23], 113 in Japanese apricot [24], 63 in coffee [25], and 152 in soybean genome [26]. However, NAC genes have thus far not been well studied in mung bean. Mung bean (Vigna radiata L.), a well-known pulse crop, belongs to the subfamily Papilionoideae of Fabaceae. It originated in Southeast Asia, where its progenitors occur wildly. This edible legume crop is being grown on more than 6 million ha worldwide (about 8.5% of the global pulse area) and consumed in large quantities across Asia [27]. Mung bean is widely cultivated in Asia, in dry regions of southern Europe, and in the warmer parts of Canada and the United States owing to the fact that this is a low-input crop, relatively droughttolerant, and which has a short growth cycle (< 80 days) [28]. Mung bean is a cheaper source of carbohydrates, high-quality protein, folate, and iron compared to most other legumes. 
Due to the nutritional value and health benefits of mung bean for people in developing countries, there has been a large interest in developing genetic and genomic tools to enhance mung bean breeding [29]. The availability of a draft genome sequence has significantly enhanced the progress of downstream analyses using the mung bean genome sequences [30]. The NAC gene family has a diverse role in plant development and physiology, and it is, therefore, crucial to further investigate this gene family in mung bean. Here, we analyzed the chromosome localization, conserved motifs, genetic structure, and Keywords: Mung bean, NAC, Transcription factor, Phylogeny, Co-expression network, Gene ontology, Biological process phylogenetic relationships, based on sequence data from 81 VrNAC TFs. Our results will facilitate follow-up studies aimed at functionally characterizing the NAC genes in mung bean. Identification of NAC family members in mung bean The amino acid sequences of putative VrNAC TFs, having the conserved DNA binding domain of NAC, were used as an input in BLAST searches for mung bean. Therefore, BLAST results identified 81 putative VrNAC genes; each gene was interpreted as VrNAC1.1 to VrNAC11.3 according to their respective positions on the chromosomes in the mung bean genome. The putative VrNAC genes encoded predicted proteins, ranging from 106 to 973 AA (amino acids) with isoelectric point (pI) from 4.39 to 10.74, and molecular weights, ranging from 9168.2 to 106,654 Da. However, comprehensive information of all identified VrNAC proteins, including their accession numbers, gene length, and chromosome location is shown in Table S1. Chromosomal distribution of VrNAC genes and synteny analysis To examine the VrNACs distribution on different chromosomes, the genomic sequences of VrNACs were used as a query to search against the mung bean genome to pinpoint the VrNACs distribution on chromosomes. The physical map positions revealed mapping of VrNACs on the mung bean chromosomes as per the location from short to long arm telomere (Fig. 1). Although, the majority of the chromosomes harbored VrNAC genes, the distribution was highly uneven, with a particularly high enrichment seen on chromosome 7 with 12 VrNAC genes; whereas chromosome 9 did not harbor any VrNAC genes at all. Other chromosomes with many VrNAC genes include chromosome 1 with 10 and chromosome 5 with 9 and chromosome 2 with 8 VrNAC genes. Chromosomes 3, 10, and 11 only harbored 3 VrNAC genes per chromosome. Duplication events play a vital role in plant evolution, such as tandem and segmental duplications are important processes resulting in genome expansion and increased complexity. We were able to infer duplication events in mung bean and Arabidopsis based on our phylogenetic analyses ( Fig. 1; Fig. 2), providing important information regarding the function of VrNACs as orthologs, serve as the main source of functional predictions of genes through comparative analysis. Several segmental duplication events were identified in both the mung bean and Arabidopsis genomes, with tandem and segmental duplication events being more prominent in Arabidopsis, an observation that has already been described in earlier studies. However, no tandem duplication events were observed in this study. Duplication patterns were observed for a total of 15 segmental duplication pairs and suggest that segmental duplication events have played a vital role in the amplification of VrNACs. 
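As an aside on the identification step above, the per-protein length, molecular weight, and isoelectric point summaries (obtained with the Expasy tools in the original workflow) can be approximated offline. The following is a minimal sketch using Biopython; the FASTA file name is a hypothetical placeholder and standard amino-acid letters are assumed. The ortholog relationships recovered from these duplication patterns are described next.

```python
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# "vrnac_proteins.fasta" is a hypothetical file holding the predicted VrNAC protein sequences
for record in SeqIO.parse("vrnac_proteins.fasta", "fasta"):
    seq = str(record.seq).replace("*", "")          # strip stop-codon symbols if present
    props = ProteinAnalysis(seq)                    # assumes standard amino-acid letters only
    print(record.id,
          len(seq),                                 # length in amino acids
          round(props.molecular_weight(), 1),       # molecular weight in Da
          round(props.isoelectric_point(), 2))      # theoretical pI
```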
Different orthologs combinations were identified between mung bean and Arabidopsis and overall 11 different orthologs were identified, likely to have similar functions in mung bean. The findings from our analyses of NAC gene orthologs in Arabidopsis and mung bean help illuminate how function may be assigned through the use of phylogenetic relationships between orthologous genes. However, the duplication events we inferred highlight segmental duplication as a driving force in the evolution of VrNACs and may thus be associated with future applications of the genes from the VrNAC family. Gene structural analysis During the evolution of multigene families, newly evolved copies are often evolving new gene functions, and this is also reflected in the diversification of gene structures in the VrNAC family. To further understand the structural diversity of different VrNAC genes, exon/intron sequences of all VrNACs were used for to assess their structural organization using GSDS2.0 (http:// gsds. gao-lab. org/) (Fig. 3). The results from the VrNAC gene structures show that the number of introns ranges from one to five, with one NAC gene (Vradi0007s01690.1) completely lacking introns. Phylogenetic classification and protein motif analysis of VrNACs To explore the phylogenetic relationships among the 81 VrNACs, we built a phylogenetic tree based on the VrNAC proteins using the Neighbour-Joining (NJ) method. This method was used as sequence lengths of the individual VrNACs, varied dramatically between different genes. The results indicate that the VrNAC family can be classified into 9 subfamilies, hereafter referred to as Group A to Group I (Fig. 4). Group A and Group B were the largest groups with 14 and 13 VrNAC proteins, respectively, followed by groups Group D and Group G with 4 and 5 proteins, respectively. These results suggest that VrNAC proteins may have played a critical role in the mung bean genome expansion. We also analyzed putative conserved motifs in the VrNACs using MEME (https:// meme-suite. org/ meme/) to further study the evolution of the VrNAC proteins. We identified 20 conserved motifs (1-20) from the VrNAC proteins. As expected, closely related genes, belonging to the same phylogenetic clade, mostly had similar motif compositions and only minor differences were observed within clades, suggesting that genes within clades are true paralogs. Based on the conservation of various domains, group A and group H contained a maximum of 11 conserved motifs, followed by group B, which contained 10 conserved motifs, group C contained 9 conserved motifs, group D, E, F, G, and H have the same 7 conserved motifs, i.e., 1, 2, 3, 4, 6, 18 and 20 motifs showed the high conservation of these motifs within clades. Remarkably, conserved motifs in the N-terminal of the VrNAC proteins are highly conserved for DNA-binding and similar motif compositions were shared within the clades. Such motifs conservation among the proteins suggests functional Phylogenetic analysis and conserved domain analysis of the VrNAC proteins from mung mean. A All VrNAC TFs are categorized into nine groups based on their protein sequences. B Conserved motifs of VrNAC TFs proteins as per the phylogenetic relationship. The conserved motifs were determined using MEME and each conserved motif is indicated by a different color; likewise, the length of each motif is displayed correspondingly similarity and conserved motifs may thus indicate potential functional sites and that genes participate in inducing similar downstream functions. 
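The exon/intron organization summarized in the gene structure analysis above can be tallied directly from a genome annotation. The sketch below is a hedged illustration that counts exon features per transcript in a GFF3 file and reports intron number as exons minus one; the file name and the attribute keys are assumptions that depend on the annotation release used.

```python
from collections import defaultdict

exon_counts = defaultdict(int)

# "mungbean_annotation.gff3" is a hypothetical annotation file
with open("mungbean_annotation.gff3") as gff:
    for line in gff:
        if line.startswith("#"):
            continue
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 9 or fields[2] != "exon":
            continue
        attributes = dict(item.split("=", 1) for item in fields[8].split(";") if "=" in item)
        parent = attributes.get("Parent", "unknown")   # mRNA/transcript identifier
        exon_counts[parent] += 1

for transcript, n_exons in sorted(exon_counts.items()):
    print(transcript, "introns:", max(n_exons - 1, 0))
```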
Comparative phylogenetic analysis of NAC between mung bean and Arabidopsis Exploring the comparative evolutionary relationship among different TFs between different plant species, a neighbor-joining phylogenetic tree was built from mung bean and Arabidopsis NAC proteins (Fig. 5). All members of NAC TFs from Arabidopsis and mung bean were classified into 13 subgroups, designated as NAC-I to NAC-XIII, respectively. NAC-IX constituted the prime clade with 36 VrNAC members followed by NAC-X which contains 33 VrNAC protein. The smallest clades, NAC-I, NAC-II and NAC-V, contained no genes from mung bean. The core objective of this comparative phylogenetic analysis was to detect putative orthologous genes between mung bean and Arabidopsis NACs. A key component of comparative genomics is to track the presence, structural characteristics, and functional similarities of orthologous genes across multiple genomes. Based on the inferred phylogenetic tree, VrNAC genes were comparatively assessed to their Arabidopsis orthologs. Different orthologous genes are present in the mung bean genome. For example, NAC-IX and NAC-XI have two orthologs, NAC-X have three orthologs while Group-XII and Group-XIII have one gene orthologous with Arabidopsis. Evolution collinearity of NAC genes between mung bean and other plant species We employed collinearity analysis to identify homologous genes and evolutionary relationships among genes. To explore the evolutionary relationship of VrNACs with other plant species, we performed comprehensive collinearity analyses using rice and tomato ( Fig. 6; Table S2). Tomato has many health-promoting compounds A comparative phylogenetic analysis based on the mung bean and Arabidopsis protein sequences of NACs. The Neighbor-joining (NJ) tree was constructed using Clustal Omega (https:// www. ebi. ac. uk/ Tools/ msa/ clust alo/) with the 1000 bootstrap replicates to assess tree reliability. NAC TFs from mung bean and Arabidopsis are classified into 13 different groups, highlighted by different colors in the tree including vitamins, carotenoids and phenolic compounds, and exhibits 58 homologous pairs with mung bean. Rice is a monocot, belonging to the Poaceae family, often serves as a secondary model plant together with Arabidopsis and we identified 16 homologous pairs between rice and mung bean. The results indicate that NACs evolved from a common ancestor and have diversified across different plant species. Cis-element analysis of the VrNAC gene family Cis-acting regulatory elements are the binding sites for a particular TF which determine the initiation or repression of transcription. As such, these cis-regulatory elements are the imperative gene structures in a genome. In the current study, we identified putative cis-regulatory elements to further inspect the probable functions of different NAC family genes in mung bean. This analysis was done using the PlantCARE database based on the 1 kb sequences immediately upstream of the VrNACs transcription start site. A total of 985 cis-regulatory elements, associated with different processes, i.e., abiotic and biotic stresses, developmental process, and light responsiveness, etc., were identified in the promoter regions of the VrNAC (Fig. 7; Table S3). Numerous cis-elements corresponding to tissue-specific expression, including root-specific, seed-specific, endosperm-specific, and meristem-specific expression were present in the VrNAC genes promoters. 
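For the promoter scan described above, the 1 kb regions upstream of each transcription start site must first be pulled from the genome assembly before submission to PlantCARE. The sketch below is illustrative only; the genome file name, coordinates, and the simple strand handling are assumptions rather than the exact pipeline used. Further classes of elements detected in these same 1 kb regions are noted below.

```python
from Bio import SeqIO

# "mungbean_genome.fasta" is a hypothetical assembly file keyed by chromosome/scaffold ID
genome = SeqIO.to_dict(SeqIO.parse("mungbean_genome.fasta", "fasta"))

def upstream_1kb(chrom, gene_start, gene_end, strand, length=1000):
    """Return the region immediately upstream of a gene (1-based, inclusive coordinates)."""
    seq = genome[chrom].seq
    if strand == "+":
        region = seq[max(0, gene_start - 1 - length):gene_start - 1]
    else:
        region = seq[gene_end:gene_end + length].reverse_complement()
    return str(region)

# Hypothetical coordinates for a single VrNAC gene
promoter = upstream_1kb("Vr01", 123456, 125000, "+")
print(len(promoter), promoter[:60])
```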
Similarly, several light-responsive cis-elements were revealed, broadly distributed in the VrNACs promoter regions. Particularly, elements important for response to abiotic stress, including cold and dehydration-responsive elements, drought-responsive, low-temperature element, wound responsive, defense and stress-responsive elements were detected. Given the results, we can speculate that VrNAC TFs may counter different abiotic and biotic stresses and might have many prospective functions in enhancing the stress resistance in mung bean. Comprehensive miRNA targeting VrNACs Micro-RNAs, miRNAs, are a global gene regulatory mechanism that help control gene expression under a large number of scenarios and miRNA-mediated gene regulation evolved more than 425 million years ago in the plant kingdom [31]. To further understand miRNA regulatory mechanisms, we identified putative miR-NAs that target VrNACs (Table S4). The most highly targeted genes were VrNAC1.4 (Vradi01g03390.1), Vradi0007s01630.1, and Vradi07g17120.1 which contain 9, 6, and 5 miRNAs, while the least targeted genes, with one miRNA, are listed in the Table S4. Vra-miR164a, b, c, and d, were found to be the most abundant miRNAs collectively targeting 25 VrNAC genes. It's noteworthy that Vra-miR164 has been shown to be involved in regulating drought and salt tolerance [32,33]. Gene ontology and co-expression regulation enrichment analysis Gene ontology (GO) analyses classified the 81 putative VrNAC TFs into three gene ontology categories, i.e., 1) biological process (47), 2) cellular component (6), and 3) molecular function (5), based on enrichments with p values < 0.05 (Table S5). A few important biological processes related to these GO terms are aromatic compound biosynthetic process (GO:0019438), organic cyclic compound biosynthetic process (GO:1901362), heterocycle biosynthetic process (GO:0018130), regulation of metabolic process (GO:0019222), and RNA metabolic process (GO:0016070) (Fig. 8). Given enrichment of VrNACs in different GO terms, VrNACs can be inferred to play important roles across many different biological processes acting to maintain plant homeostasis in response to environmental influences. Gene co-expression networks can be used to identify important regulatory genes and the regulatory roles of genes can be further investigated by the interaction between transcription factors (TFs) and their target genes. The complex network of genes and their positive and negative co-expression patterns thus control different biological and physiological mechanisms in the plant and is therefore important for plant responses to different environmental conditions. We constructed a co-expression network for VrNACs based on a Pearson's correlation coefficient (PCC) threshold of 0.4 [34], and we identified 173 genes in mung bean that are involved in the network together with the VrNACs (Fig. 9; Table S6). We used Cytoscape (https:// cytos cape. org/) to display the resulting networks of VrNACs and their co-expressed genes. 37 VrNAC genes were included in the co-expression network with 100 other mung bean genes that may be participating in different regulatory Fig. 8 GO clustering exhibiting the involvement of VrNAC genes in different biological processes, participating in different mechanisms. The red to yellow color ribbon denote low to high enrichment patterns of VrNAC genes in different biological processes. The plot was produced using R (https:// www.r-proje ct. org) mechanisms. 
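A minimal sketch of the co-expression filtering step described above, assuming a hypothetical expression matrix with samples as rows and genes as columns: gene pairs whose Pearson correlation coefficient reaches 0.4 in absolute value are kept as edges (retaining both positive and negative co-expression, an assumption consistent with the patterns discussed here), and the resulting graph is exported for visualization in Cytoscape. Specific co-expression partners recovered from the full network are given next.

```python
import pandas as pd
import networkx as nx

# "expression_matrix.csv" is a hypothetical file: rows = samples, columns = genes
expr = pd.read_csv("expression_matrix.csv", index_col=0)

corr = expr.corr(method="pearson")              # gene-by-gene PCC matrix
network = nx.Graph()
genes = list(corr.columns)
for i, g1 in enumerate(genes):
    for g2 in genes[i + 1:]:
        pcc = float(corr.loc[g1, g2])
        if abs(pcc) >= 0.4:                     # threshold used for the VrNAC network
            network.add_edge(g1, g2, weight=round(pcc, 3))

nx.write_graphml(network, "vrnac_coexpression.graphml")  # file can be imported into Cytoscape
```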
For example, VrNAC2.6 is co-expressed with four genes, including three genes that belong to VrNACs, i.e., VrNAC5.1(A0A1S3U6Y9), VrNAC6.6 (A0A1S3UCB4), and VrNAC7.12 (A0A1S3TPG8). Moreover, VrNAC2.2 (A0A1S3THY5) is co-expressed with 39 different genes, including A0A1S3U176 (Xylulose 5-phosphate/phosphate translocator, chloroplastic) and A0A1S3U2L1 (Ergosterol biosynthetic protein 28), A0A1S3U9Q2 (Vesicle transport protein), etc. Similarly, VrNAC8.4 is co-expressed with 13 genes, including A0A1S3VNL2 (Lipid phosphate phosphatase epsilon 2) and A0A3Q0EP37 (NAC domaincontaining protein 40), for regulating the different physiological functions in mung bean. Our network analysis hence reveals genes that are regulating different stress response mechanisms via positive or negative co-expression patterns under different environmental conditions to maintain plant homeostasis. Expression pattern of VrNACs under stresses The expression patterns of 22 different VrNAC genes were evaluated under both abiotic and biotic stresses (Fig. 10). Two weeks old mung bean leaves were inoculated with powdery mildew, bacterial leaf spot and fusarium wilt to assess the expression of the VrNAC genes in response to biotic stress. To assess the effects of abiotic stresses, leaves were exposed to drought and salinity stresses. The NM-98 mung bean variety used in these experiments is thought to be tolerant to drought stress and bacterial leaf spot. The results suggest that VrNACs appeared to be induced in response to the abiotic stress compared to biotic stresses (Fig. 10). We found that expression of VrNAC7.7 is induced under bacterial leaf spot, powdery mildew and drought stress early following the application of stress. The genes VrNAC3.2, VrNAC5.2, VrNAC5.6 are also induced under biotic stresses. Based on the observed expression patterns of the VrNAC genes, we conclude that VrNAC genes playing significant roles under both abiotic and biotic stresses through many different pathways. Discussion Mung bean, a legume crop, is an important food source and is consumed as a replacement for meat owing to its high protein content. In addition to providing a source of food for consumption, mung bean cultivation also has positive environmental effects, for instance by contributing to soil fertility through nitrogen fixation. Wholegenome sequences of many food legumes, such as pigeon pea [35], chickpea [36], and adzuki bean [37], have recently been released and such whole-genome sequencing data is a valuable resource for comparative and evolutionary studies. Transcription factors (TFs) are the major focus in biological research owing to the fact that TFs regulate the expression of downstream genes and play key roles in different pathways that regulate numerous biological processes in plants. Currently, a large number of TFs belong to different families, regulating drought, salinity, low temperature, hormonal, and pathogenic reactions in plants have been identified, and these transcription factors may thus be involved in different mechanisms allowing plants to handle different stresses. As an example, NAC TFs in plants are involved in plant development and senescence as well as in response to both abiotic and biotic stresses. The mung bean genome contains 81 predicted VrNAC genes, most of which to date have not been characterized in detail. The goal of the current study was to identify the orthologs of all mung bean NAC genes via comparative evolutionary relationship among NAC TFs from both mung bean and Arabidopsis. 
NAC TFs are known to be widely distributed in different plant species and have potential roles in regulating plant development, growth and stress responses [38]. The NAC TF family is one of the largest TFs families identified to date [39] and there are, for example, 117 NAC genes in Arabidopsis [11], 151 in rice [12], 79 in grape [13], 180 in apple [40], 152 in maize [41], 71 in chickpea [42], 96 in cassava [43], 185 in Asian pears [44], and 80 in tartary buckwheat [17]. Assessing the gene structure of the 81 VrNAC genes, we found that they contain between one to five introns. The intron numbers of VrNACs are different from those identified in other plants, such as in rice 0-16 [12], 0-8 in poplar [14], and 0-9 in cotton [45], respectively. Interestingly, identified stress-responsive NACs in chickpea, pigeon pea, and groundnut have two introns [46]. The diversity of gene structures and conserved motifs seen in mung bean NACs also indicate that these TFs are highly functionally diverse [47]. Gene duplication events are known to have an imperative role in the evolution and expansion of gene families and gene duplication of NAC TFs has been observed in many plant species [48]. We found 27 pairs of genes that exhibited evidence for duplication events among the 81 VrNACs and this may have contributed to the expansion of the NAC family in mung bean. As different proteins having similar sequences are predicted to have diversified functions, we analyzed the functions of the VrNAC TFs based on their placement in a phylogenetic tree of NAC proteins. In addition, we identified orthologous pairs of NACs TFs between mung bean and Arabidopsis based on protein sequence similarity. The comparative phylogenetic analysis of Arabidopsis and mung bean NAC proteins showed that NACs could be classified into 13 groups or clades, named NAC-I to NAC-XIII. The NAC-IX group constituted the largest clade with 36 NAC proteins. Group-IX contains 11 VrNAC which are largely involved in the formation of photoassimilate, a compound synthesized by assimilation under light-dependent reaction [49]. In this group, VrNAC1.6, an ortholog of AT3G17730, and VrNAC5.5, an ortholog of AT1G54330.1, are both involved in sugar transport through phloem sieve element cells in plants [50]. We also identified transcriptional activators, involved in the induction of abscisic acid (ABA) responsive genes, such as AT1G32510.1 and its homologous VrNAC7.4 [51]. Moreover, AT4G17980.1 and AT5G46590.1 trigger the ABA-inducible genes in response to dehydration and osmotic stresses that lead to stomatal closure to inhibit further water loss under dehydration conditions [52]. They act synergistically with ABF2, which acts as a positive component of glucose signal transduction [53,54]. However, AT2G27300.1 is found to be an ortholog of Vradi0425s00070.1, activated by proteolytic cleavage through intramembrane proteolysis (RIP), and induces GA-mediated salt-responsive suppression in seed germination and flowering via FLOWERING LOCUS T (FT) [55]. Among the 33 NACs in the NAC-X groups, 16 belong to VrNACs and 17 are representing the AtNAC proteins. In this group, we identified three orthologous groups of mung bean and Arabidopsis NACs. AT3G12910.1 is an ortholog of VrNAC5.6, which acts as a negative regulator of leaf senescence in Arabidopsis [51]. The AT1G79580.1 is an ortholog of VrNAC5.4, regulates root cap development that determines the growth trajectory and expedites the root penetration in the soil [55]. 
Likewise, VrNAC8.3 is an ortholog to AT1G71930.1, and AT1G71930.1 participate in the formation of vascular system by regulating the immature xylem vessel-specific genes expression [56]. In addition, AT1G71930.1 contributes also to secondary cell wall biosynthesis and modification and programmed cell death [57]. In group-XI, VrNAC7.6 is an ortholog of AT1G61110.1 which is associated with anther development and pollen production, essential for normal seed development [58]. Besides, AT1G01720.1 is an ortholog of VrNAC6.6, belongs to a large family of putative transcriptional activators with the NAC domain and known as ATAF1. ATAF1 is representing the uncharacterized plant-specific gene family encoding NAC transcription factors and is regulated in response to various external stimuli in Arabidopsis. It is also involved in resistance to the non-host biotrophic pathogen Blumeria graminis f. sp. hordei in Arabidopsis [59]. In group-XII, the AT1G56010.1 ortholog of VrNAC5.7, encodes a transcription factor participating in shoot apical meristem and auxin-mediated lateral root development [60]. The group-XIII contains 25 NAC proteins in which VrNAC5.3 grouped with AT1G76420.1. However, AT1G76420.1 regulates the shoot apical meristem formation during embryogenesis and organ separation [60]. Group IV has 17 NACs, including eight VrNACs and nine NACs from Arabidopsis. In this group, AT1G25580 is ortholog of VrNAC11.3, encoding Suppressor of Gamma Response 1 (SOG1), a putative transcription factor governing multiple responses to DNA damage [61]. The given results exhibited that putative mung bean NAC TFs orthologs with Arabidopsis NACs might have similar functions as previous studies approved the functional similarities among orthologs of different plant species [62]. Cis-acting regulatory elements (CAREs) are critical gene structures in eukaryote genomes. CAREs determine transcriptional initiation and are characterized by having conserved motifs between 5 to 20 nucleotides long that are found upstream of the transcriptional start codon. In this study, 133 drought stress-responsive CAREs were identified in the putative VrNAC genes. Important CAREs detected were light response elements that are the most abundant CARE identified for VrNACs. Additionally, 21 auxin-responsive elements and 20 stress and defense-responsive elements were also identified. Several other promoter elements were also identified that are known to play key roles in various plant development and stress responses such as seed regulation, endosperm expression, gibberellin acid, and salicylic acid. The existence of different cis-regulatory elements in the promoters receive special consideration as they provide insights into gene regulation and plant signaling under stress conditions. A co-expression network ensures the coordination expression of genes that have important roles in different cellular processes during plant development and differentiation. Deciphering the predicted co-expression networking needs extensive genomic approaches to elucidate the functional and structural characteristics of the VrNACs to improve the knowledge of mung bean genome. This novel genome information would ultimately be useful for breeders to target the respective genes and develop resilient plant genotypes. Conclusions Abiotic and biotic stress affects mung bean with varying severity at different growth stages, which can result in moderate to severe yield loss. 
We identified 81 NAC genes in mung bean and detailed analyses identified phylogenetic relationships among the genes, their chromosomal locations, gene structure, conserved motifs from their promotors and the expression profiles of the putative VrNAC genes. The comparative phylogenetic revealed clusters of VrNACs and identified orthologs in Arabidopsis as well as several paralogs. Collinearity analyses identified 58 and 26 VrNAC genes having homologous pairing with tomato and rice, respectively and highlights that these plant species evolved from a common ancestor. The research findings presented here provides a milestone for further research aimed at accelerating functional genomics and molecular breeding programs in mung bean. Having detailed knowledge of stress-responsive VrNACs in mung bean will be a highly valuable resource for future molecular breeding in food legumes. Sequence data retrieval The nucleotide as well as protein sequences of mung bean NAC (VrNAC) genes were searched for and retrieved from the plant transcription factor database (http:// plant tfdb. gao-lab. org/) (Table S7; Table S8). The sequence information of all VrNAC genes was cross-checked with National Center for Biotechnology Information (NCBI) database by Basic Local Alignment Search Tool (BLAST). The HMM file of the NAM domain (PF02365) was retrieved from the Pfam database and was used to assess the NAC family proteins in mung bean based on an e-value less than 0.001 using HMMER 3.0 (http:// hmmer. org/). All Arabidopsis thaliana protein sequences for comparative analysis were retrieved from the TAIR database (www. arabi dopsis. org/) (Table S9). We determined the molecular weights and isoelectric points for all identified proteins of VrNACs using the Expasy server (www. expasy. org/). Gene structure and chromosome location To illustrate the gene structure of all putative VrNACs, we used the Gene Structure Display Server 2.0 (http:// gsds. cbi. pku. edu. cn/). This tool uses coding sequences as input to generate gene structures. The location of each VrNAC gene was determine by their start and end position on each mung bean chromosome, and their graphical representation was made by TBtool (https://github. com/CJ-Chen/TBtools/releases). Syntenic evolutionary analysis Orthologous and paralogous NAC genes between Arabidopsis and mung bean were illustrated with different lines using Circos (http:// circos. ca/) [63]. Wholegenome sequences and annotation files of Arabidopsis, tomato and rice were downloaded from Phytozome v13.0 (https:// phyto zome-next. jgi. doe. gov/). Syntenic evolutionary analyses between different plant species were performed using the genome and annotation files as inputs to the Multiple Collinearity Scan Tool kit (MCS-canX, https:// github. com/ wyp11 25/ MCSca nX). The Dual Synteny Plot in MCScanX was employed to identify the homologous pairs between mung bean and two other plant species [64]. Collinearity maps between mung bean and Arabidopsis, tomato or rice were constructed in this way. Comparative phylogenetic analysis MEGA X (http:// www. megas oftwa re. net/) was used to construct the phylogenetic tree of the VrNACs genes, and the VrNACs genes were divided into different groups as per the phylogenetic tree nodes. All protein sequences of VrNACs were initially aligned by using ClustalW (http:// www. ebi. ac. uk/ clust alw/) with the default parameters. 
Moreover, to assess the support for the protein classification, a phylogenetic analysis of the NAC protein sequences was performed using 1000 bootstrapping values. For comparative studies, 81 and 73 NAC protein sequences obtained from mung bean and Arabidopsis, respectively, were used. Briefly, all 154 NAC proteins were aligned with the ClustalW program using a BLO-SUM protein weight matrix. All the other parameters were used with their default settings. A comparative phylogenetic tree was inferred from the aligned sequences using the Neighbor-Joining (NJ) scheme combined with bootstrapping to assess node robustness using MEGA X (https:// www. megas oftwa re. net/). The Poisson correction method was used for computing the evolutionary distances of different amino acids substitutions per site. Conserved motif analysis The classification of domains in the putative VrNACs was performed using the MEME suite 4.11.1 (http:// meme-suite. org/ tools/ meme). Several conserved motifs were identified using optimum search parameters with maximum number of motifs = 20; minimum sites per motif = 2). Putative promoter cis-acting element analysis The upstream 1 kb from the transcription start site was retrieved for all VrNAC genes using the NCBI database in order to perform promoter analysis via PlantCARE tool (http:// bioin forma tics. psb. ugent. be/ webto ols/ plant care/ html/). All identified cis-elements were then highlighted at their respective positions in the 1 kb region of each VrNAC gene upstream region. Co-expression network construction The co-expression data of VrNACs were downloaded from the STRING database (https:// string-db. org/ cgi/). Initially, we ranked correlated genes with a Pearson correlation coefficient (PCC) higher than 0.4. Afterward, we arranged the VrNACs and co-expressed genes as per the PCCs threshold value. The Cytoscape (https:// cytos cape. org/ index. html) software was used to construct the co-expression network between VrNACs and other coexpressed genes. miRNA prediction in VrNACs All miRNA sequences from mung bean were retrieved from the Plant microRNA Encyclopedia (PmiREN, https:// www. pmiren. com/). Mung bean genome sequence data representing the VrNACs were submitted as candidate genes to predict potential miRNAs by searching against the data retrieved from PmiREN using the psRNATarget Server with the default parameters (http:// plant grn. noble. org/ psRNA Target/). Plant materials and growth conditions Two weeks old mung bean seedlings, belonging to the variety NM-98, were selected for stress treatments. NM-98 has earlier been identified to be tolerant to drought and bacterial leaf spot in Pakistan and is currently cultivated in many regions in Pakistan [65]. Plants were grown in the greenhouse with three biological replications in pots and growth conditions were maintained at temperatures of 28/23 °C for day and night, respectively. Light intensity was maintained at 600 μmolm −2 s −1 with day and night cycles of 14/10 h. For the salt treatment, 2 weeks old seedlings were transferred to Yoshida's solution containing 100 mM NaCl. For the drought stress treatment, irrigation was halted after germination, and seedlings were collected for RNA extraction after 4 weeks. In addition to the abiotic stresses, we also inoculated the mung bean leaves with powdery mildew, Fusarium and bacterial leaf spot. Samples were collected in three biological replicates 3 hours after inoculations. 
Plants grown under normal conditions and without any stress treatments were considered as control samples for comparisons. All collected samples were stored at − 80 °C prior to the next step. All experiments performed in this study
2023-01-15T15:18:36.660Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "5d3f24178c403b7b0771d9b972d6fbd17e5f6ca5", "oa_license": "CCBY", "oa_url": "https://bmcplantbiol.biomedcentral.com/counter/pdf/10.1186/s12870-022-03716-4", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "5d3f24178c403b7b0771d9b972d6fbd17e5f6ca5", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
266878940
pes2o/s2orc
v3-fos-license
Assessment of On - going tectonic deformation in the Goriganga River Basin, Eastern Kumaon Himalaya Using Geospatial Technology The Goriganga river basin lies in the Northeast Kumaon Himalaya and is found suitable for assessing active tectonics at different scales. In addition, this study focuses on the assessment of ongoing tectonic activity through morphotectonic measurement of the Goriganga river basin, which is an ideal location for such analysis and Goriganga river basin transects with three major domains of Himalaya ’ s lithotectonic structures viz., Tethys, Vaikrita, and Lesser Himalayan Domain. To realize this task, the ASTER Digital Elevation Model was used and found suitable to extract different morphotectonic indices such as Stream Length Gradient (SL), Hypsometric Integral (HI), Length of Overland Flow (Lg), Drainage Density (Dd) and Channel Sinuosity (Cs). Results of these important indices, including SL (18 - 4737) HI (0.26 - 0.57) INTRODUCTION The tectonic and surface processes work in tandem to carve landscapes and landforms, which indeed bear an imprint of these processes' relative timing and magnitude.The repercussions of tectonic uplift on surface processes, viz., weathering and erosion, have long been conceded, while the effects of weathering and erosion on tectonics, have invoked attention only in recent years (Willett et al., 2006;Farooq et. al., 2015;Khan et. al., 2023).Active tectonics is significantly helpful in assessing and analyzing the deformation of the earth's crust on a time scale, significant to human society (Keller and Pinter, 2002;Omar and Mosar, 2019).Active tectonics establishes a critical relationship between tectonics and geomorphological processes that shape the earth's surface (Burbank and Anderson, 2011;Taib et. al., 2023).Further, the impact of such processes is measured by computing geometry, topography, and shape of the earth's surface and the temporal change thereof (Turner, 2006).It was deciphered from basin morphometric indices, which are crucial to record the presence of faults, thrusts, and shear zones, active ruptures, displaced landforms, and influence on landform evolution (Wallace, 1990).The presence of active faulting on landscapes may be displayed by the offset of river channels, forming lakes, and developing meanders (Tricart, 1974).Further, the effects of active faulting across a river can be manifested in the form of alignment of drainage patterns along faulting and sometimes irregular (Twidale, 1971).In tectonically active regions, drainage is used as a proxy to cognize evolution of landscape (Goldsworthy and Jackson, 2000).It is evident that Himalayan region is al., 2017, Valdiya 1976Valdiya , 1992Valdiya , 2000)).Many workers have postulated that the geomorphic evolution of Himalayan Thrust Belts is operated under the control of external factors including tectonic uplift, weathering, and erosion etc. 
(Bookhagen and Burbank, 2006; Joshi and Kotlia, 2015; Pérez-Peña, Azor, Azanón, and Keller, 2010; Pazzaglia, 2013; Kothyari et al., 2008; Valdiya, 1993). The Goriganga River basin lies across the Trans Himadri Fault (THF), Main Central Thrust (MCT), Berinag Thrust (BT), and North Almora Thrust (Pathak, Pant and Dharmwal, 2013; Valdiya, 1976, 2003), and it also transects many unknown faults, such as the Rauntis Fault. Therefore, it is a particularly ideal site for the quantification of geomorphic features that evolved due to tectonic activity along the recognized faults and thrusts. Geomorphic signs concomitant with faults and thrusts in the region are incised meanders, formation of lakes/paleolakes, offset of rivers, landslides, strath terraces and vertical gorges (Mohammad, 2014; Farooq et al., 2015; Farooq and Khan, 2017; Pathak et al., 2013; Bookhagen et al., 2013; Kothyari et al., 2020), together with the impact of land use and land cover on neo-tectonic activity (Khan et al., 2023). This study aimed to highlight the neo-tectonic deformation in the eastern Kumaon Himalaya, where the Goriganga river basin is a suitable site for such an assessment. Study area The Goriganga River starts its journey from the Milam Glacier, located south of Nanda Devi Peak in the Tethys Himalaya. The basin lies between latitudes 29°45'03" to 30°35'53" north and longitudes 79°59'10" to 80°29'25" east and is elongated in the NNW-SSE direction. The river covers a 99.2 km distance before its confluence with the Kali River at Jauljibi, with a catchment area of 2242.4 km2. Glaciers dominate the northern extremity of the Goriganga river basin and cover 605 km2 (27%) of the area under dense ice. The Dhauliganga River shares its boundary in the northeast and the Ramganga River in the southwest, and it flows southwards for a considerable distance. Geologically, the Goriganga River basin transects three domains of the Himalaya: the northeastern part of the basin (415.68 km2) lies over the Tethys domain, the central or northwestern part (1228.46 km2) over the Higher Himalaya, separated by the THF in the north and the MCT in the south, and the southern part (599.76 km2) lies over the Lesser Himalaya (Fig. 1). Therefore, the Goriganga River basin is found suitable for morphotectonic assessment. Geologic Setting The Goriganga River basin comprises three major tectonic domains with an extent of 99.2 km from north to south. The geological formations of the Tethys domain are represented by a thick late Proterozoic to late Cretaceous sedimentary cover, termed the Tethys sediments, lying over the crystalline basement, which represents the distal continental margin of the Indian shield. The Higher Himalayan Crystallines (HHC) dominate the central part of the basin and include high-grade metamorphic rocks of Precambrian, Paleozoic, and Early Mesozoic ages, which were metamorphosed during the late Eocene to early Miocene and invaded by granites of Miocene age (Larson and Godin, 2009; Valdiya, 2001). After crossing the Main Central Thrust (MCT) or Munisiari Thrust (MT), the river flows through the Almora Group of rocks and the Mandhali Formation near Munisiari (Valdiya, 1980). Further, it flows through the Rautgarh and Gangolihat Formations and transects numerous transverse faults, such as the Baram Fault, before the confluence with the Kali River (Fig. 2). Lithotectonic units, faults, and thrusts indicate that the Goriganga River follows different litho-structural features from the Trans Himadri Fault across many transverse faults down to the outlet. Drainage Pattern The Goriganga is a monsoon- and glacier-nursed river that originates at more than 5000 meters above mean sea level from the Milam Glacier, south of Nanda Devi Peak. It runs through glaciated, broad-floored, U-shaped valleys for a considerable distance.
Fig. 1. The study area along with the major lithotectonic domains.
Broad-floored valleys turn into deep V-shaped canyons as the river enters the Thrust Zone. The Goriganga River takes a nearly right-angle turn when it crosses the Trans Himadri Fault, where a paleolake was studied by Valdiya (2001), Kotlia and Rawat (2004) and Kotlia and Joshi (2013). It creates an S-shaped meander near Bogdyar before crossing the Main Central Thrust, and many river offsets and strath terraces have been observed downstream where it passes the Baram Fault near Toli village. Climatologically, the Goriganga river basin comprises considerable diversity from south to north, influencing the topographic evolution of the monsoon- and glacially fed regions of the Kumaon Himalaya and creating distinct morphotectonic zones. In the southern portion of the river basin, the monsoon produces torrential rainfall that modifies or carves the landscape and drainage pattern, while the northern part has a glacial climate which creates broad-floored valleys and sharp ridges that are tectonically active in the presence of the well-known Trans Himadri Fault (THF). Data analyses In the present work, morphotectonic analyses were based on integrating several datasets acquired through remote sensing, GIS, and fieldwork. An Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)-derived Digital Elevation Model (DEM) was obtained from USGS Earth Explorer and input into the GIS environment for further analysis. The ASTER instrument was built by Japan's Ministry of Economy, Trade and Industry (METI) and the National Aeronautics and Space Administration (NASA) to collect orthoimages for deriving digital elevation models. It was launched in December 1999 and acquires data in 14 bands using three different telescopes and sensor systems. Images were acquired in three visible and near-infrared bands with a spatial resolution of 15 m, six short-wave-infrared bands with a spatial resolution of 30 m, and five thermal infrared (TIR) bands with a spatial resolution of 90 m (ASTER Validation Team, 2009). Further, the ASTER digital elevation model with 30-meter resolution was generated from level-1A scenes by stacking cloud-masked and non-cloud-masked scenes; geological maps by Valdiya (2008) and Sharma and Paul (1988) were also used as reference data (Table 1). Topographic maps at a 1:250,000 scale were acquired from the Army Map Service series, distributed by the University of Texas at Austin, which is responsible for the publication and distribution of these maps. The downloaded maps were georeferenced to the UTM projection and WGS84 datum using Global Mapper 13. These georeferenced maps were used for vectorizing the locations of villages and small towns/cities in the Goriganga river basin. They also provided additional GIS input to compare the accuracies of drainage networks and watershed boundaries. Geological information about lithology, significant tectonic features such as thrusts and dislocations, and smaller faults, joints, and shear zones was primarily collected from Valdiya's (2010) revised geological map of the Kumaon Himalaya. To be consistent with the ASTER DEM and multispectral data used in this work, the map was georeferenced to the UTM projection and WGS84 datum. Additionally, published geological maps of the Kumaon Himalaya that had a geographic referencing coordinate system and could thus be georeferenced were used to collect geological data (Fig.
2).Watershed boundaries and detailed drainage networks were extracted using ArcGIS and TauDEM plugins due to their reproducible and accurate results.However, the conventional digitization method was avoided due to its time-consuming and non-reproducible nature.The automated watershed and drainage network extraction method is the most favored and widespread for studying watersheds.Stream networks and basin boundaries derived by the TauDEM (Terrain Analysis Using Digital Elevation Models) plugin and further various hydrologic characteristics formulated using digital elevation data of the study area.These hydrologic grids are put into the GIS domain to locate all sites within the DEM upstream of the designated outlet point to define the borders (Farooq et al., 2015).The watershed delineation process involves four main steps: filling in 'pits' in the raw DEM, figuring out the flow direction from each grid cell, figuring out the flow accumulation, stream grids, and drainage area for each cell, and drawing the boundaries of the watershed by working backward from the flow direction grid.To provide comprehensive coverage of the Goriganga river basin, four ASTER DEMS scenes (ASTGTM_N290E079, ASTGTM_N29E080, ASTGTM_N30E079, and ASTGTM_N30E080) were tiled.The following was the data processing process for stream network and watershed delineation: The Digital Elevation Model (DEM) typically comprises low-elevation sections that were bordered by higher topography that impedes a smooth flow.Low-elevation sections might not always be found naturally; they might occasionally be straightforward artifacts created during continuous slope modeling.Low elevation section filling is done by merely increasing the elevation of a cell designated as a pit until it is equal to the elevation of its uphill neighbor.The pit-filled DEM was used to calculate the flow direction from each grid cell to its next downhill neighbor.The pit-filled data (hydrologically corrected) is given as input for deriving the slope of each grid cell, named the D8 flow direction.The output numerically encodes the steepest descent direction from each grid as 1 = East, 2 = North East, 3 = North, 4 = North West, 5 = West, 6 = South West, 7 = South and 8 = South East.This step involves of delineating flow accumulation, stream grids, and drainage areas.The total number of uphill cells that flow to any given cell was determined using the flow direction grid.During this step, a summing was done for all cells within a dataset to create a flow-accumulation grid.It is based on a total number of upslope cells flowing into a downslope-flowing cell.In last, so far generated hydrologic grids input for delineating watershed boundaries.For this purpose, an outlet was specified at the confluence of Goriganga with Kaliganga River and boundaries were derived through the specified outlet.Further, these layers are converted to a polygon, representing the boundaries.Using ArcMap 10.7, the Goriganga catchment was distributed into 32 watersheds for detailed drainage network and watershed analysis by entering an optimum threshold during delineation.Further, a detailed drainage network was derived from Aster Digital Elevation Model (DEM) and used to extract and calculate the following morphotectonic indices for assessment and mapping variation in tectonic activity considering lithotectonic features (Table 2). 
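The D8 flow-direction step described above can be illustrated with a small, self-contained sketch. The snippet below is a minimal illustration rather than the TauDEM implementation used in the study: it assigns each cell of a toy, pit-filled DEM the direction of steepest descent using the same 1-8 encoding quoted above (1 = East, 2 = North-East, ..., 8 = South-East). The elevation values and the helper-function name are hypothetical.

```python
import numpy as np

# D8 neighbour offsets in the encoding used above:
# 1=E, 2=NE, 3=N, 4=NW, 5=W, 6=SW, 7=S, 8=SE  (row offset, col offset)
D8 = {1: (0, 1), 2: (-1, 1), 3: (-1, 0), 4: (-1, -1),
      5: (0, -1), 6: (1, -1), 7: (1, 0), 8: (1, 1)}

def d8_flow_direction(dem):
    """Return a grid of D8 codes (0 where no downhill neighbour exists)."""
    rows, cols = dem.shape
    flowdir = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_code, best_drop = 0, 0.0
            for code, (dr, dc) in D8.items():
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    # distance-weighted drop: diagonal neighbours are sqrt(2) cells away
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_code, best_drop = code, drop
            flowdir[r, c] = best_code
    return flowdir

# Toy pit-filled DEM (elevations in metres, hypothetical values)
dem = np.array([[5200., 5100., 5050.],
                [5150., 5000., 4900.],
                [5100., 4950., 4800.]])
print(d8_flow_direction(dem))
```

Flow accumulation and watershed delineation then follow by counting, for every cell, the upslope cells whose flow paths pass through it, exactly as outlined above.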
Stream Length Gradient Through differential erosion, rivers sculpt the surrounding rocks and soils at varying rates, changing the landscape (Hack, 1973). Where the rate of uplift is balanced by the rate of erosion and the river system has a slightly concave longitudinal profile (Schumm et al., 2002), the balance of erosion and uplift indicates crustal stability. Tectonic, lithological and climatic variables cause deviations from this dynamic equilibrium (Hack, 1973). Hack (1973) proposed an indicator termed the Stream Length Gradient Index to assess gradient in river systems. It is defined as SL = (ΔH/ΔL) × L (1), where SL is the stream-length gradient index, ΔH is the change in elevation of the channel reach under investigation, ΔL is the length of the reach, and L is the total planimetric length from the midpoint of the reach to the highest point on the channel (Mahmood and Gloaguen, 2012). The SL index assesses relative tectonic activity (Keller and Pinter, 2002). High values of SL indicate recent tectonic activity, while low values indicate low tectonic activity. However, anomalously low values may also be associated with tectonic activity when streams and rivers flow along strike-slip faults (Keller and Pinter, 2002). Hypsometric Integral (HI) The hypsometric integral is attractive for morphometric investigations since it is a dimensionless parameter and enables scale-independent comparison of various catchments. According to Strahler, low values indicate ancient, eroded landscapes in the final stages of geomorphic evolution, while high values indicate younger, less eroded landscapes where uplift is outpacing surface denudation processes, meaning that a significant portion of the basin's rock mass remains to be removed (Dowling et al., 1998). The volume of the basin above its lowest point, which has not been eroded, is expressed by the hypsometric integral. The integral describes the distribution of elevation in a certain area of the terrain, particularly a drainage basin. It is defined as HI = (ELmean − ELmin) / (ELmax − ELmin) (2), where ELmean is the mean elevation, ELmin the minimum, and ELmax the maximum elevation within the drainage basin as extracted from a DEM. The range of hypsometric values is 0 to 1. Due to the high processing involved, hypsometric analysis has not been used as much in the past. However, with the accessibility of digital elevation data and the advancements in computing and GIS technologies, hypsometry is a useful technique to measure watershed parameters objectively (Dowling et al., 1998).
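As a concrete illustration of Equations (1) and (2), the short sketch below computes the stream-length gradient index for a single channel reach and the hypsometric integral from a sample of DEM elevations. It is a minimal sketch, not the GIS workflow used in the study; the reach and elevation values are hypothetical.

```python
import numpy as np

def stream_length_gradient(dh, dl, L):
    """SL = (dH / dL) * L  -- Eq. (1), Hack (1973)."""
    return (dh / dl) * L

def hypsometric_integral(elevations):
    """HI = (ELmean - ELmin) / (ELmax - ELmin)  -- Eq. (2)."""
    el = np.asarray(elevations, dtype=float)
    return (el.mean() - el.min()) / (el.max() - el.min())

# Hypothetical reach: 120 m elevation loss over a 2 km reach, with the
# midpoint of the reach lying 15 km downstream of the channel head.
print(stream_length_gradient(dh=120.0, dl=2000.0, L=15000.0))   # -> 900.0

# Hypothetical sample of basin elevations extracted from a DEM (m a.s.l.)
print(round(hypsometric_integral([900, 1500, 2300, 3400, 4100, 5200]), 2))
```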
Length of Overland Flow (Lg) Surface runoff follows a network of flow paths downslope from the drainage divide to the nearest channel. According to Horton (1945), the distance that water travels over the land surface before entering a definite stream channel is known as the length of overland flow (Lg). The length of overland flow is estimated by Lg = 1/(2Dd) (3), where Dd denotes drainage density. The length of overland flow is approximately half the reciprocal of the drainage density and roughly half the average distance between stream channels. It thus provides a measure of drainage basin efficacy. If the overland flow values are smaller, surface runoff will enter streams more quickly. It indicates the concentration or stream frequency of a region and is directly correlated with the infiltration rate. There are typically fewer streams per unit area in a river basin with permeable rocks than in one with impermeable rocks. In fact, Lg is largely controlled and affected by the degree of slope, geological conditions of lithology and structure, soil characteristics, vegetation cover, rainfall intensity and infiltration capacity (Ghosh, 2011). Lg is basically inversely proportional to the average slope of the stream and is synonymous with the length of sheet flow. Although Lg does not directly exhibit the active tectonic status of a drainage basin, it can nevertheless serve as a proxy indicator of active tectonics, in that smaller values of Lg are indicative of steeper gradients and, therefore, of a greater degree of tectonic turmoil. Drainage Density (Dd) Horton (1932) established drainage density (Dd) as a significant morphometric term. Dd is the ratio of the total channel length of streams of all orders in a basin to the basin area. Dd is a crucial indicator of the linear scale of fluvial topographic landform components. It provides a numerical measurement of the typical length of the stream channel for the basin and illustrates how closely spaced the channels are. In places with highly permeable subsurface material under extensive vegetation cover and/or areas with low relief, a low drainage density is more likely to occur, according to measurements of drainage density made over a wide range of geologic and climatic types. High relief, limited vegetation, and weak or impermeable underlying material all contribute to high drainage density. A fine drainage texture is produced by high drainage density, whereas a coarse texture is produced by low drainage density (Strahler, 1964). Drainage density is calculated by the formula Dd = Lu/A (4), where Lu = total stream length of all orders and A = area of the basin (km2). Sometimes, drainage densities are less than 1 km/km2 in very permeable terrain with minimal chance of runoff. Densities over 500 km/km2 are frequently observed on heavily dissected surfaces. Upon detailed examination of the mechanisms generating variation in drainage density, it has been observed that climate, terrain, soil infiltration capacity, vegetation, and geology are among the factors influencing stream density (Pidwirny, 2006). Nag (1998) states that regions with dense vegetation, low relief, and highly resistant or permeable subsurface material are typically associated with low drainage density. The importance of Dd as a factor influencing how long it takes for water to reach a basin's outlet was noted by Langbein (1947). In contrast to low drainage density basins, which have a delayed hydrologic response, high drainage density basins frequently have a very rapid hydrologic response and a hydrograph
with a steep recession (or falling) limb. Channel Sinuosity (Cs) A dimensionless parameter called channel sinuosity (Cs), or the sinuosity index, is obtained by dividing the length of a stream's actual course by the distance between its two endpoints. The channel's midline serves as the measurement for the sinuous length. Sinuosity indices of rivers in mountainous regions are sensitive markers of active tectonics. Rivers rarely travel in a straight line from source to mouth, and changes in slope brought on by deformation result in adjustments to river channels. A channel sinuosity value of 1 corresponds to a perfectly straight channel, and values close to 1 are characteristically indicators of tectonically active regions. Channels with higher ratios are called meandering channels (Leopold et al., 1964) and are typical of tectonically stable areas. The channel sinuosity index, proposed by Mueller (1968), was used to assess the relative tectonic activity in the Goriganga River basin. It is calculated as Cs = Sl/Vl (5), where Cs is the channel sinuosity, Sl is the stream length, and Vl is the valley length. Valley Floor Width and Height Ratio (Vf) The valley floor width to height ratio (Vf) was introduced by Bull and McFadden (1977) as a geomorphic measure of active tectonics to distinguish between open, broad valleys in regions of moderate tectonic stability and narrow, deep valleys typical of rapidly uplifting areas. According to Silva et al. (2003), the Vf illustrates the distinction between U-shaped, broad-floored valleys with primarily lateral erosion into nearby hill slopes in response to relative base-level stability or tectonic quiescence, and V-shaped valleys occupied by streams incising their bedrock in response to active uplift. This index, therefore, uses one vertical and one horizontal dimension at a given section across the stream in the erosional system. The Vf index is calculated as Vf = 2Vfw / [(Eld − Esc) + (Erd − Esc)], where Vfw is the width of the valley floor, Esc is the elevation of the valley floor, and Erd and Eld are the elevations of the right and left valley divides, respectively (Fig. 3). High levels of HI along the eastern bank of the Goriganga support the idea that trans-Himalayan river courses predating the Himalayan orogeny are associated with regions of relatively high rates of active tectonics in the Himalaya perpendicular to the main grain of the Trans Himadri Fault (THF) on both the east and west banks of the Goriganga. The southern region experienced moderate tectonic activity as indicated by the results of the Length of Overland Flow (Lg). However, this region also transects active faults and lineaments. In this central-southern region, fluvial processes were dominant in regulating erosion and cannot be neglected. Two sub-basins expressed the highest values of Lg, indicating lesser tectonic activity than the rest. This means that these are dominated by erosional processes rather than tectonic activity due to their proximity to the trunk stream. It can be readily appreciated that the sub-basins of the northern half of the Goriganga are somewhat more tectonically active than those of the southern half, as indicated by the Lg index (Fig. 9).
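The remaining indices defined above (Equations 3-5 and the Vf ratio) reduce to simple ratios once the stream lengths, basin area, and valley cross-section elevations have been measured in GIS. The sketch below collects them in one place; all input values are hypothetical and only illustrate the arithmetic.

```python
def drainage_density(total_stream_length_km, basin_area_km2):
    """Dd = Lu / A  -- Eq. (4), Horton (1932)."""
    return total_stream_length_km / basin_area_km2

def length_of_overland_flow(dd):
    """Lg = 1 / (2 * Dd)  -- Eq. (3), Horton (1945)."""
    return 1.0 / (2.0 * dd)

def channel_sinuosity(stream_length_km, valley_length_km):
    """Cs = Sl / Vl  -- Eq. (5), Mueller (1968)."""
    return stream_length_km / valley_length_km

def valley_floor_ratio(vfw, esc, eld, erd):
    """Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc))  -- Bull and McFadden (1977)."""
    return 2.0 * vfw / ((eld - esc) + (erd - esc))

# Hypothetical sub-basin measurements
dd = drainage_density(total_stream_length_km=850.0, basin_area_km2=200.0)
print(dd)                                    # 4.25 km/km2
print(length_of_overland_flow(dd))           # ~0.118 km
print(channel_sinuosity(12.4, 9.8))          # ~1.27 -> moderately sinuous channel
print(valley_floor_ratio(vfw=60.0, esc=1200.0, eld=1900.0, erd=2100.0))  # 0.075
```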
The derived drainage density values were grouped into classes, including Class 3 (4.18 to 5.95), as shown in Fig. 10. Since the northwest is composed of Tethys deposits and Vaikrita formations, it has a relatively high drainage density and surface permeability. High drainage densities are also associated with a range of tectonic consequences. The southern region has moderate drainage density values corresponding to active faults and lineaments, leading to a relatively high infiltration rate (Khairuddin et al., 2017).
Fig. 6. Active tectonic control over the Rauntis Gad sub-basin. Fig. 13. Proposed dam sites and the Bogdyar Meander, which lies in a very highly tectonically active region. Fig. 14. Valley Floor Width to Height Ratios (Vf) of the sub-basins of the Goriganga.
Table 2 (morphotectonic indices used in this study): Hypsometric Integral, HI = (ELmean − ELmin) / (ELmax − ELmin), where ELmean is the mean elevation, ELmin the minimum and ELmax the maximum elevation (Strahler, 1952); Valley Floor Width to Height Ratio, Vf = 2Vfw / [(Eld − Esc) + (Erd − Esc)], where Vfw = width of valley floor, Eld and Erd = elevations of the left and right valley divides, Esc = elevation of the valley floor (Bull and McFadden, 1977); Stream-length gradient index, SL = (Δh/Δl) × L, where Δh = difference in elevation of the ends of the reach, Δl = length of reach, L = distance from the midpoint of the reach to the most distant point upstream (Hack, 1973).
Conclusion It is concluded that there is strong neotectonic control over the evolution of the Goriganga River Basin due to known and unknown faults, lineaments, and thrusts. Additionally, meandering streams, parallel tributaries, tilted sub-basins, extended valleys, and active faults are manifestations of the effects of tectonic activity. As it stands, the Rauntis and Baram Faults continuously alter the valley. The tectonic activity that predominated around the Trans Himadri Fault, Main Central Thrust, and Munisiari Thrust significantly altered the drainage network of the Goriganga river basin, as seen in the Stream Length Gradient (SL) and Channel Sinuosity values, for example at the Bogdyar meander. However, due to glacial erosion, which shapes their valleys into a U shape, only a few sub-watersheds exhibit correspondingly modest tectonic activity. The Goriganga River basin is determined to be tectonically active. Additionally, the forms and processes that are driving the active deformation are greatly impacted by the ongoing tectonic activity. The strength of these processes is great enough to cause rivers to diverge from their original paths (the Bogdyar Meander, and the almost right-angle turn at the Trans Himadri Fault) and to construct linear and serrate ridges, fault scarps, etc., close to established faults and thrusts, among other things.
2024-01-10T16:29:27.443Z
2023-12-20T00:00:00.000
{ "year": 2023, "sha1": "a53f9ffe2e64b3a179e59061954ff735bff912ef", "oa_license": "CCBYNC", "oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/5068/2612", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "15eaf65b54f2daeec4f2f465e7cce25e9ec18992", "s2fieldsofstudy": [ "Geology", "Geography", "Environmental Science" ], "extfieldsofstudy": [] }
236689781
pes2o/s2orc
v3-fos-license
Machine learning screening of bile acid-binding peptides in a peptide database derived from food proteins Bioactive peptides (BPs) are protein fragments that exhibit a wide variety of physicochemical properties, such as basic, acidic, hydrophobic, and hydrophilic properties; thus, they have the potential to interact with a variety of biomolecules, whereas neither carbohydrates nor fatty acids have such diverse properties. Therefore, BP is considered to be a new generation of biologically active regulators. Recently, some BPs that have shown positive benefits in humans have been screened from edible proteins. In the present study, a new BP screening method was developed using BIOPEP-UWM and machine learning. Training data were initially obtained using high-throughput techniques, and positive and negative datasets were generated. The predictive model was generated by calculating the explanatory variables of the peptides. To understand both site-specific and global characteristics, amino acid features (for site-specific characteristics) and peptide global features (for global characteristics) were generated. The constructed models were applied to the peptide database generated using BIOPEP-UWM, and bioactivity was predicted to explore candidate bile acid-binding peptides. Using this strategy, seven novel bile acid-binding peptides (VFWM, QRIFW, RVWVQ, LIRYTK, NGDEPL, PTFTRKL, and KISQRYQ) were identified. Our novel screening method can be easily applied to industrial applications using whole edible proteins. The proposed approach would be useful for identifying bile acid-binding peptides, as well as other BPs, as long as a large amount of training data can be obtained. www.nature.com/scientificreports/ based on the specific interests of researchers and industries, with no a priori knowledge, and no guarantee that the desired BPs are present. The common approach therefore has a significant 'trial and error' element, potentially leading to wasted time and money. In silico approaches for identifying novel BPs have been proposed 9,10 . In silico approaches make use of peptide databases containing sequences derived from proteins of interest and implement bioinformatic tools to predict bioactivity. Many peptide databases have been developed, including the database BIOPEP-UWM 11 , which stores BPs along with edible proteins, allergenic proteins with their epitopes and sensory peptides, and amino acids. In addition, it implements predictive tools, including the theoretical degree of hydrolysis and bioactivity prediction. Using the BIOPEP-UWM database, the appropriate fraction of DPP-4 (dipeptidyl peptidase-4) inhibiting peptides derived from mealworms (Tenebrio molitor) was selected 12 . This approach has also been adopted in pigeon pea (Cajanus cajan) 13 . Recent studies have shown that the combination of databases with advanced machine-learning-based bioinformatics tools is a promising approach for screening and developing novel BPs. For example, Meher et al. 14 created an antimicrobial peptide by using predictive models with support vector machine (SVM) algorithms and antimicrobial databases CAMP 15 , APD3 16 and AntiBP2 17 . Gautam et al. predicted cell-penetrating activity by using SVM and novel databases 18 and achieved a maximum accuracy of 97.4%. In the present study, a novel strategy to screen BPs derived from edible proteins was developed using BIOPEP-UWM and machine learning. Machine learning using training data is convenient for acquiring the sequence characteristics of BPs. 
If the acquired model has high prediction accuracy, the derived BP fragments can be predicted without any wet experiment. This strategy allows for the exploration of all BPs from edible proteins by in silico screening using databases such as BIOPEP-UWM. The experimental workflow is shown in Fig. 1. (Figure 1: Positive and negative datasets were generated from the training data. Subsequently, explanatory variables were generated with amino acid features (site-specific features) and peptide features (global features). Predictive models were constructed using a combination of the training data and explanatory variables. A new database containing peptides found in edible proteins was created using the edible protein database BIOPEP-UWM. Lastly, the constructed models were applied to the edible protein database and the bioactivity of each peptide was predicted.) We used the training data obtained with a high-throughput peptide array to generate positive and negative datasets. The predictive model was generated using the explanatory variables of the peptides in these datasets. Finally, this model was applied to a peptide database derived from edible proteins using BIOPEP-UWM. In the method proposed here, the desired BP was identified first by the machine-learning method, and then the food materials were selected. It should be noted that optimization of the separation process was established without a trial-and-error approach after the BP had been chemically synthesized. This is the opposite, or reverse, approach for BP identification. The aim of this study was to demonstrate the proof-of-concept that BPs can be identified from a large number of candidate proteins by using this reverse approach. We tested our peptide screening tool by searching for bile acid-binding peptides. In humans, cholesterol absorption occurs in the proximal jejunum of the small intestine, where both dietary cholesterol and biliary cholesterol are available for uptake from the intestinal lumen via bile acid micelles 19,20. Bile acid-binding peptides interact with bile acids that form micelles and subsequently disrupt the micelles, contributing to the suppression of intestinal cholesterol absorption. We previously designed bile acid-binding peptides using an informatics approach [21][22][23][24]. However, the designed peptides were not found in storage proteins or protein sources, and proteases were selected based on our interests. Bile acid-binding peptides work in the intestinal tract, and we therefore do not need to consider their absorption from the small intestine when developing novel health foods. Using our novel approach, we have established a framework for rapid and cost-effective screening of BPs, which may be applied to the development of new health-promoting products. Results and discussion Measurement of bile acid binding in a synthetic peptide array. Training data are essential for the construction of classification models. To generate training data, 460 4-, 5-, 6-, and 7-mer peptides were chemically synthesized in the peptide array. A part of the synthesized peptides was identified by MS analysis to verify that the synthesized peptides coincided with the designed amino acid sequences. Assessment of the binding ability between bile acid and peptides was performed using two kinds of antibodies: a first antibody against bile acid and a fluorescent-labeled secondary antibody.
As a binding activity of the peptide, the average fluorescence intensities were determined based on the triplicate fluorescence intensities of the same peptide sequence. The sequences and fluorescent intensities are shown in Supplementary Table S2 and Supplementary Figs. S1-S6. The fluorescence intensity of 4-mers was lower than that of longer peptides ( Supplementary Fig. S2A). The observed low intensity of 4-mers in the training data may be due to the relatively low hydrophobicity of the 4-mer peptides. Using the peptide array data, 150 peptides with the highest fluorescent intensities were defined as the 'positive' dataset, and 150 peptides with the lowest fluorescent intensities were defined as a 'negative' dataset for bile acid binding activity. The average fluorescence intensities of the positive and negative datasets are presented in Table 1. Here, 150 positive dataset numbers were selected because the average fluorescence intensities of posi- www.nature.com/scientificreports/ tive datasets were more than the average plus SD of 460 peptides. The same number of negative datasets were selected. The frequency distributions of the fluorescent intensities of all peptides are shown in Fig. 2 for clarity. The distribution of the 5-mer was slightly broader than that of the others. When the peptides became longer, the hydrophobicity of the peptide gradually increased. This may be the reason the high performance of 5-mer was obtained, as shown in Table 2. Since there was a significant difference between the two datasets (P < 0.001), the randomly designed peptide library contained peptides with different bile acid-binding bioactivities. Construction of predictive model and evaluation of model performance. To construct the predictive model, 7 amino acid features (AAF), including isoelectric point (IP), polarity (PL), hydropathy index (HI), molecular weight (MW), index of helix (Ph), and index of turn (Pt) listed in Supplementary Table S1 were selected. The collinearity of these features is contradictory. Since these were used as site-specific characteristics, the number of these AAFs was 28 for 4-mer, 35 for 5-mer, 42 for 6-mer, and 49 for 7-mer peptides. Next, the characteristics of peptides, not amino acids, were generated using these seven AAFs. The average of each AAF was generated as important global feature (GF). However, even though similar averages of an AAF such as IP are nominated in two arbitrary peptides, the peptide features are quite different if the maximum or minimum of the AAF of peptides differ. For instance, even though the average of IP is neutral, it is considered that the features of peptides consisting of non-charged amino acids are quite different from those of peptides consisting of positively and negatively charged amino acids. To explain the peptide feature, therefore, the deviation of AAF, the maximum, and the minimum were generated as GFs. Since there are seven AAFs, 28 GFs were generated independent of peptide length. To perform machine learning, peptide features such as AAFs and GFs of 300 peptides (positive = 150, negative = 150) were calculated. For each 4-mer, 56 features were generated (28 AAFs and 28 GFs), for each 5-mer 63 (35 AAFs and 28 GFs), 6-mer 70 (42 AAFs and 28 GFs), and 7-mer 77 (49 AAFs and 28 GFs), and used as explanatory variables. Three algorithms were used to construct the predictive model (SVM, RF, and LR), and the model performance was evaluated by comparing the accuracy, precision, and recall. 
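To make the feature construction above concrete, the sketch below builds both kinds of explanatory variables for one peptide: site-specific amino acid features (one value per residue per property) and global features (mean, standard deviation, minimum and maximum of each property over the peptide). The property table here is truncated and its numeric values are illustrative placeholders, not the exact scales used in the study.

```python
import numpy as np

# Illustrative per-residue property table (placeholder values; the study
# used seven scales, e.g. isoelectric point, hydropathy index, molecular weight).
AAF = {
    "isoelectric_point": {"K": 9.7, "R": 10.8, "D": 2.8, "W": 5.9, "V": 6.0, "F": 5.5},
    "hydropathy":        {"K": -3.9, "R": -4.5, "D": -3.5, "W": -0.9, "V": 4.2, "F": 2.8},
}

def peptide_features(seq):
    """Return site-specific features (residueN_property) plus global
    mean/sd/min/max per property, mirroring the AAF + GF scheme above."""
    features = {}
    for prop, table in AAF.items():
        values = [table[aa] for aa in seq]
        # site-specific features, one per residue position
        for i, v in enumerate(values, start=1):
            features[f"residue{i}_{prop}"] = v
        # global features over the whole peptide
        arr = np.array(values)
        features[f"{prop}_av"] = arr.mean()
        features[f"{prop}_sd"] = arr.std()
        features[f"{prop}_min"] = arr.min()
        features[f"{prop}_max"] = arr.max()
    return features

print(peptide_features("VFWK"))  # 4 residues x 2 properties + 2 x 4 globals = 16 features
```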
Peptides with a probability of > 0.5, designated as positive, and those with a probability of < 0.5, were designated as negative for bile acid binding ability. Except for the precision scores of 5-and 7-mers, all RF scores were the highest among the three tested algorithms (Table 2). Therefore, RF was selected as the predictive algorithm. The scores 4-mer peptides were lower than the scores of longer peptides ( Table 2). The ratio of the average fluorescence intensity of positive the dataset and that of the negative dataset was defined as the P/N intensity ratio. In Table 1, the P/N intensity ratio of 4-mers (2.67) was lower than that of longer peptides (3.63 for 5-mers, 4.11 for 6-mers, 3.87 for 7-mers). This is caused by the relatively lower overall fluorescence intensity of the 4-mer training data. The model performance was roughly corelated with the P/N intensity ratio. The reason for the poor performance is the relatively large number of FPs and FNs predicted by the acquired model when the P/N intensity ratio is low. In order to predict the bioactivity of peptides, quantitative analysis of the relationship between the structure and bioactivity of peptides has received much interest from many physical biochemists. In a recent study 25 , the hydrophobicity of the amino acid located at the N-terminal end was reported to be more hydrophilic than that of the same amino acid located at both the middle and C-terminal ends. Therefore, it is likely that 4-mer peptides are more hydrophilic than longer peptides, such as 5-, 6-, and 7-mer peptides. The reason why 4-mer peptides show lower binding to bile acid is also considerable because of the lower hydrophobic interaction between the 4-mer peptide and bile acid. However, hydrophobicity is necessary for the strong binding of peptides to bile www.nature.com/scientificreports/ acid. In our previous paper, we identified bile acid-binding 4-mer peptides such as NGLK, YEAR, etc. 21 . These peptides showed similar or higher binding activity compared to the 6-mer binding peptides. It is likely that the 4-mer binding peptides show different physiochemical features compared with those of longer peptides. To investigate the importance of the input features, the variable importance was estimated according to the increase in the predictive error due to the permutation of out-of-bag data for the given variable. The importance of each input variable is listed in Supplementary Table S3. Most of the top 10 selected features referred to the GFs of peptides, namely av, sd, min, max, with the exception of two specific features: residue2_Molecular_weight for 4-mers and residue1_Isoelectric_point for 7-mers. In addition, two features for 4-mers, four features for 5-mers, four features for 6-mers, and five features for 7-mers were related to the peptide isoelectric point. Similarly, five features for 4-mers, three features for 5-mers, two features for 6-mers, and two features for 7-mers were related to molecular weight. This suggests that the GFs are more important than the site-specific features for bile acidbinding activity in peptides of 4-7 amino acids. Bile acid molecules are amphiphilic, with a hydrophobic steroid core and hydrophilic hydroxyl groups, and therefore, have strong surfactant action. Since peptide binding can occur in either direction with bile acids, site-specific peptide features may be less important. Features referring to the isoelectric point and molecular weight are among the most important in Supplementary Table S3. 
This suggests that peptides with high isoelectric points or high molecular weights bind strongly to bile acids. The five amino acids with the highest isoelectric points were R, K, H, P, and I 26 , and the top five for molecular weight were W, Y, R, F, and H 27 . Therefore, basic or aromatic peptides have higher binding activity against bile acids. Some studies have investigated the binding mechanisms between bile acids and other compounds, such as sterols and nisin [28][29][30][31] , and revealed that hydrophobic amino acids, especially aromatic amino acids, interact with bile acid micelles. These findings are in agreement with the top 10 features listed in Supplementary Table S3. We analyzed the appearance frequency of amino acid residues for peptides listed in Supplementary Table S2 and obtained Fig. 3 to verify the reproducibility of the learning data. In the appearance frequency of amino acids for positive peptides, 5 amino acids, F, K, R, W, and Y, showed high frequency. Among the negative peptides, 3 amino acids, C, D, and E, were relatively high. These results coincided with the results of the feature analysis from www.nature.com/scientificreports/ Supplementary Table S3. However, in the case of 4-mer peptides, a slightly different frequency was obtained; A and G were relatively low in positive peptides while D and E were relatively low in negative peptides. Construction of edible peptide database and prediction of bile acid binding activities. A set of 710 edible proteins were obtained from BIOPEP-UWM and digested using all available predicted protease binding sites (Table 3), resulting in 199568 4-mers, 198808 5-mers, 198055 6-mers, and 197310 7-mers. After removing duplicate sequences, the dataset contained 56171 4-mers, 89663 5-mers, 98387 6-mers, and 102805 7-mers. Thus, a total dataset of approximately 350000 peptide sequences was generated. All peptide sequences generated from edible proteins were applied to the acquired RF model. All applied peptides were labeled by output "probability", since the RF model is a discrimination model. The results are shown in Supplementary Table S4. Applied peptides were ranked in order of probability, and the top 50 positive and bottom 50 negative predicted peptides were extracted. Those peptides were synthesized and their bile acid binding activities were determined using a peptide array. The synthesized sequences are listed in Supplementary Table S5, and their fluorescence intensities are shown in Fig. 4. The average fluorescence intensity of positive peptides was higher than that of negative peptides (P < 0.001), indicating that the RF model could successfully predict bile acid binding activity. Those probability values were also listed in Supplementary Table S5. Since those values were nearly 1.0, the correlation between theoretical and experimental parts could not be discussed precisely. The details of the peptides are shown in Supplementary Table S6. We analyzed the appearance frequency of amino acid residues from Supplementary Table S5 and prepared Fig. 5 to verify the accuracy of the results of the predictive model. Since only 50 top or bottom peptides were used for appearance frequency, an explicit discussion was not clarified. However, 3 amino acids, F, L, and Y, showed a high frequency in positively predicted peptides, while W was high in 4-mer predicted peptides and R was high in 5-, 6-, and 7-mer predicted peptides. 
The different frequencies of positively predicted peptides may be due to the relatively low discrimination between positive and negative peptides (Fig. 4). Novel bile acid binding peptides from edible proteins. The top five peptides, ranked by fluorescence intensity in a peptide array for bile acid binding, are shown in Table 4. Seven of the peptides with the highest scores for bile acid-binding activity mapped to storage proteins in the database: VFWM from legumin A (Pisum sativum) 32, QRIFW from high-molecular-weight glutenin (Triticum aestivum) 33, RVWVQ from profilin-1 (Hordeum vulgare) 34, LIRYTK from serum albumin (Gallus gallus) 34, NGDEPL from legumin chain B fragment (Vicia faba) 35, PTFTRKL from chicken connectin (titin) fragment (Gallus gallus) 34, and KISQRYQ from alpha-S2-casein (Bos taurus) 34. NGDEPL was predicted to have a low affinity for bile acid; however, it had a high bile acid-binding activity according to the peptide array. The mechanisms underlying this apparent contradiction are unclear, but this peptide might bind stereospecifically to bile acids. Since storage proteins are favorable for the manufacture of health foods and cosmetics, these protein sources are expected to contain novel bioactive components. Most of the predicted BPs in the present dataset were obtained by proteolysis by enzymes from plants or microorganisms and proteolysis by gastrointestinal enzymes 36. Therefore, to evaluate the utility of these peptides at the industrial scale, we examined whether the seven peptides derived from storage proteins could be generated using peptidases or proteases. As a result, KISQRYQ was predicted to be generated from alpha-S2-casein (Bos taurus) with peptidyl-Lys metalloendopeptidase (Armillaria mellea neutral proteinase). Gutiez et al. previously investigated the relationship between autolysis caused by lactic acid bacteria and the production of angiotensin-converting enzyme (ACE)-inhibitory peptides, and reported that KISQRYQ was generated from skimmed milk (alpha-S2-casein) by Lactococcus lactis subsp. lactis IL1403 37. Taken together, these results suggest that KISQRYQ could be a candidate BP for health food. In the present study, a new BP screening method was developed based on a synthetic peptide library for bile acid binding and machine learning. A database containing peptide sequences derived from edible proteins was developed to identify peptides with features associated with bile acid binding. Novel bile acid-binding candidate peptides were discovered by combining these two tools. Among the peptides with the highest predicted scores for bile acid binding activity, seven (VFWM, QRIFW, RVWVQ, LIRYTK, NGDEPL, PTFTRKL, and KISQRYQ) were derived from storage proteins. Among them, KISQRYQ was predicted to be generated from alpha-S2-casein (Bos taurus) with peptidyl-Lys metalloendopeptidase (Armillaria mellea neutral proteinase) or from skim milk with Lactococcus lactis subsp. lactis IL1403. Our novel method could successfully screen BPs and can be easily applied to industrial applications based on whole edible proteins. The proposed approach would be useful for bile acid-binding peptides, as well as for other BPs, provided that a large amount of training data can be obtained. (Table 3: The numbers of peptides derived from edible proteins by performing in silico protease digestion using all available proteases in the database. After removing duplicate sequences, the final number of peptides is shown in the right column.)
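The edible-protein peptide database discussed above can be reproduced in outline with a few lines of code: every protein sequence is sectioned into 4- to 7-mers with a one-residue shift, and duplicates are removed before prediction. This is a simplified sketch of that windowing and de-duplication step (the study additionally used PeptideCutter cleavage-site predictions); the example sequences are hypothetical.

```python
def window_peptides(protein, k):
    """All k-mers obtained with a one-residue shift from the N-terminus."""
    return [protein[i:i + k] for i in range(len(protein) - k + 1)]

def build_peptide_database(proteins, lengths=(4, 5, 6, 7)):
    """Unique k-mers per length, pooled over all input protein sequences."""
    database = {k: set() for k in lengths}
    for seq in proteins:
        for k in lengths:
            database[k].update(window_peptides(seq, k))
    return database

# Hypothetical protein fragments standing in for the 710 BIOPEP-UWM entries
proteins = ["MKVFWMQRIFWK", "MKVFWMRVWVQ"]
db = build_peptide_database(proteins)
for k, peptides in db.items():
    print(k, len(peptides), sorted(peptides)[:3])
```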
Materials and methods Materials. The Fmoc amino acid OH was purchased from Watanabe Chemical Industries, Ltd. (Japan). BSA was purchased from Fujifilm Wako Pure Chemical Corporation (Japan). Taurocholic acid (T-4009) was purchased from Sigma-Aldrich (USA). The anti-cholic acid antibody (FKA502) was purchased from Cosmo Bio (Japan). Anti-rabbit IgG-conjugated Alexa 488 (ab150077) antibody was purchased from Abcam (Cambridge, UK). Synthetic peptide array generation. To generate positive and negative peptide training datasets for our machine learning algorithm, we synthesized 460 4-, 5-, 6-, and 7-mer peptides that were randomly generated using R software (version 3.5.3) (R development Core Team, https:// www.r-proje ct. org/). All peptides were synthesized on a cellulose membrane with a spot synthesizer (Intervis, ASP222, Cologne, Germany), as previously reported 38 . Fmoc-aund-OH was introduced at the C-terminal end of the peptide as a spacer. After synthesis, the side-chain-protecting groups of the Fmoc amino acids were removed using trifluoroacetic acid. The membrane was washed thoroughly with diethyl ether and methanol and dried. The membrane was soaked in PBS for 24 h and then transferred into 1% BSA in PBS solution at 37 °C for 12 h before the assay commenced. Bile acid binding assay. A bile acid-binding assay was conducted according to a previous study 23 . After washing the peptide array with PBS, 10 μg/mL taurocholic acid dissolved in PBS was added to the arrays and incubated for 1 h. After washing with PBS, anti-cholic acid antibody dissolved in 0.25% BSA was added to the array and incubated for 1 h at 37 °C. After washing with TBS containing 0.05% Tween 20, 2 μg/mL of anti-rabbit www.nature.com/scientificreports/ IgG conjugated to Alexa 488 dissolved in PBS was added and incubated for 1 h at 37 °C. After washing with TBS, peptide spots were fluorescently detected using a fluorescent imager (Typhoon FLA-7000, GE Healthcare Japan Life Sciences, Tokyo, Japan). The scanned images were quantified using Image Quant TL (GE Healthcare Japan Life Sciences, Tokyo, Japan). Average fluorescence intensities were calculated by subtracting the peptide array treated only with the secondary antibody from the triplicate fluorescence intensities of the same peptide sequence. Feature generation. Seven features were considered for the prediction of bile acid binding activity (Supplementary Table S1). General physicochemical features of peptides were described by pI 26 The correlation coefficient between Ph and Ps was > |0.98|; therefore, Ps was excluded from the feature index in this study. In addition, previous research has revealed that hydrophobic amino acids, especially aromatic ones, interact with bile acid micelles 19,30,31,41 . Therefore, the number of aromatic amino acids was included as a peptide feature. Based on these features, the GFs of the library peptides were generated. For example, in the case of 4-mer peptides, each amino acid has seven features (Supplementary Table S1), and 28 AAFs were also generated for each 4-mer peptide. In addition, four global values, the maximum, minimum, average, and standard deviation (sd) were generated for each peptide. This means that a total of 56 features (28 AAFs and 28 GFs) were generated and used as explanatory variables for each 4-mer peptide. All features were calculated in R. Construction of prediction models. 
To construct the prediction model, three algorithms were used: support vector machine (SVM), random forest (RF) 42, and logistic regression (LR). Scikit-learn libraries 43 were adopted, and leave-one-out cross-validation (LOOCV) was implemented in Python. The parameters for the algorithms were set as follows: in the SVM (linear) model, the default value of the cost parameter (C = 1) was used. In the RF model, the number of trees to grow (ntree) was set at 100 or 500, and the number of variables randomly sampled as candidates at each split (mtry) was set to "auto." In the LR model, the penalty was set to "lasso," C was set to 10 or 50, and the maximum number of iterations required for the solvers to converge (max_iter) was set to 100. The probability of binding to bile acid was calculated for all peptides and classified based on a score of 0.5. The performance of all three machine learning models was evaluated using three metrics: accuracy = (TP + TN) / (TP + TN + FP + FN), precision = TP / (TP + FP), and recall = TP / (TP + FN), where TP is the number of true positives, TN true negatives, FP false positives, and FN false negatives. Generation of peptide database for edible proteins. A total of 710 protein sequences were obtained from BIOPEP-UWM, available at http://www.uwm.edu.pl/biochemia/index.php/pL/biopep (accessed in October 2018) 11. Peptides were generated based on the entire sequence of the proteins. All sequences were sectioned into 4-, 5-, 6-, and 7-mer peptide fragments with a one-residue shift from the N-terminal amino acid. The peptide database was generated in csv format. For cleavage site prediction, PeptideCutter, available at https://web.expasy.org/peptide_cutter/, was used 44, and all enzymes stored in the software were used as simulation enzymes.
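A minimal scikit-learn sketch of the model-construction step described above is given below: the three classifiers are compared under leave-one-out cross-validation on a feature matrix X with binary labels y (bile-acid binder or not). Hyperparameter values follow the settings quoted above where they map directly onto scikit-learn arguments (max_features="sqrt" stands in for the legacy "auto" setting); the random data are placeholders for the real peptide features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 56))          # e.g. 56 features per 4-mer peptide
y = rng.integers(0, 2, size=60)        # 1 = binder, 0 = non-binder (placeholder labels)

models = {
    "SVM (linear, C=1)": SVC(kernel="linear", C=1, probability=True),
    "RF (500 trees)": RandomForestClassifier(n_estimators=500, max_features="sqrt"),
    "LR (L1 penalty, C=10)": LogisticRegression(penalty="l1", C=10,
                                                solver="liblinear", max_iter=100),
}

loo = LeaveOneOut()
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=loo, scoring="accuracy")
    print(f"{name}: LOOCV accuracy = {scores.mean():.2f}")
```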
2021-08-03T00:04:57.784Z
2021-02-19T00:00:00.000
{ "year": 2021, "sha1": "680c09b9335c1951690b3c80ec55bf9a807ec96f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-95461-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65ce75143bf12622dac9a362ee3d75731df7868d", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Chemistry", "Computer Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
259323939
pes2o/s2orc
v3-fos-license
Dental hygienists and dentists as providers of brush biopsies for oral mucosa screening Background: Oral cancer is a severe and potentially fatal disease usually starting in the squamous epithelium lining the oral cavity. Together with oropharyngeal carcinoma, it is the fifth to sixth most common malignancy worldwide. To limit the increase in the global oral cancer incidence over the past two of cancers in oral cavity. 1 Oral cancers cause a significant healthcare burden on the global scale, with some 275,000 new cases reported annually. 2In Sweden, this number gravitates towards 1200 new cases 3 and roughly 350 patients with oral-/ oropharyngeal cancer die each year. 4A 5-year overall survival rate of approximately 55% drops to only 3%-4% if the disease is detected at advanced stage. 3Moreover, approximately 30% of patients treated for oral/ pharyngeal cancers suffer a relapse, and have an increased risk of new primary tumours. 2,51][12] Oral cancer is often preceded by oral potentially malignant disorders (OPMDs), among which leukoplakia (LP), erythroplakia (EP), and oral lichen planus (OLP) predominate. 135][16][17][18][19][20] The diagnosis of OPMDs is clinical as well as histopathological.2][23] The risk for malignant transformation of LP was reported to be 3.5%, 24 but the rate varied between studies from 0.13% to 34%. 24The global prevalence of EP is reported to be 0.01%-0.21%,mostly occurring in men aged 50-70 years, 22 with malignant transformation rates ranging from 14% to 50%. 23,25,26Atrophic and erosive OLP, most frequently seen in men, have a risk of malignant transformation of 0.5%-2%. 23,27mours could be prevented through a routine screening and early detection of OPMD. 280][31][32] As dental hygienists (DHs) and dentists (Ds) regularly see their patients, they should be able to identify patients at increased risk.After being referred to a specialist these patients could be enrolled in a follow-up program at the primary point of care. 33In difference to a previous study, 34 focusing on evaluating the level of comfort in clinicians for using different adjunctive screening devices, including brush biopsies, this study was the first to investigate in depth the role for DHs in screening for premalignant oral lesions in the primary care settings.6][37] Cytological analysis of brush samples is a time-consuming method, often plagued by large interobserver variability.It requires specialist skills what makes it expensive, and difficult to introduce brush biopsies on a wider scale.However, cytology assisted by the artificial intelligence (AI)-based technologies opens opportunities for radical improvement.A deep learning-based AI method allows cost-effective, fast, non-invasive and objective characterization of cellular changes. 38It is concluded that a pipeline for nuclei classification and localization using deep learning can contribute to minimize the subjectivity of the human analysis and also support the detection of cancer at early stages. 39 | Aims The overall aim of this project was: To investigate whether DHs and Ds in general dental practice (GDP) are capable to collect enough cell material for cytological diagnosis by brush sampling of OPMDs as part of the national strategy for oral cancer screening and prevention. 
The specific aims of this project were: To test the ability of DHs and Ds in a dental setting to perform brush sampling for possible automated cytological diagnosis of OPMDs; To evaluate the level of comfort in performing brush biopsies for screening of oral lesions among the participating DHs and Ds.All patients over the age of 18 booked for an appointment with the DHs or Ds (regardless of the reason for their visit), were offered to be enrolled in the study.In total, 200 adult patients were intended to be recruited using convenience sampling.To be included in the study, patients had to be without cognitive impairment and to have the ability to understand information about the study, as well as to be able to sign a written consent form. | MATERIAL S AND ME THODS Clinical examination and documentation were carried out in accordance with the research protocol which included collecting data on the person's age, gender, tobacco habits, medications, and country of birth.Data on alcohol habits was collected through the Alcohol Use Disorders Identification Test (AUDIT). 42,43n samples from healthy individuals and ten from patients with OPMD were planned to be collected by each of the Ds and DHs, which is in total 100 samples from healthy individuals and 100 samples from patients with OPMD.Two brush samples were obtained from each patient, one for cytological analysis and one for hrHPV analysis. 6][47] Slides were examined at x10 objective from the left to right edge of the preparations with a minimum of 20% overlap between examined fields.Cells with somewhat atypical morphology were marked, and cell morphology was further analysed at 40× objective. 46,47When assessing the sample adequacy, a minimum of 10 fields was counted at 40× objective along a diameter that includes the center of the preparation.A cutoff of at a minimum of 5000 well-visualized/preserved squamous cells per slide was applied as in analogy to the previously established criteria for cervical swab evaluations. 45Representative images of the LBC preparations from a healthy control are shown in Figure 2. Images were obtained using Olympus BX43 light microscope, EP50 digital camera and an Imageview Cam-HD 6.3 series software (LRI Olympus).The FTA cards were processed at UU using a dedicated automated laboratory system (easyPunch STARlet; Hamilton Robotics).The system collects each card, acquires a photographic image of the sample collection area, and uses machine-learning software to identify parts of the sampling deposition area containing the highest amount of cellular material.For details, see two recent papers on brush biopsy. 35,36e hrHPV analysis detects and quantifies the following human papilloma virus (HPV) types: 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59.It also measures a human single copy gene (Homo sapiens hydroxymethylbilane [HMBS]), which serves as a control.For this purpose, the samples must contain enough cellular material so that the analysis can be informative.The limit of detection for both HMBS and HPV was set to ten DNA copies per polymerase chain reaction (PCR). 
Following the primary investigation, patients with confirmed lesions (suspected LP, EP, or OLP) were referred to the Department of Orofacial Medicine, Falun Hospital, Falun, Region Dalarna, where the sampling procedure using a brush technique was repeated and a punch biopsy was performed as routine by a specialist in orofacial medicine.For histopathological diagnosis, the punch biopsy was referred to the Department of Oral Pathology, Faculty of Odontology, Malmö University, Malmö, Sweden. A questionnaire was designed and distributed as an online email survey to the participating DHs and Ds, in order to evaluate their level of comfort with being involved in the continuous clinical follow-up of OPMDs using non-invasive brush sampling for cytology. The questionnaire included background questions regarding number of years in the profession and previous courses on mucosal changes (answer options "yes"/"no").Other questions concerned identification of mucosal change, taking the sample, taking care of the sample, motivating the patient to agree to a biopsy, and the referral procedure.The answer alternatives for these questions were "easy," "quite easy," "quite difficult," and "difficult."Finally, the participating DHs and Ds were asked for their opinion on whether this follow-up could be included as a part of the daily work at a GDP (answer options "yes"/"no"), how long it took to take the sample, as well as time (in min) for sampling including photographing mucosal lesions. | Statistics The data were analysed using IBM SPSS Statistics for Windows, Version 26.0 (IBM Corp.).Descriptive analyses of frequencies, and distributions were performed.Chi-square tests were calculated when comparisons were made between DHs and Ds.Statistical significance was set at p < 0.05. | Ethical considerations The study involved minimal risk for patients, with only brush samples and, in referred patients, a 5 mm tissue biopsy (punch biopsy) taken from the oral mucosa.The samples were saved according to the existing procedure for pathology in Falun and at Dalarna Biobank (No: RD20-00418), and at the Institute for Immunology, Genetics, and Pathology at Uppsala University. The results are presented at group level and in such a manner that they cannot be linked to a particular person.The study was approved by the Swedish Ethical Review Authority (No: 2019-03904) and registered at Clinicaltrials.gov(NCT04081038). | RE SULTS In total, 100 patients were included in the study, 71% of whom were female.The mean age was 52.2 years (standard deviation [SD] 17.1), 52.9 years (SD 16.4) in females and 50.5 years (SD 19.1) in male patients.In total, 200 brush samples were collected by participating DHs and Ds in GDP, two from each patient.Of the patients, 67 were healthy and 33 had suspected OPMDs.Among the 33 patients with suspected OPMDs and referred to special care for punch biopsy, a further 22 brush samples were collected by a specialist.There was good correlation between the clinical diagnoses set by the DHs, Ds, and specialist with the histopathological diagnosis (see Table 1). More detailed comparison of the cytopathological and histopathological diagnoses is beyond the scope of this study and remains as (ongoing) future work. Table 2 shows the number of patient samples obtained per occupational group.In total, DHs collected more samples than Ds (57 vs. 43, p = 0.005).The DHs identified fewer patients with lesions than did the Ds (12 vs. 
21; p = 0.003). All patients referred to specialist dental care were considered by the specialist as relevant for further investigation and for incisional biopsies. Of the 222 samples (200 collected by DHs and Ds, and 22 collected by the specialists), the cell count was adequate for morphological assessment and hrHPV analysis (≥5000 cells) in 215 (97%). Three samples not fulfilling this criterion (i.e., containing <5000 cells) were collected by Ds. Of the 122 samples taken for cytology (100 samples collected in GDP and 22 samples in specialist care), 119 (98%) contained enough cell material to perform the analysis. The samples were scanned, and the images were found to be suitable for training, validation, and test sets for developing a neural network model for cell detection and classification. Representative images of the cytological preparations are provided in Figure 1. For hrHPV analysis, 75 of the 122 samples were analysed, and 71 samples (95%) contained enough cell material to perform hrHPV analysis. All samples were negative for hrHPV.

| DISCUSSION
The objective of this study was to explore if DHs and Ds were able to obtain sufficient cell material for cytological analysis by collecting brush samples. The results have demonstrated that DHs and Ds in GDP are capable and well suited to obtain cell samples for diagnostic purposes and for cell characterization as well as hrHPV analysis. The results have shown that all but three brush samples contained enough cells to allow cytological analysis, and that it was possible to extract DNA for hrHPV PCR from all but four samples. In every case where the primary care staff judged the clinical picture to show a mucosal derangement, the specialist considered it relevant for further investigation and for incisional biopsy. Based on the histopathological diagnosis, the clinical picture was considered as normal and not associated with any disease. The aim was to collect 200 brush samples in total. Due to the COVID-19 pandemic, with limited patient contact, the collection of samples ceased when samples from altogether 67 healthy patients and 33 patients with suspected OPMDs had been obtained in GDP. However, the number of samples was considered sufficient for the purpose of this study, as 99% of the samples were adequate for cytology and 95% were adequate for HPV diagnosis. Before the study was initiated, all participants had one day of information and education, which included half a day of training in oral clinical pathology and could have been extended with a few more hours. This would possibly have resulted in more samples from mucosal changes. Sweden has a long tradition of screening for cervical cancer in primary care. Cell sampling for cytology and HPV analysis is performed by midwives, who send the samples to the laboratory for testing. The evaluation of the resulting report, naturally, is in the hands of a specialist, who is responsible for evaluation and for organizing follow-up measures if required. 28 Our long-term goal is to implement this model in clinical practice for continuous follow-up of OPMDs in general dental care by DHs and Ds, using cytological diagnosis of oral mucosal changes, which has been reported to be a safe, simple and fast method with high sensitivity and specificity. The modification of the Bethesda system for oral cytology can be used as a standardized system for the oral cytological assessment. 35,36,48
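The adequacy percentages quoted above follow directly from the reported counts. As a minimal, purely illustrative check (the helper function and its name are assumptions of this sketch, not part of the study's analysis pipeline), they can be recomputed as follows:

```python
# Illustrative re-computation of the sample-adequacy rates quoted in the text.
# The counts (215/222, 119/122, 71/75) come from the Results; the helper name
# and the rounding are assumptions of this sketch, not the study's own code.

def adequacy_rate(adequate: int, total: int) -> float:
    """Percentage of samples meeting the >=5000 well-preserved cell criterion."""
    return 100.0 * adequate / total

checks = {
    "morphology + hrHPV assessment (>=5000 cells)": (215, 222),
    "cytology samples with enough cell material": (119, 122),
    "hrHPV analyses with enough cell material": (71, 75),
}

for label, (adequate, total) in checks.items():
    print(f"{label}: {adequate}/{total} = {adequacy_rate(adequate, total):.1f}%")
# Prints roughly 96.8%, 97.5% and 94.7%, i.e. the 97%, 98% and 95% quoted above.
```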
All the participants thought that brush sampling could very well be carried out as part of the routine in GDP. This notion is further underlined by the answers in the questionnaire, distributed to all participants, where a majority found the brush samples easy or quite easy to perform. The participating DHs spent more time on collecting the samples compared to the Ds, probably because in Sweden DHs have no assistants, whereas Ds do. All the collected samples were negative for hrHPV, and the clinical diagnosis of OPMDs was not clearly established, confirming the challenges in identifying OPMDs as described previously. 49 When an OPMD diagnosis is made, the patient should always be enrolled into a follow-up program to ensure that high-grade cellular changes and oral cancer are diagnosed promptly. This is important, as it is not possible to predict which oral EP, LP, or OLP will progress to neoplasia, notably oral squamous cell carcinoma. 49 An efficient, fast and cost-effective way to do this is by performing brush biopsies, a non-invasive technique with sufficient sensitivity and specificity that does not require surgery. It can be performed in GDP, as demonstrated by the results of this study. The report from the cytological examination should always include a copy to the responsible specialist, in the same way as in the case of screening for cervical cancer by the midwives in primary care. 28 Based on the answers from 369/3000 responders, an online email survey concluded that continuing education is needed for DHs to be able to be responsible for continuous surveillance of OPMDs. 34 To summarize, in advance of replacing tissue biopsies with brush biopsies for cytology, the recommendation is still to handle continuous surveillance of oral lesions at a specialist clinic, 50 which may explain the high propensity to refer patients to a specialist. The Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU) has investigated the reliability of brush samples in combination with cytology when assessing oral mucosal lesions. The overall results showed that the sensitivity and specificity of the diagnosis of oral cancer or potentially malignant changes were high. However, the sampling was mostly carried out by specialists, which makes the transferability of the results to general dental care uncertain. 51 Our results show that brush sampling in general dental care worked very well in the five general dental practices which took part in the study. In the future, even self-sampling, as available for screening for cervical cancer (which has been shown to work well), 52 may be an option. Furthermore, to organize screening, there is a need for a fast and inexpensive sampling procedure, including sampling, sending, and analysing samples in a safe way, and quickly reporting the results. It is therefore important that the process of preparing and scanning the microscope slides is efficient. Introducing an objective and noninvasive diagnostic method such as AI-assisted digital cytology may be of great significance for efficient OPMD surveillance and follow-up after cancer treatment. With a reliable AI system, we expect that the workload of cytologists may decrease. More frequent brush sampling could be economically advantageous for both society and the patient, enabling an earlier diagnosis and easier follow-up.
| CONCLUSION
Based on the results of this limited study, we found that DHs and Ds in GDP are capable of collecting enough cell material for cytological analysis. All the participating DHs and Ds were of the opinion that brush sampling could be handled routinely by DHs and Ds in GDP.

| Scientific rationale for the study
To organize screening using a fast and inexpensive sampling procedure, including sampling, sending, and analysing samples in a safe way, and quickly reporting the results, in order to prevent death from oral cancer.

| Principal findings
In order to better prepare DHs to identify suspected OPMDs, the curriculum in clinical oral pathology needs to be expanded, and in parallel, postgraduate and clinical training should be organized.

An inquiry regarding interest in participating in the study was sent out to all 20 GDPs in Dalarna county, Sweden. Five clinics expressed their interest in participating. One DH and one D were recruited from each clinic. Prior to the study start, all the participants were engaged in a one-day course conducted by two senior consultants in orofacial medicine, which included: (1) information about the purpose of the study and how it is organized, (2) information and discussion regarding Good Clinical Practice (GCP) adapted for those who work with clinical trials in dentistry, (3) lectures on clinical oral pathology including oral examination, in order to learn to identify OPMDs, and (4) practical training on how to perform and handle the brush samples for cytology and HPV diagnosis. The cytobrush was placed immediately in a vial of PreservCyt transport medium (Hologic), spun around the walls of the vial for approximately 10 s and then removed. Collected specimens were stored at room temperature until transport to the laboratory at the Department of Pathology & Cytology Dalarna, County Hospital Falun, Falun, Sweden, where they were processed within 4 weeks from the day of collection. At the laboratory, a ThinPrep TP5000 processor (Hologic, Inc.) was used to prepare liquid-based cytology (LBC) slides (one slide per patient) according to the manufacturer's protocol. Slides were stained using a Gemini AS slide stainer (Thermo Scientific) according to the regressive Papanicolaou (PAP) staining technique as described by Soost et al.
Figure 3 shows a brush sample from a histopathologically diagnosed lesion with severe dysplasia and HPV 16 infection. The oral epithelial cells from the lesion show severe dysplasia (published with permission from Tandläkartidningen: Hirsch J-M, Haj-Hosseini N, Kruger Weiner C, Hasseus B, Lindblad J (2021) Icke-invasiv kontroll av cellförändringar i munslemhinnan. Tandläkartidningen 113 (9):48-55). Orange cells are hyperkeratinized surface cells in which the nucleus is missing. Blue cells with hyperchromatic nuclei and a low nuclear-cytoplasmic ratio are interpreted as severely dysplastic epithelial cells. Dysplastic cells have a relatively irregular, elongated shape, which usually indicates an ongoing process in which an epithelial cell loses its cell polarity and cell-cell adhesion and acquires migratory and invasive properties, a so-called epithelial-mesenchymal transition.

FIGURE 1 Cytobrush rubbed against the mucosa.

Of the ten participants, nine (five DHs and four Ds) responded to the questionnaire regarding work experience in years and comfort in performing brush sampling. Table 3 presents data on the participants' years of work, previous courses on mucosal changes, and experience of sampling. The mean number of years of work in the profession was 21.4 (DHs) and 23.0 (Ds). All participants thought that sampling can be included in the duties of DHs and Ds, and most thought that sampling and taking care of the samples was easy or quite easy. Seven out of ten of the participants thought the referral procedure was easy or quite easy. The time spent on taking the sample was somewhat longer for the DHs, especially when the process included taking photographs. A general comment was that much of the time spent on sampling was due to the fact that this was a study, which involved extra administrative tasks, and that the brush test itself only took about 5-10 min.

TABLE 3 Participants' work experience and level of comfort in performing brush sampling. Sampling can be included in professional duties, "yes": n = 5. Time spent on sampling, mean (SD): 12.0 (7.6) min for DHs and 11.2 (4.8) min for Ds. Time spent on sampling including photography, mean (SD): 24.0 (13.4) min for DHs and 18.8 (2.5) min for Ds. SD = standard deviation.
Furthermore, the fact that this was a study, with some extra administration and with information having to be given to the patient, also contributed to the extra time the DHs spent on sampling, while for the Ds the extra administration can easily be handled by their assistants. None of the participants found it difficult to identify mucosal changes, and, according to the specialist at the specialist clinic, all referred patients with mucosal changes were correctly referred. As EP and LP are clinical diagnoses, an incisional biopsy is necessary to establish a histopathological diagnosis that coincides with the clinical diagnosis. Three samples diagnosed as lichenoid reactions in the tissue samples did not clearly demonstrate a microscopic picture to support the diagnoses of LP and EP. However, it is mandatory to confirm a tentative clinical diagnosis with a tissue biopsy 35 to be able to decide on the correct treatment. If cellular changes are significant, the patient should be reviewed by a specialist for further evaluation and treatment, which normally includes surgical interventions. Based on the collective findings, the specialist must decide on further management of the patients. If a future plan involves follow-up based on the cytological diagnosis, this could very well entail continuous follow-up with brush sampling in GDP, as outlined above. Even though the specialists in orofacial medicine considered all the referred patients as relevant for further investigation, based on the experience of this study, extended training in clinical oral pathology is suggested in advance of introducing brush sampling in GDP. Dental hygienists collected more samples than Ds did, but the DHs collected fewer samples with lesions, indicating a difficulty in identifying lesions adequate for brush sampling. Therefore, it is also suggested to initiate a discussion regarding the curriculum for the degree of dental hygiene with the responsible parties. A DH must demonstrate the ability to independently perform oral examinations and to recognize the need for interventions. Previous studies regarding DHs conducting screening of oral lesions are few.

TABLE 1 The clinical diagnoses set by dental hygienists and dentists (Clinical diagnosis GDP, where GDP = General Dental Practice), by the specialist in orofacial medicine (Clinical diagnosis DOM, where DOM = Department of Orofacial Medicine), and the histopathological diagnosis.
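The Statistics section states that DH-versus-D comparisons were made with chi-square tests. The sketch below shows, under assumptions, how such a comparison could be set up for the lesion counts quoted above (12 of the 57 patients sampled by DHs versus 21 of the 43 sampled by Ds); the 2×2 layout and the use of scipy are choices made here, and the resulting p-value need not coincide with the values reported in the paper.

```python
# Sketch of a chi-square comparison between DHs and Ds of the kind mentioned in
# the Statistics section. The 2x2 table is assembled from counts quoted in the
# text (12 of the 57 patients sampled by DHs and 21 of the 43 sampled by Ds had
# suspected lesions); the use of scipy and this particular layout are
# assumptions of the sketch, not the study's actual SPSS analysis.
from scipy.stats import chi2_contingency

#              lesion identified   no lesion
table = [[12,                      57 - 12],   # DHs
         [21,                      43 - 21]]   # Ds

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value below 0.05 would indicate that the proportion of lesion patients
# differs between the two occupational groups, as suggested in the Results.
```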
2023-07-05T06:17:08.640Z
2023-07-04T00:00:00.000
{ "year": 2023, "sha1": "cc0037ee7c4840c642731ef53d199b81dca048ce", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/idh.12713", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "160d6634f6400f90e7e3d2a541f820d6d5270df3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17309086
pes2o/s2orc
v3-fos-license
Quasi-degenerate neutrinos and leptogenesis from L_mu-L_tau

We provide a framework for quasi-degenerate neutrinos consistent with a successful leptogenesis, based on the L_mu-L_tau flavor symmetry and its breaking pattern. In this scheme, a fine-tuning is needed to arrange the small solar neutrino mass splitting. Once it is ensured, the atmospheric neutrino mass splitting and the deviation from the maximal atmospheric mixing angle are driven by the same symmetry breaking parameter lambda~0.1, and the reactor angle is predicted to be slightly smaller than lambda, while the Dirac CP phase is generically of order one. Given that the pseudo-Dirac nature of right-handed neutrinos is protected from the flavor symmetry breaking, a small mass splitting can be generated radiatively. For moderate values of tan(beta)~10, this allows for low-scale supersymmetric leptogenesis, overcoming a strong wash-out effect of the quasi-degenerate light neutrinos and evading the gravitino overproduction.

I. INTRODUCTION
Thanks to the impressive progress made in neutrino oscillation experiments, we have fairly good information on the low-energy observables like the neutrino mass differences and mixing angles [1]. The least known parameter is the so-called reactor angle θ_13. A measurement of this angle and a study of CP violation in neutrino oscillation are among the major tasks of the next neutrino oscillation experiments. These endeavors cannot, however, reveal what the absolute neutrino mass scale is. A future determination of this feature would be a key element in exploring the origin of neutrino mass, which clearly lies beyond the Standard Model (SM). Among all still allowed possibilities, the scenario of quasi-degenerate neutrinos is interesting, as it can be confirmed or disproved in future neutrinoless double beta decay experiments or in cosmological observations of the cosmic microwave background and the large scale structure of the universe [1]. One of the most fascinating connections between neutrino physics and cosmology would be a possible explanation of the baryon asymmetry of the Universe, η_B ≡ (n_B − n_B̄)/n_γ = 6.15(25) × 10^{-10} [2], through leptogenesis [3] (see also [4] for a review of subsequent developments), which is linked with the neutrino masses and mixing originating from the seesaw mechanism [5]. Recent studies of leptogenesis revealed a meaningful constraint on the scale of the heavy right-handed neutrino mass M at which the baryon asymmetry is generated. Under the assumptions of a hierarchy in the masses of the heavy right-handed neutrinos and a CP phase of order one in their decay, M ≳ 10^8 − 10^9 GeV is required to account for the baryon asymmetry of the Universe, if the inverse-decay of this right-handed neutrino is negligible [6]. In the case of quasi-degeneracy for the low-energy neutrinos, the resulting leptonic CP asymmetry is suppressed by a strong inverse-decay effect coming from the larger neutrino Yukawa couplings, and as a result, one needs to increase the scale M by a factor of ∼10^{2-3} compared to the above value. Such a high leptogenesis scale M sets a lower bound on the reheating temperature after inflation, which may endanger the successful prediction of primordial nucleosynthesis due to gravitino overproduction [7]. Of course, the above-mentioned constraint on M is model-dependent. For instance, nearly mass-degenerate right-handed neutrinos can lead to an increase in the asymmetry [8].
In fact, the quasi-degeneracy of the low-energy neutrinos could be a consequence of that of the high-energy right-handed neutrinos. An extreme possibility along this line is to have the right-handed neutrino mass difference comparable to their decay rate, ∆M ∼ Γ, which leads to the leptonic CP asymmetry being resonantly enhanced to its near-maximal value, so that the mass scale M can go down to the TeV scale [9]. An interesting way of realizing such a resonant enhancement is to invoke a radiatively induced mass splitting through the renormalization group running from the flavor scale to the mass scale M [10,11]. Such radiative resonant leptogenesis has also been studied in the context of minimal flavor violation [12] and µ-τ symmetry [13]. An almost exact degeneracy requires a theoretical justification; in a flavor model of neutrino masses and mixing, a (nearly) exact degeneracy of the singlet right-handed neutrino sector can be a consequence of the flavor symmetry and should also be protected from its breaking effect [11]. In this work, we have taken the viewpoint that, since baryogenesis via leptogenesis is a theoretically elegant explanation of the baryon asymmetry of the Universe, a requirement of successful leptogenesis (with a low reheating temperature to avoid the gravitino problem) can be added to the list of phenomenological constraints that a neutrino seesaw mass model should observe.^1 We illustrate this point by investigating the properties of a quasi-degenerate neutrino mass model based on the L_µ − L_τ flavor symmetry, which also provides a successful realization of radiative resonant leptogenesis. The L_µ − L_τ flavor symmetry is motivated by the fact that the symmetry-preserving right-handed neutrino mass term, M N_µ N_τ, naturally leads to the maximal mixing required for the atmospheric neutrino oscillation, θ_23 = π/4 [14,15,16,17]. Note also that such a pseudo-Dirac structure in the µ-τ sector implies an exact degeneracy of two right-handed neutrinos, M_2 = M_3 = M. As a consequence, the resulting low-energy neutrino mass pattern is required to be quasi-degenerate, and a fine-tuning has to be introduced to arrange a small mass splitting for the solar neutrino oscillation, as we will discuss in detail. We will analyze how the atmospheric and solar mass splittings can arise, in connection with a small reactor angle and a large solar angle, from the flavor symmetry breaking, which introduces small complex order parameters λ_i suppressed by a factor λ = O(0.1) with respect to the symmetry-preserving ones. The right-handed neutrino mass matrix can be exempted from such a flavor symmetry breaking effect by assigning an additional discrete symmetry, as a result of which the resonant leptogenesis can naturally explain the observed baryon asymmetry of the universe for tan β ∼ 10 (thereby partially compensating the above-mentioned fine-tuning). Our scheme predicts generically order-one CP phases for the neutrino oscillation and leptogenesis which are unrelated to each other.

A. General remarks
Let us write down the Lagrangian with three right-handed neutrinos N, containing the neutrino Yukawa couplings Y_ν and the Majorana mass term, which leads to the seesaw mechanism explaining the smallness of the neutrino masses, schematically m_ν ∝ Y_ν M^{-1} Y_ν^T, where m_ν is the mass matrix of the light neutrinos, M is the Majorana mass matrix of the heavy right-handed neutrinos and Y_ν is the matrix of the neutrino Yukawa couplings.
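As a rough numerical illustration of the seesaw suppression just described (the sign and vev conventions vary between papers, and all input values below are generic choices, not the parameters of this model), one can check that Yukawa couplings of order 0.1 and heavy masses near 10^14 GeV give light neutrino masses far below the eV scale:

```python
# Toy seesaw estimate: m_nu ~ v^2 * Y_nu * M^-1 * Y_nu^T (conventions for the
# sign and the Higgs vev differ between papers). All inputs are generic numbers
# chosen only to illustrate the suppression, not the parameters of this model.
import numpy as np

v = 174.0                                      # GeV, an often-used vev choice
M = np.diag([1e14, 1e14, 1e14])                # heavy Majorana masses, GeV
Y = 0.1 * np.ones((3, 3)) + 0.05 * np.eye(3)   # toy O(0.1) Yukawa matrix

m_nu = v**2 * Y @ np.linalg.inv(M) @ Y.T       # light-neutrino mass matrix, GeV

# Physical masses are the singular values of the symmetric mass matrix.
masses_eV = np.sort(np.linalg.svd(m_nu, compute_uv=False)) * 1e9
print(masses_eV)   # ~ [7.6e-4, 7.6e-4, 3.7e-2] eV: far below the eV scale
```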
The matrix m_ν can be diagonalized by a unitary flavor transformation, U^T m_ν U = diag(m_1, m_2, m_3) (Eq. (3)), where we take m_1, m_2, m_3 to be a priori complex, and the neutrino mixing matrix U has a CKM-like form, parameterized by the mixing angles θ_12, θ_23, θ_13 and the Dirac phase δ (Eq. (4)).

Footnote 1: This requires, among other assumptions, that the baryon asymmetry is generated through leptogenesis, and that the mechanism of supersymmetry breaking predicts the gravitino mass in the range potentially dangerous for primordial nucleosynthesis. There exist models abandoning some of these assumptions.

The challenge of building a neutrino flavor model consists in reproducing this peculiar observed pattern of the neutrino mass-squared differences and of two large and one small mixing angles. Writing Eqs. (2)-(4), we tacitly assumed that they are valid at the low energy scales at which the neutrino experiments are performed. In order to match these expressions with the neutrino Yukawa couplings and the masses of the right-handed neutrinos defined at the high scale at which the right-handed neutrinos are integrated out, one needs to compute quantum corrections which typically contain large logarithms due to the vast difference in the energy scales. These large logarithms can be conveniently summed up with the use of the renormalization group (RG) technique [18]. The RG corrections to the neutrino masses and mixing angles in the Supersymmetric Standard Model are particularly large for a degenerate mass spectrum with definite CP parities (∓, ∓, ±) and for large tan β. In particular, for an overall neutrino mass scale ∼0.1 eV and tan β ≳ 15, the RG corrections normally drive sin 2θ_12 towards a small fixed-point value inconsistent with experimental constraints, unless the neutrino mass model is very finely tuned at high scales [19]. Here, we adopt the perspective that the observed pattern of the neutrino masses and mixing does not accidentally emerge from the RG corrections, but that it reflects features of the underlying flavor model, and therefore our scheme fits better for tan β ≲ 15. We illustrate this point in Fig. 1, where the RG running of ∆m²_sol/∆m²_atm and t²_12, fixed at their best-fit values (2σ deviations) at the low scale, as given in (5), is shown as central solid (outer dashed) lines. We also chose m_1 = 0.1 eV, s_13 = 0.075 and M_{N_A} = (1.0, 1.1, 1.2) × 10^8 GeV. For such low masses of the right-handed neutrinos, the effects of the contributions from the neutrino Yukawa couplings are negligible; nevertheless, for M_X = 10^14 GeV we include them into the RG equations, choosing the texture corresponding to the Casas-Ibarra matrix R = 1 [20] and 'switching on' the relevant neutrino Yukawa couplings at the appropriate thresholds. It is convenient to note that the procedure of deriving the RG equations for the neutrino masses and mixing angles allows maintaining an arbitrarily chosen phase convention for the neutrino masses and the neutrino mixing matrix [21], and we shall utilize it to adhere to the phase choice corresponding to Eq. (4) throughout the entire RG evolution.

B. The L_µ − L_τ flavor model
The flavor structure of the Yukawa couplings of quarks and leptons is often explained with the use of a Froggatt-Nielsen mechanism with one or more flavons Φ_A, i.e. scalar fields acquiring vacuum expectation values (vevs), spontaneously breaking a beyond-SM flavor symmetry, but coupling to the matter fields in a symmetry-preserving manner.
One of numerous attempts to address the empirically determined pattern of the neutrino observables is to postulate an approximate L_µ − L_τ global U(1) symmetry in the lepton sector, which has the virtue of predicting almost maximal atmospheric mixing from a pseudo-Dirac structure of the right-handed neutrino mass matrix. In addition, we assume that there is a discrete Z_n symmetry in the lepton-flavon sector. The full field content and charge assignment is given in Table I. The couplings allowed by the symmetries give rise to the almost maximal atmospheric mixing, while those arising from the spontaneous breaking of U(1), and thus suppressed by the flavor scale, allow reproducing the remaining features of the neutrino masses, given that certain constraints are fulfilled. The presence of the Z_n symmetry prevents the flavon vevs from contributing to the mass matrix of the right-handed neutrinos, which shall turn out to be important for the possibility of radiative resonant leptogenesis. We would, however, like to stress that our goal consists in exploring a phenomenologically motivated neutrino flavor pattern which may provide successful leptogenesis, rather than in pretending that our construction is the ultimate model of leptonic flavor. The resulting neutrino Yukawa matrix and the Majorana mass matrix of the right-handed neutrinos are given in Eqs. (6) and (7). The parameters X and Y consistent with the U(1) symmetry are of the same order of magnitude, and the same is true for a, b, d. On the other hand, the flavor symmetry breaking parameters λ_i are smaller than the latter, as they arise from the flavon vacuum expectation values divided by the flavor symmetry breaking scale. We take the flavor suppression factors λ_i/a to be of the order λ ∼ O(10^{-1}). It follows from the pseudo-Dirac structure of the 2-3 sector of M that two right-handed neutrinos are exactly degenerate in mass, while the mass of the third right-handed neutrino can be slightly different. So far, we have not chosen any specific phase convention for the right-handed neutrinos. We can use the transformations N_1 → e^{iϕ_1} N_1 and N_{2,3} → e^{iϕ_2} N_{2,3} to ensure that X and Y are real and positive. The remaining phase redefinition N_{2,3} → e^{±iϕ_3} N_{2,3} leaves M invariant, but it changes phases in the second and third rows of the neutrino Yukawa matrix. We can also make phase redefinitions of the charged lepton doublets, L_i → e^{iφ_i} L_i (i = e, µ, τ). First, we can redefine the overall leptonic phase φ_e + φ_µ + φ_τ and the phase ϕ_3 so that d is real and positive, and b is real and negative (these transformations do not depend on the phase convention imposed by Eq. (4)). The remaining freedom of the phase choice must then be utilized to ensure that the neutrino mass matrix is diagonalized by a matrix of the form (4). As we shall see, this will introduce some consistency constraints. These unphysical phases correspond to the freedom of φ_e (allowing an arbitrary phase to be assigned to a) and to the freedom of shifting ϕ_3, −φ_µ and φ_τ by the same value; the latter would be a symmetry of the neutrino Yukawa matrix if the L_µ − L_τ breaking were absent. Given the form of the neutrino mass matrix from Eqs. (6)-(7), the low-energy observables like the neutrino mass splittings and mixing angles can be explicitly calculated perturbatively, treating the small symmetry breaking entries as expansion parameters. Using Eqs. (3) and (4), we can expand the neutrino mass matrix around s_13 = 0 and θ_23 = π/4 for an arbitrary θ_12, as given in Eq. (8).
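The explicit texture of Eq. (7) is not shown above, but its key qualitative feature, an L_µ − L_τ invariant entry X for the first right-handed neutrino together with a pseudo-Dirac µ-τ block set by Y, can be illustrated with a generic matrix of the same form; the numerical values below are arbitrary, and only the exact degeneracy M_2 = M_3 = Y matters:

```python
# Generic matrix with the pseudo-Dirac structure described in the text: an
# L_mu - L_tau invariant (1,1) entry X and an off-diagonal mu-tau entry Y.
# This is not the texture of Eq. (7) itself; the X and Y values are arbitrary.
import numpy as np

X, Y = 2.0e9, 1.0e9                     # GeV, illustrative values only
M = np.array([[X,   0.0, 0.0],
              [0.0, 0.0, Y  ],
              [0.0, Y,   0.0]])

# Physical Majorana masses are the singular values of the symmetric matrix M:
print(np.linalg.svd(M, compute_uv=False))   # [2e9, 1e9, 1e9] -> M2 = M3 exactly
```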
In Eq. (8), m_ν is the neutrino mass matrix in the limit λ_i, δ → 0, ∆s_13 and ∆θ^(1)_23 are corrections to the leading pattern of the neutrino mixing, and m^(1)_i are corrections to the eigenvalues of the neutrino mass matrix. It is straightforward to derive higher-order terms of this expansion. Now we shall compare the neutrino mass matrix decomposed as described above with the mass matrix resulting from (7) via (2). At the leading order, O(λ^0), we obtain a neutrino mass matrix which has a pseudo-Dirac structure in the 2-3 sector. This picture can be extended to the first generation, predicting an exactly degenerate mass spectrum, given that the condition of Eq. (10) holds. With (m_1, m_2, m_3) = (−1, −1, +1) × |b|d/Y at this order, the atmospheric mixing is maximal, s_13 vanishes and the solar mixing remains undetermined. Clearly, Eq. (10) indicates that our model requires a fine-tuning to describe the neutrino masses and mixing correctly. We shall address the issue of the actual fine-tuning, compared to other neutrino mass models, in the following section. Let us turn to calculating the O(λ) corrections to this result. From now on, we shall further simplify our model by setting λ_1 = λ_5 = 0, which is ensured by the absence of the flavon field Φ_{−1}. Such an assumption does not change the qualitative features of the model, while simplifying the following formulae. Comparing the sum and the difference of the 12 and 13 entries of m^(1)_ν with the appropriate combinations of the O(λ) entries of the neutrino mass matrix resulting from (6) and (7) via (2) (entries denoted by * are given by the symmetry of m_ν), we obtain Eqs. (12) and (13). Since we know from the data that the solar mass splitting is much smaller than the atmospheric one, the quantities in Eq. (13) should be smaller than the naively assumed O(λ), which yields a corresponding constraint for large s_12. From the point of view of the flavor model, the two contributions to the left-hand sides of Eqs. (12)-(13) should either interfere destructively or be small. The first option represents another fine-tuning, while the second can be achieved with a small hierarchy among the flavon vevs, λ_{2,6}/λ_{3,4} ∝ ⟨Φ_{+1}⟩/⟨Φ_{±2}⟩ ∼ λ. Irrespective of the actual origin of this feature, it seems more appropriate to defer the discussion of the terms proportional to −aλ_2/X − dλ_6/Y to the analysis of the O(λ²) corrections. A similar comparison for the remaining entries of m^(1)_ν gives Eqs. (14)-(17), where we chose such combinations of the 11, 22, 23 and 33 entries that the results are particularly simple. It may appear that the phases of λ_3 and λ_4 must be aligned so that dλ_3 + |b|λ_4 is real up to O(λ²) corrections. However, we still have one phase redefinition, which we can use to impose this condition, so it does not represent another fine-tuning. By taking a linear combination of Eqs. (14)-(17), we obtain a consistency condition for a small solar mass splitting, Eq. (18), which determines the unphysical phase α and imposes a constraint on δ_a^(1), thereby increasing the already present fine-tuning (10). A relation of this type seems unavoidable in any neutrino mass model predicting a degenerate spectrum. The atmospheric mass splitting is then given by Eq. (19), which is naturally of the order O(λ) with respect to the neutrino mass scale and, for fixed |λ_3|, |λ_4|, is maximal if λ_3 and λ_4 are approximately real. Using this approach, one can also write the relations between the flavor model and the phenomenological parameterization at the O(λ²) order.
These lengthy expressions, which we omit here, give six independent relations between the flavor model parameters and the variables m^(2)_i, θ_12, ∆θ^(2)_23 and s_δ ∆s^(1)_13 or c_δ ∆s^(2)_13. Hence, no further fine-tunings appear at this stage, and the solar splitting is then given by Eq. (20). Finally, we note that the model considered here corresponds in some limiting cases to models already present in the literature. Therefore, the following considerations regarding the viability of our model and, in particular, the amount of fine-tuning necessary to describe the neutrino oscillation data can also be applied to those models. For λ_3 = λ_6 = 0 and real Y_ν, we obtain the model studied previously in Ref. [17]. We also note that for λ_3 = λ_6 = 0 and X = Y the neutrino mass matrix (11) is identical to that considered in an A_4-inspired model of Ref. [22].

C. Fine-tuning
In Section II B, we have seen that our model requires a fine-tuning, necessary for arranging a small solar neutrino mass-squared splitting. Here, we shall discuss this issue in more detail and compare our model to other models of neutrino masses and mixing. Addressing the issue of fine-tuning in a quantitative way is a cumbersome task, since it inevitably requires introducing a probability measure in the parameter space. We shall therefore make a comparative study, checking the performance of our neutrino mass model (with λ_1 = λ_5 = 0) versus another neutrino mass model which also arises from the breaking of a U(1) flavor gauge symmetry through the Froggatt-Nielsen mechanism and is regarded as rather natural. As the reference model, we chose the HII model of Altarelli, Feruglio and Masina (AFM) [23], which predicts a hierarchical spectrum of neutrino masses. In order to make a comparison with another model explaining quasi-degenerate neutrino masses, we shall also analyze the model of He, Keum and Volkas (HKV) [24] based on A_4 symmetry. For completeness, we shall also compare our model with an anarchical seesaw model, i.e. one exhibiting no structure in Y_ν or M [25]. The comparison has been performed along the lines of the analysis presented by AFM. Each entry O(λ^n) allowed by symmetry was parameterized as f e^{iω} λ^n, where 0.8 ≤ f ≤ 1.2 and 0 ≤ ω ≤ 2π were chosen randomly with a constant probability density. We used an optimized value λ = 0.35 for the AFM model, while we set a suggestive value λ = 0.22 in our model. For the HKV model, we assumed that all unperturbed entries are O(λ^0) and that the perturbations are O(λ) with λ = 0.1. For the anarchical model, all the entries in Y_ν and M were assumed to be f e^{iω}. We then diagonalized numerically the resulting neutrino mass matrices and calculated four dimensionless observables unambiguously constrained by the present neutrino data: ∆m²_sol/∆m²_atm, s_13, t²_12 and t²_23. This procedure was repeated 10^6 times for each model. The resulting probability distributions of the observables are shown in Figure 2. The overall success rate of each model can be defined as the fraction of points lying in a four-dimensional box whose sides correspond to the 3σ ranges of the observables allowed by the present data. Such a success rate was approximately 3 × 10^{-3} for the AFM model, 2 × 10^{-2} for the HKV model and 4 × 10^{-4} for our model, the actual number depending on the RG corrections (admitting λ_{1,5} ∼ λ_{2,6} does not change this result qualitatively). A purely anarchical model has a success rate twice smaller than that of our model.
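A minimal sketch of the mechanics of such a scan is given below. Since the textures of Eqs. (6)-(7) are not shown above, the sketch draws every entry as f e^{iω}, i.e. it corresponds to the anarchical case; the acceptance windows are round-number stand-ins for the 3σ ranges of (5), so the printed success rate only illustrates the procedure, not the quoted numbers.

```python
# Sketch of the naturalness scan described above: random coefficients
# f * exp(i*omega) with f in [0.8, 1.2] are drawn for each entry, the light
# mass matrix is diagonalized, and the fraction of points with all four
# observables inside their allowed windows defines the success rate. Every
# entry here is O(1) (the anarchical case); the acceptance windows are
# round-number stand-ins, not the 3-sigma ranges of Eq. (5).
import numpy as np

rng = np.random.default_rng(0)

def random_coeff(shape):
    f = rng.uniform(0.8, 1.2, shape)
    omega = rng.uniform(0.0, 2.0 * np.pi, shape)
    return f * np.exp(1j * omega)

def observables(m_nu):
    # Singular values of the symmetric matrix are the physical masses; the left
    # singular vectors play the role of the mixing matrix, up to phases.
    U, s, _ = np.linalg.svd(m_nu)
    order = np.argsort(s)                       # ascending: m1 <= m2 <= m3
    m1, m2, m3 = s[order]
    U = U[:, order]
    ratio = (m2**2 - m1**2) / (m3**2 - m1**2)
    s13 = abs(U[0, 2])
    t12_sq = abs(U[0, 1])**2 / max(abs(U[0, 0])**2, 1e-12)
    t23_sq = abs(U[1, 2])**2 / max(abs(U[2, 2])**2, 1e-12)
    return ratio, s13, t12_sq, t23_sq

def success_rate(trials=20000):
    hits = 0
    for _ in range(trials):
        Y = random_coeff((3, 3))                # anarchical Yukawa matrix
        M = random_coeff((3, 3))
        M = 0.5 * (M + M.T)                     # symmetric Majorana matrix
        m_nu = Y @ np.linalg.inv(M) @ Y.T       # seesaw (overall scale dropped)
        ratio, s13, t12_sq, t23_sq = observables(m_nu)
        if 0.02 < ratio < 0.05 and s13 < 0.2 and 0.3 < t12_sq < 0.7 and 0.6 < t23_sq < 1.7:
            hits += 1
    return hits / trials

print(success_rate())   # fraction of random points passing all four cuts
```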
If we consider the success rate an unambiguous measure of naturalness, the AFM model and the HKV model are favored over ours by the oscillation data. As regards ∆m²_sol/∆m²_atm, the AFM distribution, peaked around 10^{-2}, is rather wide and could easily account for a wide range of values of this observable, whose experimentally allowed 3σ range (with RG corrections neglected) is assumed with 15% probability. In contrast, in our model and in the HKV model, the lower the value of this observable, the larger the fine-tuning required, and the probability of obtaining ∆m²_sol/∆m²_atm in the allowed range is ∼1% and ∼4%, respectively. The sign of this observable is positive in more than 95% of the cases in our model, which justifies a posteriori the assumptions made in Section II B. Values of s_13 come out small in all models but the anarchical one, with ∼30% (HKV), ∼50% (AFM) and ∼60% (ours) of the distribution in the allowed range. The atmospheric mixing is peaked around maximal mixing in all models, with the HKV distribution being the narrowest. As we already argued in Section II B, there is no point in discussing the solar mixing independently of the solar mass splitting in our model, as the consistency with experimental data introduces some correlation between the observables. As shown in Figure 2 (where we also plot the distribution of the conditional probability density given that the solar-to-atmospheric ratio lies within the experimentally allowed range), once the fine-tuning required for the solar-to-atmospheric mass ratio is achieved, the distribution of t²_12 becomes peaked around values consistent with experiment. Similarly, the distribution of the conditional probability density for s_13, given that the solar-to-atmospheric ratio lies in the allowed range (empty cyan boxes), is shifted towards smaller values of this observable, pursuant to Eq. (13). In conclusion, in comparison to the AFM model and the HKV model, our model's overall performance is worse by a factor of 10 to 100, following mainly from the fine-tuning necessary for the small solar mass splitting. However, if our model can explain the baryon asymmetry of the Universe as resulting from leptogenesis with a low reheating temperature, while the AFM and HKV models cannot, this may be a hint that the quantitatively moderate fine-tuning discussed above allows a glimpse of the structure of more fundamental physics, rather than being an unnatural coincidence.

III. LEPTOGENESIS
In the MSSM, the effects of supersymmetry breaking in leptogenesis can be safely neglected. The CP asymmetries are twice as large as those in the Standard Model, and the number of channels through which the lepton asymmetry is generated is also doubled. This is compensated by the doubled amplitudes of the washout processes and an almost doubled number of relativistic degrees of freedom after leptogenesis. The conversion factors, relating the generated lepton asymmetry to the final baryon asymmetry, are also very similar [26]. Therefore, the order of magnitude of the baryon asymmetry of the Universe resulting from leptogenesis can be approximated by the nonsupersymmetric formula [27] given in Eq. (21), where α runs over distinguishable lepton flavors α = e, µ, τ (we assume a reheating temperature ≲ 10^9 GeV) and ε_iα are the CP asymmetries in the decays of the right-handed neutrinos of mass M_i into flavor α. The washout parameters are defined in terms of y, the neutrino Yukawa matrix written in the basis of the mass eigenstates of the right-handed neutrinos. It follows from Eq.
(22) that for light neutrinos with masses of 0.1 eV, we need |ε_iα| ∼ 10^{-4} for successful leptogenesis. However, for M_i ≲ 10^9 GeV, which allows avoiding the gravitino problem, a natural scale for the CP asymmetries is given by the naive scaling of Eq. (23) (we neglect various O(1) factors), unless, e.g., the right-handed neutrinos are almost degenerate in mass, since in that case the CP asymmetries in the decays of these right-handed neutrinos are resonantly enhanced. In our model, the right-handed neutrino mass matrix M in Eq. (7) has two exactly degenerate eigenvalues, M_2 = M_3 = Y, which may become slightly split by RG corrections if the scale Q_f of U(1) breaking is larger than the leptogenesis scale. If the splitting δ_N = 1 − M_2/M_3 is much smaller than (yy†)_{22,33}/8π, the relevant CP asymmetries are given by Eq. (24) [9] (see also [28]). It may appear that for M_2 = M_3 the transformation N_2 → cos ζ N_2 + sin ζ N_3 and N_3 → −sin ζ N_2 + cos ζ N_3 is a symmetry of the mass matrix of the right-handed neutrinos, but it allows for rearranging the neutrino Yukawa couplings. However, it has been noted in [10] that the RG running between Q_f and the leptogenesis scale changes this picture; the relevant corrections involve the tau Yukawa coupling, y_τ ∼ 10^{-2} tan β, and ∆t = (4π)^{-2} ln(Q_f/Y) ∼ 0.1 + 0.006 ln(Q_f/10^7 Y). Other combinations of parameters appearing in (24) receive negligible RG corrections and can be replaced by their values at the scale Q_f. The CP asymmetries can then be expressed as in Eq. (27), where the upper, middle and lower factors correspond to α = e, µ, τ, respectively. The other CP asymmetries, ε_1α, are much smaller and can be neglected. We estimate from (27) that |ε_2α| ≲ 10^{-6} tan²β, so the CP asymmetries are sufficiently large for successful leptogenesis if tan β ≳ 10. The predictions of our neutrino mass model for the CP asymmetries given in Eq. (27) are presented in Figure 3, where we plot the probability distribution for ε_max = max{|ε_j|; j = 1, 2, 3}, with ε_j = Σ_α ε_jα. The numerical procedure is identical to that described in Section II C, with the exception that we scan over 10^7 points in the parameter space for the conditional probability distribution. For definiteness, we assume that ∆t = 0.1, tan β = 10 and M_1 = 10^9 GeV. The black, solid lines show the resulting distributions; in the same Figure, we also show the estimates of the baryon asymmetry of the Universe η_B given by formula (21). According to the discussion in Section II B, the requirement of a small solar neutrino mass splitting favors almost real λ_3 and λ_4, which in turn leads to a certain suppression of the CP asymmetry (27). Moreover, the fact that ε_jµ and ε_jτ are proportional to −|b|² and d², respectively, is the reason for another slight suppression of the resulting baryon asymmetry. These suppressions can, however, be easily overcome by the enhancement of the tau Yukawa coupling for tan β ≳ 10, and we conclude that we can easily have CP asymmetries of the order of 10^{-4}, which can account for the baryon asymmetry of the Universe. As regards the AFM and HKV models, with the optimistic assumption that the largest values of ε_max correspond to a washout as small as 0.1, we conclude that these models can be only marginally consistent with baryogenesis via leptogenesis for a low reheating temperature. The crucial ingredient of our model which allows for low-scale leptogenesis is the assumption that the L_µ − L_τ flavor symmetry is broken at a scale M_X much larger than the leptogenesis scale M_{N_A}, and that this breaking is transmitted to the mass matrix of the right-handed neutrinos only through RG corrections.
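The order-of-magnitude statement above can be made concrete with a short estimate. Because the explicit form of formula (21) is not shown above, the sketch below uses the generic leptogenesis scaling η_B ~ 10^{-2} Σ_α ε_α κ_α with hand-picked washout efficiencies κ_α; it is a hedged stand-in for the paper's formula, meant only to show that ε ~ 10^{-4} is the right ballpark.

```python
# Order-of-magnitude estimate of eta_B. The explicit form of formula (21) is not
# shown above, so this sketch uses the generic leptogenesis scaling
# eta_B ~ 1e-2 * sum_alpha(eps_alpha * kappa_alpha), with washout efficiencies
# kappa_alpha put in by hand; it is a stand-in, not the paper's exact formula.

def eta_B_estimate(eps, kappa):
    """eps, kappa: per-flavor CP asymmetries and washout efficiencies."""
    return 1e-2 * sum(e * k for e, k in zip(eps, kappa))

eps = [1e-4, 1e-4, 1e-4]       # CP asymmetries of the size quoted in the text
kappa = [2e-4, 2e-4, 2e-4]     # illustrative efficiencies for strong washout
print(eta_B_estimate(eps, kappa))   # 6e-10: the order of the observed asymmetry
```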
At the leptogenesis scale, the masses of the pseudo-Dirac right-handed neutrinos are then split by a factor proportional to the neutrino Yukawa couplings, and it is precisely this small splitting which makes it possible to overcome the naive scaling (23) [10]. For comparison, in the model described in [17], also based on the L_µ − L_τ symmetry, the scale of symmetry breaking is identified with the leptogenesis scale, the RG corrections are absent and the pseudo-Dirac right-handed neutrinos are exactly degenerate, which leads to a vanishing CP asymmetry in their decays. (Besides, only one CP-violating phase is assumed in this model; although it is straightforward to include other phases in the neutrino sector, this would not change the latter conclusion.) Hence, the scaling (23) holds approximately, and large values of the right-handed neutrino masses are necessary for successful leptogenesis. In contrast, in the model of [16] (in which the symmetry breaking scale is also identified with the leptogenesis scale), the L_µ − L_τ symmetry is broken in both the neutrino Yukawa matrix and the mass matrix of the right-handed neutrinos, which again leads to the approximate scaling (23) and to large values of the right-handed neutrino masses being necessary for successful leptogenesis.

IV. CONCLUSION
In this work, we have considered a neutrino mass model where the neutrino Yukawa and mass structures are dictated by the flavor symmetry L_µ − L_τ and its breaking pattern, which is controlled by an additional discrete symmetry. The model requires a fine-tuning to correctly predict the smallness of the solar mass splitting. Taking the expansion parameter λ = 0.22, we have made a quantitative discussion of the fine-tuning in the combined explanation of all the low-energy observables and the demanded baryon asymmetry of the Universe. Once such a fine-tuning is ensured, the bi-large pattern of mixing angles and a successful leptogenesis with a low reheating temperature become natural predictions of this model. In addition, the Dirac CP phase is generically order-one, while the reactor mixing angle θ_13 is peaked at θ_13 = 0.06.
2014-10-01T00:00:00.000Z
2007-03-06T00:00:00.000
{ "year": 2007, "sha1": "1fff4ebf859bb5c88ec4da9fc370db3534e02d90", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0703070", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c5e67aea5ca78a93274aa0c6510394bc581cf85e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
272133185
pes2o/s2orc
v3-fos-license
Let's Listen-Read-Discuss: Improving Reading Comprehension

Introduction
One of the most important skills students should acquire is reading. This is because reading skills allow students to increase their vocabulary, practice pronouncing words correctly, and enhance their spoken English (Agung et al., 2022; Huang, 2010; Krismayani & Menggo, 2022). Reading is more than just learning words and comprehending them; it is also about making connections between what has been read and the content the reader already knows (Longlong et al., 2017; Mustikasari, 2020; Tawali, 2021). Having good reading skills also improves one's chances of developing personally and succeeding professionally (Wilkinson et al., 2016). Thus, good reading skills will enable students to comprehend a reading text's content or context (Krismayani, 2022). The ability to comprehend what a text or piece of information is about is known as reading comprehension. According to Duffy (2009), the reader's prior knowledge of the world affects their ability to comprehend what they read. Students who work through comprehension exercises are better able to understand each sentence in the text by building meaning. Because reading comprehension is a complex task requiring cognitive skills and abilities, it goes beyond simply understanding the meaning of the text to encompass broad learning, success in education, and employment (Oakhill & Cain, 2015). Therefore, a teacher's role in the teaching and learning process is necessary to support students in developing their reading comprehension. The previous statement suggests that teaching reading comprehension to students correctly is necessary. Reading comprehension is the process of deriving meaning and information from a text and of understanding the context of the information it presents. Throughout the teaching and learning process, the students attempted to decipher the text's meaning and pinpoint its central idea. When the reader is knowledgeable, the text will automatically make sense to them and they will understand its point. For the reader to comprehend the text and convey its terms, they need to read with an up-to-date vocabulary. Comprehension tasks aid students in understanding each sentence in a text more thoroughly by helping them develop meaning. Reading comprehension is a challenging task requiring cognitive abilities and capacities, in addition to being a means of comprehending text content and a basis for broader learning and success at school and at work (Burns, 2010). Therefore, if students are to improve their reading comprehension, the teacher's engagement in the teaching and learning process is crucial.
One way to improve reading comprehension is through genre, that is, types of text classified by their social functions. There are many types of text that can be used to improve reading comprehension, one of which is the recount. Recount texts are those that describe someone's past action, story, or activity. A recount text retells past events, typically in the chronological sequence in which they occurred (Reichenbach et al., 2019; Wilkinson et al., 2016). The objective of a recount text is to provide a written account of events and personal experiences. Recount texts also entertain readers and give information about the author's experiences. There are three generic structural parts in a recount text: orientation, events and reorientation (Reichenbach et al., 2019). The most common learning challenge that students have is understanding the text and identifying the four aspects of reading comprehension, which are general information, specific information, textual meaning and textual reference. Based on initial observations made by the English teacher at SMPN 4 Kuta Utara, it seemed that the students were having difficulty with reading comprehension. Because of this, students needed the teacher's assistance to distinguish between general and specific information, textual meaning and references in the reading. This may occur because most students only read the assigned content without researching or considering its meaning. Teachers usually require that students read aloud from a text in front of the class, but they rarely give students the opportunity or encourage them to look up the text's meaning; instead, they only concentrate on teaching them how to read the text correctly. Then, the teacher invites the class to answer a question regarding the passage they have read. The students needed help in understanding the text. Therefore, teachers should use appropriate strategies to improve students' reading comprehension. When it comes to learning activities, the cooperative learning model places a substantial emphasis on students communicating with each other in multiple small groups. Collaborative learning involves students working in small groups to achieve common goals (Gillies, 2016; Zagoto, 2016). Students who participate in cooperative learning are better able to work through challenges as a group. According to Kagan and Kagan (2009), it is acceptable in collaborative learning to award individual grades for group tasks. Because cooperative learning helps students comprehend the subject, it is beneficial and effective to use in the learning process. One cooperative learning technique that may be used to improve reading comprehension is Listen-Read-Discuss (L-R-D). The L-R-D method enables and supports group work among students. Students can benefit from using the Listen-Read-Discuss (L-R-D) technique when working in groups. Tawali (2021) claims that L-R-D reading literacy helps students understand what they read. This suggests that, since the L-R-D method allows students to learn about the texts they read through group conversations, it is appropriate for teaching reading. As a result, the L-R-D method helps pupils because it promotes group discussion of possible answers to arrive at the best solution.
The Listen-Read-Discuss steps are as follows: (1) Listen: as the teacher presents and reads the assigned reading, the pupils are expected to pay attention. (2) Read: the instructor asks the students to read the text aloud. After that, the teacher splits the class into four to five discussion-focused small groups with a variety of students. (3) Discuss: the teacher invites the class to respond to questions generated from the reading text after instructing them to search for information or evaluate the reading text's significance. Using the L-R-D technique, the group can discuss the problem and come up with a solution by first examining the reading material extensively to determine the core idea relevant to the topic. When students work in groups and actively seek out answers, they are better able to comprehend the text's theme or content. In light of the preceding description, the students require assistance in locating general information, specific information, textual meaning, and textual references when reading texts. To overcome this, students need engaging and enjoyable strategies. The researchers in this study used the L-R-D strategy, a reading instruction approach that includes fundamental phases. Because of this, the research problem may be stated as follows: could the L-R-D technique help the reading comprehension of SMPN 4 Kuta Utara eighth-grade students in the 2023-2024 academic year? Moreover, by putting this strategy into practice, the researchers anticipate that the students will become adept readers and have better reading comprehension.

Method
To help students become more proficient readers, the researchers used classroom action research. The researchers carried out action research in the classroom while assuming the role of teachers, in order to resolve issues or discover answers to context-specific problems. Action research in the classroom can be used to plan the learning process and to choose and apply the best teaching techniques. Classroom action research is defined as the process of enhancing education by merging change and including educators in improving their practice (Ary, Jacobs, Sorensen, & Razavieh, 2010). Action research in the classroom is also helpful for improving students' comprehension through practice. The prospective instructor in this situation can devise a plan to examine the teaching methodology. One way to enhance the teaching and learning process is through classroom action research, which involves reflecting on classroom practices and solving problems collaboratively. Plan, action, observation, and reflection are the four steps in the study cycle that classroom action research entails, according to Kemmis et al. (2014). Planning is the first step, in which the researchers identify the problem in the class and develop a plan of action to bring improvements to the classroom problem. Action is the second step, in which the researchers conducted the research as the teacher by carrying out teaching and learning processes in the class. Observation is the third step, in which the researchers observed the effect of the action already conducted. Reflection is the last step, in which the researchers evaluate the impact of the action after observing it.
There were two cycles in the teaching and learning process that the researchers conducted in this classroom. Every cycle had two sessions. Accurate data are required in classroom action research aimed at enhancing students' reading abilities. In order to get accurate data demonstrating the subjects' progress under the L-R-D strategy, the researchers in this study used research instruments. In the present study, two instruments were used: pre- and post-tests, and a questionnaire. Before the teacher starts treating the students, a pre-test is given to them to gauge their basic understanding. Following receipt of the pre-test results, the researchers acted as teachers, providing worksheets and material explanations to the students. After that, the post-test was given. The post-test assesses the students' performance after the instructor's intervention. The test administered to the students consisted of 10 short-answer questions. The purpose of the questionnaire, in turn, was to gather data regarding the students' motivation, feelings, answers, and issues arising from using the L-R-D technique to teach reading. It was administered once every cycle was completed. The questionnaire, which comprised statements asking students to choose one answer from five multiple-choice options, was analysed on a scale from five (5) to one (1). The options were strongly agree, agree, undecided, disagree, and strongly disagree.

Results
The study's findings are presented through descriptions of the data the researchers collected using the research instruments. This study used a classroom action research design to address the subjects' difficulties through the application of the L-R-D method. This study on recount text reading was primarily concerned with the four components of reading comprehension: general information, specific information, textual meaning, and textual references. To gather data for this study, the researchers used questionnaires, pre- and post-tests, and other tools. Prior to using the L-R-D technique, the researchers administered a pre-test to the students to ascertain their level of reading comprehension. The reading comprehension issues that the students were experiencing were then identified using the pre-test data. Meanwhile, once the L-R-D method had been put into practice to support reading comprehension, students took a post-test. A post-test was administered following cycles I and II. Furthermore, at the end of the last cycle, a questionnaire was distributed to the participants to collect their feedback on the application of the L-R-D technique in teaching and learning.
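A small sketch of the 5-to-1 scoring described above is shown below; the response counts are invented placeholders (the actual tallies appear in the study's tables), and only the five-option scoring scheme follows the text.

```python
# Illustration of the 5-to-1 scoring of the questionnaire described above. The
# response counts below are invented placeholders (the real tallies are in the
# paper's tables); only the five-option scoring scheme follows the text.
SCORES = {"strongly agree": 5, "agree": 4, "undecided": 3,
          "disagree": 2, "strongly disagree": 1}

def item_score(counts):
    """Average score of one questionnaire statement from its response counts."""
    total = sum(counts.values())
    return sum(SCORES[option] * n for option, n in counts.items()) / total

example = {"strongly agree": 10, "agree": 15, "undecided": 4,
           "disagree": 2, "strongly disagree": 1}
print(round(item_score(example), 2))   # 3.97 on the 1-5 scale for this made-up item
```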
Giving the research instrument to the subjects could help the data gathering process address the research problems.In this study, the L-R-D technique was successfully applied through a cycle of classroom action research.In classroom action research, the cyclical procedure that began with a pre-cycle and continued with a cycle is utilized.Three score-containing data sets were found: post-test 1, post-test 2, and pre-test.This score demonstrated the subjects' improvement following the implementation of the L-R-D technique for teaching reading comprehension.To quantify and examine exact results, the study results from the test administration were tallied.The gathered information from cycle I, cycle II, and pre-test was thus visible in the following tabulations: Additionally, the researcher supplied a questionnaire to the subjects to get their responses after using the L-R-D strategy in order to gather supporting data.Ten statements written in Indonesian made up the questionnaire, which was designed so that the subjects could comprehend the meaning of the statements more readily.The questionnaire had five options: (1) Strongly Agree or Sangat Setuju, (2) Agree or Setuju, (3) Undecided or Ragu -Ragu, (4) Disagree or Tidak Setuju, and (5) Strongly Disagree or Sangat Tidak Setuju.The additional data showing the subjects' total responses, so that could be tabulated in the following table : Table 3.The pre-test and post-test results were tabulated in the table, demonstrating the students' progress in reading comprehension, particularly in recognizing general and specific information, textual meaning, and textual references following the application of the L-R-D strategy.Furthermore, as stated in the questionnaire, the tabulation displayed the subjects' responses following the application of the L-R-D strategy in the teaching and learning process.In addition, the cycle was preceded by a pre-cycle consisting of planning, doing, observing, and reflecting in order to identify the subjects' reading comprehension difficulties.Cycles I and II were then added, each consisting of two sessions.The following is how this cyclical process would be explained: Pre-Cycle Pre-cycle was the start of this study.Prior to applying the strategy and starting cycle I, the subjects' reading comprehension was assessed.Interviewing the English teacher who worked with the eighth-grade students at SMPN 4 Kuta Utara during the academic year 2023-2024 was the first step in this initial activity.Additionally, the researcher looked at the previous circumstances of the students as well as the methods the English teacher employed when instructing and learning.In order to obtain more precise preliminary data regarding the subjects' reading comprehension, the researcher also gave a pre-test before putting the strategy into practice. According to the English teacher's interview, students struggle with reading comprehension, particularly when it comes to differentiating between general and specific information as well as textual meaning and references.The teacher's strategies, which required the students to do more than just read and then respond to a question, were also less successful, according to the observation.Consequently, the participants lacked interest in learning English and continued to struggle with identifying the four components of reading.They also did not succeed to comprehend the text's content.As a result, the researcher used the L-R-D approach to solve their problem. 
In order to determine the study subjects' prior reading comprehension, the researcher administered a pre-test to them during the pre-cycle. During the pre-test, the participants were required to respond to 20 short answer questions based on the reading texts that were provided. The subjects were given four recount texts, each with five questions focusing on identifying general information, specific information, textual meaning, and textual references. The time allotment for the pre-test was 30 minutes, and it had to be done individually. The students were not allowed to consult dictionaries or talk with their peers when responding to the questions. Table 4.1 shows that the combined pre-test scores of the 32 subjects were 1885. The mean score of the subjects' pre-test results from this classroom action research could be calculated as follows:

Mean Score of Pre-test = Σx / N = 1885 / 32 = 58.9

According to the data above, the pre-test that the 32 subjects took had a mean score of 58.9, which indicates that the pre-test average was in line with the interview outcomes. The results also revealed that the subjects' reading ability was still low, particularly in recognizing the general information, specific information, textual meaning, and textual references in recount texts. Based on these conditions, the researcher used the L-R-D strategy to help the subjects improve their reading comprehension. In order to address the participants' difficulties with reading comprehension, the researcher chose to carry out the first cycle of the current study.

Cycle I
Cycle I of the current study was conducted after the completion of the pre-cycle. The pre-test results indicated that the study subjects' reading comprehension was poor and far below the minimal passing score. To solve this issue, the way that teaching strategies were chosen needed to be improved. The four phases of the cyclical process the researcher carried out in this cycle were planning, action, observation, and reflection. Furthermore, cycle I consisted of two sessions, namely session 1 and session 2. The application involved in-person (offline) instruction and learning. The L-R-D technique was used by the researcher in the classroom to enhance instruction and learning. The steps of the teaching and learning process were carried out in chronological order to obtain the maximum results.

Cycle I started with planning. When planning, the researcher concentrated on creating a lesson plan that matched the curriculum followed by the eighth-graders at SMPN 4 Kuta Utara. Furthermore, the researcher prepared all the learning materials, student worksheets, and post-tests needed to teach reading comprehension to the subjects using L-R-D. Each session had a time allocation of two times forty minutes. Definitions of recount texts, their general structure, and their linguistic characteristics were all included in the learning materials. Following the explanation of the content, a student worksheet was created so that the subjects could practice reading comprehension. At the end of cycle I, at the second meeting, the researcher prepared a post-test on the recount text material that had previously been taught through the application of the L-R-D strategy.
The researcher planned first, then decided what to do next.There were three educational tasks included in the activity: the pre-, while-, and post-tasks.Recount texts and active application of the L-R-D approach were employed in the classroom by the researcher, especially in the VIII A class at SMPN 4 Kuta Utara.In the first part, the students were required to follow along with a recount text that was given to them.The prepared pre-planned content was then reviewed by the researcher.Subsequently, the participants paid attention to the researcher while she recounted a paragraph in front of the class (Listen).The individual was then asked to read the passage orally (Read).The researcher divided the subjects into groups of four or five.The investigator then gave them instructions to finish Student Worksheet 1 (Discuss) and have a group discussion about the book.After the participants had debated and answered the issue, the researcher requested them to share and present their answer or the discussion's conclusion to all the other groups. In session 2, the researcher employed the L-R-D strategy in the process of teaching and learning.Like the first session, it focused on identifying general and specific information, textual meaning, and allusions while also assisting with reading comprehension.The student worksheet that the researcher supplied consisted of two passages.The participants were then asked questions and given instructions so they may answer in groups by the investigator.At the end of cycle I's application of session 2, the researcher gave them a post-test to see how much their reading comprehension had improved because of using the L-R-D approach.For post-test 1, the researcher supplied four recount texts with five questions per. Additionally, the researcher observed while the students were being taught and learned.The purpose of this observation was to ascertain the subjects' reactions to the L-R-D strategy's application.Additionally, by observing how students responded to the questions, the researcher also observed the behavior of the subjects.The students did a good job of following the lesson through to the end when I applied cycle.Students who paid close attention to what they were taught were able to comprehend the material and provide clear, thorough answers on the student worksheet.But some students were still too shy to ask questions, and others continued to have trouble focusing during the class.Some participants who became distracted as a result found it difficult to comprehend the information and provide an accurate response.Furthermore, when the subjects are asked questions about the material, they still hesitate to respond to the questions. 
At the conclusion of the cycle, the researcher administered a post-test with the goal of determining the subjects' progress following the application of the L-R-D strategy. Thirty-two subjects took this post-test. Table 3.1 displays the findings on the subjects' reading comprehension progress following the administration of the post-test. According to the results, the subjects' reading comprehension improved as a result of using the L-R-D strategy in the teaching and learning process. Post-test 1 yielded a total score of 2330. The following formula could be used to determine cycle I's mean score:

Mean Score of Post-test 1 = Σx / N = 2330 / 32 = 72.8

The subjects' reading comprehension increased from the pre-cycle to cycle I, as indicated by the mean score calculation of this post-test: post-test 1 had a mean score of 72.8. According to the post-test results from this first cycle, 23 subjects were able to receive at least a passing grade. These findings clearly demonstrated that using the L-R-D strategy improved the subjects' mean reading comprehension score. However, some participants were still unable to recognize the textual references and meaning in the reading texts. The success indicator had not yet been met in this initial cycle. Because of this, the researcher decided to use the same approach to improve reading comprehension in cycle II.

Cycle II
Cycle II was carried out after the application of cycle I because the post-test results indicated that cycle I could not yet be considered successful. This cycle was carried out to address the issues that the subjects encountered during the previous cycle's teaching and learning process and to enhance their reading comprehension. Sessions 3 and 4 of cycle II had the same structure as those of cycle I. Four interconnected activities, namely revised planning, action, observation, and reflection, were used to carry out the sessions.

Revising the planning was the first thing the researcher needed to do. Revised planning was required in order to raise the subjects' reading comprehension to significantly higher levels than in cycle I using the L-R-D strategy. During planning, the researcher created a teaching module that was exactly the same as in cycle I. Furthermore, the researcher prepared learning materials that included a definition, the general structure of recount texts, their language features, and a few examples of recount texts. Additionally, the researcher created a student worksheet for the subjects to practice comprehension and a post-test 2 to determine the subjects' improvement in reading comprehension. In addition, a questionnaire was created by the researcher to find out how the subjects felt about the use of the L-R-D strategy in the teaching and learning process. Cycle II comprised two sessions, numbered 3 and 4, with a time allotment of two times 40 minutes each.
The planning laid out in the teaching module was followed during cycle II. The process of teaching and learning was largely the same as in cycle I. The third session began with questions about recount texts and reading comprehension in order to assess the students' retention of the material. The researcher then repeated the explanation of the material. After going over the material again, the researcher gave the subjects a student worksheet. The subjects paid attention to the researcher as she read the text aloud in front of the class (Listen). The subjects were then instructed by the researcher to read the text aloud themselves (Read). The researcher then prompted and assisted them in discussing the text in groups and responding to the questions on student worksheet 3 (Discuss). The subjects were then asked by the researcher to share their responses with the other groups. The L-R-D technique was also used by the researcher to instruct the subjects in session 4, and student worksheet 4 was provided. In addition, the researcher administered post-test 2 to them at the conclusion of cycle II to assess their progress in reading comprehension.

During the teaching and learning process, the primary goals of the observation were to ascertain the learning environment and the subjects' reactions to the application of the L-R-D strategy. During the second cycle of teaching and learning, students started to ask more questions about the subject matter. Additionally, the subjects were more attentive to the explanation and more focused, which enabled them to comprehend the material and complete the student worksheet easily and thoroughly.

At the end of cycle II in session 4, the researcher gave a post-test to the subjects to gauge their improvement in reading comprehension after using the L-R-D approach. The outcomes of post-tests 1 and 2 were examined to assess the efficacy of the teaching and learning process. This post-test was taken by thirty-two subjects. The results showed that applying the L-R-D method increased the subjects' reading comprehension. The total score obtained on post-test 2 was 2605. The mean score for cycle II could be calculated using the formula below:

Mean Score of Post-test 2 = Σx / N = 2605 / 32 = 81.4

The students' performance improved significantly, as indicated by the post-test 2 data, with an average score of 81.4. The post-test 2 mean score thus provided further evidence that the L-R-D strategy improved the subjects' reading comprehension. Additionally, the post-test showed that every student was able to reach the required minimum passing score. Thus, the study could be ended at cycle II.
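To make the mean-score arithmetic used for the pre-test and the two post-tests explicit, the calculation can be sketched in a few lines of Python. The totals, the number of subjects, and the minimum passing score of 75 are the figures reported in this study; the variable names and the printed comparison are only illustrative.

```python
# Minimal sketch of the mean-score calculation: mean = (sum of scores) / (number of subjects).
N = 32                     # subjects who took each test
PASSING_SCORE = 75         # minimum passing score used in this study
totals = {"pre-test": 1885, "post-test 1": 2330, "post-test 2": 2605}

for test, total in totals.items():
    mean = total / N
    status = "above" if mean >= PASSING_SCORE else "below"
    print(f"{test}: mean = {mean:.1f} ({status} the passing score)")
# Prints 58.9, 72.8 and 81.4, matching the mean scores reported above.
```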
Additionally, this study was supported by other data, specifically the questionnaire. The questionnaire was given out in session 4, at the conclusion of cycle II. Strongly Agree (SA), Agree (A), Undecided (U), Disagree (D), and Strongly Disagree (SD) were the five options on a Likert rating scale ranging from 5 to 1. According to the table, the responses scored a total of 715 points in the strongly agree category, 648 points for agree, 39 points for undecided, 4 points for disagree, and none for strongly disagree. The following computations could be made using the questionnaire data shown in Table 4.2, which displayed the subjects' total responses following the application of the L-R-D strategy in the teaching and learning of reading comprehension. The results of the questionnaire indicate that the L-R-D approach was applied and that the subjects responded positively to it, so it may be argued that L-R-D was successfully put into practice. The number of participants who responded positively to the use of the L-R-D method served as evidence of this: of the total questionnaire score, 50.85% fell in the strongly agree category, 46.08% in agree, 2.77% in undecided, 0.28% in disagree, and none in strongly disagree. Pre- and post-test findings showed significant gains in the subjects' reading comprehension, and the questionnaire results showed that the L-R-D technique was well received in the teaching and learning process.

Tests and a questionnaire were the two instruments the researcher used in this classroom action research to collect the necessary data. The subjects' reading comprehension improved from the pre-cycle to cycle II, as indicated by the mean scores above. Cycle II was the last cycle in this study because the subjects achieved the success indicators, which were demonstrated by their improvements in reading comprehension and their ability to reach the minimal passing score. The results of this study using the L-R-D approach, namely the mean scores of the questionnaire, pre-test, post-test 1, and post-test 2, could be displayed as two graphs as shown below. The subjects' reading comprehension was gauged using these results. According to the findings, the current study could come to an end since it had met the success indicator. Furthermore, the data demonstrated an increase from the pre-test mean score to the post-test 1 and post-test 2 mean scores. The subjects' performance improved significantly once the L-R-D strategy was put into practice. A questionnaire was also included as additional supporting information. The questionnaire results indicated that students responded favorably to the use of the L-R-D strategy in the teaching and learning process, particularly in reading comprehension.
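The questionnaire percentages quoted above can be reproduced from the category totals with a short calculation. The sketch below assumes, as the reported figures suggest, that each percentage is taken from the combined weighted score over all five categories; the small difference for the agree category (46.09% versus the reported 46.08%) is a rounding effect.

```python
# Weighted Likert totals per category as reported (scale: 5 = strongly agree ... 1 = strongly disagree).
scores = {"strongly agree": 715, "agree": 648, "undecided": 39,
          "disagree": 4, "strongly disagree": 0}
total = sum(scores.values())   # 1406

for category, score in scores.items():
    print(f"{category}: {100 * score / total:.2f}%")
# Yields 50.85%, 46.09%, 2.77%, 0.28% and 0.00%.
```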
Discussion The four interrelated tasks that comprise the classroom action research used in this study are planning, activity, observation, and reflection.The current study sought to determine the impact of the L-R-D method on the reading comprehension of eighth-grade students at SMPN 4 Kuta Utara.The study was organized into two cycles, with two sessions apiece, and a pre-cycle reflection in between.Cycle I consisted of Sessions I and II, and Cycle II was made up of Sessions III and IV.The researchers then used questionnaires and tests as research tools to collect data.There were two kinds of tests: pre-and post-tests.It must be analyzed in the context of the data from the pre-cycle, cycle I, and cycle II in order to gain a deeper understanding. In order to ascertain the subjects' true reading comprehension levels throughout the pre-cycle, the researchers conducted interviews with Kuta Utara, the English instructor at SMPN 4, who instructed class VIII A. Prior to using the L-R-D approach, the subjects took a pre-test, which was designed to determine the individuals' prior reading comprehension skills.The subjects had 30 minutes to complete a short response activity consisting of 20 questions for the pre-test.The students' reading comprehension skills, particularly in recognizing the four components of reading comprehension, were low, according to the findings.A rubric consisting of three criteria was used to evaluate the pre-test findings.The pre-test mean score was 58.9, which was then followed by the 32 participants in the pre-cycle.It was evident that all of the students still needed to work on their reading comprehension skills because none of them achieved the required minimum score of 75.It had to be improved as a result, thus the researchers chose to carry out cycle I of the cyclical procedure.
2024-08-29T15:48:45.896Z
2024-06-25T00:00:00.000
{ "year": 2024, "sha1": "3e93b2bc97f4b58eeacea5def51ef2a573a5f94d", "oa_license": null, "oa_url": "https://doi.org/10.32585/ijelle.v6i1.5270", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "582d7411e92768755e3f84d122412a07a4fba057", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
119585945
pes2o/s2orc
v3-fos-license
On freely generated semigraph $C^*$-algebras For special universal $C^*$-algebras associated to $k$-semigraphs we present the universal representations of these algebras, prove a Cuntz--Krieger uniqueness theorem, and compute the $K$-theory. These $C^*$-algebras seem to be the most universal Cuntz--Krieger like algebras naturally associated to $k$-semigraphs. For instance, the Toeplitz Cuntz algebra is a proper quotient of such an algebra. Introduction In this paper we continue our study of higher rank semigraph C * -algebras from [3]. The class of higher rank semigraph C * -algebras present a flexible generalisation of Tomforde's ultragraph algebras [8] or C * -algebras of labelled graphs [1] to higher rank structures. In this work we compute the K-theory of weakly free higher rank semigraph algebras in Theorem 5.5. Roughly speaking, a weakly free semigraph algebra is something like the Toeplitz graph algebra [7] in the theory of graph algebras. Actually, Theorem 5.5 extends the K-theory computation of Toeplitz graph algebras in [4]. Also the proof is not much work: it is merely an application of a theorem in [4]. The second aim of this paper covering the main work is the introduction of special higher rank semigraph algebras which we call quell semigraph algebras and which are induced by higher rank graphs or semigraphs. A quell semigraph algebra is defined to be universally generated by a k-semigraph as a generator set, thereby satisfying only a minimum quantity of relations such that it still contains naturally the inducing k-semigraph. We find a concrete faithful representation of a semigraph algebra on a Hilbert space as a left regular representation of a semimultiplicative set in Proposition 4.8. A quell semigraph algebra is free, weakly free, cancelling, and satisfies a Cuntz-Krieger theorem, see Theorem 4.13. Every semigraph C * -algebra is generated by partial isometries with commuting source and range projections (see Lemma 2.13). There is some recent interest in universal C * -algebras generated by partial isometries and their K-theory, see for instance Cho and Jorgensen [5] and Brenken and Niu [2]. Although there is no immediate motivation for a quell semigraph algebra, we suspect it is somehow the freest Cuntz-Krieger like algebra which naturally embedds a given k-graph, see also Lemma 4.14. Moreover we think every universal algebra generated by partial isometries for which one knows the K-theory is a benefit; even more so, as there is (despite [9]) an ongoing programm initiated by G. Elliott to classify certain subclasses of nuclear C * -algebras (as the subclass of purely infinite nuclear C * -algebras) where the K-theory of a C * -algebra plays a major role. We give a brief overview of this paper. In Section 2 we recall the theory of semigraph algebras. In Section 3 we consider two conditions for a semigraph algebra which we call weakly free (Definition 3.3) and free (Definition 3.6). Freeness implies weakly freeness (Proposition 3.9), and free semigraph algebras are cancelling (Proposition 3.10), and so satisfy a Cuntz-Krieger uniqueness theorem. In Section 4 we introduce the quell semigraph algebras (Definition 4.3) and show that they are free and cancelling (Theorem 4.13). In Section 5 we explicitely compute the K-theory of weakly free semigraph C * -algebras in Theorem 5.5. In Section 6 we use Theorem 5.5 for the computation of the K-theory of a quell semigraph C * -algebra in Theorem 6.2. 
Semigraph algebras In this section we recall briefly the definition and some basic facts about semigraph algebras [3] for further reference. Definition 2.1. A semimultiplicative set T is a set equipped with a subset T (2) ⊆ T and a multiplication T (2) −→ T : (s, t) → st, which is associative, that is, for all s, t, u ∈ T , (st)u is defined if and only if s(tu) is defined, and both expressions are equal if they are defined. Definition 2.2. Let k be an index set (which may be regarded as a natural number if k is finite). A k-semigraph T is a semimultiplicative set T equipped with a degree map d : T −→ N k 0 satisfying the unique factorisation property which consists of the following two conditions: (1) For all x, y ∈ T for which the product xy is defined one has d(xy) = d(x) + d(y). We call k-semigraph also a higher rank semigraph or just a semigraph. We shall also write |t| rather than d(t) for elements t in a k-semigraph. Let T be a k-semigraph. We denote the set of all elements of T with degree n by T (n) (n ∈ N k 0 ). If x ∈ T and 0 ≤ n 1 ≤ n 2 ≤ d(x) then there are unique Proof. Transitivity is clear. So assume s ≤ t and t ≤ s. Then there are α, β ∈ T such that t = αs and s = βt. Then s = βαs, and so d(s) = d(s) + d(α) + d(β), which implies d(α) = d(β) = 0. Hence one has βαs = βαβαs. By the unique factorisation property s = αs. So s = t. Definition 2.5. A k-semigraph T is called finitely aligned if for all x, y ∈ T the minimal common extension of x and y, which is the set is finite. It might help to recall the meaning of the relation (α, β) ∈ T (min) (x, y) by visualising it by the tautologies α ≤ xα and β ≤ yβ. Definition 2.6. T is called a non-unital k-semigraph if there exists a ksemigraph T 1 which has a unit 1 ∈ T 1 such that T = T 1 \{1}. We shall use the following notions when we speak about algebras. A *algebra means an algebra over C endowed with an involution. An element s in a * -algebra is called a partial isometry if ss * s = s, and a projection p is an element with p = p 2 = p * . We define P a := aa * and Q a := a * a for elements a of a * -algebra. If I is a subset of a * -algebra then I denotes the self-adjoint two-sided ideal generated by I in this * -algebra. Assume that we are given a set P and a nonunital k-semigraph T with P ∩ T = ∅. Recall that in a non-unital k-semigraph one has d(t) > 0 for all its elements t. We denote by T 1 = T ⊔ {1} the unital k-semigraph of Definition 2.6. Define F to be the free nonunital * -algebra generated by P ∪ T . We call a * -monomial in the letters of P ∪ T a word. for any x i ∈ T ∪ P. Definition 2.8. The fiber space of F is the union of all fibers span(W n ), where W n denotes the set of words with degree n ∈ Z k . Definition 2.9. Let σ : T k −→ Aut(F) be the gauge action defined by σ λ (p) = p and σ λ (t) = λ d(t) t for all p ∈ P, t ∈ T and λ ∈ T k . Lemma 2.10 ( [3], Lemma 4.7). X is the quotient of F by a subset of the fiber space if and only if there is a gauge action on X (defined like in Definition 2.9). Definition 2.11 (Semigraph algebra). 
A k-semigraph algebra X is a *algebra which is generated by disjoint subsets P and T of X, where (i) P is a set of commuting projections closed under taking multiplications, (ii) T is a set of nonzero partial isometries closed under nonzero products, (iii) T is a non-unital finitely aligned k-semigraph, (iv) for all x ∈ T and all p ∈ P there is a q ∈ P such that px = xq, (v) for all x, y ∈ T there exist q x,y,α,β ∈ P such that αq x,y,α,β β * , and (vi) X is canonically isomorphic to the quotient of F by a subset of the fiber space (Definition 2.8). It is understood in identity (1) that the unit 1 in T 1 = T ⊔ {1} is also a unit for all elements of X. The universal C * -algebra C * (X) generated by X is called the k-semigraph C * -algebra associated to X. We often write also xα=yβ rather than (α,β)∈T (min) 1 (x,y) in (1). Definition 2.12. We call an element of {spt * ∈ X| s, t ∈ T 1 , p ∈ P} a standard word (of the semigraph algebra X). We call an element of {sp ∈ X| s ∈ T 1 , p ∈ P} a half-standard word. (a) The word set of X is an inverse semigroup of partial isometries. Corollary 2.14. (a) A semigraph algebra is linearly spanned by its standard words. (b) The range projection of a word is a sum of range projections of halfstandard words. (c) The source projection of a half-standard word is in P. Definition 2.16. A semigraph algebra X is called cancelling if for every standard word w with nonzero degree and every nonzero standard projection p there is a nonzero standard projection q such that q ≤ p and qwq = 0. Theorem 2.17 (Cuntz-Krieger uniqueness theorem, [3], Theorem 7.3). The universal representation X −→ C * (X) is injective on the core, and so nonvanishing on the nonzero standard projections, and up to isomorphism this is the only existing representation of X in a C * -algebra which is non-vanishing on nonzero standard projections and has dense image. Free semigraph algebras In this section we consider a condition on a semigraph algebra called freeness, and a weaker one called weakly freeness. The rough idea of freeness is that range projections of generators should not sum up to a unit. This is similar as P s 1 + P s 2 < 1 in the Toeplitz version of the Cuntz algebra O 2 with generating isometries s 1 and s 2 . The reader may be warned that this section tends to be very technical in a combinatorial and tedious sense. In this section X denotes a semigraph algebra. Definition 3.1. Write X \i for the * -subalgebra of X generated by the elements P and the non-unital finitely aligned semigraph Proof. All points (i)-(v) of Definition 2.11 are almost obvious. Point (vi) can be seen by Lemma 2.10 and the fact that the restriction of the gauge action on X to X \i is a gauge action on X \i . We shall regard X \i as a sub-semigraph algebra of X. When we work in X and say "w is a word in X \i " (and so on) then "word" refers to the semigraph algebra X \i ; so w does not mean a word in X which accidentally happens to be in X \i . Definition 3.3. A semigraph algebra X is called weakly free if for all coordinates i ∈ k, all nonzero standard projections p of X \i and all finite subsets B ⊆ T (e i ) p ≤ b∈B P b does not hold. The next two lemmas (Lemma 3.4 and Lemma 3.5) will only be used in the proof of Lemma 5.3. The reader only interested in Theorem 5.5 could go directly to Section 5 after these two lemmas. Lemma 3.4. If i ∈ k, a ∈ T (e i ) and w is a word in X \i then there exist b 1 , . . . , b n ∈ T (e i ) such that P a w = P a w(P b 1 + . . . + P bn ). Proof. 
We may write w = spt * for s, t ∈ T \i 1 and p ∈ P. Then (by Definition 2.11 (v)) for p β ∈ P with pβ = βp β (by Definition 2.11 (iv)), and where d(β) = d(a) = e i , and tβ = β ′ t ′ by the unique factorisation property with d(β ′ ) = e i (if tβ = 0, which, recall, is equivalent to tβ being a defined product in T ). Lemma 3.5. If X is weakly free then for every coordinate i ∈ k, every element x of the fiber space of X \i , and every finite subset Proof. Suppose that x is a nonzero element of the 0-fiber X \i 0 (i.e. the core) of X \i , p = b∈B P b , and px = x. The element x is in the core of X \i and so in a finite dimensional C * -algebra A as described in Corollary 2.15. Say, x = ij λ ij e ij for matrix units e ij , where each diagonal unit e ii is a sum of mutually orthogonal standard projections of X \i (Corollary 2.15). Since x = 0 we may suppose that 0 = λ i 0 j 0 = 1 for some fixed pair (i 0 , j 0 ). By Corollary 2.15, there is a nonzero standard projection q in X \i (so q ∈ P(T \i 1 P)) such that q ≤ e i 0 i 0 . Note that p commutes with e i 0 i 0 since standard projections commute. Then However, q ≤ p contradicts the weakly free condition in X. Thus , and the pairs (a k , b k ) are mutually distinct for different k's, for all 1 ≤ k ≤ l. Fix 1 ≤ k 0 ≤ l. Then, since px = x, and by several applications of Lemma 3.4, there exists a subset B ′ ⊆ T (e i ) such that for p ′ = b∈B ′ P b one has Thus r k 0 = p ′ r k 0 and r k 0 ∈ X \i 0 , and so by what we have already proved, r k 0 = 0. Since k 0 was arbitrary, Definition 3.6. A semigraph algebra X is called free if (i) P a is not in P for every half-standard word a ∈ P, and (ii) if p ∈ P and a 1 , . . . , a n are half-standard words with P a i < p for all Proof. We may write a = xs and b i = y i p i for certain x, y i ∈ T 1 and s, p i ∈ P. Assume that X is Toeplitz and P a (1 − P b i ) = 0 for all 1 ≤ i ≤ n. We have (by Definition 2.11 (v)) where p i,β is chosen in P such that p i β = βp i,β (by Definition 2.11 (iv)), and where in the last identity we successively used the formula (1 − p)(1 − q) = 1 − p − q for orthogonal projections p and q. Since the above is nonzero by assumption, we have Multiplying here from the left and right with x * and x, respectively, we get Similarly as for elements in a semigraph we write x 1 ≤ x 2 for half-standard words x 1 and x 2 if they allow a representation x = t 1 p 1 and x 2 = t 2 p 2 (t 1 , t 2 ∈ T and p 1 , p 2 ∈ P) with t 1 ≤ t 2 . Only in the next corollary H denotes the set of half-standard words. Corollary 3.8. If X is free then these two sets are the set of nonzero standard projections: Proof. That the first set is the set of nonzero standard projections follows from Lemma 3.7. For the second set just recall the identity (2) how we can write down a standard projection. The assertion can be directly read off from this expansion. To check weakly freeness, consider a finite subset B of T (e i ) and a nonzero standard where we used qe = eq e from Definition 2.11 (iv). The last inequality is here by freeness. Indeed, P eqeq x,b,e,f f * f / ∈ P by Definition 3.6 (i). On the other hand, We conclude from (4) and (5) by Lemma 3.7. Consequently p ≤ b∈B P b does not hold. Proposition 3.10. If X is free then X is cancelling. Proof. We are going to check the cancelling condition, Definition 2.16. Let w = αqβ * be a standard word with d(w) = 0 (α, β ∈ T 1 , q ∈ P). Let P be a nonzero standard projection. We must find a nonzero standard projection Q with Q ≤ P and QxQ = 0. 
We may write x ∈ T 1 , s ∈ P and the y i 's are half-standard words. If already P wP = 0 then the cancelling condition is verified. So assume that Then there is a pair (e, f ) ∈ T (min) 1 (x, α) such that v := pxs(eq x,α,e,f f * )qβ * pxsx * = 0 by (1). Consequently pxse = 0, and so is a standard projection, where x ′ := xe and s ′ ∈ P satisfies se = es ′ . (We intensively use the fact that the word set forms an inverse semigroup, Lemma 2.13.) Note that P ′ ≤ P . If already P ′ wP ′ = 0, then the cancelling condition is verified. So assume P ′ wP ′ = 0. Note that |x ′ | = |xe| ≥ |α| since (e, f ) ∈ T (min) 1 (x, α). Note that by (6) P ′ = px ′ s ′ x ′ * has the same shape as P , but with |x ′ | ≥ |α|. As we are going to search Q ≤ P ′ ≤ P , we may assume without loss of generality that we are given P with P wP = 0 and |x| ≥ |α|. Similar computations as above on the β-side show that we may also assume, by choosing a smaller projection than P , that also |x| ≥ |β|. Proof. By universality of the universal representation π : X −→ C * (X), the gauge map X σ λ −→ X π −→ C * (X) (λ ∈ T k ), Definition 2.9, induces a gauge mapσ λ : C * (X) −→ C * (X). Hence, by Lemma 2.10, π(X) is a quotient of the free algebra F by a subset of the fiber space. Thus π(X) must be a quotient of X by a subsets of its fiber space, and is thus a semigraph algebra by [3,Lemma 8.1]. Now assume that X is free. By the uniqueness theorem (Theorem 2.17) the cores of X and π(X) are isomorphic, and so the validity of Definition 3.6 (ii) carries over from X to π(X). If a halfstandard word π(a) in π(X) (a being a half-standard word in X) is not in π(P) then d(a) = d(π(a)) > 0, and so P a / ∈ P since X is free, and thus P π(a) = π(P a ) / ∈ π(P). This verifies Definition 3.6 (i) for π(X). Quell semigraph C * -algebras In this section we define quell semigraph algebras. Let us anticipate roughly what it is. A quell semigraph algebra could be most simply explained by considering a higher rank graph T [6]. Then the quell semigraph C * -algebra Q(T ) for T is a C * -algebra which is similar to the Toeplitz graph algebra T C * (T ) [7] but without the relations Q t = s(t) (t ∈ T ). So the Toeplitz graph algebra is a quotient of the quell semigraph algebra. Suppose that T is a finitely aligned k-semigraph. Define T (0) to be the set of elements of T which have degree zero (d(t) = 0). Note that if e ∈ T (0) and e 2 is defined then e is automatically idempotent. Indeed, by the factorizatrion property we may choose a unique decomposition e = ab for certain a, b ∈ T (0) . Then e 2 = abab. By the unique factorisation property e = a = b, and so e 2 = ab = e, which proves the claim. Moreover, if e ∈ T (0) is idempotent and x ∈ T then either ex is undefined or ex = x (since ex = e 2 x and so x = ex by the unique factorisation property). In particular, ef must be undefined for distinct e, f ∈ T (0) . In this section it is assumed that a semigraph T has only idempotent elements in T (0) . Let us summarise the consequences in a lemma. Suppose that T is a finitely aligned k-semigraph. Then one associates the quell semigraph algebra X to T . It is the universal * -algebra X generated by the set T subject to the following relations. 
(i) T consists of partial isometries, (ii) T (0) consists of projections, (iii) X respects the multiplication of T (that is, if xy = z holds in T for x, y, z ∈ T then this identity should also hold in X), (iv) xy = 0 for all x, y ∈ T whose product xy is undefined, (v) Q x and Q y commute for all x, y ∈ T , and (vi) (9) x * y = (e,f )∈T (min) (x,y) eQ yf f * for all x, y ∈ T . "Quelle" is the German word for source. Since the source projections play an extraordinary role in the last definition (as compared to ordinary graph algebras), we decided for the word quell. One may wish however to say "source semigraph algebra" rather than "quell semigraph algebra". Note that T is faithfully embedded in the free algebra F, but could perhaps degenerate in the quotient X. Soon we will see below (Corollary 4.9) however that T is also faithfully embedded in X. That is why we should not like to distinguish the k-semigraph T and its embedding in X. For the remainder of this section we shall assume that T is a finitely aligned k-semigraph and X its associated free semigraph algebra. We are going to check that ∆ is a semimultiplicative set. To this end we have to check associativity in the sense of Definition 2.1. Suppose that a, b, c ∈ ∆. If a, b, c ∈ T then (ab)c is defined if and only if a(bc) is defined as T is a semimultiplicative set. If a or b is in ∆ µ then both (ab)c and a(bc) are undefined. Suppose a, b ∈ T and c ∈ ∆ µ . Write c = yµ α . If (ab)c = ((ab)y)µ α is defined then (ab)y ≤ α i for some i. Thus α i = zaby for some z ∈ T and so also a(bc) = a((by)µ α ) = (a(by))µ α is defined and we have (ab)c = a(bc). Similarly we see the reverse conclusion. If a semimultiplicative set G has left cancellation, that means, st 1 = st 2 implies t 1 = t 2 (for all s, t 1 , t 2 ∈ G) then we can associate a left reduced C *algebra to G as defined next. (We shall write e i or δ i for the delta function 1 {i} .) Definition 4.7. For a semimultiplicative set G with left cancellation define λ : for all s ∈ G and α t ∈ C. The sub-C * -algebra of B(ℓ 2 (G)) generated by λ(G) is called the left reduced C * -algebra of G and denoted by C * r (G). Proposition 4.8. There is a representation ϕ : would be a well defined representation of the free algebra F generated by T . We need to show that this ϕ respects the defining relations of Definition 4.3. Note that the λ t 's are partial isometries with commuting range and source projections (these are canonical projections onto ℓ 2 (Z) for subsets Z ⊆ ∆). So, by the property of ∆ to be a semimultiplicative set and identity (10) the points (i)-(v) of Definition 4.3 (for ϕ(T ) = λ T rather than T ) are easy to see. (For (ii) recall Lemma 4.1.) Let us write down the adjoint operators ϕ(t) * . We have (11) ϕ(t) * δ tsµα = δ sµα , ϕ(t) * δ ts = δ s , ϕ(t) * δ a = 0 (else) To check Definition 4.3 (vi), consider x, y ∈ T . Suppose (12) ϕ(xx * yy * )δ a = δ a for an a ∈ ∆. Then a is a product a = ya y (since ϕ(y) * δ a = 0) for some a y ∈ ∆, and similarly a = xa x for some a x ∈ ∆. Say that a = ys y µ α = xs x µ α . Then v := ys y has degree d(v) ≥ d(x) ∨ d(y), and so there must exist a minimal common extension (e, f ) ∈ T (min) (x, y) such that v(0, |x| ∨ |y|) = xe = yf. Hence, multiplying in (14) from the left and right with x * and y, respectively, we see that the identity (9) holds in the image of ϕ. Corollary 4.9. The canonical map ι : T −→ X is an injective k-semigraph homomorphism and non-degenerate. Lemma 4.10. X is a semigraph algebra with generators Proof. 
We need to show Definition 2.11. That P is a commuting set of projections closed under multiplications (Definition 2.11 (i)) follows from Definition 4.3 (i) and (v). That T is a set of partial isometries closed under nonzero products (Definition 2.11 (ii)) follows from Definition 4.3 (i), (iii) and (iv). We are going to check that T 1 is a semigraph (Definition 2.11 (iii)). Let t ∈ T 1 , and t = t 1 t 2 be the unique decomposition in T subject to m = d(t 1 ) > 0 and n = d(t 2 ) > 0. This is the required decomposition in T 1 also. If however m = 0, say, then take the factorisation t = 1t. To prove (1) of Definition 2.11 (v), consider x, y ∈ T . Note that in (9) xe = yf and Q yf ∈ P, so (9) looks already similar like (1). We only have to take care whether T (x, y) and {(e ′ , f )} = T (min) (x, y). Thus xe ′ = yf so that e ′ ∈ T (0) is a right unit for xe ′ = yf by Lemma 4.1, that is, yf e ′ = yf . Hence, by the unique factorisation property in T , even f e ′ = e ′ . Thus 1f * y * yf f * = e ′ f * y * yf f * , so there is no difference. The cases d(x) < d(y) and d(x) = d(y) are treated similarly. If d(x) and d(y) are incomparable then there is obviously no difference. To check Definition 2.11 (iv), just note that Q x y = (x * x)y = x * (xy) = yQ xy 1 = yQ xy by (9) if x, y ∈ T and xy = 0. The algebra X is generated by P and T since e = e * e = Q e ∈ P for e ∈ T (0) by Definition 4.3 (ii). There is a gauge action σ on X given by σ λ (t) = λ d(t) t for t ∈ T, λ ∈ T k . Indeed, it exists on the free algebra F generated by T , and so also on X, as the relations of Definition 2.11 are invariant under the σ λ 's. Hence, Lemma 2.10 verifies Definition 2.11 (vi). Definition 4.11. Let T be a semigraph and X its free semigraph algebra. Then Q(T ) := C * (X) is called the quell semigraph C * -algebra associated to T . Let us use the following abbreviation. If α = (α 1 , . . . , α n ) ∈ T n then µ α denotes s(α 1 )µ α . The next lemma shows us that the representation ϕ can distinguish the elements of P. The restriction a → ϕ(a)| ℓ 2 (T ) is not able to do this, and this is why we considered ∆ at all and not just T , which would have been much simpler. , and p α is nonzero, s(α 1 ) = s(α i ) for all i by the orthonality of the idempotent elements of T (Lemma 4.1 and Definition 4.3 (iv)). Consequently, This proves ϕ(p α )δ µα = δ µα . We may write q = Q y 1 . . . Q y l for some y i ∈ T (see (15)). Note that either ϕ(q)µ α = µ α or ϕ(q)µ α = 0. Assume the first case. Then by a similar computation as in (17) we see that for every i there is a j i such that y i ≤ α i j . Thus, for every i Q y i ≥ Q α i j ≥ p α . Hence q ≥ p α , which is a contradiction to the assumption q < p α . Theorem 4.13. The quell semigraph algebra X associated to a k-semigraph T is free, weakly free and cancelling. It thus satisfies the Cuntz-Krieger uniqueness theorem, that is, there is only one C * -representation of X (up to isomorphism) which is non-vanishing on nonzero standard projections. This universal C * -representation, which is also injective on the core, is the representation ϕ from Proposition 4.8. In particular, there is an isomorphism between the quell semigraph C * -algebra and the left reduced C * -algebra of the semimultiplicative set ∆, i.e. Q(T ) ∼ = C * r (∆). Proof. We are going to check that X is free (Definition 3.6). Let p ∈ P and y 1 , . . . , y n be half-standard words. Assume that P y i < p for all 1 ≤ i ≤ n. By (15) and Lemma 4.12 there is an α ∈ T n such that p = p α . 
Let ϕ : X → C * (∆) be the representation of Proposition 4.8. By (11) we have On the other hand, if d(y i ) = 0 then P y i ∈ P, and so ϕ(y i )δ µα = 0 by Lemma 4.12 (as y i = P y i < p α ). Summarising these facts we get ϕ n i=1 P y i δ µα = 0 and ϕ(p α )δ µα = δ µα (Lemma 4.12). Consequently n i=1 P y i < p α . This proves Definition 3.6 (ii). If y is a half-standard word with d(y) > 0 then ϕ(P y )δ µ β = 0 for any β by (11). Consequently, P y cannot be in P by Lemma 4.12. This verifies Definition 3.6 (i). We are going to check that ϕ is faithful on standard projections. By Lemma 3.8 a nonzero standard projection p allows a representation p = P a n i=1 (1 − P b i ) with P b i < P a and a ≤ b i . Say that a = tp α = tQ t p α for t ∈ T 1 and some α ∈ T n according to Lemma 4.12. We may incorporate Q t in p α and assume that α 1 = t. We have is not true then b i = tp β (for some p β ∈ P) since a ≤ b i , and then, as P b i < P a , i.e. tp β t * < tp α t * , one has t * tp β t * t < t * tp α t * t = p α (the last identity by the fact that α 1 = t). Hence, ϕ(p β Q t )δ µα = ϕ(p β )δ µα = 0 by Lemma 4.12. So also in this case we have (19). Identities (18) and (19) show that ϕ(p) = 0. By Corollary 2.15, ϕ is injective on the core. Hence also the universal representation of X must be injective on the core. Since we have also checked that X is free, X is weakly free and cancelling by Propositions 3.9 and 3.10. Thus, by the Cuntz-Krieger uniqueness theorem (Theorem 2.17), ϕ is the universal representation, which implies Q(T ) ∼ = ϕ(X) ⊆ C * r (∆). C * r (∆) is generated by the operators (λ t ) t∈T , since the operators λ t are zero for t ∈ ∆ µ (the composition ts is invalid in ∆ for any element t ∈ ∆ µ ). Consequently, the image of ϕ is dense in C * r (∆) and so Q(T ) ∼ = ϕ(X) = C * ((λ t ) t∈T ) = C * r (∆). The next lemma is intended to serve as an example for a particular quell semigraph algebra. Let ζ n be the graph induced by the skeleton consisting of one vertex ν and n arrows s 1 , . . . , s n starting and ending in this single vertex ν; n may be any cardinal number. Lemma 4.14. The quell semigraph C * -algebra Q(ζ n ) is the universal unital C * -algebra generated by the free inverse semigroup (of partial isometries) of n generators t 1 , . . . , t n with the additional relations that the range projections of these generators are mutually orthogonal, i.e. P t i P t j = 0 for all i = j. Proof. Set A = C * (1, t 1 , . . . , t n ). In A, 1 is a unit and the words in the letters t i form an inverse semigroup (where the inverse element should be the adjoint element in A, that is, inverse semigroup elements happen to be partial isometries); moreover P t i P t j = 0 for i = j. A is universal with respect to these relations. The quell semigraph C * -algebra Q(ζ n ) is a semigraph C *algebra (Lemma 4.10). We propose a homomorphism Since the generators s i form an inverse semigroup by Lemma 2.13, P s i P s j = 0 by Definition 4.3 (vi), and ν is a unit in Q(ζ n ) by Definition 4.3 (iii) and the fact that vs i = s i v = s i , the map α is a well defined homomorphism. We propose an inverse homomorphism A is generated by β ({1, s 1 , . . . , s n }), so β is surjective. We need to show that the relations of Definition 4.3, for β(ζ n ) rather than ζ n , hold in A. The first case is tautological, the third one reduces to a tautology in an inverse semigroup, so holds in the image of β. The second case we demonstrate for x = s 1 s 2 and y = s 1 s 3 s 4 , say. 
One has β(x) * β(y) = β(x) * β(x)β(x) * β(y)β(y) * β(y) = 0 since where we have used inverse semigroup rules (commutativity of projections) and the fact that P s 2 P s 3 = 0. Since all relations of Definition 4.3 evidently hold in the image of β, β must be a well defined homomorphism. This proves the lemma as α and β are inverses to each other. 5. K-theory of weakly free semigraph C * -algebras In this section we are going to compute the K-theory of a weakly free semigraph C * -algebra by an application of [4, Theorem 2.2]. Since the setting of [4] is somewhat lenghty, we do not recall it here but directly apply it to semigraph algebras. Definition 5.1. For a semigraph algebra X we say the source projections cover the generators if for every p ∈ P there is a t ∈ T such that p ≤ Q t . Let X be a k-semigraph algebra. Assume that k is finite, the universal representation X −→ C * (X) is injective (we shall regard X as a subset of C * (X)), and that the source projections cover the generators (Definition 5.1). Define A i = { tp | t ∈ T (e i ) , p ∈ P }\{0} for 1 ≤ i ≤ k. Lemma 5.2. X is generated by Proof. Let t ∈ T . Then t = t 1 . . . t n = (t 1 Q t 1 ) . . . (t n Q tn ) for certain t i ∈ T with d(t i ) = e i j . So t is a product of elements of A as Q s ∈ P for any s ∈ T . Let p ∈ P. Since the source projections cover the generators by assumption, there is a t ∈ T such that p = pQ t = pt * tp = (tp) * (tp). This is a product of elements of A again as we may write tp as tp = t 1 (t 2 p) where t 1 , t 2 ∈ T , d(t 2 ) = e i , so t 2 p ∈ A i , and t 1 may be further expanded as above. Note that by the unique factorisation property in T every standard word w may be written as T (e i ) and some p ∈ P. Define X to be C * (X). By Lemma 5.2 X is generated by a finitely partitioned alphabet A. We have a gauge action (as defined in Definition 2.9) with respect to this alphabet on X and consequently a degree map determined by d(a i ) = e i for a i ∈ A i . We define S to be the set of half-standard words. Their range projections commute by Corollary 2.14. The core is locally matrical by Corollary 2.15. We resolve the core by finite dimensional C * -algebras as they are described in Corollary 2.15, and in particular choose D to be the set of all their diagonal entries. Since the diagonal elements of these matrices of the core are expressable as direct sums of standard projections (Corollary 2.15), and every standard projection can be written as a sum of range projections of half-standard words (Lemma 2.13), we have D ⊆ P , where we set P := span Z { P x ∈ X | x ∈ S }, Q := Alg * { Q x ∈ X | x ∈ S }. We define W ′ to be the set of standard words. They linearly span X by Lemma 2.14. We have all requirements for [4, Theorem 2.2], except the technical conditions (a) and (b) from [4]. We have now all requirements for [4, Theorem 2.2] which states the following. By (15) we have Q = span(P). Since the projections P commute, we have K 0 (C * (Q)) = Ring(P), where Ring(P) denotes the subring of C * (X) generated by P, regarded then as an abelian group under addition. Since Q is a subset of the core, which is locally matrical, C * (Q) is an AF-algebra and thus K 1 (C * (Q)) = 0. Theorem 5.4 states that the K-theory of C * (X) is the K-theory of C * (Q), which we have now. Theorem 5.4, Lemma 5.3 and the above discussion now yield the following theorem. Theorem 5.5. Let X be a weakly free k-semigraph algebra (with k < ∞) whose universal representation X −→ C * (X) is injective. 
Suppose that the source projections cover the generators. Then the semigraph C * -algebra has the following K-theory: K 1 (C * (X)) = 0, K 0 (C * (X)) ∼ = Ring(P) via [p] ←→ p for p ∈ P. (Ring(P) denotes the subring of X generated by P, regarded then as an abelian group under addition.) We are going to write Ring(P) as a direct limit of subrings by using common refinements of projections in P. For each finite subset Q of P we consider the subring Ring(Q) generated by Q. This ring is generated by the base elements for all q ∈ Q\A. This follows from Lemma 3.7. LetQ denote the family of all subsets A of Q for which p Q,A is nonzero. Since the the p Q,A 's are mutually orthogonal, we have Ring(Q) = Z |Q| . We write Ring(P) as a direct limit (23) Ring(P) = lim − → Q⊆P Ring(Q) = lim − → Q⊆P Z |Q| . 6. K-theory of quell semigraph C * -algebras We aim now to apply Theorem 5.5 to the quell semigraph C * -algebra. To this end consider the quell semigraph algebra X associated to a finitely aligned k-semigraph T . We may go over to its image X ′ in Q(T ), and write again X rather than X ′ for simplicity. This semigraph algebra is weakly free by Theorem 4.13 and Lemma 3.11. By (15) it is clear that the source projections cover the generators. We can thus apply Theorem 5.5 if k is finite. If k is infinite then we write Q(T ) as the direct limit Q(T ) ∼ = lim − → k 0 C * (X (k 0 ) ), where k 0 runs over the finite subsets of k, and X (k 0 ) denotes the sub-semigraph algebra of X which is generated by all elements of T which have degree zero at any coordinate outside of k 0 (same proof as Lemma 3.2). Again, in X (k 0 ) the source projections cover the generators by (15). Since X is weakly free, X (k 0 ) is also weakly free. So we can apply Theorem 5.5 to each C * (X (k 0 ) ) and get (24) K 0 (Q(T )) = lim − → k 0 K 0 (C * (X (k 0 ) )) = lim − → k 0 Ring(P (k 0 ) ) = Ring(P).
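For reference, the conclusion of the computation can be written out in display form. This is only a typeset restatement of formulas (23) and (24) above; the vanishing of K_1 is not displayed in the original text but follows from Theorem 5.5 applied to each C*(X^(k_0)) together with the continuity of K-theory under direct limits.
\[
K_0(Q(T)) \;=\; \varinjlim_{k_0} K_0\bigl(C^*(X^{(k_0)})\bigr)
\;=\; \varinjlim_{k_0} \operatorname{Ring}(P^{(k_0)})
\;=\; \operatorname{Ring}(P)
\;=\; \varinjlim_{Q \subseteq P \text{ finite}} \mathbb{Z}^{|\widetilde{Q}|},
\qquad
K_1(Q(T)) \;=\; 0,
\]
where \(\widetilde{Q}\) denotes the family of subsets \(A \subseteq Q\) whose refinement projection \(p_{Q,A}\) is nonzero.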
2013-06-20T21:26:08.000Z
2011-11-18T00:00:00.000
{ "year": 2016, "sha1": "0f6dfcb3a0d2fd18300fa325698d352ba0706e81", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1111.4392", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8e6df44e0fdd0f53a4174659709e7b794a73c0a8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
31870602
pes2o/s2orc
v3-fos-license
Phosphoenzyme Conversion of the Sarcoplasmic Reticulum Ca2+-ATPase
MOLECULAR INTERPRETATION OF INFRARED DIFFERENCE SPECTRA

Time-resolved Fourier transform infrared difference spectra of the phosphoenzyme conversion and Ca2+ release reaction (Ca2E1-P → E2-P) of the sarcoplasmic reticulum Ca2+-ATPase were recorded at pH 7 and 1 °C in H2O and 2H2O. In the amide I spectral region, the spectra indicate backbone conformational changes preserving conformational changes of the preceding phosphorylation step. β-sheet or turn structures (band at 1685 cm-1) and α-helical structures (band at 1653 cm-1) seem to be involved. Spectra of the model compound EDTA for Ca2+ chelation indicate the assignment of bands at 1570, 1554, 1411 and 1399 cm-1 to Ca2+ chelating Asp and Glu carboxylate groups partially shielded from the aqueous environment. In addition, an E2-P band at 1638 cm-1 has been tentatively assigned to a carboxylate group in a special environment. A Tyr residue seems to be involved in the reaction (band at 1517 cm-1 in H2O and 1515 cm-1 in 2H2O). A band at 1192 cm-1 was shown by isotopic replacement in the γ-phosphate of ATP to originate from the E2-P phosphate group. This is a clear indication that the immediate environment of the phosphoenzyme phosphate group changes in the conversion reaction, altering phosphate geometry and/or electron distribution.

Muscle relaxation is mediated by the removal of cytosolic Ca2+ by the Ca2+-ATPase of the sarcoplasmic reticulum membrane. The ATPase couples active Ca2+ transport to the hydrolysis of ATP (1-4) in a reaction cycle that is shown in Scheme 1. In an essential reaction step, ATP reacts with Asp-351 to form a phosphoenzyme intermediate (Ca2E1 → Ca2E1-P), which then converts from an ADP-sensitive form (Ca2E1-P) to an ADP-insensitive form (E2-P) that is more rapidly hydrolyzed. This phosphoenzyme conversion is associated with Ca2+ release toward the sarcoplasmic reticulum lumen against the concentration gradient (1, 3, 5). The structural origin of the change of accessibility of the Ca2+ sites and of the essential reduction of Ca2+ affinity upon phosphoenzyme conversion Ca2E1-P → E2-P has yet to be elucidated.
The potentially large body of structural information regarding the ATPase reaction cycle provided by infrared spectroscopy has only recently begun to be exploited using the approach of effector molecule-induced infrared difference spectroscopy (6 -13). The method uses the release of effector molecules from biologically "silent" photolabile derivatives, termed caged compounds (14 -17), to generate high quality infrared difference spectra. (The reaction products and infrared difference spectra of caged ATP photolysis and of side reactions have been characterized (18 -20).) The absorbance changes seen in these difference spectra give evidence for conformational changes of the polypeptide backbone and for alterations to the environment of amino acid side chains that take place in the reaction investigated. Spectra of the phosphoenzyme conversion reaction in the 1800 to 1000 cm Ϫ1 spectral range have previously been described (8) and in that work were calculated by subtracting two difference spectra obtained with two different types of samples; a normalized difference spectrum of the Ca 2 E 1 3 Ca 2 E 1 -P reaction was subtracted from a normalized difference spectrum of the Ca 2 E 1 3 E 2 -P reaction. The use of two different samples in the subtraction and the normalization to identical protein content limit the reliability of these spectra and make it desirable to obtain the phosphoenzyme conversion spectrum more directly. This is possible using time-resolved rapid scan Fourier transform infrared (FTIR) 1 spectroscopy (10). Using this approach, we present here the spectra of the phosphoenzyme conversion reaction obtained in H 2 O and 2 H 2 O after the release of unlabeled ATP or [␥- 18 O 3 ]ATP. In particular, bands of the putative Ca 2ϩ chelating carboxylate groups and of the phosphoenzyme phosphate group are discussed. MATERIALS AND METHODS Sample Preparation-Samples for time-resolved infrared spectroscopy of the Ca 2 E 1 P 3 E 2 -P reaction were prepared as described previously (8, 10) by removal of free water from a sarcoplasmic reticulum suspension equilibrated in H 2 O or 2 H 2 O buffer. Samples were immediately rehydrated with H 2 O or 2 H 2 O with or without 20% Me 2 SO. This method resulted in active ATPase samples (6). Approximate concentrations were 0.7 mM ATPase, 300 mM imidazole, pH 7.0, 1 mM CaCl 2 , 20 mM glutathione, 20 mM caged ATP, 0.5 mg/ml A23187, 2 mg/ml adenylate kinase, and 20% Me 2 SO in approximately 1 l of sample volume. FTIR Measurements-Time-resolved FTIR measurements of the Ca 2 E 1 -P 3 E 2 -P reaction were performed at 1°C with a modified Bruker IFS 66 spectrometer as described previously (10). Difference spectra for the reaction were obtained by subtracting a spectrum representing predominantly Ca 2 E 1 -P, and to a small extent E 2 -P (recorded 3.3-11 s after photolysis of caged ATP in H 2 O or after 11-19 s in 2 H 2 O), from a spectrum representing E 2 -P (recorded after 88 and 146 s in H 2 O and 2 H 2 O, respectively). The resulting spectrum was normalized as described (10) to the full amplitude of the absorbance changes associated with the phosphoenzyme conversion reaction. * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 
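As a computational illustration of how the difference spectra described under FTIR Measurements are formed, the subtraction and normalization can be sketched as follows. The NumPy-based representation and the use of the peak-to-peak span as the "full amplitude of the absorbance changes" are assumptions made for this sketch, not the processing actually used in the cited work.

```python
import numpy as np

def conversion_difference(spec_early, spec_late):
    """Difference spectrum of the Ca2E1-P -> E2-P conversion.

    spec_early : absorbance averaged over the early time window
                 (predominantly Ca2E1-P, e.g. 3.3-11 s after photolysis in H2O)
    spec_late  : absorbance averaged over the late time window
                 (E2-P, e.g. recorded after 88 s in H2O)
    Positive bands in the result are characteristic of E2-P,
    negative bands of Ca2E1-P.
    """
    diff = np.asarray(spec_late) - np.asarray(spec_early)
    # Normalize to the full amplitude of the conversion-related absorbance
    # changes; the peak-to-peak span is used here as a stand-in.
    return diff / np.ptp(diff)
```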
As the difference spectra were obtained directly from the time-resolved measurements, a normalization of spectra to an identical protein concentration is in principle not necessary. However, for a better comparison of samples in H 2 O and 2 H 2 O and to prevent the possible predominance of individual samples with high protein content in the averaged spectra, spectra were normalized to an identical protein concentration before averaging, as described (21). Absorbance spectra in H 2 O and 2 H 2 O of the model compound for Ca 2ϩ release EDTA were recorded with and without Ca 2ϩ at 20°C and pH 12.8. Small variations in the path length between different samples were corrected by normalizing the water absorption of every sample to a single water spectrum that served as a standard for the path length. Sample concentrations were 84 mM. Band-narrowing Procedures-Apart from second-derivative spectra, the following procedure was applied. The original spectrum was smoothed over 24 cm Ϫ1 and multiplied by 0.95. This spectrum was then subtracted from the original spectrum, which eliminates broad features of the original spectrum. The resulting spectrum is dominated by the fine-structure in the original spectrum, and thus we term the method "fine-structure enhancement" in the following text. When tested with protein absorbance spectra, this method gives results similar to Fourier self-deconvolution (data not shown). For a clearer presentation, finestructure enhanced spectra were multiplied by 10 and second-derivative spectra by Ϫ15. To assess whether peaks in the fine-structure enhanced spectra correspond to "true" component bands or are the result of artifacts in the band-narrowing procedure (see Fig. 3), the original difference spectra were fitted, and the resulting fit was fine-structure enhanced and compared with the fine-structure enhanced original spectra (data not shown). Interestingly, it was found that the spectral region of 1700 -1670 cm Ϫ1 can accurately be fitted by only three bands at 1693 (Ϫ), 1686 (Ϫ), and 1670 cm Ϫ1 (ϩ) despite the plateau at 1678/1676 cm Ϫ1 in the fine-structure enhanced H 2 O spectrum (solid line in Fig. 3B). It is not necessary to introduce an additional band in the fitting model to reproduce that plateau. RESULTS AND DISCUSSION Model Spectra of Ca 2ϩ Release-From site-directed mutagenesis studies it is thought that Glu-309, Glu-771, and Asp-800 form part of the high affinity Ca 2ϩ binding sites of Ca 2 E 1 and Ca 2 E 1 -P (4). Thus, we studied the effect of Ca 2ϩ binding on the infrared spectrum of carboxylate groups using the Ca 2ϩ chelator EDTA. Fig. 1A shows absorbance spectra in 2 H 2 O of free EDTA (solid line) and of the EDTA complex with Ca 2ϩ (dotted line). The bands near 1590 and 1410 cm Ϫ1 can be assigned to the COO Ϫ antisymmetric stretching vibration ( as ) and the symmetric stretching vibration ( s ), respectively. The shoulder at 1612 cm Ϫ1 indicates some heterogeneity of the COO Ϫ groups, resulting from a small proportion of EDTA with protonated nitrogen atoms (22,23). Upon Ca 2ϩ release there is a downshift of 4 -8 cm Ϫ1 for both of the main bands (at 1590 and 1414 cm Ϫ1 ), which translates in the difference spectrum of Ca 2ϩ release (absorbance of free EDTA minus absorbance of the Ca 2ϩ complex) into the two minimum/maximum features at 1590/1568 and 1418/1400 cm Ϫ1 . When 2 H 2 O is replaced by H 2 O, the bands of the antisymmetric stretching vibration are downshifted by 6 -8 cm Ϫ1 for free EDTA and for the complex (data not shown). 
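Returning to the band-narrowing procedure described under "Materials and Methods" above: the fine-structure enhancement step is simple enough to sketch in code. The snippet below is an illustrative reconstruction, not the authors' implementation; it assumes an evenly spaced wavenumber axis and uses a moving-average filter for the 24 cm−1 smoothing step (the smoothing method itself is not specified in the text), while the factors 0.95, 10 and −15 are taken directly from the description.

```python
import numpy as np

def fine_structure_enhance(wavenumbers, spectrum, smooth_width=24.0,
                           scale=0.95, display_factor=10.0):
    """Sketch of the 'fine-structure enhancement' step: smooth the spectrum
    over ~24 cm-1, scale by 0.95, subtract from the original, and multiply
    the result for display."""
    # Spectral sampling interval (assumes an evenly spaced wavenumber axis)
    step = abs(wavenumbers[1] - wavenumbers[0])
    window = max(1, int(round(smooth_width / step)))
    # Moving-average smoothing; the original smoothing algorithm is not specified
    kernel = np.ones(window) / window
    smoothed = np.convolve(spectrum, kernel, mode="same")
    # Subtracting the broad background leaves the narrow fine structure
    enhanced = spectrum - scale * smoothed
    return display_factor * enhanced

def second_derivative(wavenumbers, spectrum, display_factor=-15.0):
    """Second-derivative spectrum, scaled by -15 for presentation as described."""
    d2 = np.gradient(np.gradient(spectrum, wavenumbers), wavenumbers)
    return display_factor * d2
```

Subtracting 95% of the smoothed spectrum removes the broad underlying features, so the narrow component bands dominate the processed spectrum and can be compared with the fitted component bands as described above.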
The band of the symmetric stretching vibration is less sensitive and shifts SCHEME 1. downwards only for free EDTA (by 2 cm Ϫ1 ). A similar behavior has been observed for sodium acetate (24) and for Asp and Glu (25,26). Thus, the model spectra have identified marker band pairs for Ca 2ϩ release from carboxylate groups near 1575 and 1410 cm Ϫ1 . The former is expected to be sensitive to H 2 O 3 2 H 2 O replacement with a higher frequency in 2 H 2 O. The Phosphoenzyme Conversion Spectrum- Fig. 2A shows the difference spectrum of the phosphoenzyme conversion and Ca 2ϩ release reaction, Ca 2 E 1 -P 3 E 2 -P (solid line), and the respective spectrum in 2 H 2 O buffer (dashed line). Positive bands are characteristic for E 2 -P, negative bands for Ca 2 E 1 -P. (25). However, a strong salt bridge or hydrogen bond to one of the oxygens of a carboxylate group could shift the band well above 1600 cm Ϫ1 (24, 32), and a reasonable scenario for the band at 1638 cm Ϫ1 could be that the parting Ca 2ϩ is replaced by a strong hydrogen bond donor or a positive charge. Bands at 1758 and 1710 cm Ϫ1 have tentatively been assigned to the protonation of at least two chelating carboxylate groups (33), which is in line with the small downshift observed upon Protein Backbone: the Amide I Region-The largest absorbance changes (up to 0.5% of the total protein absorbance) are observed in the amide I region of the spectrum (1700 -1610 cm Ϫ1 ) as are the strongest effects of protein deuteration. In this region, the amide I mode of the polypeptide backbone as well as the side chains of Asn, Gln, Arg, HisH ϩ , and Lys absorb strongly (25,26,30). In the lower wavenumber part of that region, Asp and Glu COO Ϫ groups with strong interactions may contribute as discussed above. Large downshifts (Ն30 cm Ϫ1 ) of bands upon H 2 O 3 2 H 2 O replacement are characteristic for the side chain absorptions of Asn, Gln, Lys, and Arg, as are small upshifts as discussed above for COO Ϫ bands and downshifts of up to 10 cm Ϫ1 for amide I and HisH ϩ bands. No band shifts are expected for groups that are located in parts of the protein that are not accessible to deuteration. The large number of alterations in this spectral region, when H 2 O is replaced by 2 H 2 O, makes it difficult to arrive at a unique explanation for the band shifts, and additional experiments as well as data processing were necessary, as described below. The recording of spectra as soon as possible after H 2 O 3 2 H 2 O replacement (30 min to 1 h at 1°C) shows that most of the observed effects take place in protein regions readily accessible to deuteration. The spectrum shortly after deuteration (data not shown) is very similar to the one after prolonged incuba- tion. The only clear exception is the minimum at ϳ1630 cm Ϫ1 that develops over a few hours of incubation. This position is characteristic of amide I modes of ␤-sheet structures and of the CϭO group of deuterated Asn or Gln residues (26). Band-narrowing procedures (see "Materials and Methods") were applied to identify band shifts upon 1 H 3 2 H exchange. Fig. 3A shows the original difference spectra and Fig. 3, B and C, the resulting spectra after band narrowing. These reveal essentially the same peak positions in H 2 O and 2 H 2 O (see Fig. 3, B and C), with one exception. The negative band at 1689 cm Ϫ1 in the unprocessed original H 2 O spectrum (solid line in Fig. 3A) is composed of at least two bands, giving minima in the processed spectrum at 1693 and 1685 cm Ϫ1 (solid line in Fig. 3, B and C). 
The highest wavenumber component of the negative band seems to be nearly unaffected by protein deuteration and is observed in 2 H 2 O at 1692 cm Ϫ1 . It could be caused by a conformational change of ␤-sheet or turn structures or an Asn, Gln, or Arg side chain that is located in the core of the protein and is inaccessible to deuteration. The other component of the 1689 cm Ϫ1 band seems to shift from its position in the processed spectrum (Fig. 3B) (Fig. 3A). This shift is characteristic of amide I modes of the polypeptide backbone, and the position then indicates a conformational change of ␤-sheet or turn structures. These structures are predicted to occur only in the extramembraneous domains of the protein (34,35), and thus the observed conformational change is likely to take place in these protein domains. As the positions of the other bands in the amide I region are hardly affected by deuteration, the isotope effects are the result either of intensity changes or of bands that are not evident after applying the band-narrowing procedures because they are broader than the ones detected. Narrow bands tend to dominate the processed spectra. The bands hardly affected by 1 residue in a helical or unordered conformation. The imide band is found approximately 20 cm Ϫ1 lower than an amide band (36). Thus, the band at 1607 cm Ϫ1 , the position of which seems to be too low for an amide I band, could also be caused by a Pro imide group. Interestingly, there are three Pro residues in the putative Ca 2ϩ binding transmembrane helices M4 and M6, mutation of which affects Ca 2ϩ affinity and phosphoenzyme conversion (4). Alternatively, both bands may be caused by COO Ϫ side chain groups not interacting with bulk water. This assumption relies on the fact that the bands do not show the upshift upon H 2 O 3 2 H 2 O replacement characteristic for carboxylate groups in water. Protein Backbone: the Amide II Region-None of the three bands in the amide II region (1570 -1530 cm Ϫ1 ) shows the strong sensitivity toward 1 H 3 2 H exchange expected for the amide II mode of the protein backbone. The amide II mode of backbone elements accessible to deuteration therefore does not seem to be affected by phosphoenzyme conversion and Ca 2ϩ release. Two of the bands (at 1570 and 1554 cm Ϫ1 ) have been tentatively attributed above to the as vibration of COO Ϫ groups. Protein Backbone: the Amide III Region-The amide III mode absorbs between 1400 and 1200 cm Ϫ1 and is sensitive to deuteration (37). This property is observed in the spectra for the bands at 1337 and 1318 cm Ϫ1 (see Fig. 2A), which therefore might be attributed to amide III modes. The position of these bands is characteristic for turn structures (37), and they are probably related to the amide I band at 1687 cm Ϫ1 , which has tentatively been assigned to turn or ␤-sheet structures. Alternatively, the bands at 1337 and 1318 cm Ϫ1 may be caused by the ␦(COH) mode of Ser, Asp, or Glu with a weakly bonded OH group. Side Chain Modes Other than Carboxyl Modes-As mentioned above, bands of the side chains of Asn, Gln, Arg, and Lys in the amide I region show relatively large shifts upon deuteration. The extinction coefficient of the former three residues is relatively high (25,26), whereas that of Lys is smaller. Thus, Lys bands may be masked by stronger bands. The HisH ϩ mode near 1631 cm Ϫ1 (30) shows a 10 cm Ϫ1 downshift in 2 H 2 O (26) and absorbs relatively strongly in H 2 O (30). 
These characteristic shifts have not been observed in the spectra, and thus there is no clear evidence for the participation of Asn, Gln, Arg, and HisH ϩ in the phosphoenzyme conversion and Ca 2ϩ release reaction. This statement holds only for those residues that are accessible to 1 H 3 2 H exchange, which should include residues in the ATP binding site, the catalytic site, and at least part of the Ca 2ϩ binding sites, because several bands that were tentatively assigned to the Ca 2ϩ binding sites show an effect upon The position of the band at 1517 cm Ϫ1 and its slight down-shift of 2 cm Ϫ1 in 2 H 2 O is characteristic for a ring mode of protonated Tyr (25,26,30). Also, the band pairs at 1283 and 1264 cm Ϫ1 may be attributed to the (C-O) mode of Tyr but also to a Trp mode, which is observed for indole at 1276 cm Ϫ1 (38,39). It has been suggested that Tyr-763 is involved in the cytoplasmic gate to the Ca 2ϩ binding sites (4). The very small band at 1365 cm Ϫ1 might be caused by a ␦ s (CH 3 ) mode of aliphatic side chains. Absorption of the Phosphate Group- Fig. 2B shows a comparison between the conversion spectrum obtained with unlabeled ATP and that obtained with [␥-18 O 3 ]ATP. With the heavier isotope, phosphate bands are expected to be downshifted, thereby enabling the identification in the spectrum of alterations to the phosphate group. As the ␥-phosphate is transferred to Asp-351 before phosphoenzyme conversion, differences between the spectra with the two isotopes will identify the absorbance of the phosphoenzyme phosphate group. As expected, the two spectra superimpose very well above 1250 cm Ϫ1 , where phosphate groups do not absorb. However, the band at 1192 cm Ϫ1 is reduced upon isotopic substitution. Instead, the intensity is higher for [␥- 18 O 3 ]ATP between 1180 and 1150 cm Ϫ1 . A difference spectrum between labeled and unlabeled phosphate group (data not shown) shows that the band at 1192 cm Ϫ1 for the unlabeled phosphate seems to shift to 1157 cm Ϫ1 for the labeled phosphate, in agreement with the expected isotopic shift of 20 -30 cm Ϫ1 for phosphate groups (18,40). This identification of a phosphate band in the difference spectrum clearly shows that the conversion reaction considerably changes the environment of the phosphoenzyme phosphate group, thus affecting its electron density distribution and/or binding geometry. The band position at 1192 cm Ϫ1 is rather unusual for a phosphate group and could be explained in two ways: (i) a widening of the P-O angles leading to a stronger coupling between the P-O vibrations and thus to a stronger splitting between the as and s modes; or (ii) an increase in electron density of some or all of the P-O bonds, either by breaking of the hydrogen bonds to all phosphate oxygens or by strong hydrogen binding to one or two of the phosphate oxygen atoms, thus increasing the electron density in the other P-O bond(s). The band is also affected by H 2 O 3 2 H 2 O replacement, which seems to indicate that the phosphate oxygen interacts with either a water molecule or deuterated protein residues. The active site of E 2 -P was shown to be in a closed conformation (41) shielded from bulk water with a hydrophobic environment detected close to the ribose OH groups of a fluorescent ATP analogue (42,43). 
This finding is in contrast to the properties of Ca 2 E 1 -P, and it is thought that the decrease in the active site water activity upon the Ca 2 E 1 -P 3 E 2 -P conversion is responsible for the higher hydrolysis rate of E 2 -P as compared with Ca 2 E 1 -P (44). The infrared spectra presented here show that this conformational change has a direct effect on phosphate conformation and/or electron density distribution. Creation of a hydrophobic environment alone however, may not be sufficient to explain the phosphate band at 1192 cm Ϫ1 , because dehydration of the model compound acetyl phosphate shifts the as PO 3 2Ϫ band only from 1132 to 1177 cm Ϫ1 at most (data not shown). Comparison with Spectra of ATPase Phosphorylation- Fig. 4 shows spectra in 2 H 2 O of ATPase phosphorylation Ca 2 E 1 ⅐ATP 3 Ca 2 E 1 -P (solid line) (21) and of the overall reaction of phosphorylation and phosphoenzyme conversion Ca 2 E 1 ⅐ATP 3 E 2 -P (dotted line). These spectra were obtained from timeresolved measurements of the same set of samples. Negative bands are characteristic for Ca 2 E 1 ⅐ATP, positive bands either for Ca 2 E 1 -P (solid line) or E 2 -P (dotted line). (The spectrum of Ca 2 E 1 ⅐ATP 3 Ca 2 E 1 -P also shows a small contribution because of the E 2 -P that is already formed, as indicated by the E 2 -P marker band near 1750 cm Ϫ1 .) The spectral region shown includes the amide I region (1700 -1610 cm Ϫ1 ) that is sensitive to conformational changes of the protein backbone. The comparison indicates that bands characteristic for Ca 2 E 1 -P are still present in E 2 -P, which is especially obvious for the bands at 1682, 1630, and 1593 cm Ϫ1 . This finding indicates that at least some of the alterations to the protein conformation induced by ATPase phosphorylation seem to be preserved or even "enhanced" in the subsequent transition to E 2 -P. It seems as if the enzyme conformation on the way from Ca 2 E 1 -P to E 2 -P goes further away from the pre-phosphorylation conformation of Ca 2 E 1 ⅐ATP instead of returning to it. CONCLUSIONS Infrared difference spectra show that the protein conformation changes in the phosphoenzyme conversion reaction, preserving conformational changes of the preceding step of enzyme phosphorylation. ␤-sheet or turn structures of the extramembraneous domains and most likely ␣-helical structures seem to be affected. The net change of secondary structures, however, is small (10). The release of Ca 2ϩ proceeds from carboxylate groups that seem to be partly shielded from the aqueous environment. It is associated with the protonation of at least two carboxylate groups presumably involved in Ca 2ϩ chelation. A change of environment of a Tyr residue was detected as well as a direct effect of phosphoenzyme conversion on the geometry and/or electron density distribution of the phosphoenzyme phosphate group. The latter is likely to be a prerequisite for the higher susceptibility toward hydrolytic attack of E 2 -P as compared with Ca 2 E 1 -P. Interestingly, the currently assigned bands of the phosphate group and of Ca 2ϩ release from carboxylate groups appear with the same reaction rate (10), showing that Ca 2ϩ release and the change of phosphate environment proceed at the same time. This finding rules out a significant population of Ca 2 E 2 -P, a postulated state (45) with phosphate properties of E 2 -P that still binds Ca 2ϩ .
2018-04-03T02:01:08.212Z
1999-08-06T00:00:00.000
{ "year": 1999, "sha1": "32a0f6b4d0b11231e78fa4b0419884d369dcf53d", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/274/32/22170.full.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "acf30960616afe62fb423b340b27ff70a4ef5900", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
155093477
pes2o/s2orc
v3-fos-license
Nine-, but Not Four-Days Heat Acclimation Improves Self-Paced Endurance Performance in Females Although emerging as a cost and time efficient way to prepare for competition in the heat, recent evidence indicates that “short-term” heat acclimation (<7 days) may not be sufficient for females to adapt to repeated heat stress. Furthermore, self-paced performance following either short-term, or longer (>7 days) heat acclimation has not been examined in a female cohort. Therefore, the aim of this study was to investigate self-paced endurance performance in hot conditions following 4- and 9-days of a high-intensity isothermic heat acclimation protocol in a female cohort. Eight female endurance athletes (mean ± SD, age 27 ± 5 years, mass 61 ± 5 kg, VO2peak 47 ± 6 ml⋅kg⋅min−1) performed 15-min self-paced cycling time trials in hot conditions (35°C, 30%RH) before (HTT1), and after 4-days (HTT2), and 9-days (HTT3) isothermic heat acclimation (HA, with power output manipulated to increase and maintain rectal temperature (Trec) at ∼38.5°C for 90-min cycling in 40°C, 30%RH) with permissive dehydration. There were no significant changes in distance cycled (p = 0.47), mean power output (p = 0.55) or cycling speed (p = 0.44) following 4-days HA (i.e., from HTT1 to HTT2). Distance cycled (+3.2%, p = 0.01; +1.8%, p = 0.04), mean power output (+8.1%, p = 0.01; +4.8%, p = 0.05) and cycling speed (+3.0%, p = 0.01; +1.6%, p = 0.05) were significantly greater in HTT3 than in HTT1 and HTT2, respectively. There was an increase in the number of active sweat glands per cm2 in HTT3 as compared to HTT1 (+32%; p = 0.02) and HTT2 (+22%; p < 0.01), whereas thermal sensation immediately before HTT3 decreased (“Slightly Warm,” p = 0.03) compared to ratings taken before HTT1 (“Warm”) in 35°C, 30%RH. Four-days HA was insufficient to improve performance in the heat in females as observed following 9-days HA. INTRODUCTION Hot ambient temperatures and elevated humidity are known to negatively impact endurance exercise performance (Tatterson et al., 2000;Périard et al., 2011). Heat acclimation is an effective strategy to drive favorable physiological adaptations, thereby reducing athletic performance impairments caused by these challenging environments (Sawka et al., 2011;Périard et al., 2015;Racinais et al., 2015). Heat acclimation typically consists of repeated daily heat stress exposures, with exposure durations commonly lasting between 60 and 90 min. Traditionally, 10 days of heat exposure are undertaken to elicit the heat acclimation phenotype and improve endurance performance in the heat (Armstrong and Maresh, 1991;Lorenzo et al., 2010;Sawka et al., 2011), though 75-80% of physiological adaptations occur in the first 4-7 days of heat acclimation in male cohorts (Pandolf, 1998;Shapiro et al., 1998). Based on this, Garrett et al. (2009) first demonstrate meaningful performance improvements following just 5 days of isothermic heat acclimation, termed "short-term heat acclimation" (STHA). STHA has since been defined as being <7 days in length (Garrett et al., 2011), and is promoted as a cost and time efficient option for athletes preparing for competition in the heat. Successful STHA lasting 4-7 days in male cohorts has been well documented (Petersen et al., 2010;Fujii et al., 2012;Chen et al., 2013;Garrett et al., 2012Garrett et al., , 2014Best et al., 2014;Costa et al., 2014;Gibson et al., 2015;Mee et al., 2015;Racinais et al., 2015;Guy et al., 2016;James et al., 2016;Willmott et al., 2016). 
However, few studies to date have examined STHA effects in female cohorts. It was initially shown that there were no sex differences (4 females vs. 4 males) in adaptations to 10-days heat acclimation when aerobic fitness and surface area to mass ratios were matched (Avellini et al., 1980). However, Mee et al. (2015) more recently reported a significant sex difference in the time course of heat acclimation (i.e., 5-vs. 10-days), challenging past assumptions. In this study Mee et al. (2015) found that following STHA females did not exhibit a lower resting core temperature, or an attenuated rise in core temperature and heart rate (HR) when exercising at a fixed workload, critical requirements in demonstrating the heat acclimation adaptation. However, the male cohort successfully attained these adaptations following the same protocol. Mee et al. (2018) then demonstrated that a longer daily heat exposure (achieved via 20-min of sitting in sauna suits in 50 • C, 30%RH immediately before 90-min isothermic heat acclimation) successfully induced heat adaptations in a female cohort following STHA (5-days). The authors therefore concluded that females require either a longer daily heat exposure (Mee et al., 2018), or a greater number of heat exposures to elicit favorable physiological adaptations. It is unclear if females can achieve meaningful performance improvements following STHA or if a longer heat acclimation period is needed. Sunderland et al. (2008) reported a 33% improvement in distance run during a repeated shuttle run performance test following STHA in a female cohort as well as a reduced rate of rise in rectal temperature (T rec ). The authors attributed performance improvements to the high-intensity intermittent exercise performed during the acclimation sessions, a strategy implemented by Pethick et al. (2018) to successfully induce plasma volume expansion in a female cohort following 5-days high-intensity heat acclimation. Therefore, the addition of high-intensity interval training (HIIT) may offer a means to increase effectiveness of STHA in a female cohort. However, the performance test employed by Sunderland and colleagues was a time to exhaustion trial, which is a less reliable test and subject to greater variation than self-paced performance tests (Hopkins et al., 2001;Borg et al., 2018). Self-paced performance outcomes and their improvements following heat acclimation have only been documented in male (Garrett et al., 2012;Keiser et al., 2015;Racinais et al., 2015;Guy et al., 2016;Wingfield et al., 2016) or mostly male (Lorenzo et al., 2010) cohorts, and remain to be investigated in females. Such information is timely for female athletes competing at upcoming international competitions in hot climates, such as the 2020 Olympic Games in Tokyo and the 2019 IAAF World Athletics Championships in Doha. Currently, female athletes must either depend on conflicting literature or fill knowledge gaps with information inferred from male cohorts. Therefore, the aim of the present study was to investigate self-paced endurance performance in hot conditions following 4-days (STHA), and 9-days of a high-intensity isothermic heat acclimation protocol in a female cohort. It was hypothesized that females would not exhibit performance improvements in self-paced exercise following STHA, as previous studies indicate that 4-days of 90-min heat acclimation is unlikely to be a sufficient stimulus for the thermoregulatory and cardiovascular adaptations necessary for performance improvements in the heat. 
We further hypothesized that performance improvements in power output and time trial distance would occur following 9days heat acclimation. General Overview and Design This study was approved by the University of Birmingham Ethics Committee, and conformed to the standards set by the Declaration of Helsinki 2013. All participants were informed of the experimental procedures and possible risks involved in the study before their written consent was obtained. Each participant completed a general exercise questionnaire and a menstrual cycle questionnaire (detailing the day their menstrual cycle commenced, premenstrual symptoms, and contraceptive medication or devices) to ascertain what phase of their cycle they were in for each time trial. All experimental procedures were completed in the environmental chamber (TIS Services, Hampshire, United Kingdom) in the School of Sport, Exercise and Rehabilitation Sciences building at the University of Birmingham. Participants performed all heat acclimation and testing sessions at the same time of day (±2 h), and at similar times to their normal training sessions so as not to disrupt their normal circadian rhythms (Reilly and Brooks, 1986). This included mornings, afternoons, or evenings. Participants were familiarized with the 15-min time trial performance tests in cool conditions (15 • C, 30% RH) on three occasions, with the FIGURE 1 | Schematic diagram of the time trials and heat acclimation sessions (HA). Time trials were conducted in hot conditions [HTT; 35 • C, 30% relative humidity (RH)], before HA (HTT1), after 4-days (HTT2) and after 9-days (HTT3) HA. On days 2-4 and 7-9, participants completed 15-min of high-intensity intervals (HIIT), where maximum effort was given for 15-s, with 45-s of active recovery. Participants then undertook 75-min of isothermic heat acclimation (where exercise intensity was manipulated to increase and maintain rectal temperature at ∼38.5 • C; 40 • C, 30%RH) with permissive dehydration. There was one rest day following 5-days HA. Cool Time Trial refers to a 15-min cycling time trial in cool conditions (15 • C, 30% RH), which was part of a larger dataset that are not reported herein. final occasion 48 h prior to beginning the protocol. Participants performed a 15-min cycling time trial in hot (35 • C, 30% RH) conditions pre-acclimation, following 4-days (STHA), and following 9-days isothermic heat acclimation (HA). An overview of the hot time trials and HA sessions are displayed in Figure 1. This experiment was conducted in the United Kingdom during the months of February, April, May, and June, when mean ambient temperatures were below 20 • C (exclusive of 3 days where the mean daily temperatures were 23, 24, and 27 • C, respectively). The protocol was performed in addition to normal training sessions (i.e., weight training and normal conditioning such as swimming and running). Participants' activity was not restricted, except on the day prior to (no exhaustive exercise) or the day of (no other activity) time trials in hot conditions (HTTs). Participants were asked to refrain from alcohol and overly strenuous exercise outside of the laboratory 48 h before time trials. Participants Eight recreational endurance athletes aged 21-35 years volunteered for and completed this study. An additional participant volunteered, but dropped out due to relocation after preliminary testing and was not included in the results. 
All participants were familiar with competitive, race-style endurance events, and trained 5 ± 1 days per week, averaging 9 ± 4 h of weekly endurance exercise training. Participants were eumenorrheic or using various forms of hormonal contraceptives ( Table 1) and did not report any negative premenstrual symptoms that may have affected performance during time trials (Giacomoni et al., 2000). Participants had not previously undergone a heat acclimation protocol and had not been in hot conditions for the past 2 months. Participants also completed an incremental (20 W · min −1 stages) exercise test on a cycle ergometer (Sport Excalibur, Lode, Groningen, Netherlands) to determine maximal aerobic capacity (VO 2peak ), with expired air (Vyntus CPX, Jaeger, Wuerzberg, Germany) and HR (Polar Electro, Kempele, Finland) measured continuously. Personal characteristics are summarized in Table 1. Heat Acclimation Sessions A combination of HIIT, permissive dehydration and isothermic heat acclimation (Garrett et al., 2014;Sunderland et al., 2008) was used to construct a "high intensity" HA protocol. Participants OCP, oral contraceptive pill user (pill brand); IUD Coil, copper coil intrauterine device; EU, eumenorrheic natural cycle. Frontiers in Physiology | www.frontiersin.org voided their bladder upon arrival to the laboratory to provide a urine sample. Towel-dried, nude body mass (NBM) was recorded to 0.1 kg using digital scales (Seca 877, Seca, Hamburg, Germany) before and immediately after each session to estimate sweat loss. Conditions during HA sessions were set to 40 • C, 30%RH with a fan-generated airflow of ∼3 m second −1 facing participants. All heat acclimation sessions and time trials were completed using a cycle ergometer (Velotron, RacerMate Inc., Seattle, WA, United States), which was calibrated according to manufacturer instructions for the chosen temperature and confirmed to exhibit <1% deviation from calibration settings before each use. Following a 5-min, self-selected warm-up, participants completed 15-min of high-intensity intervals, where participants were asked to give maximum effort for 15-s, with 45-s of active recovery. The aim of the high-intensity intervals was to rapidly increase T rec . This was followed by an additional 75-min of continuous cycling at an intensity manipulated with the aim to further increase T rec and maintain it at ∼38.5 • C (Patterson et al., 2004;Garrett et al., 2012), totalling 90-min HA plus 5-min warm up. On days that hot time trials (HTT) preceded HA sessions, the HTTs were used in place of the high-intensity intervals. On these test days, the temperature of the environmental chamber was immediately increased to 40 • C, 30%RH following the time trial. There was one rest day following 5-days HA. Cool Time Trial refers to a 15-min cycling time trial in cool conditions (15 • C, 30% RH), which was part of a larger dataset that are not reported herein. Power output, HR, and T rec across HA sessions during STHA (days 1-4) and days 5-9 are depicted in Figure 2. Ratings of perceived exertion (RPE; Borg, 1982), thermal sensation, and thermal comfort were recorded at 15-min intervals during HA sessions. Participants were instructed to refrain from fluid consumption as much as could be tolerated during HA sessions to induce the added stressor of dehydration (permissive dehydration; Garrett et al., 2014). 
Fluid consumed (295 ± 235 ml each session) was recorded by weighing water bottles to 0.001 L (Oertling, United Kingdom) before and after HA sessions, and was considered in the calculations of total body sweat loss. Heat acclimation involved 9 consecutive days of HA sessions, except for 1 day of rest following STHA (Figure 1).

Hot Time Trials
Time trials were performed in hot conditions (35 °C, 30%RH with a fan-generated airflow of ∼3 m · second−1 facing participants) on the 1st day of HA (Day 1; HTT1), and following 4-days HA (Day 5; HTT2) and 9-days HA (Day 10; HTT3). Participants were instructed to maintain normal hydration before each HTT, which was verified with a urine osmolality value of ≤700 mOsm · kg−1 (Sawka et al., 2007). Participants lay supine for 10 min of stabilization at room temperature prior to each trial to collect resting measures of Trec and blood lactate. Participants entered the environmental chamber and commenced a 5-min warm up at a self-selected pace, before completing a 15-min, self-paced cycling time trial. Power output and distance cycled were recorded continuously by the Velotron Coaching Software. Participants were aware of the time elapsed, as displayed by a stop-clock mounted to the handles of the cycle ergometer; however, they were blinded to any other physiological or performance feedback (i.e., HR, power output, distance cycled, etc.). Participants were given equal verbal encouragement by the same researchers at similar time points during the HTT. Free drinking was permitted during HTTs. RPE, blood lactate and sweat gland activity were recorded immediately following the HTTs. Ratings of thermal comfort and thermal sensation were reported inside the environmental chamber, preceding the warm-up for HTTs, as well as immediately after. Following HTTs, participants completed 5 min of self-paced active recovery before proceeding with the acclimation session for that day.

FIGURE 2 | Power output (left), heart rate (middle), and rectal temperature (right), at 10 min intervals during the heat acclimation protocol. Open squares represent mean ± SD of data from days 1 to 4, and solid squares represent mean ± SD of data from days 5 to 9. Shaded area represents a 15-min time trial or 15-min high-intensity intervals.

Measures
Urine osmolality was measured prior to each experimental session to assess hydration (Osmocheck, Vitech Scientific Ltd., West Sussex, United Kingdom). Trec was measured using a rectal thermistor inserted 10 cm past the anal sphincter prior to beginning each experimental session (Mon-a-Therm, Covidien, Mansfield, MA, United States). Weighted mean skin temperature (Tsk) was recorded using skin thermistors (Squirrel Thermal Couples, Grant Instruments, Cambridge, United Kingdom) attached to four sites: the mid-point of the right pectoralis major (Tchest), midpoint of the right biceps brachii (Tarm), right rectus femoris (Tthigh), and right gastrocnemius lateral head (Tlower leg). Skin and rectal thermistors were connected to a Squirrel Data Logger (Squirrel 2020 series, Eltek, Ltd., United Kingdom) and were recorded at 30-s intervals throughout HA sessions and HTTs. HR (Polar Electro, Kempele, Finland) was also recorded throughout each session. Power output and distance cycled were recorded by the Velotron Coaching Software (Velotron CS 2008, RacerMate Inc., Seattle, WA, United States).
Blood lactate measures were taken from a finger-tip blood sample and immediately analyzed using a Lactate Plus analyzer (Lactate Plus, Nova Biomedical, Waltham, MA, United States). Active sweat glands were quantified using a modified-iodine paper technique with computer aided analysis (Gagnon et al., 2012). Samples were collected from the dorsal side of the thickest segment of the forearm. Thermal sensation and thermal comfort ratings were measured using 13-point and 10-point scales, respectively, which were modified from scales used by Gagge et al. (1967). Data Analysis Mean T rec for the final 75 min of the session, which followed the 15-min high-intensity intervals, is represented by T rec75 . Maximum T rec recorded during the session (Max T rec ) was used to calculate T rec increase from rest ( T rec ). T sk was calculated as a weighted average according to Ramanathan (1964): Estimated sweat rate relative to body surface area (SR BSA ) was calculated from changes in NBM pre-to post-session with considerations of water consumed body surface area [(BSA); calculated using the formula derived by Du Bois and Du Bois, 1916] and normalized for exercise time: Two values were obtained for measurements of resting blood lactate and an additional two values were obtained for blood lactate immediately following HTTs. The results were averaged to yield a single value for each time point (pre-and post-trial). Extreme outliers falling outside the physiological range were excluded, and only the rational value was used (Goodwin et al., 2007; n = 3 incidences). Power output (watts) was recorded each second during HTTs, and an average of each minute's power output was used to calculate area under the curve (AUC; Pruessner et al., 2003). AUC was also calculated for T rec (recorded at 30-s intervals) during HTTs. All data were analyzed using SPSS statistical software (SPSS version 24.0.0, SPSS, Chicago, IL, United States). To assess performance and physiological differences during HA days 1-4 vs. days 5-9, a mean value was calculated for each participant across the aforementioned days, and analyzed using a repeated-measures one-way analysis of variance (ANOVA). Mean performance values during HTTs (i.e., power output and speed), AUC comparisons (power output and T rec ), distance cycled, and physiological measures between HTT1, HTT2, and HTT3, were also analyzed using a repeated-measures one-way ANOVA. Additionally, 1 min averages of power output were analyzed using a two-way repeated-measures ANOVA (3 HTT × 15 time points). Normality of the data was assessed using Mauchly's test of sphericity, and Greenhouse-Geisser corrections were applied where assumptions of sphericity were violated. When a significant main effect was found, Bonferroni-corrected post hoc comparisons were made. Main effect sizes for both oneway and two-way ANOVAs were calculated using partial eta-squared (η 2 p ), with η 2 p > 0.06 representing a moderate difference and η 2 p > 0.14 representing a large difference (Cohen, 1988). To assess ordinal data (i.e., RPE, thermal sensation and thermal comfort) differences during HA days 1-4 vs. days 5-9, and between HTT1, HTT2, and HTT3, Friedman's test was performed with post hoc analysis by Wilcoxon signrank tests. Absolute data are expressed as mean ± standard deviation (SD) and mean within-subject differences are presented with 95% confidence limits (mean difference, 95% CL: lower limit, upper limit). Significance was set at p < 0.05 for each analysis. 
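The derived variables described in this section can be expressed compactly in code. The sketch below is a plausible reconstruction rather than the authors' analysis script: the skin-temperature weighting uses the standard Ramanathan (1964) coefficients, BSA uses the Du Bois and Du Bois (1916) formula, sweat loss is taken as the pre-to-post change in nude body mass plus fluid consumed (with 1 kg assumed to equal 1 L of sweat), and the AUC from minute averages is computed with the trapezoidal rule. The equations themselves appear to have been dropped from the extracted text above, so the constants and units here should be checked against the cited sources.

```python
import numpy as np

def mean_skin_temp(t_chest, t_arm, t_thigh, t_lower_leg):
    # Ramanathan (1964) four-site weighting: 0.3 chest, 0.3 arm, 0.2 thigh, 0.2 lower leg
    return 0.3 * t_chest + 0.3 * t_arm + 0.2 * t_thigh + 0.2 * t_lower_leg

def du_bois_bsa(mass_kg, height_cm):
    # Du Bois and Du Bois (1916) body surface area in m^2
    return 0.007184 * mass_kg ** 0.425 * height_cm ** 0.725

def sweat_rate_per_bsa(pre_mass_kg, post_mass_kg, fluid_consumed_l,
                       height_cm, exercise_time_h):
    # Sweat loss estimated from the change in nude body mass, corrected for fluid intake,
    # then normalized to body surface area and exercise time (assumed units: L per m^2 per h)
    sweat_loss_l = (pre_mass_kg - post_mass_kg) + fluid_consumed_l
    bsa = du_bois_bsa(pre_mass_kg, height_cm)
    return sweat_loss_l / bsa / exercise_time_h

def auc_from_minute_averages(values, dt_min=1.0):
    # Area under the curve computed from minute averages with the trapezoidal rule
    return np.trapz(values, dx=dt_min)
```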
A power analysis indicated that eight participants were a sufficient sample size to detect an 8-10% difference in power output during time trial performance (as observed by Lorenzo et al., 2010; Keiser et al., 2015). This analysis used an accepted parameter of power (β ≥ 0.80) at an α level of 0.05.

RESULTS
…; p = 0.01) were lower during HA sessions on days 5-9 as compared to HA sessions on days 1-4. Mean HR and percentage of age-estimated maximum heart rate (%HRmax) (−5 beats · minute−1, [−1, −10]; p = 0.03, and −3% [−1, −5]; p = 0.02, respectively) were also lower during HA sessions on days 5-9 as compared to HA sessions on days 1-4. These physiological changes were present in spite of a significantly higher workload (i.e., power output) on days 5-9 as compared to HA sessions on days 1-4 (−9 W, [−3, −14], p = 0.01). These results were matched by the comparison of minute averages of power output. A two-way ANOVA yielded a large …

[Table — HA sessions, days 1–4 vs. days 5–9: Sweat loss (%BM) 2.9 ± 0.5 vs. 2.9 ± 0.6. Data are presented as mean ± SD. Trec, rectal temperature; Trec75, mean rectal temperature recorded during the final 75-min of the session; ΔTrec, maximal change in rectal temperature during exercise from rest; HR, heart rate; %HRmax, percentage of age-estimated maximum heart rate; SRBSA, estimated sweat rate relative to body surface area; %BM, percentage of body mass. * Significantly different from Days 1–4 (p < 0.05).]

Mean (p = 0.63), peak (p = 0.97), and change (p = 0.46) in Trec during the HTTs were not affected by HA (Table 4). A two-way ANOVA showed no significant main effect of condition (p = 0.36) or condition-time interaction (p = 0.65) for Trec measured each minute of HTTs (Figure 5). There was an average reduction in Trec at rest, although this was not significant (p = 0.07; Table 4). AUC for Trec during HTTs (calculated from minute averages) was not significantly different between HTTs (p = 0.39). Mean (p = 0.26) and peak (p = 0.13) skin temperatures (Tsk) during HTTs were not affected by HA (Table 4 and Figure 5). Mean HR (p = 0.48; Figure 5) and mean (p = 0.45) and peak (p = 0.38) percentage of age-estimated HR maximum (%HRmax) during HTTs were not different between HTTs (Table 4). There was no significant difference in SRBSA during HTTs and including the 75 min of HA that followed (main effect: p = 0.08). There were no differences in blood lactate at rest (immediately preceding HTTs; p = 0.34) or immediately following HTTs (p = 0.41; Table 4). Ratings of thermal sensation taken in the environmental chamber immediately before exercise were significantly different between the HTTs (main effect: p = 0.02; Table 4), and on average, corresponded to "Warm" (HTT1), "Warm" (HTT2) and "Slightly Warm" (HTT3). Post hoc pairwise comparisons indicated that differences were between HTT1 and HTT3 (p = 0.03). There were no significant differences between HTT1 and HTT2 (p = 0.10) or between HTT2 and HTT3 (p = 0.08). Ratings of thermal comfort taken in the environmental chamber immediately before exercise were not significantly different between HTTs (average ratings corresponded to ratings …).

DISCUSSION
This study was designed to determine whether STHA (4-days) is sufficient to improve self-paced endurance performance in hot conditions in females, as has been observed in males, or whether a longer heat acclimation stimulus (i.e., 9-days) is required. In this study's female cohort, STHA did not significantly improve time-trial performance in the heat; however, 9-days HA did.
These results were consistent with the study hypothesis, which predicted that STHA would be insufficient to improve self-paced performance in females, and that a longer heat acclimation stimulus would be required to induce the physiological adaptations needed for performance improvements in the heat. Self-Paced Endurance Performance Following STHA, female participants showed no significant performance improvements in distance cycled, mean power output, or speed during HTT2, as compared to HTT1. This is in direct contrast to a number of studies in male cohorts, where males have shown meaningful physiological adaptions and improved endurance performance in the heat following STHA (Garrett et al., 2009(Garrett et al., , 2012Chen et al., 2013;Racinais et al., 2015;Guy et al., 2016;James et al., 2016;Willmott et al., 2016;Wingfield et al., 2016). Thus, it appears that STHA using 90 min of daily exercise heat stress is insufficient to improve endurance performance in females, reflecting the lack of physiological adaptation to heat acclimation previously demonstrated in females following STHA . The current study's performance results following STHA differ from those observed by Sunderland et al. (2008), who reported a 33% improvement in distance run during a repeated shuttle run performance test (Loughborough Intermittent Shuttle Test) following STHA (4-days) in a female cohort. Of note, time to exhaustion is the main outcome measure of the Loughborough Intermittent Shuttle Test. This outcome is influenced by technique (i.e., ability to change direction and accelerate; Mendez-Villanueva and Buchheit, 2013), making it less reliable and subject to greater variation than the self-paced performance trial used in the current study (Hickey et al., 1992;Gosens et al., 2015;Borg et al., 2018). Furthermore, the behavioral regulation of performance possible in a self-paced time trial is not available in a time to exhaustion protocol (Schlader et al., 2011). Indeed, the lower pre-exercise thermal sensation reported by participants after 9days HA may be an indication of perceptual changes contributing to behavioral regulation (i.e., pacing). Thus, the self-paced performance test used in the current study is a more reliable and holistic assessment of performance than a time to exhaustion test. Despite efforts in the current study to create an "intense" heat stimulus by combining isothermic heat acclimation, HIIT, and permissive dehydration, it still appears that females require either a longer daily heat exposure (Mee et al., 2018), or a greater number of heat exposures (as observed in the current study and by Mee et al., 2015) to improve exercise performance in the heat. This is the first study to quantify improvements in selfpaced time trial performance following a longer (i.e., 9-days) heat acclimation stimulus in a female cohort. The ∼8% mean improvement in mean power output in HTT3 as compared to HTT1 is comparable to performance improvements observed in male, or mostly male cohorts following similar heat acclimation protocols. Keiser et al. (2015) showed that male participants experienced a ∼10% improvement in power output during a 30-min self-paced time trial following 10-days heat acclimation (daily bouts: 90-min cycling at 50% VO 2max in 38 • C, 30%RH). Lorenzo et al. 
(2010) also found that participants (10 males and 2 females) had an 8% mean improvement in power output during their 1-h self-paced time trial following 10-days heat acclimation (daily bouts: 90-min cycling at 50% VO2max in 40 °C, 30%RH). In the current study, improvements in mean power output coincided with improvements in mean cycling speed and distance covered from HTT1 to HTT3. These data demonstrate that 9-, but not 4-days heat acclimation, improves endurance performance outcomes in females.

Physiological Measures
Participants exhibited reduced markers of physiological strain (i.e., Trec, Tsk and HR) during days 5-9 of HA, as compared to days 1-4. These physiological changes occurred in spite of an increased mean power output during days 5-9 of HA. Although these data indicate a reduction in the desired stimulus across the heat acclimation protocol, they also indicate that the greatest heat stimulus was administered during STHA. Furthermore, the improved performance in HTT3 as compared to the previous HTTs indicates that this reduced stimulus during days 5-9 was still effective in producing HA-related performance improvements. Also, this HA protocol produced a sufficient dehydration stimulus, as the ∼3% body mass loss achieved across HA days 1-4 and 5-9 in addition to permissive dehydration presumably exceeded the osmotic threshold required for compensatory fluid regulatory responses (i.e., 2% body mass loss; Cheuvront and Kenefick, 2014). However, as we did not measure changes in plasma volume, it is unknown whether participants experienced the fluid regulatory responses typically associated with heat acclimation.

FIGURE 6 | Example of sweat gland activity measured immediately following time trials (A) pre-acclimation (HTT1), (B) following 4-days heat acclimation (HTT2), and (C) following 9-days heat acclimation (HTT3). Bottom row images are scanned copies of iodine-cotton paper applied to participant's forearm. Top row of images are the same images following computer processing (ImageJ; Gagnon et al., 2012).

There was a trend for a lower Trec at rest before HTT3, which appeared to influence Trec during the initial minutes of HTT3 (albeit not significantly). Menstrual cycle phase and associated changes in female sex hormones influence resting Trec (Inoue et al., 2005) and the overall thermoregulatory set point range (Charkoudian and Stachenfeld, 2016). This may have contributed to the non-significant change in resting Trec observed in the current study. By the end of each HTT, Trec reached similar values (∼38.1 °C). This is perhaps unsurprising as a previous study has shown that heat acclimation does not change the maximal Trec reached (40.1-40.2 °C) during a 43.4-km time trial in the heat, despite a lower Trec for the first 80% of the post-acclimation time trial. In the current study, there was an observed increase of active sweat glands at the end of HTT3 (Table 4). This contrasts with findings in male cohorts, where sweat gland activation did not increase following 8-10-days heat acclimation (Inoue et al., 1999; Lee et al., 2010; Poirier et al., 2016). In the current study, the number of active sweat glands (75 ± 25 per cm2) at the end of HTT3 was lower than values previously reported in acclimated males (∼96-108 per cm2; Inoue et al., 1999; Lee et al., 2010; Poirier et al., 2016) and unacclimated females (∼93 per cm2; Knip, 1969).
Therefore, changes observed following a 15-min HTT may not indicate improved maximal sweat gland activation per se, but rather earlier activation of the sweat glands. Although there is large intra-subject coefficient variation associated with this measure, the 33 and 22% mean improvements following HTT3 in comparison to HTT1 and HTT2, respectively, surpass the ∼11% coefficient of variation reported by Gagnon et al. (2012). Perspectives These results contribute to the limited research that informs the expected performance outcomes of heat acclimation for female athletes. The results of this study indicate that while heat acclimation can be an effective training component in preparation for competition in the heat, female athletes may require up to 9 days of 90-min heat acclimation sessions before experiencing performance improvements. However, there will be individual variation in how athletes (male or female) respond to heat acclimation (Racinais et al., 2012). In the current study, three participants' performance deteriorated in HTT2 as compared to HTT1, whereas four participants showed improvements and one participant showed no change. Thus, some female athletes may achieve meaningful performance benefits after 4-days heat acclimation, while others could require longer than 9-days. A heat acclimation protocol lasting longer than 9-days has yet to be initiated in a female cohort, which would be hypothesized to further stabilize adaptions and improve performance . It is also unclear how different phases of the menstrual cycle/contraception may affect heat adaption during acclimation. Future research is also needed to clarify the impact of mixed-intensity heat acclimation on longer performance tests in both male and female athletes. Considerations Despite the absence of a control group, it is unlikely that performance improvements in HTT3 were due to learning or training effects. After preliminary testing and familiarizations, HTT1 was the fourth time that participants would have completed the 15-min time trial, minimizing learning effects. Furthermore, performance improvements in the current study are similar to previous studies (Lorenzo et al., 2010;Keiser et al., 2015), where control groups showed no improvements. It is possible that the high-intensity heat acclimation protocol used in the current study may have caused a general fatigue that impaired performance during HTT2 and HTT3 (Schmit et al., 2018;Reeve et al., 2019). However, this is a negative bias as fatigue-related performance impacts would presumably have been greatest at HTT3. A further consideration is that heat acclimation adaptions are specific to the type/intensity of exercise employed (Wingfield et al., 2016). Therefore, the 15min of HIIT undertaken at the beginning of each HA session may have facilitated specific adaptations. Whether this type of mixed-intensity heat acclimation (15-min HIIT + 75-min isothermic HA) would be equally or more effective than steadystate isothermic heat acclimation protocols typically reported in the literature remains unknown. This study did not control for menstrual cycle. Recent data has shown that performance under heat stress is not affected by menstrual cycle or oral contraceptive pill (OCP) use in trained female athletes (Lei et al., 2017(Lei et al., , 2018, nor does menstrual cycle affect whole-body heat loss (Notley et al., 2018). Eumenorrheic participants and OCP users did not cross over phases between HTT1 and HTT2. 
Participants were counterbalanced in their phases in HTT3, with both eumenorrheic participants being in opposite phases and both OCP users being in opposite phases (i.e., pill-taking, or non-pill-taking). None of the other four participants [contraceptive implant or copper intrauterine device (IUD)] were menstruating during the protocol, mitigating concerns of premenstrual symptoms that could affect performance (Giacomoni et al., 2000). Despite this, variable hormonal states may have affected the degree of relative heat stimulus administered when targeting an absolute core temperature of 38.5 • C during heat acclimation sessions. Finally, it should be noted that measures of sweat gland activity were taken from sites on the forearm and are not a precise indication of whole-body sweat gland adaptations given the regional heterogeneity of sweat gland activity. While an increased sweat gland activity may imply a better use of body surface area to dissipate heat, sweat gland activation is not directly proportional to local sweat output of the area (Poirier et al., 2016). In future, measures of local sweat output should be combined with measures of sweat gland activation to fully understand sex differences in peripheral sudomotor adaptations. CONCLUSION This study was the first to document performance outcomes during self-paced time trials in a female cohort following STHA (4-days) and 9-days high-intensity, isothermic HA. In the current study, females did not show an improvement in selfpaced endurance performance following STHA. This differs from the well-documented performance improvements previously observed in male cohorts following STHA. However, following 9-days HA, females achieved meaningful improvements in selfpaced endurance performance. These improvements included an ∼8% increase in mean power output, a ∼3% increase in distance cycled, and a ∼3% increase in speed when performing a 15min self-paced time trial in hot conditions (HTT). These data offer a reference for the changes which female athletes can expect when undergoing heat acclimation with the aim of improving self-paced endurance exercise performance in hot conditions, and provides further evidence that STHA may be insufficient for female athletes. ETHICS STATEMENT This study was carried out in accordance with the recommendations of the University of Birmingham Ethics Committee with written informed consent from all subjects.
2019-05-16T13:04:17.720Z
2019-05-16T00:00:00.000
{ "year": 2019, "sha1": "98c0697763006c8f99333d35f3e5644dbb6a6ede", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2019.00539/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98c0697763006c8f99333d35f3e5644dbb6a6ede", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
266358942
pes2o/s2orc
v3-fos-license
Outcomes of carbon ion radiotherapy compared with segmentectomy for ground glass opacity-dominant early-stage lung cancer

Purpose: This study aimed to compare the outcomes of patients with ground-glass opacity (GGO)-dominant non-small cell lung cancer (NSCLC) who were treated with carbon ion radiotherapy (CIRT) versus segmentectomy.

Methods: A retrospective review of medical records was conducted. The study included 123 cases of clinical stage 0/IA peripheral NSCLC treated with single-fraction CIRT from 2003 to 2012, 14 of which were determined to be GGO-dominant and were assigned to the CIRT group. As a control, 48 consecutive patients who underwent segmentectomy for peripheral GGO-dominant clinical stage IA NSCLC were assigned to the segmentectomy group.

Results: The patients in the CIRT group, compared with the segmentectomy group, were significantly older (75 ± 7.2 vs. 65 ± 8.2 years, P = 0.000660), more likely to be male (13/14 vs. 22/48, P = 0.00179), and had a lower forced vital capacity (91 ± 19% vs. 110 ± 13%, P = 0.0173). There was a significant difference in the 5-years overall survival rate (86% vs. 96%, P = 0.000860), but not in the 5-years disease-specific survival rate (93% vs. 98%, P = 0.368).

Discussion: Compared with segmentectomy, CIRT may be an alternative option for patients with early GGO-dominant NSCLC who are poor candidates for, or who refuse, surgery.

Introduction
Lung cancer is the leading cause of cancer morbidity and mortality in men, whereas in women it ranks third in incidence, after breast and colorectal cancer, and second in mortality, after breast cancer [1]. Surgery is the gold standard treatment for early-stage lung cancer [2]. A Japanese lung cancer registry study of 18,973 lung cancer patients treated with surgery in 2010 revealed 5-years overall survival (OS) rates of 97.0%, 91.6%, 81.4%, 74.8%, 71.5%, 60.2% and 58.1% for clinical stages 0, IA1, IA2, IA3, IB, IIA, and IIB disease according to the TNM classification of malignant tumors, eighth edition, respectively [3]; these rates indicate improvements compared with previous reports [3-5]. Although lobectomy has long been the standard surgical procedure for early-stage lung cancer [6], the efficacy of sublobar resection, such as segmentectomy or partial lung resection, has been explored by various investigators. The 1995 Lung Cancer Study Group randomized clinical trial of limited resection for lung cancer showed a three-fold increase in the local recurrence rate and a higher mortality rate in the sublobar resection group compared with the lobectomy group [7]. As a result, sublobar resection came to be reserved for patients who cannot tolerate lobectomy, e.g., those with low lung function or other comorbidities.
Around the same era, carbon ion radiotherapy (CIRT) was proposed as an alternative to lung resection for lung cancer patients who were deemed inappropriate for, or refused to undergo, surgery. The first clinical trials of CIRT for non-small cell lung cancer (NSCLC) were initiated at the National Institutes for Quantum Science and Technology QST hospital in June 1994 [8][9][10][11][12][13][14][15][16]. For peripheral stage I lung cancer, the number of radiation fractions and treatment period were reduced from 18 fractions over 6 weeks to 9 fractions over 3 weeks, and then further to 4 fractions over 1 week, respectively, while maintaining safety and efficacy. Based on the results of those clinical trials, a phase I/II study (protocol 0201) was performed in which a dose escalation method was used to determine the optimal dose over a 9-years period, from April 2003 to February 2012 [17]. The initial treatment dose was 28 Gy (relative biological effectiveness [RBE]) administered in a single fraction using respiratory-gated and four-portal oblique irradiation directions, with the total dose escalated to a maximum of 50 Gy (RBE) at increments of 2.0 Gy (RBE). In 123 cases of clinical stage T1N0M0 NSCLC, the 3-years local control rates after irradiation with 28-34 Gy (RBE), 36-42 Gy (RBE), and 44-50 Gy (RBE) were 80.7%, 88.0%, and 90.8%, respectively. Accordingly, we concluded that single-fraction CIRT for clinical stage T1N0M0 NSCLC obtained excellent results, comparable with those of previous fractionated regimens. Therefore, in this cohort, 14 NSCLC cases determined to be GGO-dominant with a GGO diameter to maximum tumor diameter ratio ≥ 50%, according to high-resolution computed tomography (CT), were selected and included in the CIRT group. The Japan Clinical Oncology Group (JCOG) reported in the JCOG0201 trial that among NSCLC with a maximum tumor diameter ≤ 2 cm, those with extensive GGO on chest CT are pathologically noninvasive [18]. Based on those results, the JCOG0802/WJOG4507L randomized phase III clinical trial was conducted to compare lobectomy with segmentectomy for peripheral NSCLC with a maximum tumor diameter ≤ 2 cm and a maximum diameter of the largest consolidation to maximum tumor diameter ratio > 0.5 [19]. The 5-years overall survival rate was better after segmentectomy than after lobectomy at a median follow-up of 7.3 years (94.3% vs. 91.1%). We obtained significant evidence-based results suggesting that segmentectomy is an acceptable option for peripheral NSCLC with an overall tumor diameter ≤ 2 cm, including tumors with GGO. At the time of this clinical trial, the Department of General Thoracic Surgery, Chiba University Hospital began to perform more aggressive segmentectomies, which led to a more standardized technique. Therefore, 48 consecutive patients who underwent segmentectomy at Chiba University Hospital from 2008 to 2015 for peripheral GGO-dominant clinical stage IA NSCLC were included in the segmentectomy group. To our knowledge, this is the first study to compare the outcomes of patients with GGO-dominant NSCLC who were treated with CIRT versus segmentectomy.
Patients In a phase I/II study (protocol 0201) conducted at the National Institutes for Quantum Science and Technology QST hospital from April 2003 to February 2012, the optimal CIRT dose was determined using a dose escalation method [17]. The initial treatment dose was 28 Gy (RBE) administered in a single fraction using respiratory-gated and four-portal oblique irradiation directions, with the total dose escalated to a maximum of 50 Gy (RBE) in 2.0 Gy (RBE) increments. Single-fraction CIRT was applied to 123 cases of clinical stage 0/IA (TNM classification of malignant tumors, eighth edition) peripheral NSCLC, of which 14 were determined to be GGO-dominant by high-resolution CT and were included in the CIRT group of this study. We have participated in several sublobar surgery clinical trials for lung cancer [19][20][21] and performed high-quality lung sublobar surgeries, which have undergone internal and external reviews and continue to undergo quality assurance. During those clinical trials, the Department of General Thoracic Surgery, Chiba University Hospital began to perform more aggressive segmentectomies, which led to a more standardized technique. Therefore, 48 consecutive patients who underwent segmentectomy at Chiba University Hospital from 2008 to 2015 for peripheral GGO-dominant clinical stage IA NSCLC were included in the segmentectomy group of this study. The study has been approved by the institutional ethical committees of both Chibaken Saiseikai Narashino Hospital (approval number: 2019-12) and Chiba University (approval number: 3350). This study complied with the protocol and the current version of the Declaration of Helsinki. Accordingly, the medical records of all 62 patients were reviewed and analyzed retrospectively according to the approved protocol. Administration of CIRT A single carbon-ion beam treatment using the four-dimensional radiotherapy (4DRT) technique was performed for clinical stage I peripheral non-small cell lung cancer [17]. Briefly, carbon ion beams (290, 350, and 400 MeV) generated by the Heavy Ion Medical Accelerator in Chiba synchrotron were shaped three-dimensionally to fit the tumor contour. A spread-out Bragg peak (SOBP) ensured dose coverage, with the center of the SOBP as the reference point; the HIPLAN system was used for CT planning, and respiratory-gated CT images were used. A fixation device was used for patient positioning, and respiratory-gated irradiation was used to minimize tumor movement. A margin of 10 mm was taken from the gross tumor volume, including the spinous process and pleural indentation, where possible, as the clinical target volume (CTV). The internal margin (IM) corresponded to the movement of the target during gating, and the planned target volume (PTV) was the CTV plus IM [7]. The carbon ion dose was expressed in Gy (RBE), calculated by multiplying the physical dose by the relative biological effectiveness (RBE), approximately 3.0 at 0.8 cm from the distal end of the SOBP.
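The dose prescription above follows a simple conversion: the clinical dose in Gy (RBE) is the physical dose multiplied by the relative biological effectiveness, stated to be approximately 3.0 near the distal end of the SOBP. A minimal sketch of that arithmetic is given below; the example physical dose is hypothetical and only illustrates the conversion, not any patient's prescription.

```python
def rbe_weighted_dose(physical_dose_gy: float, rbe: float = 3.0) -> float:
    """Convert a physical carbon-ion dose (Gy) into an RBE-weighted dose, Gy (RBE).

    The clinical dose is the physical dose multiplied by the relative biological
    effectiveness (RBE), approximately 3.0 at 0.8 cm from the distal end of the
    spread-out Bragg peak, as stated in the treatment-planning description above.
    """
    return physical_dose_gy * rbe


# Hypothetical example: a 14 Gy physical dose with RBE = 3.0 gives 42 Gy (RBE),
# which falls within the 36-42 Gy (RBE) dose tier mentioned in the text.
print(rbe_weighted_dose(14.0))  # 42.0
```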
CIRT was performed within 1 week after treatment planning. The 14 patients were prescribed doses of 32.0-46.0 GyE in 1 fraction (Protocol #0201) (Table 1). Toxicity to organs such as the lung parenchyma, lung hilum, parietal pleura, and skin was assessed according to the Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer criteria [22]. Given the increased risk of radiation-induced pneumonitis in pulmonary carbon-ion radiotherapy, caution should be exercised when employing carbon-ion radiotherapy for non-small cell lung cancer without utilizing 4DRT, which is our main concept. Follow up The first follow-up examination was performed 4 weeks after CIRT and included a physical examination, blood chemistry analysis, and CT. Subsequent follow-up was performed every 3-4 months. If recurrence was … Statistical analysis Statistical analysis was performed using the StatMate V (version 5.01) software package (Nihon 3B Scientific Inc., Niigata, Japan), abiding by the statistical and data reporting guidelines [24]. Means, standard deviations, medians, and ranges were calculated for continuous variables, and percentages for categorical variables at baseline. The equal-variance two-sample t-test and chi-square test were used to compare patient demographic and clinical characteristics at baseline between the two treatment groups (CIRT versus surgery). All continuous variables, except the median follow-up period after treatment, are expressed as means ± standard deviation. Spirometric data, expressed as continuous variables, were evaluated using the paired t-test. Survival analysis was performed using the Kaplan-Meier method [25]. Survival probabilities were compared by the log-rank test. P < 0.05 was considered statistically significant. Reasons for undergoing CIRT in the CIRT group and CIRT details and results The decision to use CIRT rather than surgery in the patients in the CIRT group was determined as described below. The decision on whether or not a patient was tolerant to surgery was made by our cancer board, consisting of thoracic surgeons, respiratory medicine physicians, and radiotherapy specialists. Case #1 in the CIRT group (CIRT#1) was being treated for a second primary lung cancer in the S9 segment of the left lower lobe after bi-lobectomy of the middle and lower lobes of the right lung, and the patient was determined to be intolerant to additional surgery due to poor pulmonary function. Similarly, CIRT#5 was being treated for a second primary lung cancer in S1 + 2 of the left upper lobe after right upper lobectomy, and the patient was considered to be intolerant to surgery due to poor pulmonary function. CIRT#4 was judged to be inoperable due to advanced age and previous lung cancer surgery. CIRT#7 was judged to be operable; however, the patient had end-stage renal failure and refused to undergo surgery. Of the 14 cases, 9, including CIRT#7, were judged to be operable, but the patients refused surgery, and thus CIRT was selected. The tumor diameter, consolidation diameter/tumor diameter ratio (C/T ratio), CIRT dose, treatment-related complications of grade ≥ 2 [22], post-treatment observation period, and post-treatment results for each patient in the CIRT group are shown in Table 2. After CIRT, one patient died of lung cancer due to local recurrence and distant metastasis after 29 months of treatment. On the other hand, 9 of 14 patients died of diseases other than the targeted lung cancer at 80 ± 39 months, and details of the cause of death are provided in Table 2.
Four patients did not have recurrence, but new GGOs were detected in one patient by follow-up CT. Reasons for undergoing segmentectomy in the segmentectomy group The reasons for undergoing segmentectomy in the segmentectomy group were as follows: (1) maximum tumor diameter < 3 cm, (2) tumor localization in the periphery (outer one-third field of the lungs), and (3) GGO-dominant cancer (C/T ratio < 0.5). Segmentectomy was indicated in 43 of 48 cases based on these criteria. On the other hand, 5 of 48 patients underwent segmental resection for other reasons such as comorbidities and/or inability to tolerate lobectomy; the details of these selected cases are shown in Table 3. Discussion Radiotherapy is the primary treatment for medically inoperable patients with early-stage NSCLC, and CIRT is a promising treatment for medically inoperable patients with localized NSCLC because its excellent dose localization allows intensive irradiation of the target while sparing surrounding healthy tissue. Grutters et al. performed a meta-analysis to compare photons, protons, and carbon ions in radiotherapy for NSCLC. They reported adjusted pooled estimates of 2- and 5-years overall survival rates after CIRT for stage I inoperable NSCLC of 74% and 42%, respectively, significantly higher than those for conventional radiotherapy [26]. CIRT utilizing 4DRT appears to substantially improve the prognosis of early-stage lung cancer compared with conventional radiotherapy. Accordingly, we have reported that CIRT is superior to SBRT and proton beam therapy in therapeutic efficacy, with fewer adverse events, because CIRT offers better dose distribution and less damage to the normal lung [27]. The population of the CIRT group in the present study was obtained from CIRT protocol #0201, a single-fraction dose-escalation clinical study that started in April 2003. In that trial, the total dose was increased from 28 to 50 Gy (RBE). The resulting 3-year local control rates were 80.7%, 88.0%, and 90.8% after treatment with 28-34 Gy (RBE), 36-42 Gy (RBE), and 44-50 Gy (RBE) for stage T1 NSCLC, respectively. Of these doses, 44-50 Gy (RBE) achieved the best results, with no significant adverse reactions [17]. The only case of recurrence in the CIRT group (CIRT#1) that occurred in this study was likely due to the low irradiation dose of 32 Gy (RBE), the lowest of the 14 cases.
Fig. 1 Comparison of overall survival after segmentectomy versus CIRT for lung cancer. Kaplan-Meier estimates of the survival probability at 2.5, 5, and 7.5 years after segmentectomy in 48 patients were 98%, 96%, and 96%, respectively (blue line), and those after CIRT in 14 patients were 93%, 86%, and 64%, respectively (red line). The log-rank test showed inferior survival in the CIRT group compared with the segmentectomy group (P = 0.000860)
Fig. 2 Comparison of disease-specific survival after segmentectomy versus CIRT for lung cancer. Kaplan-Meier estimates of the survival probability at 2.5, 5, and 7.5 years after segmentectomy in 48 patients were 98%, 98%, and 98%, respectively (blue line), and those after CIRT in 14 patients were 93%, 93%, and 93%, respectively (red line). The log-rank test showed non-inferior survival in the CIRT group compared with the segmentectomy group (P = 0.368)
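The overall and disease-specific survival comparisons summarised in Figures 1 and 2 rest on Kaplan-Meier estimation and the log-rank test. The sketch below shows how such a two-group comparison could be run with the Python lifelines package; the records, column names and follow-up values are illustrative assumptions and do not reproduce the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative records only: follow-up in months, event = 1 if death was observed.
df = pd.DataFrame({
    "group":  ["CIRT"] * 4 + ["segmentectomy"] * 4,
    "months": [29, 80, 95, 120, 60, 72, 90, 110],
    "event":  [1, 1, 0, 1, 0, 0, 1, 0],
})

# Kaplan-Meier estimate per treatment group.
kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["event"], label=name)
    print(name, "estimated 5-year survival:", round(float(kmf.predict(60.0)), 2))

# Log-rank test comparing the two groups, as used for Figures 1 and 2.
cirt, seg = df[df["group"] == "CIRT"], df[df["group"] == "segmentectomy"]
result = logrank_test(cirt["months"], seg["months"],
                      event_observed_A=cirt["event"], event_observed_B=seg["event"])
print("log-rank p-value:", result.p_value)
```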
An analysis conducted using the National Cancer Database in the United States compared the prognoses of 10,032 cases of partial resection and 4296 cases of stereotactic body radiotherapy (SBRT) for lung cancer and revealed comparable survival between partial resection with positive margins and SBRT [28]. However, complete lung resection had a lower risk of death compared with SBRT [28]. When we perform segmentectomy, the surgical margins are determined according to the report of Sawabata et al.; i.e., a margin of at least the tumor diameter length, or 2 cm, is necessary to prevent marginal recurrence [29]. On the other hand, CIRT is indicated for early-stage lung cancer, especially in patients deemed operable but who refuse surgery, and a sufficient irradiation dose and irradiation coverage should be applied. The JCOG0802/WJOG4507L trial conducted by Saji et al. revealed surprising results. In particular, overall survival was better in the segmentectomy group than in the lobectomy group despite the high incidence of local recurrence. This was thought to be due to the difficulty in achieving a cure for second cancers after lobectomy [19]. In the CIRT group in this study, one of the patients (CIRT#1) was judged to be intolerant of surgery due to low pulmonary function after bi-lobectomy for lung cancer and was consequently assigned to CIRT treatment. If CIRT provides adequate local control of a second lung cancer in patients who have already undergone lobectomy for lung cancer, it may be a curative treatment option. In addition, completion lobectomy may be required for local recurrence following segmentectomy. Because of the high degree of adhesion and extraordinary difficulty in dissecting pulmonary arteries, completion pneumonectomy must be selected in some cases. In our surgery group, local recurrence occurred in case #38 in the residual right upper lobe at 7 years and 1 month after right S3 segmentectomy, and right completion lobectomy was required. Surgery for local recurrence after segmentectomy is very difficult, but if CIRT is effective in such cases, it may become an alternative option. In conclusion, the CIRT group had a significantly older age, more men, lower forced vital capacity in spirometry, and a larger maximum tumor size, but no significant difference in 5-years disease-specific survival compared with the segmentectomy group, which predominantly comprised patients meeting the aggressive segmentectomy criterion. Compared with segmentectomy, CIRT may be an alternative option for patients with early GGO-dominant NSCLC who are poor candidates for, or who refuse, surgery.
Table 1 Demographic characteristics of patients in the CIRT and segmentectomy groups before treatment (CIRT group, n = 14; segmentectomy group, n = 48; P value)
Table 2 Reasons for undergoing CIRT in the CIRT group, and CIRT details and results
Table 3 Reasons for undergoing segmentectomy as a passive limited resection, and the outcomes. AAA Abdominal Aortic Aneurysm, AR Alive without Recurrence, CCI Cervical Cord Injury, CUP Cancer of Unknown Primary, MG Myasthenia Gravis, LC Laryngeal Cancer, LPF Low Pulmonary Function, RF Renal Failure, RP Rheumatic Polymyositis, TA Takayasu's Arteritis
Table 4 A decline in spirometric parameters after CIRT. CIRT Carbon Ion Radiotherapy, DLCO Diffusing Capacity of Carbon Monoxide, FEV1 Forced Expiratory Volume in One Second, FVC Forced Vital Capacity
2023-12-20T05:05:50.938Z
2023-12-18T00:00:00.000
{ "year": 2023, "sha1": "ac3101959ec6d29375c1d7bec02e0f05100cafe5", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ac3101959ec6d29375c1d7bec02e0f05100cafe5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73542001
pes2o/s2orc
v3-fos-license
Tyre/Road Noise Modelling for Concrete Surface Material The study describes the result of a statistical model to estimate tyre/road noise. A total of 1635 tyre/road noise measurements were collected, using a close-proximity vehicle, from 112 trials along 3 selected road sections on concrete surface material pavements in 2012, to develop such a model in Hong Kong. Five parameters (vehicle speed, absolute acceleration, road temperature, road surface age and road gradient) were considered. An interaction effect between vehicle speed and absolute acceleration was found by principal factor analysis. The interaction effects between vehicle speed and absolute acceleration are found to be important for noise levels at frequencies below 1250 Hz, which matches similar tyre/road noise measurements in Hong Kong. INTRODUCTION Earlier studies have summarized the factors that affect tyre/road noise. They include speed, road surface, tyre loading and temperature. The influence on tyre/road noise ranged from 3-25 dB. Regarding the effects of road and driving conditions on road noise, Braun et al. (2013) showed that the tyre/road noise depended on the state of driving. Accelerating conditions exhibited higher intensity levels than the constant-speed and coast-down conditions. Jabben (2011) observed that there was a negative relationship between noise level and ambient air temperature, i.e., the measured noise level increases with decreasing ambient air temperature. This study tries to apply statistical techniques to relate tyre/road noise with all these influencing factors. An earlier study (2015) developed a statistical model for Hong Kong tyre/road noise. The corresponding statistical model can quantify all the possible factors and their interaction effects on tyre/road noise. A similar two-stage modelling approach was used in this paper with five driving factors. The development of our tyre/road noise model starts from the principal component analysis of five urban driving and road surface conditions (road gradient, surface age, road temperature, speed and absolute acceleration). By starting the model development from statistical principal component analysis, the aim was to identify the most important variables and to quantify the percentage contribution of each component. Furthermore, the result of the principal component analysis can be used to indicate interaction effects between variables. The interaction effect identified between the corresponding factors was then used in developing the statistical tyre/road noise model. Statistical stepwise regression is applied to develop our statistical model. The statistical regression model can be shown in the following form:

SPL = Σi αi·Xi + Σi,j βij·Xi·Xj   (1)

where αi is the coefficient of parameter Xi, Xi and Xj are the parameters that have significant impacts on tyre/road noise, and βij is the coefficient of the interaction effect between parameters Xi and Xj. Data collection: The CPX method of measuring tyre/road noise as specified in ISO/DIS 11819-2 was employed. The CPX measurement equipment was developed according to the ISO standard, using the two mandatory microphones: one positioned at 200 mm from the tyre side wall and the other at 100 mm above the surface. The authors (Mak et al., 2011) applied a microwave speed sensor on the CPX vehicle to measure instantaneous speed in parallel with the tyre/road noise measurement and recorded a very reliable result.
A similar setting was adopted in this survey; a Microstar speed sensor from Corrsys-Datron was used for recording the continuous vehicle speed. A concrete surface was selected for study in this survey. The survey involves driving the CPX vehicle repeatedly along these selected road sections. A total of 112 segments along the 3 selected road sections were made in urban areas with a pair of SRTT; the pavement surface age, road gradient and road temperature were also recorded for analysis. RESULTS AND DISCUSSION A total of 1635 tyre/road noise measurements were obtained along the 3 selected road sections. The descriptive statistics of each variable are shown in Table 1. Model development: The principal component analysis was carried out on the data, and the eigenvalues and percentage of variance represented by the corresponding components are shown in Table 2. The five components are road gradient, surface age, road surface temperature, driving speed and acceleration. The results show that two components had eigenvalues greater than one, so these two components were selected as the principal factors. Varimax rotation with Kaiser normalization was applied to extract the principal components. The corresponding components are shown in Table 3. Two components were extracted in the principal component analysis. The first component was dominated by the factor vehicle speed, which was also the only significant coefficient in the first component. This means that 35.6% of the overall variation in the data can be represented by the factor of vehicle speed. The second component had two significant coefficients, which were vehicle speed and absolute acceleration. The second component represented 31.8% of the overall variation. Since two factors loaded on the same component, this indicated an interaction effect between these two factors (vehicle speed and absolute acceleration). Furthermore, after the principal component analysis, 67.4% of the overall variation was represented by the two new principal components. As an interaction factor was found to be significant, the effect of the interaction between vehicle speed and absolute acceleration on tyre/road noise was further investigated by a statistical regression model. Statistical stepwise multiple regression analysis was applied to the factors for the overall SPL and 5 frequency bands (500, 800, 1000, 1250 and 2000 Hz), which are described in Table 4. Statistical models of each coefficient were developed for the different types of pavement and tyres. Equation 1 can then be rewritten as follows:

SPL = α1·v + α2·Age + α3·T + α4·a + α5·g + β·(v·a)   (2)

where α1 to α5 and β are coefficients depending on the types of road surfaces and tyres; v = vehicle speed in m·s⁻¹; Age = age of the road surface in months; T = road temperature in °C; a = absolute acceleration in m·s⁻²; and g = gradient. CONCLUSION The interaction effect was shown to be significant in the authors' previous research on polymer-modified friction course surfaces. A similar methodology was applied to a concrete surface under urban driving conditions. The same interaction effect between vehicle speed and absolute acceleration was found for the concrete surface as well. As tyre/road noise generation is too complicated to express in terms of a single-factor statistical model, considering the interaction effects of two or more factors appears to be the way forward in statistical tyre/road noise modelling. By introducing such a simple multivariate principal component analysis, the interaction factor was easily singled out.
This will be very effective in improving urban statistical tyre/road noise modelling.
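To make the two-stage procedure described above concrete, the sketch below runs a principal component analysis on the five standardised predictors and then fits a regression that includes the vehicle speed × absolute acceleration interaction. It is only an illustration of the workflow under assumed column names and synthetic data; the original study additionally used varimax rotation with Kaiser normalization and stepwise selection, which are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the CPX data set: five predictors and an overall
# sound pressure level (SPL); the real study used 1635 measurements.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "speed": rng.uniform(20, 70, n),        # vehicle speed (assumed km/h)
    "accel": rng.uniform(0.0, 1.5, n),      # absolute acceleration, m/s^2
    "road_temp": rng.uniform(20, 45, n),    # road temperature, degrees C
    "surface_age": rng.uniform(1, 120, n),  # surface age, months
    "gradient": rng.uniform(-8, 8, n),      # road gradient, %
})
df["spl"] = (70 + 0.2 * df["speed"] + 2.0 * df["accel"]
             + 0.03 * df["speed"] * df["accel"] + rng.normal(0, 1, n))

# Stage 1: PCA on the standardised predictors; a component loading strongly on
# both speed and acceleration is read as a hint of an interaction effect.
predictors = ["speed", "accel", "road_temp", "surface_age", "gradient"]
Z = StandardScaler().fit_transform(df[predictors])
pca = PCA().fit(Z)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
print("loadings of the first two components:\n", pca.components_[:2].round(2))

# Stage 2: regression including the speed x acceleration interaction term.
model = smf.ols("spl ~ speed + accel + road_temp + surface_age + gradient"
                " + speed:accel", data=df).fit()
print(model.params.round(3))
```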
2019-04-15T13:03:48.588Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "5721c040a7c5e4b30e7b9dc70fc8fce920d6fc53", "oa_license": null, "oa_url": "http://docsdrive.com/pdfs/std/std/2016/51-53.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e1c7d3036c55b0fcaac9de9909b2196e548ef567", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
112205481
pes2o/s2orc
v3-fos-license
Italian-Hungarian Relationships over 100 Years of Monument Preservation (1849-1949) The beginnings of independent Hungarian monument preservation go back to the first half of the 19th century. The undertaking to form a legal body for the registration and protection of monuments began in the 1840s, with the National Monument Committee finally established in 1881. The paper analyses the most important Hungarian restoration works in the context of Italian monument restoration theories and practices of the investigated timespan; it also highlights the determining role of the Hungarian Academy in Rome through its patronage programmes for architects. had to be ensured from the start. A huge flying buttress was built on the north side of the eastern ridge, in front of the apse of the side-aisle. The brick addition was finished off with a wall crown made of stone; the centre of the ridge wall was protected from the damaging effect of penetrating damp with a similar crown. He had the stone finish of the remaining southern hatch wall demolished and replaced with bricks. The finish was once more carried out with stone elements. The most important interventions were carried out by the addition of the western facade portal. The portal elements on the right side were made of bricks, and followed the form of the wall structure. Möller had the fragments, which were in a poor state or already destroyed, recarved and replaced with stone or artificial stone, especially if the original form was clearly identifiable. Möller's activity ensured the preservation of a similarly important monument in Pécs, namely the restoration and presentation of the early Christian necropolis in the vicinity of the Cathedral. The principles of the Prima Carta del Restauro charter as applied to the key restorations of the interwar period. Camillo Boito, the professor of the Milanese Brera Academy, defined his views on monument preservation at the III Rome Congress of Architects and Engineers in 1883. The six clauses of Italian monument preservation are usually referred to as the Prima Carta del Restauro. A conference was held in 1912 to honour him, to which Gustavo Giovannoni produced a newly compiled volume. The 'La tutela delle Opere d'Arte in Italia' recognises the importance of all aspects of Boito's work. He identified himself with the previously much-disputed principle, according to which, of Viollet-le-Duc's views on monument preservation principles, only the theses regarding research and documentation should be applied; otherwise, new approaches are to be found. In addition to the importance of established historical knowledge, Giovannoni also treated the significance of the technical aspects of monument preservation accordingly. Italy had its great period of monument preservation in the 20s and 30s; large classical restorations were being carried out during this period throughout the country. They included the Forum region in Pompeii, the Temple of Jupiter, a complete set of apartment houses, such as the house of the Vettius family, the Menandros house, the Casa del Fauno, the peristyle of the Casa d'Argo in Herculaneum and the Cave of the Sibyl. The list of locations could also include Sabrath, Leptis Magna and Pula, where Italian architects carried out their activity. Giovannoni was one of the initial authors of the first Italian law on monument preservation (Carta del restauro italiana -1931). He also undertook an important role later when drafting the document that became known as the Athens Charter. 
The need for restoration to be carried out based only on objective and scientific aspects is very clearly defined in both texts, emphasising the historical and documentative aspects of the monument. With the cultural boom following the First World War, in Hungary, it was artists, together with art historians and architects who were able study the great periods of Italian culture at the Roman Hungarian Academy, established within the framework of the patronage programmes organised by Minister of Culture Kunó Klebelsberg. The Academy had applied a very conscious theoretical and ideological set of values ever since its creation. Tibor Gerevich's view, which defined the theoretical character of the Roman school and rebuffed extremes, can be linked to the values that recreated the Grand Roman style and related to the Italian viewpoint of the 20s and 30s. The change became logical and natural when Gerevich took over as President of the National Monument Committee after returning from Rome in January 1934. The five years' experience in Rome also helped the transformation of the body. Two excellent experts aided his work, István Genthon -who became one of the most determining personalities of national monument inventorying, as a former Roman scholar and art historian -and architect Kálmán Lux, Möller's disciple. Based on this, it is clear that the most important monument restorations carried out in the interwar period could only be executed based on principles generally accepted in Europe, as the leaders of the National Monument Committee were related to Italy and the Italian practice of monument preservation in their activities or professional background. Presenting the Royal Seat of Visegrád and Esztergom When preparing for the 900th anniversary of the death of Hungary's King Saint Stephen, the National Monument Committee decided to restore and present the medieval royal seats as monuments. It was Imre Henszlmann who began researching the Visegrád Citadel at the end of the 19th century. Lightning struck and destroyed the presentation Frigyes Schulek -the Purist restorer of Matthias Church in Buda -made of the Solomon Tower; the task of its reconstruction fell to János Schulek, architect of the National Monument Committee. The palace, initially built by Charles I of the House of Anjou, was extended first by Sigismund at the turn of the 14th century, then by Matthias in the last quarter of the 15th century; becoming a significant creation of late medieval and early Renaissance Europe. Exploration started in 1932 under the supervision of János Schulek. It was suspended for a short period in 1944 due to the war, then continued after the war until Schulek's death (1948); the restoration of the monument was only completed at the beginning of the 50s. In 1934, excavations revealed the ornamental palace built by King Matthias. Fragments of the famous ornamental fountain -also known from literary works -appeared in the courtyard. Uncovered on the mountain side of the courtyard, parts of the bases of the cloister vault structure and several rib parts still remained as imprints on the walls. The theoretical reconstruction of the courtyard from Matthias's age was set out by Kálmán Lux, and by 1953, the restoration of one of the cloister sides was complete together with the reconstruction of the ornamental fountain in its original place, being the work of sculptor Ernő Szakál. The walls, restored without a roof, were seriously damaged over time; János Sedlmayr restored the remaining parts in the 70s. 
When preparing for the millennial festivities, the cultural leadership decided to enlarge the royal seat. The last stage of restoration ended in the recent past, based on Gergely Buzás's research and according to Zoltán Deák's plans. The aim of the restoration was the reconstruction of the historical sites near the ornamental fountain based on the archaeological data and the available research results. The restorations and partial reconstructions of the 50s were made of bricks to clearly differentiate the original stone parts of the palace from its additions. The latest presentations were made of stone while the profiled parts are artificial stone. Stone surfaces were whitewashed -maybe too precisely -in order to differentiate them. Esztergom was explored and restored between 1934 and 1938 under the leadership of Kálmán Lux. Following the circumspect archaeological exploration that carefully considered the fine details, and based on Boito's principles, the remaining parts of the royal palace were presented in a way that respected architectural periods. Only the most necessary of material changes were implemented, together with additions justified for authenticity and illustration. The explored palace of the Árpád age, and especially the castle chapel, is considered an important example of 15th to 16 th -century Hungarian architecture. The original fragments were preserved during the restoration. The reconstruction of the historical sites -if research provided a sufficient base -was carried out with bricks thinner than the medieval ones. Almost all the vault ribs of the royal chapel concealing magnificent details were revealed. Thus, the vaults can be perceived partly as an anastylosis and partly as reconstruction in terms of vault caps. Parts of the mural paintings remain; these were presented to the public after preservation. As the loadbearing capacity of the structural elements was doubtful, concealed reinforced concrete elements were also used during the restoration. Concrete and reinforced concrete was widely used in the monument preservation activities of the 30s, as it was easy to use and shape, and could be readily differentiated from the original. Even the Athens Charter, accepted in 1931, encouraged the use of concrete as one of the modern materials to be used in monument preservation. Unfortunately, decades had to pass before the harmful consequences of the chemical processes between concrete and the original stonewall were recognised; consequently, the use of concrete in monument preservation was discontinued from the 1980s. When presenting the Royal Seat of Esztergom, it is important to mention the Bakócz-Chapel. Cardinal; Archbishop Tamás Bakócz commissioned Italian Renaissance builders to construct and annex a funeral chapel to the medieval St Adalbert Cathedral in 1506. The little chapel, made of red marble, miraculously survived the destruction of the Turkish age. The new, Classicist cathedral was started in the second half of the 18th century, a creation summarizing the best of the Renaissance masters. The castle was enlarged with a new wing within the framework of the millennial restoration programme. However, the connection between the new segments and the original mass was not solved harmoniously. The Roman ruins of Szombathely and the Árpádage ruins of Székesfehérvár Szombathely is an important scene of Hungarian monument preservation. 
János Szily, elected bishop in 1777, studied in Rome when Piranesi prepared his ancient engravings, and Winckelmann and Goethe lived in the city. Szily also discovered antiquity in Szombathely as the new Bishop's Palace was built on the former Roman settlement. He wished to explore scientifically the history of Savaria (Roman Szombathely), and Menyhért Hefele's building design for the palace already included a plan for the placement of the Roman carved stones. The Sala Terrena was built in a way that allowed the exhibition of the Roman stone ruins appearing from the construction site. István Schoenviesner wrote his work on the ancient predecessor of Szombathely relying on ancient Roman authors. Schoenviesner, the first in Hungary to erect a protective building over the remains of the Thermae Maiores unearthed on the grounds of the Aquincum Castrum, had the misfortune to be mandated to write his book only after the foundations of the Bishop's Palace had already been laid. Although he had the volume completed before the construction of the palace ended, he was unable to see the remains of Savaria revealed during the excavation. The Roman ruins revealed in the garden of the Bishop's Palace remained untouched during the 19th and 20th centuries. It was István Járdányi Paulovits who carried on the systematic excavations between 1937 and 1941. In order to protect the mosaics of the building he had found and identified as the imperial assembly hall, Kálmán Lux designed a protective building with an unplastered brick facade and a shallow roof evoking Tuscan architecture. It blends harmoniously into the view of the complex, presenting the preserved ruins in a park-like environment. The borders of the original space do not coincide with the area covered by the protective building, which thus could not evoke the atmosphere of a Roman interior. The remains of the Székesfehérvár basilica from Saint Stephen's age were first uncovered in the second half of the 19th century, when Imre Henszlmann cleared the ground plan of the church discovered in the garden of the Bishop's Palace. The church was unearthed once more between 1936 and 1938, as a preparation for the festivities related to the 900th anniversary of the death of Hungary's first king. The predecessor of the ruin garden, visible today was created at that time . The remains of the wall were not preserved; however, the church interior was restructured. The early medieval remains were only highlighted in 1972, to the detriment of the annex built by King Matthias. 1 The current appearance is the result of work carried out in the 80s. The light structured cover of the aisle section behind the Western entrance was constructed in preparation for the millennial festivities. The southern tower was also slightly raised. The related programme designated one of the neighbouring houses to serve as storage for stonework finds. In the current urban structure, the whole layout of the basilica is not fully visible. The bridge-like pedestrian passage built over the southern side-aisle covers the southern wall of the basilica. The protective roof exerts pressure precisely on this wall section with the site of the wall and buttress indicated on the floor. The protective roof leans on the row of pillars separating the northern side aisle from the main aisle. 2 At the end of the 1930s, storage for the unearthed valuable stone remains was built according to the plans of Kálmán Lux and Géza Lux, together with a mausoleum for Saint Stephen's sarcophagus. 
The new buildings integrated the former Episcopal garden into the city web as if it were a gate. The garden fence on the ruined side, along the east-west axis, was used to place less valuable stone materials under a protective roof with a half saddle. The arcade, with an unplastered brick façade and covered with a shallow roof, was placed behind the basilica apse. The best examples of medieval stoneware and stone remains uncovered during the excavations were displayed in a high-quality storage exhibit. The sarcophagus, known as Saint Stephen's resting place, found immediately next to the gate structure, is an elaborately carved stone chest, larger than the previous one. The interior of the small mausoleum is decorated with Vilmos Aba-Novák's fresco evoking emotions and representing the significant figures of Hungarian history. The historical nucleus of the new Óbuda city centre, the starting point of the Via Antiqua In Óbuda, at the crossroads of Nagyszombat and Lajos street, sat a group of single storey residential buildings on 16 plots covering an elliptical area known as Királyhegy (King's Mount). The Roman mural remains in their cellars were well known, but it was only in 1925 that the second amphitheatre of Aquincum was discovered here. The excavations, begun in 1935, quickly cleared the ground plan of the arena theatre, the largest of its kind in the provinces north of the Alps, with a capacity of 13,000 people. Its arena was also well suited for the organisation of naval battles (naumachia). The restoration of the Óbuda military amphitheatre became László Gerő's task, who as a scholar of the Collegium Hungaricum of Rome had personal knowledge of the restoration of the Forum Romanum. In accordance with Gerő's decision, walls in a good state were preserved. Where the original walls had already been removed, only a location inferred from the reconstruction of the base ditches and ground plan was possible, so Gerő planned an addition of didactical intent to these parts of the site. During the restoration of the amphitheatre's varied structure, an annular-vault substructure and a tribune placed on an earth mound were also revealed. The western quarter of the substructure was reconstructed to approximately half its original height, with the restoration works continuing during the war between 1940 and 1943. A tender for the restructuring of Óbuda was announced in 1937. Of the competing works submitted, Viktor Olgyay, also from the Roman school (having spent a year in the Palazzo Falconieri in 1936), and his twin brother designed Óbuda to link the two Aquincum amphitheatres with a wide boulevard. The existing narrow streets full of small single-storey houses were fated for demolition. Their idea was to have the northern end at the former Roman city gate of the civic town and place the Aquincum ruins there. The nearly three-kilometre axis would have ended at Flórián Square with a cross-axis between Bécsi Street and Árpád Bridge. The end of the road would have been the military amphitheatre, which would have closed off the route. Huge residential buildings constructed in the spirit of rationalist architecture would have lined the road up to Flórián square. The Olgyay brothers wished to provide space for the presentation of the Roman ruins on both sides of the road leading from the square to the ruin garden, as well as along Mozaik Street bordered by an industrial plant. 
Epilogue In the post-war years, following the mass restoration necessary due to the destruction left by the conflagration, Hungarian monument preservation continued in the spirit defined by Tibor Gerevich. Boito's principles defined the starting point of the most important task of the age, the restoration of Buda Castle, which took several decades. They also set the standard when preserving smaller churches, urban flats, castles and palaces for future generations.
2019-04-14T13:05:12.581Z
2016-03-10T00:00:00.000
{ "year": 2016, "sha1": "22fcce8cae72f24f4248f9b01c6a07e400e72055", "oa_license": "CCBY", "oa_url": "https://pp.bme.hu/ar/article/download/8245/7003", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "89b50987c812b5b337a3143c4e2a205f1889db17", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Engineering" ] }
231912930
pes2o/s2orc
v3-fos-license
Effects of Pre-Activation with Variable Intra-Repetition Resistance on Throwing Velocity in Female Handball Players: a Methodological Proposal Abstract The purpose of this study was to investigate the acute effect of pre-activation with Variable Intra-Repetition Resistance and isometry on the overhead throwing velocity in handball players. Fourteen female handball players took part in the study (age: 21.2 ± 2.7 years, experience: 10.9 ± 3.5 years). For Post-Activation Potentiation, two pre-activation methods were used: (I) Variable Intra-Repetition Resistance: 1 x 5 maximum repetitions at an initial velocity of 0.6 m·s-1 and a final velocity of 0.9 m·s-1; (II) Isometry: 1 x 5 s of maximum voluntary isometric contraction. Both methods were "standing unilateral bench presses" with the dominant arm, using a functional electromechanical dynamometer. The variable analysed was the mean of the three overhead throws. Ball velocity was measured with a radar (Stalker ATS). The statistical analysis was performed using ANOVA with repeated measures. No significant differences were found for either method (variable resistance intra-repetition: p = 0.194, isometry: p = 0.596). Regarding the individual responses, the analysis showed that 86% of the sample increased throwing velocity with the variable resistance intra-repetition method, while 93% of the sample increased throwing velocity with the isometric method. Both the variable intra-repetition resistance and isometric methods show improvements in ball velocity in female handball players. However, the authors recommend checking individual responses, since the results obtained were influenced by the short rest interval between the pre-activation and the experimental sets. Introduction In sports such as baseball, handball, javelin throwing and tennis, the velocity of overhead throwing is fundamental to success (Ertugrul et al., 2012;Van den Tillaar, 2004). In this context, Szymanski (2013) suggests that throwing technique must be taught from childhood and that, regardless of the loads used, the velocity of execution must always be high (Szymanski, 2013). In order to increase the performance of overhead throws, various strength and power training methodologies have been implemented (Zaras et al., 2014), which include sessions with dynamic exercises (Esformes et al., 2011) and plyometric training (Ertugrul et al., 2012). Post-activation potentiation (PAP) (Sale, 2004) allows for an increase in acute muscle power peaks after a contraction with submaximum intensities (Seitz and Haff, 2016;Tillin and Bishop, 2009). Post-activation potentiation has been attributed mainly to two sources. The first of these is the phosphorylation of the light regulatory chains during the previous contraction. This phosphorylation alters the structure of the myosin fibres present in the muscle, modifying the state of the crossed bridge of actin-myosin and producing greater sensitivity to the release of Ca 2+ to the sarcoplasmic reticulum (Ertugrul et al., 2012;Miyamoto et al., 2011). The other explanation is neurological: it has been observed that motor neurons have an increase in excitability during the contraction produced by PAP. Therefore, the subsequent recruitment of the motor units in that muscle sharply increases power levels (Esformes et al., 2011). The effectiveness of exercise in producing PAP depends on the relationship between fatigue and potentiation produced (Kobal et al., 2019;Seitz and Haff, 2016;Wilson et al., 2013). 
This relationship has been studied from multiple perspectives, among them the number of muscular contractions, the intensity of these contractions , the rest between the activity and the next potentiation (Crewther et al., 2011), the angle of the joints (Miyamoto et al., 2010) and the type of contraction (Ertugrul et al., 2012). Although these methodologies do not report a decrease in the manifestation of strength, and most of the results support a significant improvement in these variables, there are controversies about the size of the effect in different populations and sports disciplines (Seitz and Haff, 2016). In turn, some investigations have studied different types of contractions considering the efficacy of PAP production, comparing isometric (Esformes et al., 2011), concentric , eccentric (Esformes et al., 2011;Golas et al., 2016) and isokinetic (Miyamoto et al., 2011) preactivation methods. These studies suggest that isometric exercises allow for optimal and more lasting results in the increase of muscle strength and power associated with PAP (Seitz and Haff, 2016;Tillin and Bishop, 2009). However, most of the isometric methods used to trigger PAP have only been tested in men, in the lower extremities and with non-specific movement patterns Kabešová et al., 2019;Lim and Kong, 2013;Miyamoto et al., 2011;Sanchez-Sanchez et al., 2018). In contrast, the quantification of strength and power in gestural velocity, both in men and women, has been a problem for both researchers and coaches (Chamorro et al., 2017). While studying this, Chelly et al. (2014) concluded that after eight weeks of plyometric training, there was an increase in throwing velocity among elite handball players, but they also mention that the application of all training methods should be tested in all populations, determining the real effect that occurs within each group of subjects, especially in women (Chelly et al., 2014). Additionally, it is known that the torque produced during internal and external rotation movements of the shoulder, evaluated through isokinetic instruments, correlates significantly with throwing velocity in water polo players (Olivier and Daussin, 2018), and it must be taken into account. Since isokinetic movements do not resemble the reality of sports, it is necessary to implement evaluations and training that allow sports movements to be controlled in a natural way. Currently, there are functional electromechanical dynamometric devices that allow the evaluation of sports movements and techniques, accurately quantifying training loads and variables associated with strength (Chamorro et al., 2017(Chamorro et al., , 2018. Unfortunately, these devices have not yet been proven to effectively evaluate overhead throws. Based on the existing literature, the preactivation methodologies, with isometric loads and variable intra-repetition resistance (VIR-R), necessary to trigger PAP in sports movements in a natural way have not yet been fully defined, nor has the effect of isometric methods (ISO) on PAP production in women been quantified. Therefore, the main objective of this study was to determine the acute effect of pre-activation with VIR-R and isometry on overhead throwing velocity in handball players. Participants As shown in Table 1, fourteen Spanish Women's Silver Honour Division handball players took part in the study (age: 21.2 ± 2.7 years, body height: 167.6 ± 6.5 cm, body mass: 70.3 ± 9.5 kg, experience: 10.9 ± 3.5 years). 
Players were informed about the experimental procedures and the possible risks and benefits of the experiment and gave written consent to participate in the study. Both the study and the informed consent forms were approved by the Human Research Committee of the University of Granada, Spain (Registration 454 / CEIH / 2017), in agreement with the ethical standards established in the Declaration of Helsinki. Health status was considered as an inclusion criterion (the players should not present any type of musculoskeletal injury); another criterion was a minimum of eight years of experience in handball. Measures The baseline consisted of three overhead throws at maximum velocity (Vb). For both the baseline and the two experimental conditions, the throws were made standing five meters from a wall with the opposite foot forward to the executing arm. The average of the three baseline throws and of each set was considered for both experimental conditions. The projectile thrown was a Size 2 handball, with a circumference of 54-56 cm and a weight of 325-400 g, while the maximum velocity of the throws was measured using a Stalker ATS radar gun (Stalker Radar, Plano, TX, USA) with an accuracy of 0.1 km·h⁻¹, a velocity range of 1-480 km·h⁻¹ and a target acquisition time of 0.01 s. Design and Procedures The study had an intra-subject crossover design. All handball players had 48 hours of rest before each intervention. Players were asked not to ingest caffeine or any other substance that would increase the metabolism during any phase of the experiment. The experiment lasted five days. Day 1 was the baseline for the entire sample, followed by a 48-hour recovery. On Day 2 of the intervention, 50% of the sample (n = 7) performed the experimental pre-activation condition through VIR-R, while the other 50% of the sample (n = 7) performed the experimental pre-activation condition through the isometric method (ISO), followed by another 48-hour recovery. On Day 3 of the intervention, each group performed the pre-activation with the remaining method (Figure 1). The first evaluation included anthropometric measurements using the Rowenta Premiss BS1060 digital scale, with a maximum weight of ≤ 150 kg and a graduation of 100 grams, and the HM200D Digital Height Meter, with a measuring range of 120 to 200 cm and a graduation of 1 mm. Before the baseline and the two experimental conditions (VIR-R and ISO), participants performed a standardised warm-up consisting of a five-minute jog, followed by five minutes of general and specific shoulder mobility exercises (ballistic movements and the overhead throw gesture). The first experimental condition was a pre-activation using the VIR-R method. This pre-activation consisted of a set of five repetitions at an initial velocity of 0.6 m·s⁻¹ and a final velocity of 0.9 m·s⁻¹ in a "unilateral chest press" with the dominant arm while standing, using the Functional Electromechanical Dynamometer DYNASystem, Symotech, Granada (force: 0.5-2000 N, velocity: 0.005-2.00 m·s⁻¹ and distance: 0.03-500 cm). The starting position was similar to that used in the evaluation of the throwing velocity, with the opposite foot forward to the executing arm, while the range of movement was individualised and measured before performing the set, from an initial elbow position of 90º with respect to the forearm to complete extension of the elbow (Figure 2A).
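The VIR-R set just described ramps the target velocity from 0.6 m·s⁻¹ on the first repetition to 0.9 m·s⁻¹ on the fifth. The paper does not state how the intermediate repetitions were programmed; the short sketch below assumes a simple linear progression between the two stated endpoints, purely as an illustration.

```python
import numpy as np

def vir_r_velocity_targets(v_start: float = 0.6, v_end: float = 0.9, reps: int = 5) -> np.ndarray:
    """Per-repetition target velocities (m/s) for a VIR-R pre-activation set.

    Assumes a linear ramp between the initial and final velocities; the study
    only specifies the first (0.6 m/s) and last (0.9 m/s) repetitions.
    """
    return np.linspace(v_start, v_end, reps)

print(vir_r_velocity_targets())  # [0.6   0.675 0.75  0.825 0.9  ]
```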
After the pre-activation, participants immediately (min 0) performed a set of three throws at maximum velocity. This set of three throws was repeated at the 1st (min 1), 2nd (min 2) and 10th (min 10) minute of recovery (Miyamoto et al., 2011). The evaluation of the velocity of each throw was made using the Stalker ATS radar gun. For the statistical analysis, the average value of the three throws was considered. The second experimental condition was a pre-activation using the ISO method. This pre-activation consisted of performing a five-second maximum voluntary isometric contraction in a "unilateral chest press" with the dominant arm in a standing position, using the Functional Electromechanical Dynamometer. Unlike other isokinetic devices, which generate angular velocities, this device (DynaSystem, Model Research, Granada, Spain) generates linear isokinetic speeds, in addition to other dynamic modes (tonic, kinetic, elastic, inertial, conical) and static modes (isometric, vibratory), allowing evaluation and training with constant or variable resistance/velocity. The starting position was similar to that used in the evaluation of the throwing velocity, with the opposite foot forward to the executing arm, while the angle of the elbow was 90º with respect to the forearm (Figure 2B). As in the VIR-R method, following the pre-activation, participants immediately (min 0) performed a set of three throws at maximum velocity. This set of three throws was repeated at the 1st (min 1), 2nd (min 2) and 10th (min 10) minute of recovery (Miyamoto et al., 2011). The evaluation of the velocity of each throw was made using the Stalker ATS radar gun. For statistical analysis, the average value of the three throws was considered. Statistical Analysis The mean values of baseline throwing velocities and both pre-activation methods were subjected to the Shapiro-Wilk normality test. A repeated measures ANOVA was used to examine the effect of pre-activation with the VIR-R and ISO methods. For this analysis, the baseline (LB), min 0, and the 1st, 2nd and 10th minute of both methods were considered. The effect size (ES) for both cases was calculated using partial eta squared. The individual responses for both pre-activation methods were analysed using the mean values, standard deviations (SD), deltas and percentages of variation between the baseline and the different sets of throws. Excel 2013® software and statistical software SPSS version 19® were used for data tabulation and analysis. For all the comparisons, a value of significance of p ≤ 0.05 was accepted. Pre-activation with VIR-R The means and SD are presented in Table 2. ANOVA did not show any significant differences for the 0, 1st, 2nd and 10th minutes (F = 1.56, p = 0.194, ES = 0.088) (Table 2). Regarding the individual responses, the analysis showed that 6 of 14 players (43%) increased the throwing speed in all experimental sets (0, 1st, 2nd and 10th min), while 12 of 14 players (86%) increased the throwing speed in some of the experimental sets, and only 2 of 14 players (14%) did not experience any changes after pre-activation through the VIR-R method (Table 3). Pre-activation with ISO The means and SD are presented in Table 2. ANOVA did not show any significant differences between the scores taken at 0, 1st, 2nd and 10th min (F = 0.69, p = 0.596, ES = 0.041) (Table 2).
In relation to the individual responses, the analysis showed that 4 of 14 players (29%) increased their throwing velocity in all the experimental sets (0, 1st, 2nd and 10th min), while 13 of 14 players (93%) increased throwing velocity in some of the experimental sets, and only 1 of 14 players (7%) did not experience any changes following pre-activation with the ISO method (Table 4). Discussion Considering that the effects of different methodologies and training loads on PAP have not yet been fully described (Seitz and Haff, 2016), and because the existing literature lacks coherence in some performance variables (Dobbs et al., 2018), it was necessary to determine the acute effect of two pre-activation protocols (VIR-R and ISO) targeting muscles of the dominant upper extremity on Vb in female handball players. In relation to the main objective of this investigation, the results showed a non-significant difference for both pre-activation protocols between baseline and 0, 1st, 2nd and 10th min of recovery (p > 0.05). Perhaps a deficit of strength in the upper limbs of handball players could affect the average effect of pre-activation on Vb, as it has been shown that stronger individuals have a greater possibility of producing PAP (Seitz and Haff, 2016). Likewise, and directly associated with Vb, it has been observed that more highly trained subjects have greater possibilities of generating PAP in a throw after pre-activation in a bench press. Along these lines, Smilios et al. (2016) suggest that PAP is manifested to a greater degree in athletes with high relative strength. Therefore, well-trained individuals with higher levels of strength may have a greater capacity for potentiation compared to the less trained population. Another variable to consider in the PAP protocols is the rest interval between pre-activation and subsequent explosive activities (Dobbs et al., 2018; Golas et al., 2016; Tillin and Bishop, 2009). In this sense, Tillin and Bishop (2009) indicated that the balance between PAP and fatigue was fundamental to determining the effect of pre-activation on a subsequent explosive activity. In turn, Dobbs et al. (2018) recommended that untrained subjects rest seven to ten minutes between pre-activation and the subsequent explosive activity. Similarly, Seitz and Haff (2016) concluded that weaker subjects responded better to longer recovery intervals. Despite these studies, and in the present study, PAP protocols with recovery times of less than one minute have been tested, with significant differences between pre-activation and subsequent muscular activities (Miyamoto et al., 2010). Also, Smilios et al. (2016) showed that gender could influence rest times, associating the lower muscle mass presented by women with a greater possibility of fatigue after pre-activation. Consequently, one of the variables to be considered in the PAP protocols is the strength and performance level of the subjects, since this will influence and determine the rest interval between the pre-activation and the subsequent explosive activities (Seitz and Haff, 2016). Taking into consideration the effect of gender on PAP, the results of this study are in line with data from Wilson et al. (2013); in that study women achieved post-activation effects, but with a smaller effect size (ES) than men (male = 0.42 ES, female = 0.20 ES). In addition, Ojeda et al.
(2019) obtained similar results, where the effects of PAP on female athletes had to be reviewed individually due to the variation of results. However, due to the few studies conducted with female athletes, there is no consensus as to why these gender differences exist. According to Tillin and Bishop (2009), these differences may be due to lower muscle strength for the same level of competition in female athletes, which would lead to such differences occurring between the genders. In relation to the different types of contraction and intensities needed to produce PAP, it has been demonstrated that maximal or submaximal contractions, whether dynamic or isometric, trigger PAP (Tillin and Bishop, 2009). However, Tillin and Bishop (2009) suggest isometric contractions for acutely increasing strength (PAP): although these types of muscle contractions generate higher levels of fatigue, they are also able to recruit more motor units with higher activation thresholds, thus increasing the chances of triggering PAP. In this sense, in an investigation conducted by Gilmore et al. (2018), the effect of high-intensity isometric contractions lasting 5 s was evaluated. At the end of the study, an increase in batting velocity was observed in all time intervals except the 1st min post activation (2nd, 4th, 6th, 8th, 10th and 12th min) (Gilmore et al., 2018). Similarly, there are studies that support the use of maximal dynamic movements to trigger PAP (Okuno et al., 2013), focusing on intensities ranging from 80 to 100% of 1RM (Gomez et al., 2011; Okuno et al., 2013; Zaras et al., 2013). In this regard, there is evidence that six weeks of strength or ballistic power training increases performance in pitchers (from 7.0 to 13.5% in strength and 6.0 to 11.5% in ballistic power). On the other hand, there are few studies that relate loads with variable resistances to PAP, specifically VIR-R with PAP. This created two opposite situations: on the one hand, an innovative methodology was investigated from the point of view of the control and quantification of the functional electromechanical dynamics of the movement; on the other hand, because there are no studies that have controlled and modified the velocity of movement within the repetition for each of the study subjects (initial velocity of 0.6 m·s-1 and final velocity of 0.9 m·s-1 in each repetition), it was difficult to compare the results found in the present investigation. The latter led to the testing of a broad spectrum of time and PAP alternatives in the present study, from 0 to 10 minutes post activation (Miyamoto et al., 2011). It is also important to mention that there are PAP investigations that analyse individual responses rather than average results, suggesting that women should have more rest than men between pre-activation and subsequent exercises (Ojeda et al., 2019; Seitz and Haff, 2016). Some limitations during data collection should be noted. First, the experimental sample is not large enough to extrapolate the data obtained to the entire population. Second, these handball players had not previously trained with the proposed stimuli, both isometric and pleokinetic contractions; therefore, their fatigue could be increased by this lack of adaptation. Due to this, more research should be done with female subjects who are more familiar with the proposed stimuli. It could help better define which type of training produces the desired PAP effect.
This could lead to an improvement in the performance of female athletes, who would obtain a qualitative leap both in the preparation for matches and in the planning of their training program. In conclusion, considering that after pre-activation through the VIR-R method 86% of the players increased their throwing velocity in some of the experimental sets, and 93% showed improvements with the ISO method, we believe that the methodology used is applicable for handball players. However, if pre-activation is to be used with either the VIR-R or the ISO method, individual responses should be checked, considering longer rest intervals between the pre-activation and the subsequent exercise (Seitz and Haff, 2016), as the results obtained may have been influenced by the short rest interval between the pre-activation and the experimental sets (Tillin and Bishop, 2009; Kobal et al., 2019). This study did not show statistically significant improvements in overhead throwing velocity in handball players (0, 1st, 2nd and 10th min) after the application of pre-activation protocols with VIR-R and ISO. However, large individual variations were observed after pre-activation with the VIR-R and ISO methods. Therefore, pre-activation with a set of five maximum repetitions at an initial velocity of 0.6 m·s-1 and a final velocity of 0.9 m·s-1 (VIR-R), or a set of five seconds of maximum voluntary isometric contraction (ISO) in a "unilateral chest press" with the dominant arm in a standing position, using a functional electromechanical dynamometer, generated PAP in the vast majority of handball players. Consequently, coaches who consider the use of these pre-activation protocols to generate PAP should explore their effectiveness individually. Factors such as power levels, the type of contraction or intensity of the pre-activation protocol used to generate PAP, and above all the recovery interval between pre-activation and the subsequent motor activity, need further investigation before successful implementation of PAP.
A new genus of horse from Pleistocene North America
The extinct 'New World stilt-legged', or NWSL, equids constitute a perplexing group of Pleistocene horses endemic to North America. Their slender distal limb bones resemble those of Asiatic asses, such as the Persian onager. Previous palaeogenetic studies, however, have suggested a closer relationship to caballine horses than to Asiatic asses. Here, we report complete mitochondrial and partial nuclear genomes from NWSL equids from across their geographic range. Although multiple NWSL equid species have been named, our palaeogenomic and morphometric analyses support the idea that there was only a single species of middle to late Pleistocene NWSL equid, and demonstrate that it falls outside of crown group Equus. We therefore propose a new genus, Haringtonhippus, for the sole species H. francisci. Our combined genomic and phenomic approach to resolving the systematics of extinct megafauna will allow for an improved understanding of the full extent of the terminal Pleistocene extinction event.
Introduction
The family that includes modern horses, asses, and zebras, the Equidae, is a classic model of macroevolution. The excellent fossil record of this family clearly documents its ~55 million year evolution from dog-sized hyracotheres through many intermediate forms and extinct offshoots to present-day Equus, which comprises all living equid species (MacFadden, 1992). The downside of this excellent fossil record is that many dubious fossil equid taxa have been erected, a problem especially acute within Pleistocene Equus of North America (Macdonald et al., 1992). While numerous species are described from the fossil record, molecular data suggest that most belonged to, or were closely related to, a single, highly variable stout-legged caballine species that includes the domestic horse, E. caballus (Weinstock et al., 2005). The enigmatic and extinct 'New World stilt-legged' (NWSL) forms, however, exhibit a perplexing mix of morphological characters, including slender, stilt-like distal limb bones with narrow hooves reminiscent of extant Eurasian hemionines, the Asiatic wild asses (E. hemionus, E. kiang) (Eisenmann, 1992; Eisenmann et al., 2008; Harington and Clulow, 1973; Lundelius and Stevens, 1970; Scott, 2004), and dentitions that have been interpreted as more consistent with either caballine horses (Lundelius and Stevens, 1970) or hemionines (MacFadden, 1992). On the basis of their slender distal limb bones, the NWSL equids have traditionally been considered as allied to hemionines (e.g. Eisenmann et al., 2008; Guthrie, 2003; Scott, 2004; Skinner and Hibbard, 1972). Palaeogenetic analyses based on mitochondrial DNA (mtDNA) have, however, consistently placed NWSL equids closer to caballine horses (Barrón-Ortiz et al., 2017; Der Sarkissian et al., 2015; Orlando et al., 2008, 2009; Vilstrup et al., 2013; Weinstock et al., 2005). The current mtDNA-based phylogenetic model therefore suggests that the stilt-legged morphology arose independently in the New and Old Worlds (Weinstock et al., 2005) and may represent convergent adaptations to arid climates and habitats (Eisenmann, 1985). However, these models have been based on two questionable sources. The first is based on 15 short control region sequences (<1000 base pairs, bp; Barrón-Ortiz et al., 2017; Weinstock et al., 2005), a data type that can be unreliable for resolving the placement of major equid groups (Orlando et al., 2009).
The second consists of two mitochondrial genome sequences (Vilstrup et al., 2013) that are either incomplete or otherwise problematic (see Results). Given continuing uncertainty regarding the phylogenetic placement of NWSL equids-which impedes our understanding of Pleistocene equid evolution in general-we therefore sought to resolve their position using multiple mitochondrial and partial nuclear genomes from specimens representing as many parts of late Pleistocene North America as possible.
eLife digest
The horse family -which also includes zebras, donkeys and asses -is often featured on the pages of textbooks about evolution. All living horses belong to a group, or genus, called Equus. The fossil record shows how the ancestors of these animals evolved from dog-sized, three-toed browsers to larger, one-toed grazers. This process took around 55 million years, and many members of the horse family tree went extinct along the way. Nevertheless, the details of the horse family tree over the past 2.5 million years remain poorly understood. In North America, horses from this period -which is referred to as the Pleistocene -have been classed into two major groups: stout-legged horses and stilt-legged horses. Both groups became extinct near the end of the Pleistocene in North America, and it was not clear how they relate to one another. Based on their anatomy, many scientists suggested that stilt-legged horses were most closely related to modern-day asses living in Asia. Yet, other studies using ancient DNA placed the stilt-legged horses closer to the stout-legged horses. Heintzman et al. set out to resolve where the stilt-legged horses sit within the horse family tree by examining more ancient DNA than the previous studies. The analyses showed that the stilt-legged horses were much more distinct than previously thought. In fact, contrary to all previous findings, these animals actually belonged outside of the genus Equus. Heintzman et al. named the new genus for the stilt-legged horses Haringtonhippus, and showed that all stilt-legged horses belonged to a single species within this genus, Haringtonhippus francisci. Together these new findings provide a benchmark for reclassifying problematic fossil groups across the tree of life. A similar approach could be used to resolve the relationships in other problematic groups of Pleistocene animals, such as mammoths and bison. This would give scientists a more nuanced understanding of evolution and extinction during this period.
The earliest recognized NWSL equid fossils date to the late Pliocene/early Pleistocene (~2-3 million years ago, Ma) of New Mexico (Azzaroli and Voorhies, 1993; Eisenmann, 2003; Eisenmann et al., 2008). Middle and late Pleistocene forms tended to be smaller in stature than their early Pleistocene kin, and ranged across southern and extreme northwestern North America (i.e. eastern Beringia, which includes Alaska, USA and Yukon Territory, Canada). NWSL equids have been assigned to several named species, such as E. conversidens Owen 1869, E. tau Owen 1869, E. francisci Hay (1915), E. calobatus Troxell 1915, and E. (Asinus) cf. kiang, but there is considerable confusion and disagreement regarding their taxonomy. Consequently, some researchers have chosen to refer to them collectively as Equus (Hemionus) spp. (Guthrie, 2003; Scott, 2004), or avoid a formal taxonomic designation altogether (Vilstrup et al., 2013; Weinstock et al., 2005).
Using our phylogenetic framework and comparisons between specimens identified by palaeogenomics and/or morphology, we attempted to determine the taxonomy of middle-late Pleistocene NWSL equids. Radiocarbon ( 14 C) dates from Gypsum Cave, Nevada, confirm that NWSL equids persisted in areas south of the continental ice sheets during the last glacial maximum (LGM;~26-19 thousand years before present (ka BP); Clark et al., 2009) until near the terminal Pleistocene,~13 thousand radiocarbon years before present ( 14 C ka BP) (Weinstock et al., 2005), soon after which they became extinct, along with their caballine counterparts and most other coeval species of megafauna (Koch and Barnosky, 2006). This contrasts with dates from unglaciated eastern Beringia, where NWSL equids were seemingly extirpated locally during a relatively mild interstadial interval centered on~31 14 C ka BP (Guthrie, 2003), thus prior to the LGM (Clark et al., 2009), final loss of caballine horses (Guthrie, 2003;2006), and arrival of humans in the region (Guthrie, 2006). The apparently discrepant extirpation chronology between NWSL equids south and north of the continental ice sheets implies that their populations responded variably to demographic pressures in different parts of their range, which is consistent with results from some other megafauna (Guthrie, 2006;Zazula et al., 2014;Zazula et al., 2017). To further test this extinction chronology, we generated new radiocarbon dates from eastern Beringian NWSL equids. We analyzed 26 full mitochondrial genomes and 17 partial nuclear genomes from late Pleistocene NWSL equids, which revealed that individuals from both eastern Beringia and southern North America form a single well-supported clade that falls outside the diversity of Equus and diverged from the lineage leading to Equus during the latest Miocene or early Pliocene. This novel and robust phylogenetic placement warrants the recognition of NWSL equids as a distinct genus, which we here name Haringtonhippus. After reviewing potential species names and conducting morphometric and anatomical comparisons, we determined that, based on the earliest-described specimen bearing diagnosable features, francisci Hay is the most well-supported species name. We therefore refer the analyzed NWSL equid specimens to H. francisci. New radiocarbon dates revealed that H. francisci was extirpated in eastern Beringia~14 14 C ka BP. In light of our analyses, we review the Plio-Pleistocene evolutionary history of equids, and the implications for the systematics of equids and other Pleistocene megafauna. Phylogeny of North American late Pleistocene and extant equids We reconstructed whole mitochondrial genomes from 26 NWSL equids and four New World caballine Equus (two E. lambei, two E. cf. scotti). Using these and mitochondrial genomes of representatives from all extant and several late Pleistocene equids, we estimated a mitochondrial phylogeny, using a variety of outgroups (Appendix 1, Appendix 2-tables 1-2, and Supplementary file 1). The resulting phylogeny is mostly consistent with previous studies Vilstrup et al., 2013), including confirmation of NWSL equid monophyly (Weinstock et al., 2005). However, we recover a strongly supported placement of the NWSL equid clade outside of crown group diversity (Equus), but closer to Equus than to Hippidion (Figure 1, Figure 1-figure supplement 1a, Figure 1-source data 1, and Appendix 2-tables 1-2). 
In contrast, previous palaeogenetic studies placed the NWSL equids within crown group Equus, closer to caballine horses than to non-caballine asses and zebras (Barrón-Ortiz et al., 2017; Der Sarkissian et al., 2015; Orlando et al., 2008, 2009; Vilstrup et al., 2013; Weinstock et al., 2005).
Figure 1. Phylogeny of extant and middle-late Pleistocene equids, as inferred from the Bayesian analysis of full mitochondrial genomes. Purple node-bars illustrate the 95% highest posterior density of node heights and are shown for nodes with >0.99 posterior probability support. The range of divergence estimates derived from our nuclear genomic analyses is shown by the thicker, lime green node-bars (Orlando et al., 2013; this study). Nodes highlighted in the main text are labeled with boxed numbers. All analyses were calibrated using as prior information a caballine/non-caballine Equus divergence estimate of 4.0-4.5 Ma (Orlando et al., 2013) at node 3, and, in the mitochondrial analyses, the known ages of included ancient specimens. The thicknesses of nodes 2 and 3 represent the range between the median nuclear and mitochondrial genomic divergence estimates. Branches are coloured based on species provenance and the most parsimonious biogeographic scenario given the data, with gray indicating ambiguity. Fossil record occurrences for major represented groups (including South American Hippidion, New World stilt-legged equids, and Old World Sussemiones) are represented by the geographically coloured bars, with fade indicating uncertainty in the first appearance datum (after Eisenmann et al., 2008; Forsten, 1992; O'Dea et al., 2016; Orlando et al., 2013, and references therein).
To explore possible causes for this discrepancy, we reconstructed mitochondrial genomes from previously sequenced NWSL equid specimens and used a maximum likelihood evolutionary placement algorithm (Berger et al., 2011) to place these published sequences in our phylogeny a posteriori. These analyses suggested that previous results were likely due to a combination of outgroup choice and the use of short, incomplete, or problematic mtDNA sequences (Appendix 2 and Appendix 2-table 3). To confirm the mtDNA result that NWSL equids fall outside of crown group equid diversity, we sequenced and compared partial nuclear genomes from 17 NWSL equids to a caballine (horse) and a non-caballine (donkey) reference genome. After controlling for reference genome and ancient DNA fragment length artifacts (Appendices 1-2), we examined differences in relative private transversion frequency between these genomes (Appendix 1-figure 1). We found that the relative private transversion frequency for NWSL equids was ~1.4-1.5 times greater than that for horse or donkey (Appendix 2, Figure 1-source data 3, Figure 1-figure supplement 2, and Figure 1-source data 2). This result supports the placement of NWSL equids as sister to the horse-donkey clade (Figure 1-figure supplement 3), the latter of which is representative of living Equus diversity (e.g. Jónsson et al., 2014) and is therefore congruent with the mitochondrial genomic analyses.
Divergence times of Hippidion, NWSL equids, and Equus
We estimated the divergence times between the lineages leading to Hippidion, the NWSL equids, and Equus. We first applied a Bayesian time-tree approach to the whole mitochondrial genome data. This gave divergence estimates for the Hippidion-NWSL/Equus split (node 1) at 5.15-7.66 Ma, consistent with Der Sarkissian et al. (2015),
the NWSL-Equus split (node 2) at 4.09-5.13 Ma, and the caballine/non-caballine Equus split (node 3) at 3.77-4.40 Ma (Figure 1 and Figure 1-source data 1). These estimates suggest that the NWSL-Equus mitochondrial split occurred only ~500 thousand years (ka) prior to the caballine/non-caballine Equus split. We then estimated the NWSL-Equus divergence time using relative private transversion frequency ratios between the nuclear genomes, assuming a caballine/non-caballine Equus divergence estimate of 4-4.5 Ma (Orlando et al., 2013) and a genome-wide strict molecular clock (following Heintzman et al., 2015). This analysis yielded a divergence estimate of 4.87-5.69 Ma (Figure 1-figure supplement 3), which overlaps with that obtained from the relaxed clock analysis of whole mitochondrial genome data (Figure 1). These analyses suggest that the NWSL equid and Equus clades diverged during the latest Miocene or early Pliocene (4.1-5.7 Ma; late Hemphillian or earliest Blancan).
Systematic palaeontology
The genus Equus (Linnaeus, 1758) was named to include three living equid groups -horses (E. caballus), donkeys (E. asinus), and zebras (E. zebra) -whose diversity comprises all extant, or crown group, equids. Previous palaeontological and palaeogenetic studies have uniformly placed NWSL equids within the diversity of extant equids and therefore this genus (Barrón-Ortiz et al., 2017; Bennett, 1980; Der Sarkissian et al., 2015; Harington and Clulow, 1973; Orlando et al., 2008; Scott, 2004; Vilstrup et al., 2013; Weinstock et al., 2005). This, however, conflicts with the phylogenetic signal provided by palaeogenomic data, which strongly suggest that NWSL equids fall outside the confines of the equid crown group (Equus). Nor is there any morphological or genetic evidence warranting the assignment of NWSL equids to an existing extinct taxon such as Hippidion. We therefore erect a new genus for NWSL equids, Haringtonhippus, as defined and delimited below.
Order Perissodactyla; Family Equidae; Genus Haringtonhippus, new genus.
Etymology
The new genus is named in honor of C. Richard Harington, who first described NWSL equids from eastern Beringia (Harington and Clulow, 1973). 'Hippus' is from the Greek word for horse, and so Haringtonhippus is implied to mean 'Harington's horse'.
Holotype
A partial skeleton consisting of a complete cranium, mandible, and a stilt-legged third metatarsal (MTIII) (Figure 2a). This specimen, TMM 34-2518, is the holotype of 'E.' francisci, originally described by Hay (1915), and is from the middle Pleistocene Lissie Formation of Wharton County, Texas (Hay, 1915; Lundelius and Stevens, 1970).
Referred material
On the basis of mitochondrial and nuclear genomic data, we assign the following material confidently to Haringtonhippus: a cranium, femur, and MTIII (LACM(CIT): Nevada); three MTIIIs, three third metacarpals (MCIII), three premolar teeth, and a molar tooth (KU: Wyoming); two radii, 12 MTIIIs, three MCIIIs, a metapodial, and a first phalanx (YG: Yukon Territory); and a premolar tooth (Barrón-Ortiz et al., 2017; Weinstock et al., 2005). This material includes at least four males and at least six females (Appendix 2, Appendix 2-table 4 and Appendix 2-table 4-source data 1). We further assign MTIII specimens from Yukon Territory (n = 13), Wyoming (n = 57), and Nevada (n = 4) to Haringtonhippus on the basis of morphometric analysis (Figure 2c; Guthrie, 2003).
Figure 3 caption (fragment): The green-star-labeled HT is the locality of the francisci holotype, Wharton County, Texas, USA. This figure was drawn using SimpleMappr (Shorthouse, 2010).
We also tentatively assign 19 NWSL equid metapodial specimens from the Fairbanks area, Alaska (Guthrie, 2003) to Haringtonhippus, but note that morphometric and/or palaeogenomic analysis would be required to confirm this designation.
Geographic and temporal distribution
Haringtonhippus is known only from the Pleistocene of North America (Figure 3).
Mitogenomic diagnosis
Haringtonhippus is the sister genus to Equus (equid crown group), with Hippidion being sister to the Haringtonhippus-Equus clade (Figure 1). Haringtonhippus can be differentiated from Equus and Hippidion by 178 synapomorphic positions in the mitochondrial genome, including four insertions and 174 substitutions (Appendix 1-table 2 and Appendix 1-table 2-source data 1). We caution that these synapomorphies are tentative and will likely be reduced in number as a greater diversity of mitochondrial genomes for extinct equids becomes available.
Morphological comparisons of third metatarsals
We used morphometric analysis of caballine/stout-legged Equus and stilt-legged equid (hemionine/stilt-legged Equus, Haringtonhippus) MTIIIs to determine how confidently these groups can be distinguished (Figure 2c). Using logistic regression on principal components, we find a strong separation, with the two groups correctly distinguished with 98.2% accuracy (Appendix 2; Heintzman et al., 2017). Hemionine/stilt-legged Equus MTIIIs occupy the same morphospace as H. francisci in our analysis, although given a larger sample size it may be possible to discriminate E. hemionus from the remaining stilt-legged equids. We note that Haringtonhippus seems to exhibit a negative correlation between latitude and MTIII length, and that specimens from the same latitude occupy similar morphospace regardless of whether DNA- or morphology-based identification was used (Figure 2c).
Comments
On the basis of morphology, we assign all confidently referred material of Haringtonhippus to the single species H. francisci Hay (1915) (Appendix 2). Comparison between the cranial anatomical features of LACM(CIT) 109/156450 and TMM 34-2518 reveals some minor differences, which can likely be ascribed to intraspecific variation (Figure 2a, Appendix 2, and Figure 2-figure supplement 1). Further, the MTIII of TMM 34-2518 is comparable to the MTIIIs ascribed to Haringtonhippus by palaeogenomic data, and is consistent with the observed latitudinally correlated variation in MTIII length across Haringtonhippus (Figure 2c and Appendix 2). This action is supported indirectly by molecular evidence, namely the lack of mitochondrial phylogeographic structure and the estimated time to most recent common ancestor (tMRCA) for sampled Haringtonhippus. The mitochondrial tree topology within Haringtonhippus does not exhibit phylogeographic structure (Figure 1-figure supplement 1b), which is consistent with the sampled Haringtonhippus mitochondrial genomes belonging to the same species. Using Bayesian time-tree analysis, we estimated a tMRCA for the sampled Haringtonhippus mitochondrial genomes of ~200-470 ka BP (Figure 1 and Figure 1-source data 1; Heintzman et al., 2017). The MRCA of Haringtonhippus is therefore more recent than that of other extant equid species (such as E. asinus and E. quagga, which have a combined 95% HPD range: 410-1030 ka BP; Figure 1 and Figure 1-source data 1; Heintzman et al., 2017).
Although the middle Pleistocene holotype TMM 34-2518 (~125-780 ka BP) may predate our Haringtonhippus mitochondrial tMRCA, this sample has no direct date and the range of possible ages falls within the tMRCA range of other extant equid species. We therefore cannot reject the hypothesis of its conspecificity with Haringtonhippus, as defined palaeogenomically. We attempted, but were unable, to recover either collagen or genomic data from TMM 34-2518 (Appendix 2), consistent with the taphonomic, stratigraphic, and geographic context of this fossil (Hay, 1915; Lundelius and Stevens, 1970). Altogether, the molecular evidence is consistent with the assignment of H. francisci as the type and only species of Haringtonhippus.
Discussion
Reconciling the genomic and fossil records of Plio-Pleistocene equid evolution
The suggested placement of NWSL equids within a taxon (Haringtonhippus) sister to Equus is a departure from previous interpretations, which variably place the former within Equus, as sister to hemionines or caballine horses (Figure 1). According to broadly accepted palaeontological interpretations, the earliest equids exhibiting morphologies consistent with NWSL and caballine attribution appear in the fossil record only ~2-3 and ~1.9-0.7 Ma ago (Eisenmann et al., 2008; Forsten, 1992), respectively, whereas our divergence estimates suggest that these lineages diverged between 4.1-5.8 and 3.8-4.5 Ma, most likely in North America. Dating incongruence might be attributed to an incomplete fossil record, but this seems unlikely given the density of the record for late Neogene and Pleistocene horses. Conversely, incongruence might be attributed to problems with estimating divergence using genomic evidence. However, we emphasize that the NWSL-Equus split is robustly calibrated to the caballine/non-caballine Equus divergence at 4.0-4.5 Ma, which is in turn derived from a direct molecular clock calibration using a middle Pleistocene horse genome (Orlando et al., 2013). Other possibilities to explain the incongruence include discordance between the timing of species divergence and the evolution of diagnostic anatomical characteristics, or failure to detect or account for homoplasy (Forsten, 1992). For example, Pliocene Equus generally exhibits a primitive ('plesippine' in North America, 'stenonid' in the Old World) morphology that presages living zebras and asses (Forsten, 1988, 1992), with more derived caballine (stout-legged) and hemionine (stilt-legged) forms evolving in the early Pleistocene. The stilt-legged morphology appears to have evolved independently at least once in each of the Old and New Worlds, yielding the Asiatic wild asses and Haringtonhippus, respectively. We include the middle-late Pleistocene Eurasian E. hydruntinus within the Asiatic wild asses (following Bennett et al., 2017; Burke et al., 2003; Orlando et al., 2006), and note that the Old World sussemione E. ovodovi may represent another instance of independent stilt-legged origin, but its relation to Asiatic wild asses and other non-caballine Equus is currently unresolved (as depicted in Der Sarkissian et al., 2015; Orlando et al., 2009; Vilstrup et al., 2013; and Figure 1). It is plausible that features at the plesiomorphous end of the spectrum, such as those associated with Hippidion, survived after the early to middle Pleistocene at lower latitudes (South America, Africa; Figure 1).
By contrast, the more derived hemionine and caballine morphologies evolved from, and replaced, their antecedents in higher latitude North America and Eurasia, perhaps as adaptations to the extreme ecological pressures perpetuated by the advance and retreat of continental ice sheets and correlated climate oscillations during the Pleistocene (Forsten, 1992, 1996). We note that this high-latitude replacement model is consistent with the turnover observed in regional fossil records for Pleistocene equids in North America (Azzaroli, 1992; Azzaroli and Voorhies, 1993) and Eurasia (Forsten, 1988, 1996). By contrast, in South America Hippidion co-existed with caballine horses until they both succumbed to extinction, together with much of the New World megafauna near the end of the Pleistocene (Forsten, 1996; Koch and Barnosky, 2006; O'Dea et al., 2016). This model helps to explain the discordance between the timings of the appearance of the caballine and hemionine morphologies in the fossil record and the divergence of lineages leading to these forms as estimated from palaeogenomic data. Although we can offer no solution to the general problem of mismatches between molecular and morphological divergence estimators-an issue scarcely unique to equid systematics-this model predicts that some previously described North American Pliocene and early Pleistocene Equus species (e.g. E. simplicidens, E. idahoensis; Azzaroli and Voorhies, 1993), or specimens thereof, may be ancestral to extant Equus and/or late Pleistocene Haringtonhippus.
Temporal and geographic range overlap of Pleistocene equids in North America
Three new radiocarbon dates of ~14.4 14C ka BP from a Yukon Haringtonhippus fossil greatly extend the known temporal range of this genus in eastern Beringia. This result demonstrates, contrary to its previously known last appearance date (LAD) of 31,400 ± 1200 14C years ago (AA 26780; Guthrie, 2003), that Haringtonhippus survived throughout the last glacial maximum in eastern Beringia (Clark et al., 2009) and may have come into contact with humans near the end of the Pleistocene (Goebel et al., 2008; Guthrie, 2006). These data suggest that populations of stilt-legged Haringtonhippus and stout-legged caballine Equus were sympatric, both north and south of the continental ice sheets, through the late Pleistocene and became extinct at roughly the same time. The near synchronous extinction of both horse groups across their entire range in North America suggests that similar causal mechanisms may have led each to their demise. The sympatric nature of these equids raises questions of whether they managed to live within the same community without hybridizing or competing for resources. Extant members of the genus Equus vary considerably in the sequence of Prdm9, a gene involved in the speciation process, and chromosome number (karyotype) (Ryder et al., 1978; Steiner and Ryder, 2013), and extant caballine and non-caballine Equus rarely produce fertile offspring (Allen and Short, 1997; Steiner and Ryder, 2013). It is unlikely, therefore, that the more deeply diverged Haringtonhippus and caballine Equus would have been able to hybridize. Future analysis of high-coverage nuclear genomes, ideally including an outgroup such as Hippidion, will make it possible to test for admixture that may have occurred soon after the lineages leading to Haringtonhippus and Equus diverged, as occurred between the early caballine and non-caballine Equus lineages (Jónsson et al., 2014).
It may also be possible to use isotopic and/or tooth mesowear analyses to assess the potential of resource partitioning between Haringtonhippus and caballine Equus in the New World. Fossil systematics in the palaeogenomics and proteomics era: concluding remarks Fossils of NWSL equids have been known for more than a century, but until the present study their systematic position within Plio-Pleistocene Equidae was poorly characterized. This was not because of a lack of interest on the part of earlier workers, whose detailed anatomical studies strongly indicated that what we now call Haringtonhippus was related to Asiatic wild asses, such as Tibetan khulan and Persian onagers, rather than to caballine horses (Eisenmann et al., 2008;Guthrie, 2003;Scott, 2004;Skinner and Hibbard, 1972). That the cues of morphology have turned out to be misleading in this case underlines a recurrent problem in systematic biology, which is how best to discriminate authentic relationships within groups, such as Neogene equids, that were prone to rampant convergence. The solution we adopted here was to utilize both palaeogenomic and morphometric information in reframing the position of Haringtonhippus, which now clearly emerges as the closest known outgroup to all living Equus. Our success in this regard demonstrates that an approach which incorporates phenomics with molecular methods (palaeogenomic as well as palaeoproteomic, e.g. [Welker et al., 2015]) is likely to offer a means for securely detecting relationships within speciose groups that are highly diverse ecomorphologically. All methods have their limits, with taphonomic degradation being the critical one for molecular approaches. However, proteins may persist significantly longer than ancient DNA (e.g. [Rybczynski et al., 2013]), and collagen proteomics may come to play a key role in characterizing affinities, as well as the reality, of several proposed Neogene equine taxa (e.g. Dinohippus, Pliohippus, Protohippus, Calippus, and Astrohippus;[MacFadden, 1998]) whose distinctiveness and relationships are far from settled (Azzaroli and Voorhies, 1993;Forsten, 1992). A reciprocally informative approach like the one taken here holds much promise for lessening the amount of systematic noise, due to oversplitting, that hampers our understanding of the evolutionary biology of other major late Pleistocene megafaunal groups such as bison and mammoths (Enk et al., 2016;Froese et al., 2017). This approach is clearly capable of providing new insights into just how extensive megafaunal losses were at the end of the Pleistocene, in what might be justifiably called the opening act of the Sixth Mass Extinction in North America. Materials and methods We provide an overview of methods here; full details can be found in Appendix 1. Sample collection and radiocarbon dating We recovered Yukon fossil material (17 Haringtonhippus francisci, two Equus cf. scotti, and two E. lambei; Supplementary file 1) from active placer mines in the Klondike goldfields near Dawson City. We further sampled seven H. francisci fossils from the contiguous USA that are housed in collections at the University of Kansas Biodiversity Institute (KU; n = 4), Los Angeles County Museum of Natural History (LACM(CIT); n = 2), and the Texas Vertebrate Paleontology Collections at The University of Texas (TMM; n = 1). We radiocarbon dated the Klondike fossils and the H. francisci cranium from the LACM(CIT) (Supplementary file 1). 
Morphometric analysis of third metatarsals
For morphometric analysis, we took measurements of third metatarsals (MTIII) and other elements. We used a reduced data set of four MTIII variables for principal components analysis and performed logistic regression on the first three principal components, computed in R (R Development Core Team, 2008) (Source code 1).
DNA extraction, library preparation, target enrichment, and sequencing
We conducted all molecular biology methods prior to indexing PCR in the dedicated palaeogenomics laboratory facilities at either UC Santa Cruz or Pennsylvania State University. We extracted DNA from between 100 and 250 mg of bone powder following either Rohland et al. (2010) or Dabney et al. (2013a). We then converted DNA extracts to libraries following the Meyer and Kircher protocol (Meyer and Kircher, 2010), as modified by Heintzman et al. (2015), or the PSU method of Vilstrup et al. (2013). We enriched libraries for equid mitochondrial DNA. We then sequenced all enriched libraries and unenriched libraries from 17 samples using Illumina platforms.
Mitochondrial genome reconstruction and analysis
We prepared raw sequence data for alignment and mapped the filtered reads to the horse reference mitochondrial genome (GenBank: NC_001640.1) and a H. francisci reference mtDNA genome (GenBank: KT168321), resulting in mitogenomic coverage ranging from 5.8× to 110.7× (Supplementary file 1). We were unable to recover equid mtDNA from TMM 34-2518 (the francisci holotype) using this approach (Appendix 2). We supplemented our mtDNA genome sequences with 38 previously published complete equid mtDNA genomes. We constructed six alignment data sets and selected models of molecular evolution for the analyses described below (Appendix 1-table 1, and Supplementary file 1; Heintzman et al., 2017). We tested the phylogenetic position of the NWSL equids (=H. francisci) using mtDNA data sets 1-3 and applying Bayesian (Ronquist et al., 2012) and maximum likelihood (ML; Stamatakis, 2014) analyses. We varied the outgroup, the inclusion or exclusion of the fast-evolving partitions, and the inclusion or exclusion of Hippidion sequences. Due to the lack of a globally supported topology across the Bayesian and ML phylogenetic analyses, we used an Evolutionary Placement Algorithm (EPA; Berger et al., 2011) to determine the a posteriori likelihood of phylogenetic placements for candidate equid outgroups using mtDNA data set four. We also used the same approach to assess the placement of previously published equid sequences (Appendix 2). To infer divergence times between the four major equid groups (Hippidion, NWSL equids, caballine Equus, and non-caballine Equus), we ran Bayesian timetree analyses (Drummond et al., 2012) using mtDNA data set five. We varied these analyses by including or excluding the fast-evolving partitions, constraining the root height or not, and including or excluding the E. ovodovi sequence. To facilitate future identification of equid mtDNA sequences, we constructed, using data set six, a list of putative synapomorphic base states, including indels and substitutions, that define the genera Hippidion, Haringtonhippus, and Equus at sites across the mtDNA genome.
Phylogenetic inference, divergence date estimation, and sex determination from nuclear genomes To test whether our mtDNA genome-based phylogenetic hypothesis truly reflects the species tree, we compared the nuclear genomes of a horse (EquCab2), donkey (Orlando et al., 2013), and the shotgun sequence data from 17 of our NWSL equid samples (Figure 1-source data 2, Appendix 1, Appendix 1-figure 1, and Supplementary file 1). We applied four successive approaches, which controlled for reference genome and DNA fragment length biases (Appendix 1). We estimated the divergence between the NWSL equids and Equus (horse and donkey) by fitting the branch length, or relative private transversion frequency, ratio between horse/donkey and NWSL equids into a simple phylogenetic scenario (Figure 1-figure supplement 3). We then multiplied the NWSL equid branch length by a previous horse-donkey divergence estimate (4.0-4.5 Ma; [Orlando et al., 2013]) to give the estimated NWSL equid-Equus divergence date, following (Heintzman et al., 2015) and assuming a strict genome-wide molecular clock (Figure 1-figure supplement 3). We determined the sex of the 17 NWSL equid samples by comparing the relative mapping frequency of the autosomes to the X chromosome. DNA damage analysis We assessed the prevalence of mitochondrial and nuclear DNA damage in a subset of the equid samples using mapDamage (Jó nsson et al., 2013). Data availability Repository details and associated metadata for curated samples can be found in Supplementary file 1. MTIII and other element measurement data are in Figure 2-source data 1, and the Rscript used for morphometric analysis is in the DRYAD database (Heintzman et al., 2017). MtDNA genome sequences have been deposited in Genbank under accessions KT168317-KT168336, MF134655-MF134663, and an updated version of JX312727. All mtDNA genome alignments (in NEXUS format) and associated XML and TREE files are in the DRYAD database (Heintzman et al., 2017). Raw shotgun sequence data used for the nuclear genomic analyses and raw shotgun and target enrichment sequence data for TMM 34-2518 (francisci holotype) have been deposited in the Short Read Archive (BioProject: PRJNA384940). Nomenclatural act The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new name contained herein is available under that Code from the electronic edition of this article. This published work and the nomenclatural act it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix 'http://zoobank.org/'. The LSID for this publication is: urn:lsid:zoobank.org:pub:8D270E0A-9148-4089-920C-724F07D8DC0B. The electronic edition of this work was published in a journal with an ISSN, and has been archived and is available from the following digital repositories: PubMed Central and LOCKSS. permitting the sampling of fossils from Natural Trap Cave that were originally recovered by Larry Martin, Miles Gilbert, and colleagues, and are presently curated by the University of Kansas Biodiversity Institute. We thank Chris Beard and David Burnham (University of Kansas) for facilitating access to these fossils. Thanks to Tom Guilderson, Andrew Fields, Dan Chang, and Samuel Vohr for technical assistance. Thanks to Greger Larson for providing the base map in Figure 1. 
We thank the reviewers whose comments improved this manuscript. This work used the Vincent J Coates Genomics Sequencing Laboratory at UC Berkeley, supported by NIH S10 Instrumentation Grants S10RR029668 and S10RR027303. PDH, JAC, JDK, MS, and BS were supported by NSF grants PLR-. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Supplementary methods
Yukon sample context and identification
Pleistocene vertebrate fossils are commonly recovered at placer mining localities, in the absence of stratigraphic context, as miners are removing frozen sediments to access underlying gold-bearing gravel (Froese et al., 2009; Harington, 2011). We recovered H. francisci fossils along with other typical late Pleistocene (Rancholabrean) taxa, including caballine horses (Equus sp.), woolly mammoth (Mammuthus primigenius), steppe bison (Bison priscus), and caribou (Rangifer tarandus), which are consistent with our age estimates based on radiocarbon dating (Supplementary file 1). All Yukon fossil material consisted of limb bones that were taxonomically assigned based on their slenderness and are housed in the collections of the Yukon Government (YG).
Radiocarbon dating
We subsampled fossil specimens using handheld, rotating cutting tools and submitted them to the Keck Accelerator Mass Spectrometry (AMS) Laboratory at the University of California (UC), Irvine (UCIAMS) and/or the Center for AMS (CAMS) at the Lawrence Livermore National Laboratory. We extracted collagen from the fossil subsamples using ultrafiltration (Beaumont et al., 2010), which was used for AMS radiocarbon dating. We were unable to recover collagen from TMM 34-2518 (francisci holotype), consistent with the probable middle Pleistocene age of this specimen (Lundelius and Stevens, 1970). We recovered finite radiocarbon dates from all other fossils, with the exception of the two Equus cf. scotti specimens. We calibrated AMS radiocarbon dates using the IntCal13 curve (Reimer et al., 2013) in OxCal v4.2 (https://c14.arch.ox.ac.uk/oxcal/OxCal.html) and report median calibrated dates in Supplementary file 1.
Morphometric analysis of third metatarsals
Third metatarsal (MTIII) and other elemental measurements were either taken by GDZ or ES, or obtained from the literature (Figure 2-source data 1). For morphometric analysis, we focused exclusively on MTIIIs, which exhibit notable differences in slenderness among equid groups (Figure 2-figure supplement 2a; Weinstock et al., 2005). Starting with a data set of 10 variables (following Eisenmann et al., 1988), we compared the loadings of all variables in principal components space in order to remove redundant measurements. This reduced the data set to four variables (GL: greatest length, Pb: proximal breadth, Mb: midshaft breadth, and DABm: distal articular breadth at midline). We visualized the reduced variables using principal components analysis, computed in R (Appendix 1; an illustrative sketch of this type of analysis is given below).
Target enrichment and sequencing
We enriched libraries for equid mitochondrial DNA following the MyBaits v2 protocol (Arbor Biosciences, Ann Arbor, MI), with RNA bait molecules constructed from the horse reference mitochondrial genome sequence (NC_001640.1). We then sequenced the enriched libraries for 2 × 150 cycles on the Illumina HiSeq-2000 platform at UC Berkeley or 2 × 75 cycles on the MiSeq platform at UC Santa Cruz, following the manufacturer's instructions.
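The morphometric workflow described above (principal components analysis on four MTIII variables followed by logistic regression on the leading components) was run by the authors in R; the following is a minimal, illustrative Python sketch of the same idea. The file name, column labels, and group coding are assumptions made for the example and are not the authors' actual data or script.

```python
# Illustrative sketch only: PCA on four MTIII measurements followed by
# logistic regression on the first three principal components.
# The CSV name, column names, and "group" coding are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Four variables retained after removing redundant measurements:
# GL (greatest length), Pb (proximal breadth), Mb (midshaft breadth),
# DABm (distal articular breadth at midline).
df = pd.read_csv("mtiii_measurements.csv")           # hypothetical file
X = df[["GL", "Pb", "Mb", "DABm"]].values
y = (df["group"] == "stilt-legged").astype(int)      # 1 = stilt-legged, 0 = stout-legged

# Standardize, reduce to three principal components, then classify.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),
                      LogisticRegression())

# Cross-validated accuracy gives a sense of how separable the two groups are.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

Whether to scale the variables before PCA, and how many components to retain, are analysis choices; the published analysis used the first three principal components.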
We produced data for the nuclear genomic analyses by shotgun sequencing 17 of the unenriched libraries for 2 × 75 cycles on the MiSeq to produce ~1.1-6.4 million reads per library (Figure 1-source data 2).
Mitochondrial genome reconstruction
We initially reconstructed the mitochondrial genome for H. francisci specimen YG 404.663 (PH047). For sequence data enriched for the mitochondrial genome, we trimmed adapter sequences, merged paired-end reads (with a minimum overlap of 15 base pairs (bp) required), and removed merged reads shorter than 25 bp, using SeqPrep (St. John, 2013; https://github.com/jstjohn/SeqPrep). We then mapped the merged and remaining unmerged reads to the horse reference mitochondrial genome sequence using the Burrows-Wheeler Aligner aln (BWA-aln v0.7.5; Li and Durbin, 2010), with ancient parameters (-l 1024; Schubert et al., 2012). We removed reads with a mapping quality less than 20 and collapsed duplicated reads to a single sequence using SAMtools. We then re-mapped the reads to the same reference mitochondrial genome using the iterative assembler, MIA (Briggs et al., 2009). Consensus sequences from both alignment methods required each base position to be covered a minimum of three times, with a minimum base agreement of 67% (a toy illustration of this consensus rule is sketched below). The two consensus sequences were then combined to produce a final consensus sequence for YG 404.663 (GenBank: KT168321), which we used as the H. francisci reference mitochondrial genome sequence. For the remaining newly analyzed 21 H. francisci, two E. cf. scotti, and two E. lambei samples, we merged and removed reads as described above. We then separately mapped the retained reads to the horse and H. francisci mitochondrial reference genome sequences using MIA. Consensus sequences from MIA analyses were called as described above. The two consensus sequences were then combined to produce a final consensus sequence for each sample, with coverage ranging from 5.8× to 110.7× (Supplementary file 1). We also reconstructed the mitochondrial genomes for four previously published samples.
For all six data sets, we first created an alignment using MUSCLE (v3.8.31; Edgar, 2004). We then manually scrutinized alignments for errors and removed a 253 bp variable number tandem repeat (VNTR) section of the control region, corresponding to positions 16121-16373 of the horse reference mitochondrial genome. We partitioned the alignments into six partitions (three codon positions, ribosomal RNAs, transfer RNAs, and control region), using the annotated horse reference mitochondrial genome in Geneious, following Heintzman et al. (2015). We excluded the fast-evolving control region alignment for data set three, which included the highly diverged dog sequence. For each partition, we selected models of molecular evolution using the Bayesian information criterion in jModelTest (v2.1.6; Darriba et al., 2012) (Appendix 1-table 1).
Appendix 1-table 1. Selected models of molecular evolution for partitions of the first five mtDNA genome alignment data sets. All lengths are in base pairs. Reduced length excludes the Coding3 and CR partitions. For all RAxML analyses the GTR model was implemented. *The TrN model was selected, but this cannot be implemented in MrBayes and so the HKY model was used. EPA: evolutionary placement algorithm; CR: control region.
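The consensus rule described above (a base is called only where at least three reads cover the position and at least 67% of them agree) can be illustrated with a small sketch. The function below is a simplified stand-in for what the MIA/SAMtools-based consensus calling does, not the authors' code.

```python
# Toy illustration of the consensus rule: call a base only if the position is
# covered by >= 3 reads and >= 67% of the covering bases agree; otherwise 'N'.
# Simplified stand-in, not the authors' pipeline.
from collections import Counter

def call_consensus(pileup, min_depth=3, min_agreement=0.67):
    """pileup: list of per-position lists of observed bases, e.g. [['A','A','G'], ...]"""
    consensus = []
    for bases in pileup:
        if len(bases) < min_depth:
            consensus.append("N")
            continue
        base, count = Counter(bases).most_common(1)[0]
        consensus.append(base if count / len(bases) >= min_agreement else "N")
    return "".join(consensus)

# Example: three positions with 4x, 2x, and 3x coverage.
print(call_consensus([["A", "A", "A", "G"],   # 75% agreement -> 'A'
                      ["C", "C"],             # depth < 3     -> 'N'
                      ["T", "G", "C"]]))      # 33% agreement -> 'N'
# -> "ANN"
```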
Phylogenetic analysis of mitochondrial genomes
To test the phylogenetic position of the NWSL equids, we conducted Bayesian and maximum likelihood (ML) phylogenetic analyses of data sets one, two, and three, under the partitioning scheme and selected models of molecular evolution described above. As the outgroup, we selected white rhinoceros (data set one), Malayan tapir (data set two), or dog (data set three). For each of the data sets, we varied the analyses based on (a) inclusion or exclusion of the fast-evolving partitions (third codon positions and control region, where appropriate) and (b) inclusion or exclusion of the Hippidion sequences. We ran Bayesian analyses in MrBayes (v3.2.6; Ronquist et al., 2012) for two parallel runs of 10 million generations, sampling every 1,000, with the first 25% discarded as burn-in. We conducted ML analyses in RAxML (v8.2.4; Stamatakis, 2014), using the GTRGAMMAI model across all partitions, and selected the best of three trees. We evaluated branch support with both Bayesian posterior probability scores from MrBayes and 500 ML bootstrap replicates in RAxML.
Placement of outgroups and published sequences a posteriori
We used the evolutionary placement algorithm (EPA) in RAxML to determine the a posteriori likelihood of phylogenetic placements for eight candidate equid outgroups (two tapirs, six rhinos) relative to the four well-supported major equid groups (Hippidion, NWSL equids, caballine Equus, non-caballine Equus). We first constructed an unrooted reference tree consisting only of the equids from data set four in RAxML. We then analyzed the placements of the eight outgroups, retaining all placements up to a cumulative likelihood threshold of 0.99. We used the same approach to assess the placement of 21 previously published equid sequences derived from 13 NWSL equids.
Divergence date estimation from mitochondrial genomes
To further investigate the topology of the four major equid groups, and to infer divergence times between them, we ran Bayesian timetree analyses in BEAST (v1.8.4; Drummond et al., 2012). Unlike the previous analyses, BEAST can resolve branching order in the absence of an outgroup, by using branch length and molecular clock methods. For BEAST analyses, we used data set five. We did not enforce monophyly. Where available, we used radiocarbon dates to tip-date ancient samples. For two samples without available radiocarbon dates, we sampled the ages of tips. For the E. ovodovi sample (mtDNA genome: NC_018783), which was found in a cave that has been stratigraphically dated as late Pleistocene and includes other E. ovodovi remains that have been dated to ~45-50 ka BP (Eisenmann and Sergej, 2011; Orlando et al., 2009), we used the following lognormal prior (mean: 4.5 × 10^4, log(stdev): 0.766, offset: 1.17 × 10^4) to ensure that 95% of the prior fell within the late Pleistocene (11.7-130 ka BP). For the E. cf. scotti mitochondrial genome (KT757763), we used a normal prior (mean: 6.7 × 10^5, stdev: 5.64 × 10^4) to ensure that 95% of the prior fell within the proposed age range of this specimen (560-780 ka BP; Orlando et al., 2013). We further calibrated the tree using an age of 4-4.5 Ma for the root of crown group Equus (normal prior, mean: 4.25 × 10^6, stdev: 1.5 × 10^5) (Orlando et al., 2013).
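For readers who want to sanity-check calibration priors of this kind, the sketch below computes the implied 2.5% and 97.5% quantiles of an offset lognormal distribution with scipy. It assumes the quoted mean is the real-space mean of the un-offset distribution and that the quoted standard deviation is on the log scale; BEAST's exact parameterization depends on how the prior was specified, so this is an illustration of the check rather than a reproduction of the authors' setup.

```python
# Sketch: quantiles implied by an offset lognormal prior like the one above.
# Assumes "mean" is the real-space mean of the un-offset lognormal and that
# "log(stdev)" is the standard deviation on the log scale; BEAST's own
# parameterization should be matched before drawing any conclusions.
import numpy as np
from scipy.stats import lognorm

mean_real, sigma_log, offset = 4.5e4, 0.766, 1.17e4   # values quoted above
mu_log = np.log(mean_real) - 0.5 * sigma_log**2       # log-scale location

prior = lognorm(s=sigma_log, scale=np.exp(mu_log), loc=offset)
lo, hi = prior.ppf([0.025, 0.975])
print(f"95% of the prior mass lies between {lo:,.0f} and {hi:,.0f} years BP")
```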
To assess the impact of variables on the topology and divergence times, we either (a) included or excluded the fast-evolving partitions, (b) constrained the root height (lognormal prior: mean 1 × 10^7, stdev: 1.0) or not, and (c) included or excluded the E. ovodovi sequence, which was not directly dated. We used the models of molecular evolution estimated by jModelTest (Appendix 1-table 1). We estimated the substitution and clock parameters for each partition, and estimated a single tree using all partitions. We implemented the birth-death serially sampled (BDSS) tree prior. We ran two analyses for each variable combination. In each analysis, we ran the MCMC chain for 100 million generations, sampling trees and parameters every 10,000, and discarding the first 10% as burn-in. We checked log files for convergence in Tracer (v1.6; http://tree.bio.ed.ac.uk/software/tracer/). We combined trees from the two runs for each variable combination in LogCombiner (v1.8.4) and then calculated the maximum clade credibility (MCC) tree in TreeAnnotator (v1.8.4). We report divergence dates as 95% highest posterior density intervals of node heights.
Mitochondrial synapomorphy analysis
We first divided data set six, which consists of all available and complete equid mitogenomic sequences, into three data sets based on the genera Hippidion, Haringtonhippus, and Equus. For each of the three genus-specific alignments, we created a strict consensus sequence, whereby sites were only called if there was 100% sequence agreement, whilst including gaps and excluding ambiguous sites. We then compared the three genus-specific consensus sequences to determine sites where one genus exhibited a base state that is different from the other two genera, or, at five sites, where each genus has its own base state (Appendix 1-table 2-source data 1). In this analysis, we did not make any inference regarding the ancestral state for the identified synapomorphic base states. We identified 391 putative mtDNA genome synapomorphies for Hippidion, 178 for Haringtonhippus, and 75 for Equus (Appendix 1-table 2).
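As a rough illustration of the comparison step just described, the sketch below takes three genus-level consensus sequences of equal aligned length and reports the positions where exactly one genus differs from the other two, plus those where all three differ. It is a simplification of the published analysis, which worked on full aligned mitogenomes with gaps and ambiguity codes; those are only partially handled here.

```python
# Sketch: find positions where one genus-level consensus differs from the
# other two (putative synapomorphies), or where all three genera differ.
# Simplified relative to the published analysis (gaps '-' are treated as
# states, ambiguous/uncalled sites 'N' are skipped).

def putative_synapomorphies(consensus_by_genus):
    """consensus_by_genus: dict of genus -> aligned consensus string (equal lengths)."""
    genera = list(consensus_by_genus)
    seqs = [consensus_by_genus[g] for g in genera]
    assert len({len(s) for s in seqs}) == 1, "sequences must be aligned to equal length"

    per_genus = {g: [] for g in genera}
    three_way = []
    for pos, states in enumerate(zip(*seqs)):
        if "N" in states:
            continue  # skip ambiguous/uncalled sites
        unique = set(states)
        if len(unique) == 2:
            # exactly one genus carries the minority state -> candidate synapomorphy
            for g, s in zip(genera, states):
                if states.count(s) == 1:
                    per_genus[g].append(pos)
        elif len(unique) == 3:
            three_way.append(pos)  # each genus has its own state
    return per_genus, three_way

demo = {"Hippidion":       "ACGT-ACA",
        "Haringtonhippus": "ACGTTACG",
        "Equus":           "ACCTTACA"}
print(putative_synapomorphies(demo))
```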
We called a pseudo-haploidized sequence for the donkey and NWSL equid alignments, by randomly picking a base with a base quality score ≥60 at each position, using SAMtools mpileup. We masked positions that had a coverage not equal to 2× (donkey) or 1× (NWSL equid), and those located on scaffolds shorter than 100 kb (Appendix 1-figure 1, step 4). As the horse, donkey, and NWSL equid genome sequences were all based on the horse genome coordinates, we compared the relative transversion frequency between the donkey or NWSL equids and the horse using custom scripts. We restricted our analyses to transversions to avoid the impacts of ancient DNA damage, which can manifest as erroneous transitions from the deamination of cytosine (e.g., Appendix 2-figures 1-2) (Dabney et al., 2013b). We repeated this analysis, but with the horse and NWSL equids mapped to the donkey genome (the donkey genome coordinate framework).

Appendix 1-figure 1. An overview of the nuclear genome analysis pipeline. A first reference genome sequence (red; step 1) is divided into 150 bp pseudo-reads, tiled every 75 bp for exactly 2× genomic coverage (step 2). These pseudo-reads are then mapped to a second reference genome (blue; step 3), and a consensus sequence of the mapped pseudo-reads is called (step 4). Regions of the second reference genome that are not covered by the pseudo-reads are masked (step 5). For each NWSL equid sample, reads (orange) are mapped independently to the first reference consensus sequence (step 6a) and masked second reference genome (step 6b). Alignments from steps 6a and 6b are then merged (step 7). For alignment coordinates that have base calls for the first reference, second reference, and NWSL equid sample genomes, the relative frequencies of private transversion substitutions (yellow stars) for each genome are calculated (step 8). The coordinates from the second reference genome (blue) are used for each analysis.

For the second approach, using the horse genome coordinate framework, we next masked sites in the horse reference genome that were not covered by donkey reads at a depth of 2×. This resulted in the horse genome and donkey consensus sequence being masked at the same positions (Appendix 1-figure 1, step 5). We then separately mapped the filtered NWSL equid shotgun data to scaffolds longer than 100 kb for the masked horse genome and donkey consensus sequence (Appendix 1-figure 1, step 6), called NWSL equid consensus sequences, and calculated relative transversion frequencies as described above. This analysis was repeated using the donkey genome coordinate framework. Next, for each genome coordinate framework, we combined the two alignments for each NWSL equid sample from approach two to create a union of reads mappable to both the masked coordinate genome and alternate genome consensus sequence (Appendix 1-figure 1, step 7). If a NWSL equid read mapped to different coordinates between the two references, we selected the alignment with the higher map quality score and randomly selected between mappings of equal quality. We then called NWSL equid consensus sequences as above. As this third approach allowed for simultaneous comparison of the horse, donkey, and NWSL equid sequences, we calculated relative private transversion frequencies for each sequence, at sites where all three sequences had a base call, using trialn-report ([Green et al., 2015]; https://github.com/Paleogenomics/Chrom-Compare) (Appendix 1-figure 1, step 8).
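The private transversion counts described above can be illustrated schematically as follows. This Python sketch is not the trialn-report implementation; the sequence names and toy sequences are hypothetical:

# Schematic sketch of counting private transversions among three aligned
# sequences (horse, donkey, NWSL equid). A substitution is a transversion
# when it swaps a purine (A/G) for a pyrimidine (C/T), and it is "private"
# to one sequence when the other two agree on a different base.

PURINES = {"A", "G"}

def is_transversion(a, b):
    return (a in PURINES) != (b in PURINES)

def private_transversions(horse, donkey, nwsl):
    counts = {"horse": 0, "donkey": 0, "nwsl": 0}
    called = 0
    for h, d, n in zip(horse, donkey, nwsl):
        if any(base not in "ACGT" for base in (h, d, n)):
            continue  # only use sites where all three sequences have a base call
        called += 1
        if d == n and h != d and is_transversion(h, d):
            counts["horse"] += 1
        elif h == n and d != h and is_transversion(d, h):
            counts["donkey"] += 1
        elif h == d and n != h and is_transversion(n, h):
            counts["nwsl"] += 1
    # relative frequency: private transversions per jointly called site
    return {k: v / called for k, v in counts.items()} if called else counts

# Toy example (not real data):
print(private_transversions("ACGTACGT", "ACGTACGT", "ACGTACCT"))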
Finally, as a fourth approach and for both genome coordinate frameworks, we repeated approach three with the exception that we divided the NWSL alignments by mapped read length. We split the alignments into 10 bp read bins ranging from 30-39 to 120-129 bp, and discarded longer reads and paired-end reads that were unmerged by SeqPrep. We called consensus sequences and calculated relative private transversion frequencies for each sequence as described above. We only used relative private transversion frequencies from the 90-99 to 120-129 bp bins for divergence date estimates (Appendix 2).

Sex determination from nuclear genomes

We used the alignments of the 17 NWSL equids to the horse genome, from approach one described above, to infer the probable sex of these individuals. For this, we determined the number of reads mapped to each chromosome using SAMtools idxstats. For each chromosome, we then calculated the relative mapping frequency by dividing the number of mapped reads by the length of the chromosome. We then compared the relative mapping frequencies of the autosomes and the X chromosome. As males and females are expected to have one and two copies of the X chromosome, respectively, and two copies of every autosome, we inferred a male if the ratio of the X-chromosome to autosome relative mapping frequencies was 0.45-0.55 and a female if this ratio was 0.9-1.1.

DNA damage analysis

For a subset of nine samples, we realigned the filtered sequence data from the libraries enriched for equid mitochondrial DNA to either the H. francisci (for H. francisci samples) or horse (for E. lambei and E. cf. scotti samples) reference mitochondrial genome sequences using BWA-aln as described above. We also realigned the filtered unenriched sequence data to the horse reference genome (EquCab2) for a subset of six samples using the same approach. We then analyzed patterns of DNA damage in mapDamage v2.0.5 (Jónsson et al., 2013).

Ancient DNA characterization

We selected a subset of samples for the analysis of DNA damage patterns. In all of these samples, we observe expected patterns of damage in both mitochondrial and nuclear DNA, including evidence of the deamination of cytosine residues at the ends of reads, depurination-induced strand breaks, and a short mean DNA fragment length (Dabney et al., 2013b) (Appendix 2-figures 1-2). We note that the sample with the greatest proportion of deaminated cytosines is E. cf. scotti (YG 198.1; Appendix 2-figure 1v-x), which is the oldest sample in the subset (Supplementary file 1).

These results are from the Bayesian and maximum likelihood (ML) analyses of mtDNA data sets 1-3, including either the full or reduced partition sets, and with Hippidion sequences either included or excluded. Topology numbers and node letters refer to those outlined in Appendix 2-figure 3. Bayesian posterior probability support of >0.99 and ML bootstrap support of >95% are in bold for nodes A and B. *Support for nodes that are consistent with topology one in Appendix 2-figure 3. NCs: non-caballines.
Table columns: Outgroup; Partitions; Hippidion?

We further investigated the effect of outgroup choice by using an evolutionary placement algorithm (EPA; [Berger et al., 2011]) to place the outgroup sequences into an unrooted ML phylogeny a posteriori using the same set of variables described above.
We find that the outgroup placement likelihood is increased with the inclusion of Hippidion sequences, and that the only placements with a likelihood of ≥0.95 are consistent with topology one (Appendix 2-figure 3; Appendix 2-table 2), in agreement with the Bayesian and ML phylogenetic analyses. The phylogenetic and EPA analyses demonstrate that outgroup choice can greatly impact equid phylogenetic inference and that multiple outgroups should be used for resolving relationships between major equid groups. We lastly ran Bayesian timetree analyses in BEAST in the absence of an outgroup, whilst including or excluding the fast-evolving partitions, including or excluding the E. ovodovi sequence, and constraining the root prior or not. All BEAST analyses yielded a maximum clade credibility tree that is consistent with topology one (Figure 1 and Appendix 2-figure 3), with Bayesian posterior probability support for the NWSL equid-Equus and Equus clades of 0.996-1.000 (Figure 1-source data 1). Altogether, the phylogenetic, EPA, and timetree analyses support topology one (Appendix 2-figure 3), with NWSL equids falling outside of Equus, and therefore the NWSL equids as a separate genus, Haringtonhippus.

Placement of previously published NWSL equid sequences

To confirm that all 15 previously published NWSL equid samples with available mtDNA sequence data (Barrón-Ortiz et al., 2017; Vilstrup et al., 2013; Weinstock et al., 2005) belong to H. francisci, we either reconstructed mitochondrial genomes for these samples (JW277, JW161; [Weinstock et al., 2005]), placed the sequences into a ML phylogeny a posteriori using the EPA whilst varying the partitioning scheme and inclusion or exclusion of Hippidion (Appendix 2-table 3), or both. For JW277 and JW161, the mitochondrial genomes were consistent with those derived from the newly analyzed samples (Figure 1-figure supplement 1). For eight other NWSL equid mitochondrial sequences (JW125, JW126, JW328, EQ3, EQ9, EQ13, EQ22, EQ41; [Barrón-Ortiz et al., 2017; Vilstrup et al., 2013; Weinstock et al., 2005]), including samples from Mineral Hill Cave and Dry Cave (Supplementary file 1), the EPA strongly supported a ML placement within the NWSL equid clade (cumulative likelihood of 0.974-1.000). The EPA placed four sequences from Dry Cave, San Josecito Cave, and the Edmonton area (EQ1, EQ4, EQ16, EQ30; [Barrón-Ortiz et al., 2017]) within the NWSL equid clade, albeit with lower support (cumulative likelihood of 0.703-0.854). We note that in the case of EQ4 from Edmonton, this may be due to the very limited available sequence data (117 bp). For EQ1, EQ16, and EQ30, the placement with the second greatest support is the branch leading to NWSL equids (cumulative likelihood of 0.138-0.259), which, assuming high fidelity of the sequence data, may indicate that these samples fall outside of, but close to, sampled NWSL equid mitochondrial diversity. However, the EPA placed the remaining sample (MS272; [Vilstrup et al., 2013]) on the branch leading to NWSL equids with strong support (likelihood: 1.000). We therefore explored whether this is real or if the published sequence for MS272 was problematic. We first tested the EPA on eight other equid mitochondrial sequences (E. ovodovi, n = 3; Hippidion devillei, n = 5), which grouped as expected from previous analyses (likelihood: 0.999-1.000; Appendix 2-table 3; [Orlando et al., 2009]). We then used our mitochondrial genome assembly pipeline to reconstruct a consensus for MS272 from the raw data used by Vilstrup et al.
(2013), which resulted in a different sequence that was consistent with other NWSL equids. To confirm this new sequence, we used the original MS272 DNA extract for library preparation, target enrichment, and sequencing. The consensus from this analysis was identical to our new sequence. We sought to understand the origins of the problems associated with the published MS272 sequence. We first applied our synapomorphy analysis. For the called bases, we found that the published MS272 sequence contained 0/384 diagnostic bases for Hippidion, 124/164 for Haringtonhippus, and 16/70 for Equus (Appendix 1-table 2-source data 1). We infer from this analysis that the published MS272 sequence is therefore ~76% Haringtonhippus and that ~23% originates from Equus. The presence of Equus synapomorphies could be explained by the fact that the enriched library for MS272 was sequenced on the same run as ancient caballine horses (Equus), thereby potentially introducing contaminating reads from barcode bleeding (Kircher et al., 2012), which may have been exacerbated by alignment to the modern horse reference mitochondrial genome with BWA-aln and consensus calling using SAMtools (Vilstrup et al., 2013). The presence of caballine horse sequence in the published MS272 mtDNA genome explains why previous phylogenetic analyses of mitochondrial genomes have recovered NWSL equids as sister to caballine Equus with strong statistical support (Vilstrup et al., 2013).

Resolving the phylogenetic placement of NWSL equids using nuclear genomes

The horse and donkey genomes are representative of total Equus genomic diversity (Jónsson et al., 2014), and so, if NWSL equids are Equus, we should expect their genomes to be more similar to either horse or donkey than to the alternative. Initial analyses based on approach one (see Appendix 1) were inconclusive, with some NWSL equid samples appearing to fall outside of Equus (higher relative transversion frequency between the NWSL equid and the horse or donkey than between the horse and donkey) and others inconsistently placed in the phylogeny, appearing most closely related to horse when aligned to the horse genome and most closely related to donkey when aligned to the donkey genome (Figure 1-source data 2). We then used approaches two and three in an attempt to standardize between the horse and donkey reference genomes, and therefore reduce potential bias introduced from the reference genome. In the latter union-based approach, mapping should not be disproportionately sensitive to regions of the genome where NWSL equids are more horse- or donkey-like. These approaches, however, were not successful, but we noted that the relative private transversion frequencies for the coordinate genome and NWSL equid sequences correlated with mean DNA fragment length (Appendix 2-figure 4 and Figure 1-source data 2). We therefore used approach four to control for the large variation in mean DNA fragment length between NWSL equid sequences (Appendix 2-figure 2 and Figure 1-source data 2), which is likely due to a combination of DNA preservation and differences in the DNA extraction and library preparation techniques used (Figure 1-source data 2). This allowed for direct comparison between the NWSL equid samples, which showed a consistent pattern across read length bins (Figure 1-figure supplement 2, Figure 1-source data 1).
The relative private transversion frequencies for both the coordinate genome and NWSL equid sequences increase with read length until the 90-99 bp bin, at which point the coordinate genome and alternate sequence relative private transversion frequencies converge (defined as a ratio between 0.95 and 1.05) and the NWSL equid relative private transversion frequencies reach a plateau at 1.40-1.56× greater than that of the horse or donkey (Figure 1-figure supplements 2-3, Figure 1-source data 1).

We note that all three Gypsum Cave samples are inferred to be female, have statistically indistinguishable radiocarbon dates, and identical mtDNA genome sequences (Figure 1-figure supplement 1b, Supplementary file 1). However, the skull was found in room four of the cave, whereas the femur and metatarsal were found in room three. The available evidence therefore suggests that these samples represent at least two individuals. Intriguingly, we further note that, across all 17 NWSL equid samples, the relative mapping frequency for chromosomes 8 and 13 is appreciably greater than for the remaining autosomes (Appendix 2-table 4-source data 1). This may suggest that duplicated regions of these chromosomes are present in NWSL equids, as compared to the horse (E. caballus).

Designation of a type species for Haringtonhippus

We sought to designate a type species for the NWSL equid genus, Haringtonhippus, using an existing name, in order to avoid adding to the unnecessarily extensive list of Pleistocene North American equid species names (Winans, 1985). For this, we scrutinized nine names that have previously been assigned to NWSL equids in order of priority (date the name was first described in the literature). We rejected names that were solely based on dentitions, as these anatomical features are insufficient for delineating between equid groups (Groves and Willoughby, 1981). The earliest named species with a valid, diagnostic holotype is francisci Hay (1915). On the basis of taxonomic priority, stratigraphic age, and cranial and metatarsal comparisons (see main results and below), we conclude that francisci Hay (1915) is the most appropriate name for Haringtonhippus. We note that this middle Pleistocene species is also small, like our late Pleistocene specimens.

The nine examined names were: conversidens Owen, 1869: a small species based upon a partial palate from Tepeyac Mountain, northeast of Mexico City, Mexico. The type fossil has no reliably diagnostic features other than small size, and no more diagnostic topotypal remains are available. For this reason, the validity of the name has previously been challenged by some authors (e.g.,

absolutely and as a percentage of the skull length. The rostrum of the GCS (Gypsum Cave skull) is also absolutely narrower; the fHS (francisci holotype skull), despite being the smaller skull, is transversely broader at the i/3. The palatine foramina are positioned medial to the middle of the M 2 in the GCS, whereas they are medial to the M 2 -M 3 junction in the fHS. Viewed laterally, the orbits of the GCS have more pronounced supraorbital ridges than those of the fHS. The latter skull also exhibits somewhat stronger basicranial flexion than the GCS. Dentally, the GCS exhibits arcuate protocones, with strong anterior heels and marked lingual troughs in P 3 -M 3 ; the fHS has smaller, triangular protocones with less pronounced anterior heels and no lingual trough or groove.
These characters are not thought to result from different ontogenetic stages, since both specimens appear to be young adults (all teeth in wear and tall in the jaw). Both the GCS and the fHS have relatively simple enamel patterns on the cheek teeth, with few evident plications. Not only are the observed differences between these two specimens unlikely to result from ontogeny, they are also unlikely to result from sex, since both skulls appear to be from females, given the absence of canine teeth. The inference of the GCS being female is further supported by palaeogenomic data (Appendix 2-table 4).

Attempt to recover DNA from the francisci holotype

We attempted to retrieve endogenous mitochondrial and nuclear DNA from the holotype of francisci Hay (TMM 34-2518), to directly link this anatomically derived species name with our palaeogenomically derived genus name Haringtonhippus, but were unsuccessful. After sequencing a library enriched for equid mitochondrial DNA (see Appendix 1), we could only align 11 reads to the horse reference mitochondrial genome sequence with BWA. Using the basic local alignment search tool (BLASTn), we show that these reads are a 100% match to human and therefore likely originate from contamination. We repeated this approach using MIA and aligned 166 reads, which were concentrated in 20 regions of the mitochondrial genome. We identified these sequences as human (n = 18, 96-100% identity), cow (n = 1, 100%), or Aves (n = 1, 100%), consistent with the absence of endogenous mitochondrial DNA in this sample. We further generated ~800,000 reads from the unenriched library for TMM 34-2518, and followed a modified metagenomic approach, outlined in Graham et al. (2016), to assess if any endogenous DNA was present. We mapped the reads to the horse reference genome (EquCab2), using the BWA-aln settings of Graham et al. (2016), of which 538 reads aligned. We then compared these aligned reads to the BLASTn database. None of the reads uniquely hit Equidae or had a higher score to Equidae than non-Equidae, whereas 492 of the reads either uniquely hit non-Equidae or had a higher score to non-Equidae than Equidae. These results are consistent with either a complete lack, or an ultra-low occurrence, of endogenous DNA in TMM 34-2518.

Morphometric analysis of third metatarsals

Stilt- and stout-legged equids can be distinguished with high accuracy (98.2%; logistic regression) on the basis of third metatarsal (MTIII) morphology (Figure 2c, Appendix 1-table 2-source data 1, and Appendix 2-table 4-source data 1), which has the potential to easily and confidently distinguish candidates from either group prior to more costly genetic testing. We note that future genetic analysis of ambiguous specimens that cross the 'middle ground' between stilt- and stout-legged regions of morphospace could open the possibility of a simple length-vs-width definition for these two morphotypes. Furthermore, we can highlight potential misidentifications, such as the two putative E. lambei specimens that fall within stilt-legged morphospace (Figure 2c), which could then be tested by genetic analysis. Intriguingly, an Old World E. ovodovi (stilt-legged; MT no. 6; [Eisenmann and Sergej, 2011]) and a New World E. cf. scotti (stout-legged; CMN 29867) specimen directly overlap in a stout-legged region of morphospace (Figure 2c), which could indicate that either this E. ovodovi specimen was misidentified or that this species straddles the delineation between stilt- and stout-legged morphologies.
H. francisci occupies a region of morphospace distinct from caballine/stout-legged Equus, but overlaps considerably with hemionine/stilt-legged Equus (Figure 2c). The holotype of H. francisci (TMM 34-2518) is notably slender; it has a greater MTIII length than most other H. francisci specimens but slightly smaller width/breadth measurements. This holotype is surpassed in these dimensions only by the quinni Slaughter et al. holotype, which has itself previously been synonymized with francisci Hay (Lundelius and Stevens, 1970; Winans, 1985). This suggests a potentially larger range of MTIII morphology for H. francisci than exhibited by the presently assigned specimens. We observe that this diversity may be influenced by geography, with H. francisci specimens from high-latitude Beringia having shorter MTIIIs relative to those from the lower-latitude contiguous USA. We note that two New World caballine Equus from Yukon, E. cf. scotti and E. lambei, appear to separate in morphospace (Figure 2c), primarily by MTIII length, supporting the potential delineation of these two taxa using MTIII morphology alone.
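The stilt- versus stout-legged discrimination described above could, in principle, be reproduced with a simple logistic regression on MTIII measurements. The sketch below is illustrative only: the measurements are invented placeholders, not the morphometric data analyzed in this study:

# Illustrative sketch: classify equid third metatarsals (MTIII) as
# stilt-legged (1) or stout-legged (0) from length and midshaft width,
# using logistic regression on hypothetical measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: MTIII length (mm), midshaft width (mm); values are made up.
X = np.array([
    [265, 28], [270, 27], [260, 29], [255, 30],   # slender (stilt-legged)
    [240, 38], [235, 40], [245, 37], [230, 41],   # robust (stout-legged)
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Classify a new, hypothetical specimen
new_specimen = np.array([[258, 31]])
print("P(stilt-legged) =", model.predict_proba(new_specimen)[0, 1])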
Do presence and location of annular tear influence clinical outcome after lumbar total disc arthroplasty? A prospective 1-year follow-up study

Background
Lumbar total disc arthroplasty is often performed in patients with axial back pain. There are multiple etiologies for axial back pain, including disc degeneration and annular tears. The location of these annular tears can vary, producing differing preoperative symptomatology. Intraoperatively, disruptions in the annulus are identifiable, and it has been suggested that patients with discrete annular tears may have better clinical outcomes after surgery. The purpose of this study was to investigate whether the presence and location of annular tears have an effect on clinical outcomes after lumbar total disc arthroplasty.

Methods
Patients undergoing a single-level anterior disc replacement from L3-S1 at a single site by a single surgeon were evaluated preoperatively for the presence or absence of annular tears with magnetic resonance imaging. All patients were part of either the ProDisc (n = 41) (Synthes, Paoli, Pennsylvania) or Activ-L (n = 19) (Aesculap [B. Braun Melsungen AG], Tuttlingen, Germany) lumbar prospective clinical trials. In those patients with annular tears, the location of the tear (central, paracentral, or lateral) was documented. Patients were assessed at 6 and 12 months after lumbar total disc arthroplasty with the Oswestry Disability Index (ODI), visual analog scale (VAS) score for back pain, VAS score for leg pain, and radiographic imaging. All radiographic evaluations were conducted by an attending neuroradiologist and an attending spinal surgeon, and reliability testing was performed. An analysis of variance was performed among the 3 anatomic locations of annular tears.

Results
A total of 60 patients were included and had complete 12-month follow-up. The prevalence of annular tears among all patients was 42% (n = 25). Outcome data in patients without annular tears were as follows: ODI, 66% preoperatively and 26% postoperatively; VAS score for back pain, 8.0 preoperatively and 2.6 postoperatively; and VAS score for leg pain, 2.9 preoperatively and 1.2 postoperatively. Among those patients with tears, the prevalence of central tears was 80%, the prevalence of paracentral tears was 12%, and the prevalence of lateral tears was 8%. Outcome data in patients with central tears were as follows: ODI, 66% preoperatively and 26% postoperatively; VAS score for back pain, 7.8 preoperatively and 2.6 postoperatively; and VAS score for leg pain, 5.2 preoperatively and 0.5 postoperatively. Outcome data in patients with paracentral tears were as follows: ODI, 86% preoperatively and 59% postoperatively; VAS score for back pain, 8.8 preoperatively and 3.3 postoperatively; and VAS score for leg pain, 5.0 preoperatively and 5.4 postoperatively. Outcome data in patients with lateral tears were as follows: ODI, 65% preoperatively and 26% postoperatively; VAS score for back pain, 9.2 preoperatively and 0.2 postoperatively; and VAS score for leg pain, 1.4 preoperatively and 0.7 postoperatively. In those patients with paracentral tears, there was a significantly higher incidence of postoperative radicular symptoms both from an intensity standpoint and from a duration standpoint. Other complications did not vary among those patients with or without annular tears.
Conclusions
Although patients with annular tears and patients without annular tears improve after lumbar artificial disc replacement, those with central annular tears or without tears have significantly lower disability scores than those with paracentral tears or lateral tears, whose outcome scores showed significantly less improvement (P ≤ .03). In particular, patients with central tears have less postoperative leg pain than those with paracentral annular tears. In this study the presence or absence of an annular tear on magnetic resonance imaging was not a significant predictive factor for clinical outcome. Further investigation regarding the effects of paracentral annular tears and surgical techniques should be explored.

Axial back pain is often a consequence of disc injury rather than musculotendinous or ligamentous strain, and it is often debilitating to patients. Experimental evidence suggests that disc injury results not necessarily from an acute traumatic etiology but rather from an internal disruption of the annular lamellae as a result of a chronic degenerative process. Violations of disc integrity can be seen on magnetic resonance imaging (MRI) as separations in the annulus or at the vertebral insertions. Annular tears are most easily assessed during discography, where contrast can egress from the nucleus beyond the annular boundary. Annular tears have been seen in over one-third of asymptomatic patients, so their presence alone does not necessitate intervention. Significant controversy exists between the diagnostic imaging evidence and the clinical manifestation of such disc degeneration. Chemical and structural changes leading to an annular tear can cause pain through the stimulation of sinuvertebral nerves innervating the outer one-third of the annulus or chemical irritation of adjacent nerve roots. 1,2 Disc arthroplasty has been shown to be an effective treatment for discogenic lower-back pain in patients with degenerative disease. 3-6 Preoperative MRI is used to evaluate the level of degeneration and relative health of the discs, potentially also identifying any annular tears as high-intensity zones (HIZs) on T2-weighted imaging. The role of MRI in predicting clinical outcome has been identified for nonsurgical treatment of diseases of the lumbar spine. 7,8 The goal of this prospective study is to evaluate (1) the prognostic value of the presence of an annular tear on MRI in patients undergoing lumbar total disc replacements and (2) the prognostic value of its location (if there is an annular tear).

Patient evaluation

Prospective data were collected for single-level lumbar total disc replacements performed at Yale New Haven Hospital, New Haven, Connecticut, from July 2003 through July 2008. All patients had back pain at a single level of the lumbar spine, with or without leg pain. After a minimum of 6 months of unresponsiveness to adequate conservative treatment, patients underwent randomization. Patients participating in the ProDisc-L (Synthes, Paoli, Pennsylvania) trial were randomized to lumbar total disc replacement or fusion. Patients participating in the Activ-L (Aesculap [B. Braun Melsungen AG], Tuttlingen, Germany) trial were randomized to ProDisc or Activ-L lumbar total disc replacement. Exclusion criteria were as follows: at least 3 mm of translation or at least 5° of angulation, presence of osteophytes, disc height at least 2 mm smaller compared with the adjacent level, herniated nucleus pulposus, or facet joint degeneration.
Inclusion criteria were age between 18 and 60 years, willingness and physical ability to participate in the study for a minimum of 1 year of follow-up, Oswestry Disability Index (ODI) greater than 40%, visual analog scale (VAS) score for back pain greater than 40 mm (on a 100-mm VAS), and anterior accessibility of the lumbar spine. MRI evaluation for the presence and location of annular tears was conducted preoperatively by an attending neuroradiologist and an attending spinal surgeon. The presence of an annular tear was dichotomized into absent or present. An annular tear was defined as a fissure or focal hyperintensity within the posterior part of the annulus fibrosus without focal extrusion on T2-weighted imaging. When an annular tear was present, its location (central, lateral, or paracentral) was noted by the same experienced clinicians (Fig. 1). Localization of tears was defined in the same way disc herniations are defined (i.e., a lateral location is anatomically extraforaminal). Reliability testing showed high agreement between the observers (94% concordance, κ = 0.89).

Clinical outcome

Disability and pain scores were acquired with the ODI and VAS for back pain at 3 time points: preoperatively and at 6 and 12 months postoperatively. Patients participating in the Activ-L trial had VAS scores for leg pain compiled at the same time. Only cases with complete 1-year follow-up were used for this study. In addition to questionnaires, patients underwent clinical and radiographic evaluation at each clinical visit, and complications were noted. Intraoperative complications, duration of the procedure, and blood loss were also noted. All procedures were performed by the senior author. Bias regarding the primary outcome measurements was minimized by relying on patients' own questionnaire responses. Data were analyzed by an independent examiner who was not involved in the surgical procedures or postoperative follow-up.

Statistical analysis

Data collection was performed with Excel 2007 for Windows (Microsoft, Redmond, Washington). Statistical analysis was performed with SPSS for Windows, version 17.0 (SPSS, Inc., Chicago, Illinois). For each cohort, analyses of differences in continuous data over time were performed with paired t-tests. Analysis of variance was used for comparisons between the cohorts. Fisher's Least Significant Difference (LSD) post hoc tests were used to compare clinical outcomes among the different tear locations. Bonferroni correction was applied to guard against type I error. A P value of less than .05 was considered statistically significant.

Demographics

During the study period, 68 patients were randomized to lumbar total disc replacement. Eight patients did not fulfill all follow-up criteria because of distant places of domicile. Preoperative radiographic evaluation of included patients resulted in 35 patients without annular tears and 25 patients with disruption on MRI. Of the patients without annular tears, 25 (71%) were men, as compared with 13 (52%) of those with annular tears. The mean age in the no-tear cohort was 39 years (SD, 7 years), and that in the tear cohort was 36 years (SD, 8 years). Of the observed annular tears, 80% were localized centrally, 12% paracentrally, and 8% laterally. There were no patients with multiple tears in different locations. ProDisc total disc replacement was performed in 24 patients (69%) without annular tears and 17 patients (68%) with tears on MRI.
In the tear group, 15 patients (75%) with central tears, 1 (33%) with a paracentral tear, and 1 (50%) with a lateral tear received ProDisc implants. Sixty-three percent of all procedures were performed at the lumbosacral level. All other procedures were performed at lumbar level L4-5. An artificial disc was used at L5-S1 in 20 patients (57%) without annular tears and 18 patients (72%) with annular tears. The frequencies of operative level L5-S1 by subcohort were 15 (75%) in central-tear patients, 2 (67%) in paracentral-tear patients, and 1 (50%) in lateral-tear patients.

Clinical outcomes

In both patients with annular tears and those without annular tears, mean ODI and VAS scores decreased over time with statistical significance (P < .001) (Figs. 2-4). ODI scores decreased from 66% to 26% in patients without annular tears (P < .001) and from 68% to 30% in patients with annular tears (P < .001). In the subcohorts of patients with central, paracentral, and lateral tears, the ODI scores decreased from 66% to 26% (P < .001), from 86% to 59% (P = .101), and from 65% to 26% (P = .016), respectively. VAS scores for back pain showed a statistically significant decrease over time both in patients with annular tears and in those without annular tears, from 8.0 to 2.6 (P < .001) and from 8.1 to 2.5, respectively. In the central-tear subcohort, the decrease in VAS scores for back pain, from 7.8 to 2.6, was significant (P < .001). In patients with tears in the lateral or paracentral locations, the decrease was not statistically significant over time. Over time, the decrease in mean VAS scores for leg pain was statistically significant for the tear cohort and the central-tear subcohort, from 4.4 to 1.5 (P = .024) and from 5.2 to 0.5 (P = .003), respectively. Analysis of variance revealed no statistically significant differences between the cohort with annular tears and the cohort without annular tears. No differences were seen for either mean clinical outcome scores at 12 months' follow-up (P ≥ .109) or mean decrease over time (P ≥ .238). LSD post hoc tests showed that patients with paracentral tears had significantly higher ODI scores at 12 months postoperatively than central-tear patients and patients without annular tears (P = .022 and P = .019, respectively).

Fig. 2. Mean ODI scores (shown as percentages) in patients with annular tears (divided into 3 cohorts based on tear location) and patients without annular tears. Data were prospectively compiled before and 12 months after total lumbar disc replacement.

Bonferroni post hoc tests did not show significant differences in ODI scores between these patients with annular tears at different locations (P = .113 and P = .130, respectively). LSD post hoc tests showed that patients with paracentral annular tears had significantly higher VAS scores for back pain at 12 months postoperatively compared with patients with centrally localized annular tears (P = .019) and with patients without tears (P = .030). Bonferroni post hoc tests did not show significant differences in VAS scores between these patients with annular tears at different locations (P = .183 and P = .115, respectively). No significant differences in clinical outcome scores were seen between patients with lateral tears and other patients. In those patients with paracentral tears, there was a higher incidence of postoperative radicular symptoms both from an intensity standpoint and from a duration standpoint.
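The between-group comparisons reported above follow the analysis plan described in the Methods (one-way analysis of variance, then LSD and Bonferroni post hoc tests in SPSS). The following Python sketch illustrates that kind of comparison on hypothetical ODI values; it is not the study's analysis code or data:

# Minimal sketch of the between-cohort comparison: one-way ANOVA across
# tear-location groups, then unadjusted pairwise t-tests (similar in spirit
# to Fisher's LSD) with an optional Bonferroni correction.
# The 12-month ODI values below are hypothetical placeholders.
from itertools import combinations
from scipy import stats

odi_12mo = {
    "no_tear":     [22, 30, 25, 28, 24, 27],
    "central":     [24, 29, 23, 27, 26],
    "paracentral": [55, 62, 60],
    "lateral":     [25, 27],
}

# Omnibus test
f_stat, p_anova = stats.f_oneway(*odi_12mo.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise comparisons (unadjusted ~ LSD; Bonferroni multiplies p by the number of tests)
pairs = list(combinations(odi_12mo, 2))
for g1, g2 in pairs:
    t_stat, p_raw = stats.ttest_ind(odi_12mo[g1], odi_12mo[g2])
    p_bonf = min(p_raw * len(pairs), 1.0)
    print(f"{g1} vs {g2}: p = {p_raw:.4f} (Bonferroni-adjusted: {p_bonf:.4f})")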
No cases of loosening, mechanical failure, infection, or fusion at the affected segment occurred in this series. Furthermore, no intraoperative complications or neurovascular complications were identified during follow-up.

Discussion

Carragee and Kim 9 showed that the morphometric features of disc herniation and the spinal canal on MRI are powerful predictors of clinical outcome after surgical treatment of disc herniations. In this prospective study, the presence of annular tears on MRI examination had no prognostic value for clinical outcome after treatment of degenerative disc disease with artificial disc replacement. However, patients with paracentral localization of annular tears had worse outcomes than patients with central annular tears or patients without annular tears. Paracentral localization of annular tears on MRI cannot be generalized as a negative prognostic factor for clinical outcome after lumbar total disc replacement, however, when accounting for Bonferroni correction. Lumbar disc arthroplasty has emerged as an alternative to lumbar fusion in the treatment of degenerative disc disease and discogenic back pain. Although the effectiveness of lumbar total disc replacement is still being evaluated, it is important to define the predicting and complicating factors for the patient's clinical course. The indications for these procedures include degenerative changes in the vertebral disc, which, by definition, include annular tears.

Fig. 3. Mean VAS scores for back pain in patients with annular tears (divided into 3 cohorts based on tear location) and patients without annular tears. Data were prospectively compiled before and 12 months after total lumbar disc replacement.

Fig. 4. Mean VAS scores for leg pain in patients with annular tears (divided into 3 cohorts based on tear location) and patients without annular tears. Data were prospectively compiled before and 12 months after total lumbar disc replacement.

The sole presence of an annular tear is not an indication for disc replacement. In our study an annular tear was seen in 25 patients (41%) with complete follow-up. A particular difficulty for this type of study is the relative ubiquity of tears in the general population. Ernst et al. 10 reported 11 annular tears in 30 asymptomatic volunteers (36.7%), whereas 7 years earlier, Stadnik et al. 11 found annular tears in 20 of 36 asymptomatic volunteers (56%) at the same medical center. In this study, reliability testing for observing annular tears on MRI showed high agreement. This corresponds to the interobserver and intraobserver agreement reported by Arana et al., 12 who showed almost perfect agreement for the diagnosis of annular tears by MRI. There is a general paucity of information regarding the natural history of annular tears. Disruption of the normal annular lamellae or internal disc architecture results in annular tears on MRI examination. Nuclear material can leak out through this violation and cause chemical irritation of, or mass effect on, the adjacent nerve root, resulting in radiculopathy. The extrusion of nuclear material results in a poor capacity for the annulus to repair itself and thus continues to manifest as chronic back pain. 13,14 Conversely, there is documented correlation of annular tears eliciting pain during discography in symptomatic patients, with positive predictive values over 85%. 15,16
Therefore the tear may not necessarily be considered an acute abnormality and more likely represents a phase in the internal degeneration of a disc, as described by Kirkaldy-Willis et al. 17 Our study suggests that the presence or absence of tears does not predict clinical outcome after treatment of degenerative disc disease. Munter et al. 14 examined annular tears on serial MRI scans to determine whether there was any ability to date annular tears and to describe their natural radiologic course. They reported that MRI findings of annular tears do not change over time and, therefore, no conclusion can be made regarding the chronicity of the lesion. The diagnostic value of HIZs on T2-weighted MRI for reliably identifying annular tears is still being evaluated. Annular tears are most easily assessed during discography, where contrast can egress from the nucleus beyond the annular boundary. Discography, however, remains an invasive procedure. Lam et al. 18 showed a significant correlation of lumbar disc HIZs with pain reproduction during discography and implied that the tear is a pain indicator. Peng et al. 16 reported similar results, with all 17 discs with HIZs showing painful reproduction and abnormal morphology on discography. Earlier reports are conflicting, however, because Carragee et al. 19 found that the same percentage of asymptomatic and symptomatic discs with HIZs were painful during discography. This led the authors to conclude that HIZs do not reliably indicate the presence of symptomatic disc disruption. 19 The current study did not investigate correlations between HIZs and preoperative discography results when analyzing the outcomes of disc replacement.

In conclusion, patients with or without annular tears improve after lumbar anterior disc replacement. In addition, patients with central annular tears or without tears have significantly lower disability scores than those with paracentral tears, whose outcome scores were significantly worse (P ≤ .03). In particular, patients with central tears improve more, with less postoperative leg pain, than patients with paracentral annular tears. Further investigation of the effects of paracentral annular tears and of surgical techniques is warranted.
Examining the relationship of career crafting, perceived employability, and subjective career success: the moderating role of job autonomy

Career crafting has emerged as a significant construct in the field of career development, with the potential to significantly boost individuals' overall work satisfaction. This study aimed to examine whether career crafting could improve individuals' subjective career success and perceived employability. Career crafting is an inevitable course of career-related actions taken to achieve career satisfaction. Based on proactive behavior theory, it is hypothesized that career crafting would have an impact on individuals' subjective career success and perceived employability, with job autonomy as a moderator. Using a cross-sectional study design, data were collected via a Google Forms survey from 224 employees working in various fields in Pakistan, and the data were analyzed using structural equation modeling (SEM) in AMOS. The results indicate that career crafting has a significant positive relationship with subjective career success and perceived employability. Furthermore, job autonomy also has a significant positive relationship with subjective career success and perceived employability. However, the moderating role of job autonomy was not supported. This study provides robust insights for career practitioners, academicians, and individuals. Overall, the study expands the literature on the novel notion of career crafting and career outcomes; additionally, it encourages organizations to include career crafting in HR policies, helping them to enhance the well-being of employees in their career development.

Introduction

Technological advancements, the globalization of businesses, and increased competition in the workplace have altered perceptions of careers, and employees must be aware of emerging market trends and the demand for job-related skills or they may forfeit their jobs. As a result of the environment's rapid transformation, occupations are becoming more fluid and versatile, and individuals are changing their jobs and organizations more frequently than in the past. In the past few years, the concept of career crafting has emerged as a popular approach to career development and management. With the changing nature of work and the increasing complexity of career paths, career crafting offers individuals the opportunity to take a more active role in shaping their own career paths. Career crafting occurs when individuals adopt anticipatory behaviors to improve career-related outcomes by achieving person-career fit [55]. Those who constantly reflect on their professional accomplishments are ambitious and motivated to advance their careers. Hence, by emphasizing proactive career behaviors and competencies, individuals can steer and customize their careers [162].

In the past, career paths were clear and predictable; however, they are becoming increasingly dynamic and complex, requiring individuals to take control of their careers to be successful in their professional lives [61]. Individuals are responsible for making changes to their professions if they wish to survive and thrive in today's flexible and demanding workplaces. The current rapid changes in the marketplace require that employees be well-prepared, possess refined capabilities and skills, and be able to meet the needs of their employers; thus, career crafting is essential for their success [162].
Career crafting is rarely addressed in the academic literature; most empirical evidence is based on job crafting, although the concept has been suggested for practical use [126,153]. According to Shockley et al. [153], career crafting must be studied alongside other career outcome variables such as career satisfaction and employability. Organizations desire a transition toward soft career development options, such as mentoring employees, developing plans for staff career development, and employees self-determining career crafting options integrated with the objectives and needs of the organizations [149]. This will increase staff retention, human capital, and the capacity development of employees, resulting in employees' motivation and satisfaction. When employees engage in the process of career crafting and meet the demanding requirements of their work, it is believed that they are highly committed to their jobs, which is reflected positively in their job attitude [100,111]. Therefore, career crafting is essential for the development of a positive work attitude, and it has a positive impact on employee engagement and motivation. Accordingly, this study aims to examine the relationship between career crafting, perceived employability, and subjective career success, as well as the moderating role of job autonomy in the relationships between career crafting, subjective career success, and perceived employability.

Problem statement

Career crafting is a substantial factor for career success and sustainable employment. Existing literature indicates that engaging in proactive behaviors to develop career competencies has a positive impact on key career outcomes, including job satisfaction and employability [30,134]. The current body of literature is insufficient to establish whether career crafting is the underlying cause of increased subjective career success and perceived employability. Furthermore, the factors that influence the relationships between career crafting, subjective career success, and perceived employability are unclear.

Likewise, job crafting is associated with challenging job assignments, improved access to onsite facilities, work engagement, and job performance [145,164]. Proactive behaviors in the workplace serve as a guide for employees to engage in appropriate job crafting throughout their professional lives [112]. Career crafting is an emerging concept that arose from the convergence of job crafting, career competencies, and career self-management [162].

Lastly, to the best of the researchers' knowledge, there is a dearth of research examining the relationship between career crafting, subjective career success, and perceived employability in the Pakistani context, with job autonomy as a moderating factor. This highlights the significance of conducting further investigations into the concept of career crafting. Furthermore, the concept of career crafting is closely linked to career self-management, job crafting, and professional competences. However, career crafting is separate from these constructs and provides novel perspectives. Consequently, engaging in the research endeavor of career crafting will result in tangible contributions within the realm of career development. Therefore, the research addresses the following questions:

1. How is career crafting related to perceived employability and subjective career success in the context of Pakistan?
2. Does job autonomy moderate the relationship between career crafting, perceived employability, and subjective career success?
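Research question 2 concerns moderation. Although the study itself tests this with structural equation modeling in AMOS, the underlying idea can be illustrated with a simple moderated regression in which job autonomy enters as an interaction term. The Python sketch below uses simulated placeholder data, not the study's survey responses:

# Hedged illustration of a moderation test: does job autonomy change the
# strength of the career crafting -> subjective career success relationship?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 224  # matches the study's sample size, but the values are simulated
career_crafting = rng.normal(0, 1, n)
job_autonomy = rng.normal(0, 1, n)
success = 0.5 * career_crafting + 0.3 * job_autonomy + rng.normal(0, 1, n)

X = np.column_stack([
    career_crafting,
    job_autonomy,
    career_crafting * job_autonomy,  # interaction term carries the moderation effect
])
model = sm.OLS(success, sm.add_constant(X)).fit()
print(model.summary())  # a significant interaction coefficient would indicate moderation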
Hence, the study aims to:

• Examine the relationship of career crafting, perceived employability, and subjective career success, and
• Scrutinize the moderating role of job autonomy in the career crafting, perceived employability, and subjective career success relationships.

Career crafting

The concept of career crafting is crucial in today's contemporary workplaces. According to Arthur et al. [14], a person's career is the sequence of professional experiences that they have as their lives progress. The nature of a career is constantly changing, intriguing, varying for each individual, and subjective [108]. The traditional view of careers as enduring and certain has lost prominence [92,115]. According to a survey by [105], over 77% of contemporary employees prefer to manage their own careers, and there will be an increase in career-related movement across organizational and job boundaries. In light of the increasing exclusion of employees' career development by corporations, there has been a shift toward a proactive approach that encourages individuals to direct their own professional trajectories, which has various positive effects on both the individuals and their organizations [9,41,121]. De Vos et al. [54] proposed the term "career crafting," which refers to individuals engaging in proactive career behaviors to achieve individual-career suitability and key career outcomes in light of rapidly changing marketplace dynamics, recent requirements pertinent to employment, and industrial shifts. Proactive behavior is defined as when people take initiative, act to change the status quo for improvement, or decide and take action to make a fresh start [47]. Proactive behaviors include all preplanned actions individuals take that help them achieve their career goals [109,113,158].

Similarly, Tims and Akkermans [162] argued that proactive career self-management behaviors are necessary for achieving individual-career alignment. Career crafting has two dimensions: (a) proactive career construction and (b) proactive career reflection; the first dimension indicates when employees are engaged in interactions and networking to advance their careers, whereas the second indicates when employees reflect on their careers proactively and pursue the search for motivation and career-relevant skills. Individuals who possess proactive characteristics and abilities are ambitious and motivated to redefine and shape their careers.

According to De Vos et al.
[54], the primary proactive behaviors that individuals engage in to nurture the lifespan of their careers are those that pertain to career crafting. Career crafting develops when job crafting, career competences, and career self-management come together. These concepts promote proactivity in individuals, which is a driver of work and career success [162]. The concepts of job crafting, career competencies, and career self-management are defined as follows.

First, job crafting concerns the self-initiating behaviors of individuals for the optimization of their jobs in line with their skills, capabilities, knowledge, and preferences; certain work aspects are changed by individuals initiating these behaviors themselves without direct support from others [19]. It is assumed that employees engage in job crafting to establish a good alignment between their personal characteristics and their jobs, and that this eventually increases the individual's fit with a job [165]. For instance, some individuals perform very well when they are given deadlines, while others require clarity and instructions from their supervisors [162].

Second, career competencies are the set of skills that an individual needs to advance in his or her career. According to Akkermans et al. [5], career competencies are defined as an employee's set of skills, knowledge, and capabilities that they have mastered, all of which contribute to the employee's career development. According to Akkermans et al. [8] and Blokker et al. [30], career competencies are the key tools that assist individuals in enhancing their careers, and career competencies positively increase career outcomes such as learning and employability.

Thirdly, self-management of one's career places an emphasis on the individual's capacity to advance his or her professional standing through the adoption of proactive behaviors. Career self-management comprises two components, cognitive and behavioral. The cognitive component involves the development of awareness and in-depth thoughts dynamically related to an individual's career aspirations, such as the drafting of a career goal and the development of a career plan [57,59]. On the other hand, the behavioral component assists in the self-initiation of behaviors for an individual's career management, such as the identification of career opportunities and interaction with others at networking events. According to King [109], career self-management can be exemplified by several behaviors, including promotion of oneself, management of career boundaries, and networking.

Subjective career success

Subjective career success is defined by Arthur et al. [15] as the achievement of the desired work-related outcome over time at any point in a person's professional life. In the past, career success was classified as either subjective or objective. Subjective career success, also known as career satisfaction, refers to an individual's self-assessment of his or her career progress; it is implicit and complex in nature and concerns how employees perceive, evaluate, and respond to their careers [131]. Objective career success, in contrast, is directly observable, easily measurable, and verifiable; it is based on clear criteria that can be compared and measured to determine career success, such as salary increases and promotions [2].
In the past, objective career success was the primary focus of career studies, and its main components included individual achievement and job position within an organization [15,32]. However, Hall [98] emphasized the important role of career satisfaction in boundaryless careers, which is aimed at employees' feelings of achieving satisfaction and organizational goals [150]. Furthermore, the research of Shockley et al. [153] reveals many additional facets of a career, such as authenticity, personal life, development, and growth. In turn, subjective career success is distinct from objective career success.

Subjective career success is measured primarily by job and career satisfaction [102]; however, it can also be measured by work-life balance, career fulfillment, and job satisfaction [127,130]. Al-Hussami et al. [10] state that few studies have been conducted on subjective career success, and limited evidence is available in the literature regarding whether employees' voluntary acceptance of change is a result of subjective career success. Therefore, it is asserted that subjective career success is more significant than objective career success, that individuals' inner career satisfaction is an interesting topic in the field of careers, and that this study will provide productive insight.

Perceived employability

Perceived employability refers to an individual's perception of their ability to obtain and maintain employment in the current and future job market [176]. De Vos et al. [60] argue that perceived employability implies that individuals are primarily responsible and are key actors for their work and career development. Therefore, perceived employability motivates, guides, and assists individuals to stay on the proper career path.

The concept of perceived employability has gained popularity among career practitioners, academics, and recruitment policymakers, as well as in other disciplines, including psychology, management, education, human resources, and career development [176]. The increasing job insecurity and the multidirectional, rapidly changing knowledge economy [37] have made employability necessary to prepare individuals to pursue challenging career opportunities [75]. On this basis, it is argued that in competitive labor markets with rising unemployment, perceived employability is even more significant [79]. Employability is represented by flexibility, which helps the employed population seek and obtain job opportunities that may support job mobility within an employer or between organizations. It is referred to as internal perceived employability when individuals change positions within their current organization. In contrast, when employees quit one organization for another, this is referred to as external perceived employability. Both aspects of perceived employability are considered significant for employment [9].
Employability can be divided into objective and perceived (subjective) employability. The objective dimension captures facts about a person's professional life, such as education and position in the labor market. The subjective dimension is individuals' self-assessment of their ability to obtain new employment within or outside their organization. Employability scholars have asserted that, given the continuous changes occurring in organizations, perceived employability should be given more weight than objective employability [26,49], as individuals are likely to base their decisions and actions on their perceptions rather than on objective facts [171].

Perceived employability produces desired career outcomes: it leads to lucrative employment opportunities [79], and it is related to employee well-being [49]. Perceived employability therefore contributes not only to employees' professional and personal success but also to their lifelong learning.

Job autonomy
Job autonomy refers to the degree of independence, discretion, and substantial freedom employees have in planning their work schedules and determining the procedures to be used in their jobs [86,95,118]. Employees who are skilled, knowledgeable, and able to manage their working style can devise appropriate work plans and schedules. It is proposed that employees who are autonomous in their work are less affected by centralization in their organizations, and that the freedom and flexibility provided in their jobs enable them to contribute to their organizations and to enjoy and be fully engaged in their work [65].

Allowing employees to determine their own work signals that they are valued and helps them develop a passion for their jobs. Employees with a high level of job autonomy are more likely to be risk takers, problem solvers, and productive thinkers, and hence more innovative than other employees [161]. Greater job autonomy leads to better work and efficiency in organizations, whereas employees with low job autonomy hesitate to accept risky or challenging assignments because they are aware that their decisions could negatively affect their employment [177].

Previous studies have investigated the relationship between job autonomy and the psychological outcomes of employed individuals: a lack of job autonomy reduces workers' sense of personal accomplishment [125], and individuals experience job burnout when they lack job control and are less involved in decision making [137]. It has also been argued that the use of technological tools in organizations may produce negative outcomes; in particular, stress-related outcomes can be reduced by giving workers greater job autonomy, allowing them to schedule their work independently, obtain the necessary resources efficiently, and exercise the desired degree of control [42,146]. High job demands increase employees' stress levels, whereas delegating autonomy helps them prioritize tasks and manage their mental well-being; employees with high autonomy take regular breaks and recover from work-related stress [4]. In the literature, job autonomy, work-life balance, and workload have been linked to organizational performance [68,154].
Job autonomy also fosters work-life balance by helping to delineate the boundaries between work and family life, and empirical evidence suggests that employees with high levels of job autonomy are better at resolving conflicts between task priorities and family obligations [13,141].

Proactive behavior theory
Career crafting is theoretically grounded in Crant's [47] proactive behavior theory. Proactivity, or proactive behavior, occurs when individuals anticipate and act on matters that affect them personally and/or their surroundings [85]. Crant [47] posited that individuals who take initiatives that change their current situations, or who create new ones, are proactive. When employees engage in career planning, they take initiative on career-related issues and act in self-defined directions rather than reacting passively to imposed change [78]. Providing networking opportunities to new staff should be part of proactive career management [47].

Proactive employees anticipate career development activities such as seeking personal and professional development opportunities, participating in career-oriented initiatives, and adjusting their lifestyles, whereas those who are not proactive are passive, reactive, and hesitant to change [47]. In other words, proactive behavior is the foundation of career crafting, and individuals with proactive traits are likely to succeed in tailoring their careers over time. Proactivity is significantly associated with job crafting: proactive individuals take initiative regardless of the specific situation, whether responding to an emergency, managing personal relationships, or networking at particular events [19]. Similarly, career crafting combines career development measures taken during career transitions to achieve career success. Plomp et al. [134] examined the relationship between proactive personality and employee well-being as mediated by career competencies and job crafting, and found that individuals' proactivity is not limited to either work or career outcomes but is integrated with both concurrently. Employees who exhibit proactive behaviors continually enhance their work-related competencies and set long-term career success goals, and proactive individuals show high levels of creativity and are typically enthusiastic about their work [12].

Career crafting thus refers to proactive actions that contribute to important career outcomes. It is categorized into proactive career construction and proactive career reflection [162] and consists of career planning, communication, seeking opportunities for career development, mastering job-related skills, and engaging in challenging work tasks. Research on proactivity also distinguishes the focus of proactive actions [25]: proactive behaviors can be directed toward the individual (pro-self), a unit or team (pro-social), or the organization (pro-organizational). In line with the empirical work of Tims and Akkermans [162], this study investigates proactive behaviors aimed at achieving individuals' career objectives, that is, pro-self behaviors aimed at obtaining a decent job and building a successful and rewarding long-term career.
Theoretically, career crafting is formed when job crafting, career self-management, and career competencies are combined; however, the integration of these three concepts has not been previously investigated [162].These three concepts provide us with abundant results in the field of career studies, but their scope is limited, whereas career crafting is comprehensive and provides us with immense scientific insights.In the literature, the concepts of job crafting, career self-management, and career competencies are established independently; however, in empirical studies, they are integrated, for example, job crafting and career competencies [9] and career competencies and career self-management [57,59].The currently available literature on these three concepts will aid in exploring and comprehending the nature of career crafting. Due to the novelty of the concept of career crafting and the lack of empirical data on the relationship between career crafting and key career outcomes, this study is guided by the theory of proactive behaviors and will shed light on proactive personalities, career crafting, and career-related outcomes. Career crafting and subjective career success Subjective career success (career satisfaction) is obtained when employees proactively steer the wheel of their careers and anticipate proactive career behaviors, actions, and career planning [162].Likewise, career competencies, which include capabilities, knowledge, and skill sets relevant to careers, are crucial for increasing the level of subjective career success [5], and career academics have identified planning, communication, and reflection as some of the most essential career competencies that serve as the foundation for career success [162]. According to Chiaburu et al. [43], employees with proactive personality traits demonstrated strong subjective career management behaviors.Similarly, when employees improve their career competencies, they will experience greater job satisfaction [182].As stated previously, career competencies are a theoretical component of career crafting, and it is believed that career crafting has a positive correlation with career satisfaction.Moreover, career competencies can foster ambition in employees and motivate them to proactively craft their jobs, resulting in subjective career success [9].Thus, proactive career behaviors such as self-career management, job crafting, motivation, and networking are hypothesized to be associated with a high level of subjective career success.In addition, it is hypothesized that career crafting is positively associated with subjective career success. H1 Career crafting is positively associated with subjective career success. Career crafting and perceived employability Individuals who engage in job crafting will improve their perceived employability skills [162]; when they invest in their own capacity building by completing job responsibilities, this will lead to new career opportunities and help employees evaluate their position on the job market.It is hypothesized that when employees are given demanding job tasks, they are trained in a challenging work environment, and they are led in an effective manner, their employability will increase because they will acquire new skills and expand their thinking ability in the workplace.According to Plomp et al. 
[136], integrating job challenges and resources into the job crafting process strengthens an individual's capacity to acquire up-to-date, refined knowledge as well as generic and networking skills that promote career flexibility and personal development. Career growth is facilitated when employees organize their working environments so that they can plan, gather resources, and successfully handle obstacles as they arise. Empirical research shows that employees' proactive career behaviors are associated with outstanding work and career outcomes, such as a rise in perceived employability [9,134]. Job insecurity, by contrast, is negatively related to perceived employability [51], while according to Berntson et al. [27], perceived employability is positively related to self-efficacy, career success, work engagement, job satisfaction, organizational commitment, and life satisfaction. Previous studies also report that employability is positively correlated with job crafting [34,52,163]. This study investigates the relationship between career crafting and perceived employability. The literature strongly indicates that employees with proactive personalities plan and customize their careers and thereby develop their employability qualities. Based on this empirical evidence, it is assumed that career crafting will have a positive impact on perceived employability.

H2 Career crafting will be positively associated with perceived employability.

Job autonomy and subjective career success
Job autonomy is crucial for attaining subjective career success [143], and it increases individual-career fit. Individual-career fit refers to the compatibility and alignment of an individual's career experiences with his or her skills, values, and talents [133]. Career autonomy helps employees make changes in their careers by enabling them to make the desired choices and achieve career compatibility, which in turn increases their subjective career success [46]. When employees pursue careers that are aligned with their self-perceptions, they experience satisfaction and achieve career outcomes that are personally significant to them [133].

High job autonomy promotes employees' sense of work responsibility and empowerment [95], whereas low job autonomy results in passive attitudes and low engagement [77]. This implies that a lack of interest in one's employment caused by low job autonomy may result in career dissatisfaction. The relationship between job autonomy and subjective career success has received little scholarly attention, so little is known about it; however, a relationship between career autonomy and subjective career success has been demonstrated by Colakoglu [46]. It is therefore hypothesized that job autonomy is positively associated with subjective career success.

H3 Job autonomy will be positively associated with subjective career success.

Job autonomy and perceived employability
Employees with a high degree of job autonomy are likely to be aware of their responsibility for the issues they face on the job and for those that affect their employment outcomes [53]. Freedom in the workplace empowers employees and gives them the chance to develop employability skills. Because the job is the main connection between individuals and their employers, it is crucial to focus on the job context in which they work [80]; job autonomy is considered a crucial job characteristic [48] and has the capacity to influence individuals' proactivity [91,167].
The literature suggests that job autonomy is one of the most prevalent job resources [73], implying that employees with a high level of job autonomy have the opportunities and resources to improve their employability skills and can make decisions that improve their employment conditions. Job autonomy is unavoidable, particularly when job responsibilities are developed and delegated to employees. Creating autonomous conditions for the working population is highly advantageous, as it increases proactivity and interest in one's employment and lowers turnover [80]. Rather than focusing solely on organizational achievements and ignoring employees' career aspirations, employers should grant autonomy to their employees and collaborate with them on their career development in the face of continuously increasing and uncertain market changes [23].

The relationship between job autonomy and perceived employability has not been thoroughly explored in the literature; this research is intended to address that gap and to discover whether job autonomy is related to perceived employability. It is therefore hypothesized that job autonomy has a positive relationship with perceived employability.

H4 Job autonomy will be positively associated with perceived employability.

Job autonomy as moderator
The literature suggests that individuals with high job autonomy have greater freedom, discretion, and ability to design their careers according to their unique preferences, needs, and talents [66,67,180]. Job autonomy is expected to be an important contextual factor shaping employees' proactive behaviors and work engagement [80], and individuals with proactive skills can use job autonomy to develop the professional skills they need [53]. Job autonomy has been studied as a contextual variable in many career- and work-related studies; for instance, its moderating effects were examined in [39,80,140,181]. Furthermore, job autonomy improves one's job through top-down processes, giving employees more freedom, power, and discretion, as well as a sense of mastery, to accomplish their career objectives [17,39,83,119,120]. Career objectives vary from individual to individual and are implicit in nature: some may look for objective career success, while others seek career satisfaction. Employees' self-determination to maintain their career trajectories and achieve career goals is likely to be enhanced by engaging in self-initiated actions such as job crafting [64,69,83,87,118]. Based on the literature, job autonomy promotes self-initiative and proactive behavior and plays a facilitative role in employees' career development, because without freedom in their jobs employees have limited exposure and their chances of professional growth may decrease. It is assumed that job autonomy influences the positive relationships of career crafting with perceived employability and subjective career success, and the following two hypotheses are posited:

H5 Job autonomy will moderate the relationship between career crafting and subjective career success.

H6 Job autonomy will moderate the relationship between career crafting and perceived employability.

Accordingly, Fig. 1 depicts the research model of the present study.
Purpose and context of the study
This study examines the relationship of career crafting with key career outcomes, namely subjective career success and perceived employability, as well as the moderating role of job autonomy, in Pakistan. Career crafting is posited to be an unavoidable topic with relevance across all industries and occupations [162]. To attain a better understanding of career crafting, data were collected from employees working across various organizations and sectors.

The study is guided by a positivist approach, which explores social phenomena quantitatively, selecting appropriate participants and aiming for generalizability of the results [124]. The sample size was determined using G*Power software [72], which is recommended for the social and business sciences [97]. Based on the study model and the instructions given by Memon et al. [128], G*Power indicated a sample size of 107. Surveys were distributed online using Google Forms, and 224 responses were received from the targeted respondents.

Data collection procedure
The research survey was created in Google Forms and carefully reviewed for errors. A pre-test with 30 respondents was used to verify that the questionnaire was appropriate and usable for this study. The questionnaires were then distributed online to individuals employed in the corporate, government, NGO/INGO, education, banking, and other sectors. The target respondents work in geographically dispersed locations and organizations and could not be reached in person owing to limited resources. Respondents were informed beforehand that their data would remain strictly confidential and would be used solely for research purposes. The survey was sent to participants through e-mail, personal messages, and social networks (Facebook, WhatsApp, and LinkedIn). After the responses were received, the data were entered into MS Excel and SPSS for analysis.

Measures
Data were collected using structured questionnaires, with responses rated on 5-point Likert scales ranging from strongly agree = 5 to strongly disagree = 1. All construct items were adopted from published research and adapted to fit the study. The questionnaire consisted of five sections. The first section covered the demographic profile, including age, gender, qualification, job experience, and job sector. The second section contained the career crafting items adopted from [162]; the construct has 8 items, with sample items "I set goals for where I want to be one year from now" and "I create an overview of my talents and competencies." The third section measured perceived employability with 4 items adopted from [62], a scale successfully employed in other studies across various employment contexts [93,173]. Section five measured subjective career success using the Greenhaus and Callanan [88] career satisfaction scale, which consists of 5 items; a sample item is "I am satisfied with the success I have achieved in my career."
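For illustration, the kind of a priori sample-size calculation that G*Power performs for a regression-based model can be reproduced with SciPy. The sketch below is an assumed illustration, not the authors' actual computation: the effect size, alpha, power, and number of predictors are placeholder values, not the parameters used in this study.

```python
# Minimal sketch of an a priori sample-size search for the multiple-regression
# "R^2 deviation from zero" F-test, mirroring the logic of G*Power.
# f2, alpha, power and n_predictors are illustrative assumptions only.
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.15, alpha=0.05, power=0.80, n_predictors=3):
    """Smallest sample size whose achieved power reaches the target power."""
    for n in range(n_predictors + 2, 10_000):
        df1 = n_predictors            # numerator degrees of freedom
        df2 = n - n_predictors - 1    # denominator degrees of freedom
        nc = f2 * n                   # noncentrality parameter (lambda = f2 * N)
        f_crit = f_dist.ppf(1 - alpha, df1, df2)
        achieved = 1 - ncf.cdf(f_crit, df1, df2, nc)
        if achieved >= power:
            return n, achieved
    return None

print(required_n())
```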
Sample characteristics
This section presents the demographic profile of the respondents across five variables: gender, age, qualification, work experience, and job sector. Table 1 indicates that 60% of the respondents are male and 40% are female. Half of the respondents (50.2%) are aged between 26 and 33 years, while only 1% are 50 years old or above. Work experience is an important factor for this study, helping to place career satisfaction alongside the other study variables: Table 1 shows that the largest share of responses, 55% (116 responses), came from professionals with 1-5 years of experience, whereas individuals with 16-20 years of experience responded least, accounting for 5.3% (11 responses). Qualification was another factor considered in the analysis, and the distribution of respondents' qualifications is reported in Table 1. Job sector was also treated as an important aspect, in order to gain empirical insight into the career crafting process and the career satisfaction of those working in different industries and sectors [162].

Descriptive statistics
Table 2 presents the descriptive statistics, giving the lowest and highest values of the responses obtained on the Likert scale together with the means and standard deviations of the variables. As shown in Table 2, career crafting has the highest mean in the data set at 4.67 (SD = 0.80), whereas the dependent variables, subjective career success and perceived employability, have means of 3.6 (SD = 0.90) and 3.5 (SD = 0.91), respectively. The moderating variable, job autonomy, has a mean of 3.23 (SD = 0.96).

Normality test
The normality of the data was tested by checking the skewness and kurtosis of the variable scales. The skewness values range between −0.587 and −0.237, and the kurtosis values range between −0.403 and 0.061. These values fall within the cutoff criterion of ±1.96 [148], so the data can be treated as normally distributed (Table 3).

Reliability
To assess the internal consistency of the scales, Cronbach's alpha is recommended to be equal to or higher than 0.70 [132]. Table 4 indicates that the Cronbach's alpha values exceed this minimum threshold, so internal consistency (reliability) is established.

Correlation analysis
Correlation analysis was used to determine the positive or negative associations between the study variables. Pearson's correlation is widely used to identify linear relationships between constructs; the correlation coefficient ranges between −1 and +1, with positive values and their significance level indicating a positive association and negative values indicating a negative one [110]. Table 5 shows that career crafting has a positive and significant relationship with subjective career success (r = 0.419, p < 0.001) and with perceived employability (r = 0.339, p < 0.001). Table 5 also indicates that job autonomy has a significant positive relationship with subjective career success (r = 0.421, p < 0.001) and with perceived employability (r = 0.405, p < 0.001).
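The descriptive, normality, reliability, and correlation checks reported above can be reproduced with standard Python tooling. The following sketch is an assumed illustration rather than the authors' SPSS workflow; the construct column names (CC, SCS, PE, JA) and the data frame are hypothetical.

```python
# Minimal sketch of descriptive statistics, skewness/kurtosis, Cronbach's alpha
# and Pearson correlations for survey constructs.  Assumes `df` holds one
# column of construct mean scores per respondent (names are illustrative).
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from raw item scores of a single scale (one column per item)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def describe_construct(series: pd.Series) -> dict:
    """Mean, SD, skewness and (excess) kurtosis for one construct score."""
    return {
        "mean": series.mean(),
        "sd": series.std(ddof=1),
        "skewness": stats.skew(series, bias=False),
        "kurtosis": stats.kurtosis(series, bias=False),
    }

# Pearson correlation between two constructs, e.g. career crafting (CC)
# and subjective career success (SCS):
# r, p = stats.pearsonr(df["CC"], df["SCS"])
```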
Confirmatory factor analysis
Confirmatory factor analysis (CFA) is used to test the relationships among the variables in the measurement model, including the independent and dependent constructs. CFA is a special type of structural equation modeling (SEM) aimed at determining the fit of the measurement model before regressions among latent variables are estimated [169]. Because established scales were adopted in this study, CFA was run to check the validity of the adopted scales (Table 6).

In SEM, the reliability of the study variables is examined through composite reliability (CR), whereas convergent and discriminant validity are assessed through AVE and MSV [96]. Reliability and validity were examined using the master validity tool developed and recommended by Gaskin and Lim [81]. The CR values of all four study variables range from 0.786 to 0.918, above the minimum recommended threshold of 0.60 proposed by Fornell and Larcker [74]. AVE indicates the convergent validity of the variables [155]; the AVE values of all constructs were examined and found to be above the general criterion of 0.50, except for the career crafting (CC) construct. Lam [117] and Fornell and Larcker [74] note that AVE is a strict test of the measurement model and that convergent validity may be judged on the basis of the CR values alone; on this basis, the CC construct was retained despite its AVE value of 0.350. Discriminant validity is assessed by comparing the square root of the AVE of each construct (presented on the diagonal of the table) with the inter-construct correlations [74]; since the square roots of the AVE values exceed the corresponding correlations, discriminant validity is established [96].

Assessment of model fit
Model fit was assessed using empirical statistics including CMIN/DF, CFI, SRMR, and RMSEA [103]. The CFA results are recommended to fall within the following ranges:
• Chi-square ratio: CMIN/DF < 5 [160]
• Comparative Fit Index: CFI ≥ 0.90 [103]
• Standardized Root Mean Square Residual: SRMR ≤ 0.08 [103]
• Root Mean Square Error of Approximation: RMSEA < 0.06 [103]
Table 7 indicates that the estimated values of the chi-square ratio, CFI, SRMR, and RMSEA meet these cutoff criteria [81], indicating a good fit of the measurement model (Fig. 2).
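The composite reliability and AVE figures discussed above follow the usual Fornell-Larcker formulas applied to standardized factor loadings. The sketch below illustrates those formulas only; the loading values are placeholders, not the estimates obtained in this study.

```python
# Minimal sketch of composite reliability (CR) and average variance extracted
# (AVE) from standardized factor loadings, following Fornell and Larcker.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    error_var = 1 - lam ** 2                      # indicator error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

example_loadings = [0.72, 0.68, 0.81, 0.75]        # hypothetical standardized loadings
cr = composite_reliability(example_loadings)       # compare with the 0.60/0.70 threshold
ave = average_variance_extracted(example_loadings) # compare with 0.50; sqrt(AVE) vs correlations
print(round(cr, 3), round(ave, 3))
```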
Hypotheses testing
Table 8 shows the results of the hypothesis tests. The constructs were standardized using the Z-score method in SPSS, and the proposed hypotheses were then examined in AMOS 23.0.

H1 predicted that career crafting would be positively related to subjective career success. The results support Hypothesis 1: career crafting has a significant positive relationship with subjective career success (β = 0.339, p < 0.001).

H2 proposed that career crafting would be positively related to perceived employability. The results indicate that career crafting has a positive and significant relationship with perceived employability (β = 0.260, p < 0.001).

H3 proposed that job autonomy would be positively associated with subjective career success; this hypothesis is supported (β = 0.319, p < 0.001).

H4 predicted that job autonomy would be positively related to perceived employability; the results show a positive and significant association (β = 0.236, p < 0.001).

H5 predicted that job autonomy would moderate the relationship between career crafting and subjective career success, such that the relationship would be stronger when job autonomy is high than when it is low. The interaction term of career crafting and job autonomy was non-significant (β = 0.043, p = 0.465), so this hypothesis was not supported.

H6 proposed that job autonomy would moderate the relationship between career crafting and perceived employability, such that the association would be stronger when job autonomy is high than when it is low. The interaction term of career crafting and job autonomy was non-significant (β = −0.070, p = 0.268), and the hypothesis was not supported (Fig. 3).

Discussion
The empirical evidence in the literature indicates that career crafting is an individual behavior that helps ensure career sustainability over time. Career crafting is a relatively new concept that has not yet been thoroughly investigated. The current study examined the empirical relationships between career crafting and key career outcomes, namely career satisfaction (subjective career success) and perceived employability.

The first hypothesis was supported: there is a positive relationship between career crafting and subjective career success. This result is in line with [9,54], who confirmed that employees who engage in career crafting activities in advance achieve success in their jobs and ultimately in their careers, and with the finding that individuals who take responsibility for crafting and redefining their careers effectively manage their career success [57,59]. It also accords with King [109], who asserted that proactive career behaviors lead employed individuals to achieve both life and career success, and with Tims and Akkermans [162], who found a clear positive relationship between career crafting and subjective career success and maintained that career-related competencies such as communicating, planning, and career reflection are leading factors in obtaining a successful career. Moreover, previous studies show that career competencies lead to career success [114] and career satisfaction [70].

Similarly, the second hypothesis was supported: career crafting has a positive relationship with perceived employability, confirming that employees who engage in the career crafting process and look at their careers proactively gain enhanced perceived employability and become able to make career transitions within their organizations (internal perceived employability) or outside them (external perceived employability).
These results are in line with the study of De Vos et al. [58], which showed that when employees take part in the career crafting process, it yields the intended career results, such as career success and employability. Furthermore, individuals' job crafting actions enhance employability; such actions include pursuing challenging job assignments, investing in job-related self-development, and accessing learning opportunities, all of which add value to an individual's employability in the marketplace [76]. Career competencies and perceived employability are positively related: when employees are aware of what they seek in their careers, they can draw on career mentors and find the right career opportunities, eventually enhancing both their external and internal employability [6]. The literature also provides ample evidence of a positive relationship between increasing job resources, demanding job assignments, and employability [7,34,135,163]. Similarly, Lysova et al. [122] support these results, having demonstrated a positive link between crafting actions for career development and perceived employability.

Regarding job autonomy and subjective career success, the third hypothesis is also supported: the results of the current study indicate that job autonomy has a positive relationship with subjective career success. Similarly, Colakoglu [46] examined the relationship between career autonomy and subjective career success and found that career autonomy plays a crucial role in obtaining subjective career success, also referred to as career satisfaction. A plausible explanation is that high career autonomy allows individuals to develop and steer their careers to achieve individual-career fit, resulting in increased career satisfaction; this is particularly important for individuals in dual careers. Individuals with career autonomy can avoid obstacles in their careers and pursue their aspirations efficiently, and a certain degree of workplace freedom allows them to make decisions about work assignments independently, contributing to subjective career success.
The fourth hypothesis is also confirmed: the results indicate a positive association between job autonomy and perceived employability. The literature indicates that autonomy enhances employees' responsibility for their job assignments, feedback enhances the usefulness of employees' knowledge about their work activities, and variety makes work feel more meaningful [95]. Autonomy, variety, and feedback, together referred to as job resources, are positively associated with extrinsic and intrinsic job opportunities, which ultimately creates a positive link with perceived employability [172]; extrinsic job opportunities are tangible compensation and benefits, whereas intrinsic job opportunities refer to employees' development and growth. The association between these resources (autonomy, feedback, and variety) and perceived employability can be understood further through empirical studies based on the job demands-resources (JD-R) model [18]. Perceiving a high level of job autonomy may also help employees gain the attention and trust of their organization's top management, whose commitment continues through the provision of the skills and knowledge employees need to maintain their employability [129,166]. Lastly, a recent longitudinal study among 238 Dutch gastroenterologists [174] found that a high level of job autonomy is associated with employability, whereas a low level of job autonomy and an increased workload negatively affect it.

With regard to the moderation of job autonomy, Hypotheses 5 and 6 stated that job autonomy would moderate the association between career crafting and subjective career success and the association between career crafting and perceived employability. Neither hypothesis was supported, and the likely reasons are discussed below.

First, it is argued that findings on the relationships between career constructs and individual and organizational behavioral outcomes may differ depending on the cultural context and organizational setting in which the empirical study is conducted [138]. It appears that employees' first priority is to achieve high objective career success, such as salary raises and promotion, and that they pay less attention to the career development that would enhance their subjective career success.

Second, every employee works at a different managerial level and has unique career objectives, and some organizations have been adversely affected by changes stemming from the worldwide economic crisis. Job autonomy can facilitate individuals' career success and employability, but it does not necessarily moderate the relationship between career crafting activities, career satisfaction, and perceived employability; it has been argued that career management is ultimately the responsibility of individuals [36].

Third, job autonomy is considered an organizational resource [45]; however, employees may not be able to access and utilize this resource well, and it is imperative that individuals engage in proactive career behaviors [162] to achieve career success, as employers may not support their employees' career development. Employees are expected to take responsibility for their own professional grooming, and the initiative and effort they invest in career advancement will shape the subjective career success they attain [22].
Fourth, employees in public/government sector organizations in Pakistan rarely resign from their jobs and generally serve until retirement. Government organizations provide high job security, but employees lack proper career development plans, and they appear indifferent, whether or not they are given job autonomy, to taking initiatives for their career success. In the field of education, individuals compete for the highest academic achievements, such as research publications and advanced qualifications like the PhD, which makes every individual's career objectives different; organizations cannot meet each individual's unique demands for career success, so it becomes the individual's responsibility to steer his or her career successfully, which may not require the assistance of job autonomy.

Moreover, the rapid technological advances taking place in industry worldwide have caused concern among employees about their ability to remain employed, as organizations may implement staff reduction policies [44]. Employers and employees may also hold divergent viewpoints: employers may expect employees to remain with the organization while honing their job-related competencies, whereas employees may seek employment with other organizations that offer higher salaries and benefits [60]. This tension is commonly referred to as the "ongoing war for talent," reflecting employers' concerns, and is also called the management paradox [50]: organizations may wish to enhance their employees' career competencies but also weigh the risk that their trained staff will join competitors before the invested value is recovered. The management paradox confronts career practitioners everywhere, yet it has been neither studied nor addressed in the literature [60]. Organizations are thus in a quandary over whether to assist their staff with career development and employability skills, because they are concerned that their trained human capital may be attracted away by competitors [21]. In such circumstances, job autonomy may simply not moderate the relationships among the variables.

Individuals employed in NGOs/INGOs polish their job-relevant competencies and move to other organizations when offered higher compensation and perks, or relocate to large cities for exposure to broader career opportunities. Many organizations in the development sector do not provide a promising working environment, with limited work independence and an interrupted work-life balance due to heavy workloads. Employees in the corporate sector, by contrast, receive market-competitive salaries and can enhance their careers and employability skills [71], but these companies have strict objectives and deliverables to be achieved on time, which may prevent employees from giving proper attention to their careers and from receiving the work freedom they need for important on-the-job achievements.
Overall, these increasingly unfavorable circumstances further intensify employees' job insecurity, and their perceptions may change adversely for both organizational and environmental reasons [94]; employees should therefore act proactively. The shifts taking place in all spheres, including technology, the economy, and business workplaces, are strongly pushing industries to amend their policies and recruit a skilled workforce with expertise in their fields, and organizations are urged to devise sustainable solutions that are mutually beneficial for the employed population and for organizations. In this way employers will retain their valued human capital, and employees will have the opportunity to work on their career development by availing themselves of training and development opportunities, receiving employment benefits, and ultimately achieving career success [22,71].

Managerial implications
The current study provides significant empirical insights for the existing literature and fills an essential research gap by examining career crafting, a newly established concept developed by Tims and Akkermans [162]. Few empirical studies of career crafting have been conducted, owing to the novelty of the concept and theoretical assumptions that may require additional research to broaden its scope. This study investigated the relationships between important predictors, career crafting [162] and job autonomy [33], and important career outcome variables, subjective career success [89] and perceived employability [62], and provided significant support for the positive associations between them. From a theoretical perspective, career crafting is a component of proactive behavior, or proactivity theory [162], and is comparable to a comprehensive set of planned actions that reflect proactive career behaviors and lead to significant career-related outcomes. People can alter their circumstances through proactive planning and action [38]; indeed, [24] was the first to propose the concept of "proactive behavior," asserting that individuals can influence their surroundings through their proactive actions. Proactive individuals are conscious of upcoming risks and protect their careers through concise planning and timely action; they examine career opportunities and take the initiative to attain their career development objectives. Those who are not proactive, by contrast, are reactive and act on passive behaviors and attitudes; they wait for change to occur and falsely believe that career opportunities will appear at their door, and consequently they fall behind the success curve. The current study thus makes an important contribution to proactivity theory by showing that individuals who adopt career crafting actions succeed in achieving both subjective career success and perceived employability.

This study also provides valuable empirical evidence that can help organizations retain and support valuable human capital by enhancing employees' core competencies and appropriate skill sets, thereby obtaining a competitive advantage. It is insightful for today's managers working in diverse and multicultural settings and shows how individuals can take responsibility for their career development and ensure the sustainability of their careers.
It is crucial to incorporate career crafting practices. Facilitating employees' career development and career satisfaction may appear challenging for organizations [22], but organizations are encouraged to choose a people-oriented approach over an authoritarian one [23]. In this way they will be able to recruit and retain talented and skilled employees; at the same time, the career choices that individuals make can adversely affect an organization's ability to attract and retain new talent as well as employees' performance [56].

Professionals from a variety of disciplines and workplaces can also draw enlightening information from this study about various career aspects. They can implement effective strategies, including proactive career behaviors such as networking, enhancing work-related abilities, and consistently searching for career development opportunities. When employees make informed decisions and invest in their personal and professional development, they attain career success; individuals must craft their professional lives independently to succeed [54].

This empirical research assists career practitioners and academics in practical applications and guides organizations in developing strategies to support employees' career planning aligned with their visions and objectives. Career crafting research will enable organizations to improve their employability strategies, particularly in the post-COVID-19 era, when job security is tenuous [31]. The research also highlights employability skills that can be extremely beneficial for obtaining or retaining employment.

Employers are advised to include a career development component in their HR framework in order to create a supportive environment for their employees, as career development reduces negative outcomes such as underemployment and promotes positive outcomes such as increased employability skills and employee engagement [6]. When employees attain a high level of subjective career success, they experience high levels of job motivation, goal achievement, and self-confidence [1], which increases their productivity. Organizations must therefore attend to their employees' career satisfaction by providing opportunities for career reflection and goal attainment. In the past, employers invested few resources in employees with inadequate capabilities [131], but in today's competitive and ever-changing work environment, organizations must develop customized and sustainable solutions (career plans) for their employees in order to retain their loyalty and motivation. Lastly, Kuvaas [116] posits that organizations can help their employees develop work-related competencies by incorporating appropriate HR practices, and [9] argue that improving these competencies can become a crucial element of performance evaluation or be connected to an organization's career development planning for its employees, reflecting their commitment and career satisfaction. Practitioners and HR policymakers should revise organizational policies toward an employee orientation, for example by assessing and meeting employees' career needs and requirements.

Limitations and future research directions
The study guides readers and researchers toward new empirical findings regarding career crafting and significant career outcomes; nevertheless, some limitations of the current study should be highlighted.
First, the research was conducted among professionals employed in various fields, as suggested by Tims and Akkermans [162]; future researchers might instead select professionals from a specific field or industry, such as the education sector or the non-governmental organization sector.

Second, it was assumed that job autonomy would moderate the positive association between career crafting and subjective career success and, similarly, the positive relationship between career crafting and perceived employability; in this research context and in this geographical and cultural setting, the assumption was not supported empirically. Future research could explore the unknown factors that led to this result.

Third, the current study did not examine mediating variables, and future research may take mediators into account to understand the dynamic nature of career crafting [162]. Career shocks are one contextual variable that may influence the study outcomes [151]; resilience and adaptability may also be studied alongside career shocks, because some employees may abandon career crafting behaviors while others continue them. Other essential contextual variables, such as co-worker support or spousal support, which can help enhance proactive behaviors, could also be included in future career crafting studies.

Conclusion
Individuals make numerous significant career-related decisions over the course of their lives, during which they must maintain and improve their career competencies and career sustainability to attain career identity, adaptability, and overall career success. The process of career crafting is highly important for nurturing both professional and personal development, enhancing job satisfaction, and achieving career success. The main goal of this study was to explore the relationships between career crafting, subjective career success, and perceived employability in Pakistan, and to examine the moderating effect of job autonomy on these relationships. The results indicate that career crafting and job autonomy have significant positive relationships with perceived employability and subjective career success. The study supports the proactive behavior theory of Crant [47] and provides comprehensive insights into career crafting mechanisms and dynamics for making sound career-related decisions. The empirical findings indicate that career crafting plays a significant role in attaining high employability and increased subjective career success.

Table captions: Table 1 Sample characteristics. Table 2 Descriptive statistics (M mean, SD standard deviation, n sample size). Table 3 Normality test. Table 6 Reliability and validity (bold values on the diagonal are the square root of the AVE, used to assess discriminant validity; CR composite reliability, AVE average variance extracted, MSV maximum shared variance, MaxR(H) maximum reliability). Table 7 Model fit assessment.
Analysis of the population structure of Uruguayan Creole cattle as inferred from milk major gene polymorphisms

The ancestors of Uruguayan Creole cattle were introduced by the Spanish conquerors in the XVII century, following which the population grew extensively and became semi-feral before the introduction of selected breeds. Today the Uruguayan Creole cattle genetic reserve consists of 575 animals. We used the tetra-primer amplification refractory mutation system polymerase chain reaction (ARMS-PCR) to analyze the κ-casein, β-casein, αS1-casein and α-lactoalbumin gene polymorphisms, and restriction fragment length polymorphism PCR (RFLP-PCR) for the β-lactoglobulin and acylCoA:diacylglycerol acyltransferase 1 (DGAT1) genes. The κ-casein and β-lactoglobulin genes presented very similar A and B allele frequencies, while the αS1-casein and α-lactoalbumin gene B alleles showed much higher frequencies than the alternative alleles. The β-casein B allele was not found in the population sampled. There was a very high frequency of the DGAT1 gene A allele, which is associated with low milk fat content and high milk yield. All loci were in Hardy-Weinberg equilibrium, and the level of heterozygosity agreed with the high genetic diversity observed in a previous analysis of this population. Preservation of the allelic richness observed in the Uruguayan Creole cattle should be considered for future dairy management and livestock genetic improvement. The results also emphasize the value of the tetra-primer ARMS-PCR technique as a rapid, easy and economical way of genotyping cattle breeds for milk gene single nucleotide polymorphisms.

Introduction
Cattle were first introduced to Uruguay in the XVII century by the Spanish conqueror Hernando Arias de Saavedra from the Iberian Peninsula, and animals were later also brought from the Jesuit missions located in the region called 'Alto Uruguay'; these two foundation stocks resulted in the establishment of the Uruguayan Creole cattle (UCC) population (Wilkins et al., 1989; Primo, 1992). During the XIX century, several commercial European cattle breeds were brought to America, leading to genetic introgression that greatly reduced the Creole population. In 1930, a group of 35 Creole bulls, cows and calves was brought from inhabited areas of the Treinta y Tres and Maldonado departments and established what is today the Uruguayan Creole cattle reserve (Arredondo, 1958; Postiglioni et al., 2002). Characterization of this population has been performed with different markers. As a first step, basic morphological markers (such as coat color and horn shape) and cytogenetic markers were studied. The presence of the Robertsonian translocation 1;29 and the absence of the acrocentric Y chromosome observed in Bos taurus indicus, together with the analysis of various molecular markers, supported the Iberian origin of Uruguayan Creole cattle as typical Bos taurus taurus (Rodríguez et al., 2001; Postiglioni et al., 2002).
Random amplified polymorphic DNA (RAPD) markers have been used to analyze random genome regions of Uruguayan Creole cattle and have revealed differences in band-sharing frequencies and specific banding patterns in Creole cattle as compared to selected European breeds such as Holstein-Friesian and Hereford (Rincón et al., 2000; Postiglioni et al., 2002). Later research detected 22 different alleles in the second exon of the DRB3 gene, a polymorphic region of the BoLA complex related to the immune response (Postiglioni et al., 2002; Kelly et al., 2003). The analysis of 17 microsatellites showed an expected heterozygosity per locus of between 0.46 and 0.80, except for the HEL13 locus (He = 0.29). The high mean expected heterozygosity (0.62) and the low mean FIS index (0.098) suggest that the population is able to self-sustain in the long run (Armstrong, 2004).

It is now accepted that the potential value of local livestock breeds must be analyzed and conserved so that they can become a self-sustainable resource. The importance of conserving specific alleles or genotypes and allelic richness has also been advocated (Gandini and Villa, 2003).

Milk protein genetic variability in cattle has been extensively studied at both the DNA and protein levels for evolutionary and biodiversity analyses (Caroli et al., 2004). Recent advances in single nucleotide polymorphism (SNP) detection allow genotyping without restriction enzymes, a simpler and more economical way of detecting polymorphisms. One of these techniques, the tetra-primer amplification refractory mutation system polymerase chain reaction (ARMS-PCR), has been optimized to genotype SNPs in the bovine κ-casein, β-casein, αS1-casein and α-lactoalbumin genes (Rincón and Medrano, 2003). This technique uses a set of four primers in a single PCR tube and requires no post-PCR manipulation to accomplish an efficient classification.

The acylCoA:diacylglycerol acyltransferase 1 (DGAT1) gene encodes the enzyme catalyzing the final step of triglyceride synthesis and has become a functional candidate gene for milk fat content, after evidence that an increase in this trait in different breeds is strongly associated with a lysine at position 232 of the DGAT1 protein, while an alanine at this position is associated with decreased milk fat content (Winter et al., 2002; Kuhn et al., 2004). The DGAT1 gene has also been related to intramuscular fat deposition in cattle, as it maps within the region of the marbling quantitative trait locus (QTL) (Thaller et al., 2003).

This paper outlines the results of a population structure analysis of Uruguayan Creole cattle using gene SNPs related to milk production traits. The tetra-primer ARMS-PCR technique was used for the κ-casein, β-casein, αS1-casein and α-lactoalbumin genes, while the restriction fragment length polymorphism PCR (RFLP-PCR) method was used for the β-lactoglobulin and DGAT1 genes. This study provides the first characterization of the different alleles of the αS1-casein, β-casein and DGAT1 genes in Uruguayan Creole cattle.

Material and Methods
The population of Uruguayan Creole cattle of San Miguel National Park consists of 23 bulls, approximately 447 cows and 105 calves of both sexes. The present study was carried out on a total of 115 genomic DNA samples collected at random.
The tetra-primer ARMS-PCR procedure (Rincón and Medrano, 2003) was performed for the milk protein genes and α-lactoalbumin (α-LA). The outer primers used for κ-casein, αS1-casein and α-lactoalbumin were the same as those used in the RFLP-PCR technique, but the inner primers were designed by introducing a second deliberate mismatch at position -2 from the 3' end. The K232A polymorphism (AAG → GCG) in the DGAT1 gene was analyzed using the RFLP-PCR technique (Winter et al., 2002). The β-lactoglobulin (β-LG) SNP was also studied using the RFLP-PCR method (Medrano and Aguilar-Cordoba, 1990).

For the statistical and genetic structure analysis we calculated the allele frequencies at the six loci analyzed, tested for Hardy-Weinberg equilibrium and estimated the observed (Ho), expected (He) and unbiased expected (He unbiased) heterozygosity using the POPGENE32 program (Yeh et al., 1999). Wright's fixation index (FIS) was used as a measure of heterozygote deficiency or excess. Together with heterozygosity and the tests of Hardy-Weinberg equilibrium, these indexes allowed a better understanding of the reproductive structure of the population analyzed.

Results
Table 1 shows the allele frequencies detected at the six loci using the tetra-primer ARMS-PCR and RFLP-PCR techniques. With the exception of the β-CN gene, two alleles were found at all the loci studied. The κ-CN and β-LG genes presented very similar frequencies for both alleles (A and B), but the αS1-CN gene B allele showed a much higher frequency than the C allele. For the α-LA gene, the B allele was present at a higher frequency than the A allele. The technique used for the β-CN gene only allowed us to differentiate between B and non-B alleles, so we assumed that the A allele corresponded to any allele other than B. For the two DGAT1 alleles, the A allele, corresponding to the alanine substitution associated with low milk fat content and high milk yield, was present at a very high frequency, while the K allele, considered ancestral, showed a much lower frequency (Table 1).

The likelihood ratio test revealed that none of the loci analyzed departed significantly from the expected Hardy-Weinberg equilibrium (p > 0.05; Table 1). Observed, expected and mean values of heterozygosity and the FIS statistics are shown in Table 2. The loci with both alleles at frequencies around 0.50 had higher heterozygosities (κ-CN and β-LG), while the loci with one allele at a much higher frequency than the other (αS1-CN and DGAT1) showed lower values; the α-LA locus presented intermediate values. Wright's fixation index (FIS), a measure of the inbreeding coefficient, was in agreement with these results, being low when genetic diversity was high (κ-CN and β-LG, FIS < 0.05) and medium to high in absolute value when heterozygosity was low (αS1-CN and DGAT1, |FIS| > 0.10). In these two cases the negative FIS values indicated a bias towards heterozygote excess, confirmed by the differences between observed and expected heterozygosity. The high FIS values observed were not associated with Hardy-Weinberg disequilibrium.

Discussion
Our study used milk major gene polymorphisms to elucidate the genetic structure of Uruguayan Creole cattle. Tetra-primer ARMS-PCR successfully identified the κ-CN, β-CN, αS1-CN and α-LA allele variants, while the RFLP-PCR method proved suitable for efficiently determining the specific mutations in the β-LG and DGAT1 genes.
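The per-locus statistics reported above (allele frequencies, Ho, He, unbiased He and FIS) can be computed directly from genotype counts. The following sketch is an assumed illustration of those formulas, not the POPGENE32 routines used here; the genotype counts are hypothetical.

```python
# Minimal sketch of per-locus statistics for a biallelic marker:
# allele frequencies, observed and expected heterozygosity, Nei's unbiased
# estimate of He, and Wright's FIS.  Genotype counts are placeholders.

def locus_stats(n_AA, n_AB, n_BB):
    n = n_AA + n_AB + n_BB                     # number of genotyped animals
    p = (2 * n_AA + n_AB) / (2 * n)            # frequency of allele A
    q = 1 - p                                  # frequency of allele B
    ho = n_AB / n                              # observed heterozygosity
    he = 2 * p * q                             # expected heterozygosity under HWE
    he_unbiased = (2 * n / (2 * n - 1)) * he   # unbiased estimate of He
    fis = 1 - ho / he if he > 0 else 0.0       # Wright's FIS (negative = heterozygote excess)
    return {"p": p, "q": q, "Ho": ho, "He": he, "He_unbiased": he_unbiased, "FIS": fis}

# hypothetical genotype counts for a biallelic locus in 115 animals
print(locus_stats(n_AA=60, n_AB=45, n_BB=10))
```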
Analyses performed on Holstein-Friesian cattle demonstrated that the B alleles of most of the proteins of the casein cluster, as well as of β-LG, are related to high cheese quality, since they increase the rate of curd formation, rennet clotting time and coagulum strength (Van Eenennaam and Medrano, 1990). We found that Uruguayan Creole cattle showed similar or even higher frequencies of the B allele for these protein genes. In the case of the β-CN gene, the fact that we were unable to find any B allele implies that the allele is not present in the population or is present at a very low frequency, as has been reported for many other cattle breeds (Jann et al., 2004).

Allelic variants of the κ-CN, αS1-CN and β-LG genes have also been analyzed in Argentinean and Bolivian Creole cattle (Lirón et al., 2002) and show frequencies similar to those observed in the Uruguayan Creole cattle. However, the Uruguayan Creole cattle allele distribution is even more similar to that seen in semi-wild Argentinean cattle (the Patagonian Creole) and in a Bolivian population (Saavedreño Creole) bred for dairy and beef production. Also, the expected heterozygosity of the κ-CN (0.5028) and β-LG (0.5031) genes in Uruguayan Creole cattle is higher than that occurring in Argentinean and Bolivian Creole herds (Lirón et al., 2002).

The αS1-CN polymorphism showed a high B allele frequency (0.8) and a low heterozygosity value (He = 0.2376), similar to that observed in Creole populations as well as in North American Holstein-Friesian cattle (Van Eenennaam and Medrano, 1990; Lirón et al., 2002).

The α-LA gene is related to the biosynthesis of lactose in the mammary glands and thus regulates the volume of milk. The effect of two allelic variants in the regulatory region of this gene was studied by Bleck and Bremel (1993), who showed a positive correlation between the A allele and high milk yield with low milk fat and protein, while the BB genotype had a higher percentage of protein and fat. The A allele is not found in some beef and dairy breeds, but is thought to be involved in the expression of α-LA and thus in modulating milk production. The tetra-primer ARMS-PCR genotyping technique detected both alleles and a medium level of heterozygosity (He = 0.4192) in the Uruguayan Creole cattle. This gene should be considered an important target to preserve in this population for future dairy management.

In general, the level of heterozygosity found by us is in agreement with the high genetic diversity that has been previously observed in this population using other molecular markers (Rincón et al., 2000; Armstrong, 2004).
The DGAT1 gene also presented both alleles described in the literature (Grisart et al., 2004; Winter et al., 2002). In cattle, the allele frequency distribution shows a tendency towards high frequencies of the DGAT1 A allele in Bos taurus breeds and of the DGAT1 K allele in Bos indicus. In Uruguayan Creole cattle, the DGAT1 A allele (corresponding to the alanine mutation) was more frequent (0.8864) than the ancestral K allele (0.1136). Considering the history and development of the Uruguayan Creole cattle population, this finding could be explained by genetic drift effects acting upon the population established around 1930 (Arredondo, 1958) as well as by the founder effect. The DGAT1 gene is considered to be a quantitative trait locus (QTL) for milk yield and composition, with the A allele being related to low milk fat content, high milk protein content and high milk yield (Winter et al., 2002; Spelman et al., 2002; Grisart et al., 2004), and to low intramuscular fat (marbling) (Thaller et al., 2003). We found that Uruguayan Creole cattle showed DGAT1 allele frequencies similar to those of a beef breed such as Aberdeen Angus (A allele = 0.87; K allele = 0.13) and of the old Spanish breed Toro de Lidia (A = 0.79; K = 0.21), considered closely related to American Creole cattle (Kaupe et al., 2003). Breeds selected for milk production show more variation in DGAT1 allele frequencies, some being near 0.50 for each allele (e.g. German Holstein, A allele = 0.58; K allele = 0.42) or higher for the K allele (e.g. New Zealand Holstein-Friesian, A allele = 0.40; K allele = 0.60; and Jersey, A allele = 0.31; K allele = 0.69), which is most likely the result of artificial selection for increasing milk yield and/or high milk fat content (Kaupe et al., 2003; Spelman et al., 2002).

Low milk fat content is related to an increase in follicular activity of the ovaries, which results in higher fertility of the female (Lucy et al., 1992). As the production of low-fat milk also results in lower energy expenditure, these two factors combined may give a natural selective advantage to cattle which carry the A allele (Kaupe et al., 2004). This is an even more interesting observation when applied to a semi-wild, unselected population of cattle such as the Uruguayan Creole cattle herd, because these animals have lived under natural conditions with almost no management for 400 years (Arredondo, 1958; Postiglioni et al., 2002; Rincón et al., 2000; Armstrong, 2004).

All loci were in Hardy-Weinberg equilibrium, which is in agreement with previous protein loci data obtained from this cattle population (Postiglioni et al., 2002). However, the medium or high FIS values at the two loci that show very low frequencies of one of the detected alleles (DGAT1 and αS1-CN) should not be overlooked. These two loci also showed lower gene diversity (measured as He), due to the large difference between allele frequencies. The heterozygosity level of DGAT1 found in the Creole cattle was similar to that found in other breeds that exhibit similar allele frequencies (e.g. Toro de Lidia, He = 0.33; Aberdeen Angus, He = 0.21) (Kaupe et al., 2003).
The results presented in this paper show the value of the tetra-primer ARMS-PCR technique as a quick, easy and economical way of genotyping cattle breeds for milk-related gene SNPs. Another important finding of this work is the allelic richness and high level of heterozygosity of most molecular markers studied in the Uruguayan Creole cattle, which shows that this reserve is a precious source of genetic variation that should be maintained or used in livestock genetic improvement.
Impact of crude oil prices on the Bombay stock exchange

Most studies of investment concern money that is set aside for future use. Around the world, investors can place their money in assets such as oil and gold. Investment in gold is a tangible asset, and during the decade of the financial crisis gold was regarded as a safe investment, whereas oil has been seen as a riskier asset. Many studies report an inverse relationship between oil prices and stock exchanges. The aim of this paper is to examine the relationship between the stock market and crude oil prices. Crude oil prices influence the performance of the stock exchange and the industries of a country, and they are similarly reflected in the prices of subsidies. When analyzing the economy of a country, oil is a widely debated variable, and increases and decreases in oil prices have a marked influence on it.

Overview of the Bombay stock exchange: The Bombay stock exchange is known as one of the fastest stock exchanges in the world. It was founded in 1876. By market capitalization it ranks 11th, and this large capitalization is one reason investors want to invest there; its market capitalization is about 1.9 trillion USD. In 2001 it entered the derivatives market. It is well known for its screen-based trading system, through which investors can trade at any time.

Impact of international crude oil on different stock markets: 1) Lower energy costs lead to higher profitability. 2) Crude oil prices also affect the exchange rate. 3) Lower energy costs raise demand in different domestic markets. These effects are reflected in the observed behavior of the stock market.

Objectives: 1) To assess the impact of crude oil prices on the stock market of India. 2) To assess the impact of crude oil prices on the inflation rate. 3) To assess the impact of crude oil on the economy of India.

Problem statement: The problem addressed in this paper is the impact of increases and decreases in crude oil prices on the economy, inflation, and stock exchange of India. H0: There is a positive association between oil prices and the stock exchange of India. H1: There is no association between oil prices and the stock market of India. H0: There is a positive association between oil prices and the inflation rate. H1: There is no positive association between oil prices and the inflation rate.

Methodology: We used secondary data from December 2008 to August 2013. The Shanghai and Bombay stock exchange indices were taken as the independent variables and oil prices as the dependent variable, and multiple regression was applied for this purpose:

Y = a + β1X1 + β2X2 + ε (1)

where Y is the dependent variable, X1 and X2 are the independent variables, and ε is the error term. Table 1 reports the stationarity analysis of the data; the results show that the data are stationary at level 1.

Conclusion: Most studies show that oil is a key indicator of the economy of every country. The demand for oil is increasing day by day, and this affects oil prices. According to Kilian (2006) there is a positive relationship between oil prices and the stock exchange. An increase in crude oil prices has an adverse impact on importing countries, and transport prices also rise with oil prices. This paper therefore argues that increases in crude oil prices feed into the inflation rate.
This paper shows that oil crises affect the economy and development of a country. Other researchers have shown that there are additional variables that affect the economy, but these are not considered here; this paper treats oil prices as an uncontrollable variable, and our study shows that market prices are affected by these ups and downs. 2) Among all the variables, oil prices are an uncontrollable variable, so new programs need to be implemented. 3) Forecasting is the best way to manage such variables. 4) The government should identify alternatives for conditions of uncertainty.
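To illustrate the regression model in Eq. (1), the sketch below fits Y = a + β1X1 + β2X2 + ε by ordinary least squares using NumPy. The series are randomly generated placeholders standing in for the monthly stock-index and oil-price observations; they are not the data analyzed in this paper.

```python
import numpy as np

# Hypothetical placeholder series (roughly one value per month,
# Dec 2008 - Aug 2013); not the data used in the study.
rng = np.random.default_rng(0)
n = 57
x1 = rng.normal(size=n)                          # first regressor
x2 = rng.normal(size=n)                          # second regressor
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x1, x2])        # design matrix [1, X1, X2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates of a, b1, b2
a, b1, b2 = coef
resid = y - X @ coef
r2 = 1.0 - resid.var() / y.var()                 # coefficient of determination
print(f"a={a:.3f}, beta1={b1:.3f}, beta2={b2:.3f}, R^2={r2:.3f}")
```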
Reflections on Darwinian Evolution – Is there a Jewish Perspective?

I present a realistic view of what Darwinian evolution is in its current form and what it is not. I argue that the Torah is not a source of scientific knowledge and that all attempts to reconcile its plain text with the data of science are an exercise in futility. The article argues the position that science and the Torah are incommensurable. I argue against using the Torah for attaining knowledge about the nature of the world, or using science for enhancing or denying the truth of the Torah.

INTRODUCTION

As part of the 150th anniversary of the publication of On the Origin of Species, a prominent Orthodox Jewish physician and ethicist has published in RMMJ a comparative analysis between what he calls "the scientific aspects of the theory of evolution and a Judaic approach to these aspects".1 Rather than such an analysis, however, Steinberg presents a creationist fundamentalist view masquerading as rational and "reasonable debate ... in a calm and humble way". In his essay, Steinberg digresses into discussing some issues concerning the science-religion debate that are clearly irrelevant to evolutionary theory. His arguments are partly misleading and mostly incorrect.

One is tempted to let it go and pass indulgently and in silence over this entanglement in whitewashing apologetics. But Maimonides in his Guide of the Perplexed 2 holds the opinion that it is better to bring no proof at all in favor of the Torah than to bring a poor proof, because a poor proof brings the whole system under suspicion; no proof does not. So as a Jew deeply committed to Halakhic Judaism and as a practicing geneticist, I cannot refrain from offering some reflections in order to rectify the false picture presented on both subjects, Darwinian evolution and the Jewish perspective.

SCIENCE AND DARWINIAN EVOLUTION

Steinberg states that "Judaism accepts all experimentally proven facts and observations of the theory of evolution ... but rejects other assumptions and speculations which contradict fundamental Jewish beliefs, and which are anyway not scientifically proven". He presents a widely, but incorrectly, believed perception that the basis of all scientific knowledge is facts, which are obtained by experiments and observations. Accordingly, science begins with facts - observations about nature that can be verified by other scientists. Only after an agreed-upon body of facts exists can one begin to formulate theoretical concepts that might explain them.
However, this view is wrong: science does not begin with facts; rather, all experimentation begins with the premise "Let us assume that ...". In short, science starts with theories and concepts about the physical world. Only after a theoretical framework has been formulated can one understand what the facts are. Boldly put, "facts are ventriloquists' dummies. Sitting on a wise man's knee they may be made to utter words of wisdom; elsewhere, they say nothing, or talk nonsense, or indulge in sheer diabolism" (quote attributed to Aldous Huxley). Science advances by postulating concepts and making assumptions, and then investigating to determine whether these concepts and assumptions are successful in explaining and predicting phenomena observed in nature or in the laboratory. As successful explanations grow, it becomes more and more plausible that the assumed concepts and ideas are basically correct and become scientific "facts". In science, "fact" can only mean confirmed to such a degree that it would be perverse to withhold assent. I suppose it is possible that the theory of gravitation is false and apples might stop falling to the ground and start rising, but the possibility does not merit any serious consideration.

Steinberg further states that "it is important to distinguish between conclusions drawn from controlled experiments, and a theory, a speculation, or an assumption". Consequently he charges that many of the premises of the evolution theory are unproved speculation: "evolution is only a theory; therefore, one can accept that which is fact and experimentally proven and reject that which is an unsubstantiated hypothesis, or replace it by an alternative explanation". It is important to emphasize here that facts and theories are not rungs in a hierarchy of increasing certainty. Facts are the world's data, while theories are structures that explain and interpret data. Facts do not go away when scientists debate rival theories to explain them. Einstein's theory of gravitation replaces Newton's, but apples did not suspend themselves in mid-air, pending the outcome. Humans evolved from apelike ancestors whether they did so by Darwin's proposed mechanism or by some other yet to be discovered.3

Steinberg introduces a list of creationist claims but without being able to bring one single reference from the professional peer-reviewed literature, because creationist standards of scholarship are too low for publication in professional, reputable journals. For example, he challenges the fossil evidence, including the claim of a "missing link", namely the lack of transitional fossils from lower to higher species. Even a cursory analysis of the professional literature would prove that these claims are incorrect. Based upon the consensus of numerous phylogenetic analyses, the chimpanzee is the closest living relative of humans. Thus, we expect that organisms lived in the past which were intermediate in morphology between humans and chimpanzees. Indeed, over the past century, many spectacular paleontological finds have identified such transitional hominid fossils. Or take the Ambulocetus (the "walking whale"), the transitional fossil that shows how whales evolved from land-living mammals.
As S. J. Gould puts it, "If you had given me a blank piece of paper and a blank check, I could not have drawn you a theoretical intermediate any better or more convincing than Ambulocetus. Those dogmatists who by verbal trickery can make white black, and black white, will never be convinced of anything, but Ambulocetus is the very animal that they proclaimed impossible in theory." 4

Continuing, Steinberg writes that "one of the most challenging problems of the theory of evolution is the origin of life" and that Darwinian evolution fails to explain how life arose and developed. To put it mildly, this is a rather odd statement for a biologist. It does not take an expert to know that evolution theory is not about "how life arose". Evolution theory is about the evolution of the variety of living organisms from a common ancestor. As to the origin of that common ancestor, the first replicator, this question is beyond evolution theory. The essence of the theory of evolution is that organisms are related by descent from common ancestors. Over time, organisms change and diversify as they adapt to different environments. Species that share a recent common ancestor are more similar to each other than species whose last common ancestor is more remote. Thus, humans and chimpanzees are, in configuration and genetic make-up, more similar to each other than they are to baboons, elephants, or kangaroos. But other concepts commonly used in the literature about Darwinian evolution (especially in popular literature), such as "survival of the fittest", should not be understood simplistically and to a large degree are not essential to the modern understanding of Darwinian evolution. In fact, most currently available information leads us away from the idea of survival of the fittest and toward a model of survival of the "barely tolerable".5

If we oversimplify the concept of "survival of the fittest", we might expect that the most impressive results of evolution are the complex and perfected adaptations of organisms to their environments. Take, for example, the immune system.8 There is enormous variation and diversity in the antibody population - the system is capable of recognizing more than 10^8 antigen patterns. By recombination, mutation, insertion, and deletion, gene fragments are packaged by lymphocytes, forming populations of receptor complexes that compete to take hold of foreign antigens. Those that succeed get to reproduce their progeny. The successive rounds of mutations and selections that occur allow the body's immune system to choose a population of cells that specifically synthesize the correct antibody profile to combat the specific infection. The truth of the matter, however, is that no evidence for evolution is stronger than the presence of rudimentary or vestigial structures in nearly all organisms, including humans. "Remnants of the evolutionary past that don't make sense in the present - the useless, the odd, the peculiar, the incongruous - are the signs of Darwinian evolution".9 Indeed, the immune system demonstrates evolution, but not because it has perfected adaptation of the antibody molecule to the specific infectious agent, but rather because it is clumsy and built from odd parts. As a defense organization, the immune system is large, complicated, and wasteful; it is slow to react and fights today's threats with the solutions of the past.10
The so-called opponents of the immune system - viruses, bacteria, and parasites - are hardly predictable and are rapidly changing, so past experience does not necessarily prepare the host's immune system for future challenges. While the selective forces acting upon the immune system are constantly varying, the products of the immune cells are often poorly adapted to a particular set of circumstances. Consequently, there is a continuing loss of life from infectious diseases.

When new features evolve in a species, they tend to build on already existing features. They are not built from scratch.11 (Francois Jacob elaborated a model of evolution as "tinkering". According to Jacob, natural selection only works with the materials available and within the constraints present at a particular time in a particular place.11) From an evolutionary standpoint, new features do not need to have the best possible design. They just need to be good enough to allow the organism to live long enough to reproduce. The evolution of the human body is no exception. We have body parts whose design is deficient, but they have been tolerable enough to keep our species from extinction. Let us consider the following suboptimal designs in the human body.

The human pharynx is the part of the throat that begins behind the nose and leads down to the voice box. It does double duty as a tube for breathing and for swallowing. But when you are swallowing you cannot breathe, and when you are breathing you cannot swallow. That is why humans run a serious risk of choking if the pharynx does not close at the right time when eating. Curiously, human infants under 6 months and chimpanzees do not have this problem. But infants and chimpanzees cannot talk, and without our uniquely situated pharynx we would not be able to talk either.

The evolutionary innovation of bipedalism - walking upright on two legs - forced a smaller pelvis on us. But bipedalism is not the whole story. Humans have evolved big brains, and big brains needed big containers to hold them. This is why human infants are born more premature and helpless than other mammals. Babies need to get through the birth canal before their heads get too big. The small birth canal is responsible for significant mortality of mothers and infants during the complex process of birth.

Compared to our Homo erectus ancestors, who had massive jaws with huge molars, the human jaw is too small for the number and size of our teeth. Many people have no room for wisdom teeth (third molars) if they get them, and a lot of people's teeth have to fight one another for limited space, leading to crooked teeth. Impacted wisdom teeth can result in serious infections, and before modern dentistry these late eruptors could be deadly. These facts may be related to an inactivating mutation in a myosin gene (MYH16), a chief component of the powerful jaw muscles of many non-human primates, which share large crests on their skulls to which their heavy jaw muscles attach. All modern humans share a defect in the gene that created this protein, which could have left us unable to produce one of the main proteins in primate jaw muscles, and as a consequence the crest on the skull for the muscle attachment is notably absent from all modern human skulls.12 Our ancestors may have lost their skull crests when our jaw muscles stopped exerting so much strain on the skull. By doing away with large anchors for chewing muscles, our skull may have freed itself to grow into its modern, rounded shape, and unconstrained our brain to increase its size.
The human vermiform appendix, a 5-10 cm long and 0.5-1 cm wide pouch that extends from the cecum of the large bowel, is a derivative of the end of the phylogenetically primitive herbivorous cecum found in our primate ancestors.13 The human appendix has lost its previously essential function as a cellulose-fermenting/digesting cecum and has no apparent function in modern humans. Indeed, people who completely lack an appendix from birth have no apparent physiological detriment, and appendectomy is without currently discernible long-term side-effects.14 Since evolution is not keen on cleaning up after itself, we are left with a potentially life-threatening situation when indigestible food that enters the appendix is not forced out by muscular contractions. In as many as 7% of the population in industrialized countries, the appendix becomes inflamed and must be surgically removed to avoid a critical infection.

THE "JEWISH FAITH" AND THE THEORY OF EVOLUTION

Steinberg writes: "Judaism, as a monotheistic religion places an absolute truth in the existence of an Almighty God ... who created the world, established the rules of nature, and commanded a moral-religious practice embodied in the Bible which was given to the Children of Israel on Mount Sinai around 3,300 years ago (around 1290 BC)". In contrast, "science has inherent limits ... it is constantly altered and changed as new discoveries and facts develop. The mere fact that a scientific theory is accepted by the majority of scientists does not prove that it is correct ... [T]he theory of evolution, which at first may be widely accepted ... may be [later] proven to be partially or totally incorrect".1

I hold the opinion that it is ill-advised and wrong to attempt to protect the truth of the Torah by casting doubt on the certainty of scientific understandings and/or by trying to prove that scientific truth is not absolute but rather inconclusive or preliminary. Such a strategy mistakenly treats the Torah as a textbook in physics, chemistry, or biology, as if the Shekhina had descended onto Mount Sinai to fulfill the functions of a university professor. Hence it is wrong to regard revelation as a substitute or supplement for natural knowledge, or to consider the Scriptures a body of information on the nature of the world or its history.15 One should be aware of the paradox evoked when one regards the Torah as superior only to the extent that the information provided to him is more reliable than his biology book. Information obtained from historical, physical, or biological inquiry that satisfies human curiosity should be deemed within the realm of the profane. If the "Holy Scriptures" were a source of information, it would be problematic to see their sacredness. Therefore, any literal reading of the first chapters of Genesis is misguided, whether to show that the Torah is wrong (evolution is a process of millions of years rather than six days) or when modern science is used to validate the Torah (such as the assertion of the biblical statement "and there was light" by the "scientific discovery that the universe began with the sudden appearance of an enormous 'ball of light' … dubbed 'the big bang'").16
It is arrogant for us to determine that we are at the center of a 5,770-year-old world, capable of understanding all of "divine" creation from words written in a few paragraphs in the book of Genesis. Nevertheless, Steinberg is, of course, entitled to his fundamentalist position regarding how to read the Torah. Conversely, one cannot accept the claim that his perspective is the unified view of Halakhic Judaism. In fact, throughout his article, Steinberg considers Judaism as representing a significant unity of beliefs which includes a particular conception of man, of the world, or of its history. This view is clearly erroneous, as any analysis of Jewish intellectual history can prove.

Jewish doctrines and principles were so diverse and so dependent upon the different schools of thought pertinent to their epoch that they can hardly be alleged to present any significant unity. Hence, Judaism as a historical entity was not constituted by its set of beliefs or philosophical opinions. (I argue that the core of Rabbinic Judaism is its religious practice determined by the Halakha. However, I do not argue that there is no system of belief behind its practice, but instead that it is not intended to be a picture of the world. It is just a framework in which one conducts practices that are supposed to be appropriate. See also Jacob.17) In truth, articles of faith were the subject of fierce dispute throughout Jewish intellectual history. Even the interpretation of the idea of divine unity by rabbinic thinkers is characterized by direct oppositions. The primary document of Jewish faith, the Shema, opens with the verse "Hear O Israel, the Lord our God, the Lord is One". We should note that the "One" of Isaac Luria (1534-1572) is incompatible with the "One" of Maimonides. We should be acutely aware how dangerously close the Jewish Kabbalists' belief in a decimalian system of deity is to Christian Trinitarianism. Therefore, Steinberg's assertion that "it is a cardinal axiom of Judaism that God created the world from nothing" 1 is simply incorrect. The presence of Rabbi Levi ben Gershon's (Gersonides) biblical commentary in the rabbinic Miqraot Gedolot Bible is a wonderful testimony to the openness of the rabbinic tradition to diverse theological interpretations, in complete contrast to the picture presented by Steinberg. Gersonides' (1288-1344) radical hermeneutics is expressed not merely in seeing the biblical account of creation in non-literal terms - that is not unusual for the rabbis - but in applying a philosophical interpretation to that event which both limits and depersonalizes God and feels compelled to reject the notion of creation ex nihilo. Creation for Gersonides is an event in which God functions as the donor formarum (noten ha-shurot). While Gersonides is convinced that creation in all its parts testifies to God's beneficent design, the creator is yet constrained to work with that which is coeternal with him.18
Although Maimonides' opinion on this issue is far from being clear, in the Guide of the Perplexed he states: "We do not reject the Eternity of the Universe, because certain passages in Scripture confirm the Creation; for such passages are not more numerous than those in which God is represented as a corporeal being; nor is it impossible or difficult to find for them a suitable interpretation. We might have explained them in the same manner as we did in respect to the Incorporeality of God. We should perhaps have had an easier task in showing that the Scriptural passages referred to are in harmony with the theory of the Eternity of the Universe if we accepted the latter, than we had in explaining the anthropomorphisms in the Bible when we rejected the idea that God is corporeal ...".19 Thus, in principle, believing in creation from pre-existing matter is not incompatible with the Torah. More importantly, however, for the purpose of my argument here, Maimonides' or Gersonides' specific opinions on this issue are irrelevant. They are men of the Middle Ages, and their scientific views are deeply rooted in their times. Dr Steinberg and I, however, are men of the twenty-first century, and when we talk about "science" we should refer to what we mean by science today and not to what they represented in the Middle Ages. To us, the importance of chapter II:25 of the Guide 19 rests in the approach of Maimonides to contradictions between the literal meaning of a Torah verse and well-established knowledge, say, modern science. In such a case, Maimonides affirms that one should accept the science, reject the literal meaning of the Torah verse, and understand the verse figuratively: "For if the Creation had been demonstrated by proof, even if only according to the Platonic hypothesis, all arguments of the philosophers against us would be of no avail. If, on the other hand, Aristotle had a proof for his theory, the whole teaching of Scripture would be rejected, and we should be forced to other opinions. I have thus shown that all depends on this question."

There is, indeed, a clear and extensive history to claims that the scientific knowledge of the rabbis of the Talmud was based on the theories current in their time and can be disproven by later scientific discoveries. For example, the Mishnah mentions the existence of a mouse that was half animal and half dirt.20 Since the sages obviously did not witness this imaginary creature themselves, they probably either read about it (perhaps in Plinius' History of Nature 9:58) or heard about it from others. Similarly, the Talmud seems to accept that the human heart has only two chambers.21 This was indeed in accordance with how Hippocrates and Galen understood the heart at the time. Maimonides explicitly states, with respect to these very issues, that they are outside the limits of acceptable rabbinic authority: "Do not ask of me to show that everything they have said concerning astronomical matters conforms to the way things really are. For at that time mathematics were imperfect. They did not speak about this as transmitters of dicta of the prophets, but rather because in those times they were men of knowledge in these fields or because they had heard these dicta from the men of knowledge who lived in those times."22
But Steinberg argues the exact opposite: the rabbis of the Talmud had divine assistance in understanding scientific reality. So if contemporary science disagrees with the sages' perception of reality, then evidently nature has changed. Hence, Steinberg claims that intraspecies changes, "micro-evolution", have been demonstrated and that "indeed, already early rabbinic authorities described numerous intraspecies changes between the Talmudic period and their own".1 They call it "Nature has changed", and Steinberg enumerates such cases in his Encyclopedia.23 I am deeply puzzled: Is this an error affecting the naive, or perhaps a pretense of naïveté, claiming the existence of a mouse that was half animal and half dirt, or a two-chambered heart, which has changed in the evolutionarily minuscule time-period of 1,500-2,000 years? Indeed, changes in climate, diet, hygiene, and the accessibility of clean water and food have caused biologically relevant changes in human life expectancy, average height, and the time of appearance of menstrual cycles in girls, as has been amply demonstrated scientifically. But the laws of nature have not changed: living creatures can arise only from other living things. I wonder why the same scientific standards Steinberg keenly demands from evolutionary biologists are not applied to those rabbinical claims that nature has changed. If, indeed, one were to search for demonstrable proof of the validity of any claim that "nature has changed" in rabbinic literature, regrettably one would find absolutely no such evidence.

It seems that Steinberg pays only lip-service to the transcendental position in Judaism, which has been an essential part of Jewish theology from the Middle Ages to the present day. I am puzzled by the obsession to locate a transcendental deity in the middle of the debate over how the universe came into being, whether the universe is eternal or created at a certain time, and how, when, and what its history is. It seems that he is not aware of, or rather chose to ignore, the considerable theological challenge this view produces. By accepting an unconditional transcendental God, one must dismiss any notion of ontological reality, namely, the assertion of Godly cosmic intelligence which is reflected in the world and its functions. All knowledge, no matter where, how, and by whom it is produced, ought to be discussed unrelated to an ontological reality (of which we know nothing and cannot know anything). It should be emphasized that the transcendental position in Judaism did not start with the Jewish philosophers of the Middle Ages; evidence for this position can be found among Chazal in the Talmud; see, for example, the Babylonian Talmud.24

Interestingly, some of our contemporary Orthodox scientists and rabbis have revived the medieval scholastic argument (which is Christian in its origin) that there is no necessary conflict between science and religious belief since God wrote two books, the Bible and the "Book of Nature", by which his existence and intentions could be known. Therefore, the study of nature had religious value, and the notion that humans should use their God-given faculties of observation and reason to read the "Book of Nature" accurately could be regarded as a religious duty.6,25
I strongly disagree with this view, and I am acutely aware of its consequences. We must not deceive ourselves into believing that the Torah provides any more useful information regarding nature than the natural sciences provide about the Torah. Invoking this old idea is not only problematic from the perspective of Halakhic Judaism, but it also reflects a deep misinterpretation of the current natural sciences, as amply exemplified by Steinberg's article. There is a decisive difference between what was called "science" in ancient and medieval times and what is called "science" today, and Steinberg seems not to pay attention to it. The major change that took place in the scientific outlook (starting roughly in the seventeenth century) was the introduction of the concept of functional relations among the phenomena investigated by science. Modern science succeeds by looking solely for functional relations across factual data. Experimental biology, like physics before it, refrains from dealing with problems of life itself and focuses upon its active mechanisms. These mechanisms are described by the functional relations among phenomena. The question remains purposefully open whether these mechanisms, as described by biologists, constitute life itself or are no more than mechanisms active in life. Indeed, Claude Bernard makes a distinction between the essence of life and the mechanisms acting in life.26 In my opinion this distinction is as valid today as it was 145 years ago.

Generally speaking, what used to be called science until modern times did not differentiate between the mechanisms functioning in the world and the essence of the world itself. Nature was understood as expressing a purpose, meaning, or value embodied in the phenomena. The conception of nature and the world in terms of meaning dictated the way people looked at natural data. For example, the Aristotelian-Ptolemaic conception of the celestial bodies revolving with uniform velocities in circular orbits was not derived from observation. In fact, observations suggested all these movements to be neither circular nor uniform, but because circular and uniform movement was deemed the perfect movement, and nature was supposed to express this meaning, an astronomy was devised in order to comply with the paramount notion of a perfect and meaningful universe. This is the background for the longstanding confrontation between "religion" and "science". If the content and conclusions about natural phenomena bear specific meanings and are expressions of these values, then matters of science are on the same plane as matters of faith, namely both enterprises deal with questions of meaning. If this view is accepted, then religion and science may be intertwined, mutually antagonistic, or supplementary as the case may be. However, today we have no "science" in the sense of the Middle Ages, in which religion and science meet. Modern science and the Torah are entirely alien to each other.
Since the natural sciences have gradually and progressively liberated themselves from the task of discovering the meaning of reality and become exclusively interested in functional relationships, they have become indifferent to any and all issues of meaning, purpose, or value (an important qualification: the practice of medicine is a problematic discipline in relation to other sciences, in that its practice has major moral and value consequences, and therefore it should not be considered in the present discussion). This is the only context in which natural science is objective; it is uniform and common to all who understand it and is independent of the varying outlooks and values of those involved in scientific discourse. While the scientific discourse should be (by definition) understandable to anybody who acquires the knowledge of its language, the religious discourse, the language of the Torah, is infinitely "incomprehensible" and needs constant interpretation. "The Torah speaks in the language of man" - we should always recognize that we are talking only in metaphors (this expression frequently arises in the Talmud as representing Rabbi Ishmael's approach.27 Rabbi Ishmael means to say that the Torah contains certain verses that should be taken in the plain sense and not expounded homiletically. However, I use this remark in accordance with Maimonides' interpretation, namely that this expression implies that the Torah employs language that is suited to the understanding of the masses, and therefore one should not take the Torah's words at face value. See, for example, the Guide of the Perplexed: "You, no doubt, know the Talmudic saying, which includes in itself all the various kinds of interpretation connected with our subject. It runs thus: 'The Torah speaks according to the language of man', that is to say, expressions, which can easily be comprehended and understood by all, are applied to the Creator. Hence the description of God by attributes implying corporeality, in order to express His existence: because the multitudes of people do not easily conceive existence unless in connection with a body, and that which is not a body nor connected with a body has for them no existence." 28) - since the language of man is incapable of expressing divine matters. Leibowitz expressed this idea colorfully: "No expressions in ordinary language are adequate for speaking of God and of the position of mankind before God. Utterances of divine matters require careful scrutiny if one is to distinguish intended sense from literal meaning. Words may seem simple and unambiguous, such as 'and God descended upon Mount Sinai'. Yet most of us understand that God does not dwell on the top of a cosmic skyscraper from which he descends in a helicopter. The same applies to all that is said in the so-called 'historical books' of the Bible."29

Those of us who accept Halakhic Judaism acknowledge that the Torah texts are unchangeable, and their study is deemed the very highest of religious work. However, none of the readings and understandings thereby produced are, or should be considered, final. Thus, if experience appears to contradict an accepted interpretation of a text, we should search for a new interpretation rather than denying the authenticity of our knowledge. We are indeed constituted by our books, but categorically not by a single way in which these books can be read or understood.
As a practicing scientist and an educated member of society, I subscribe to the notion that the best way to achieve knowledge about the world and the processes acting within nature is by applying the scientific method.The accomplishments of science in terms of conclusions, deductions, and inferences are not dependent on a person's willingness to accept or reject them but rather are forced upon those that know them.Thus, one has no free will to accept or reject the scientific truth of Darwinian evolution.Scientific research determines the truth (I am referring here to scientific truth which is instrumental and synonymous with "according with scientific method", to differentiate it from "truth" as a value, which is not imposed, and which one can ignore even if he knows for a fact that it is the truth) about reality even against the will of the scientist.In fact, the scientific truth imposes itself upon the investigator if he wants to achieve any theoretical or practical result.Intentional deceit or falsification is usually detected because the scientist's work is open to the critical scrutiny of his colleagues.Although continuation of the scientific activity may reveal in the future a somewhat different picture of reality, adherence to the scientific method is the only option that will allow us to rectify with time our mistaken scientific concepts. In absolute contrast to the scientist in me, I am, at least to a certain degree, acting as a free agent when it comes to the practice of Judaism.To my knowledge, the choice to put on phylacteries this morning had practically nothing to do with whether I have irrefutable evidence to the existence of God, the creation of the world, or whether the biology I am studying the rest of the day enforces or denies my religious convictions. While the position for which I argued here is that science and the Torah are incommensurable, there is one aspect in which Torah scholars and scientists are exactly in the same situation.Rabbi Naftali Zvi Yehuda Berlin (1813-1893), the Naziv in his introduction to his Ha'amek Davar, explains why he felt the need to write a new commentary on the Torah (my own translation): "just as it is impossible for a scientist to feel falsely assured that he has discovered all the secrets of nature … and not just that, but that he has no certain proof that what he has discovered in his research is correct, [because] a colleague or someone in a future generation may come and contradict his scholarly construction, so it is not possible for the person engaged in scholarly Torah study to be certain about his interpretation and to confirm all the advances he has tried to make and investigated, and to claim that he has confirmed them all.Furthermore, there is never proof that his explanation reflects the true meaning of the Torah.Nevertheless, it behooves us to attempt to do all that we have the ability to do."It seems that the Naziv holds that Torah scholars and natural scientists share a common stance, namely there is no certainty in the outcome of their respective undertakings.This humbling realization of the nature of human pursuit (be it the most noble and worthy), should not be considered an impediment, but rather a liberating idea that should energize the respective scholar to work even harder so that he will flourish in his endeavor. 
CONCLUSION

There is no unique Jewish perspective on evolution, as there should not be a singular Jewish position on any other theoretical scientific issue. As a reflection of their wide interests beyond Halakha, and as intellectually curious and educated members of their respective societies, rabbis throughout history maintained diverse opinions on scientific matters deeply rooted in their times and environment. Yet even the most authoritative of our rabbis rarely hold the opinion that their views on scientific matters reflect the unified view of Judaism.

The Darwinian evolution theory in its current synthesis remains central to the enterprise of biology today. After 150 years of the most intense analysis, debate, and critical testing, the theory of evolution stands as strong as ever, with thousands of facts as its empirical base. As Peter Medawar eloquently put it, "the alternative to thinking in evolutionary terms is not to think at all". Whether we like it or not, biology simply means evolution.
A Software Framework for Self-Organized Flocking System Motion Coordination Research

We describe and analyze the basic algorithms for the self-organization of a swarm of robots in coordinated motion as a flock of agents, as a strategy for the solution of multi-agent tasks. This analysis allows us to postulate a simulation framework for such systems based on the behavioral rules that characterize their dynamics. The problem is approached from the perspective of autonomous navigation in an unknown but restricted and locally observable environment. The simulation framework allows the characteristics of the basic behaviors identified as fundamental for flocking to be defined individually, as well as the specific characteristics of the navigation environment. It also allows the incorporation of different path planning approaches, both geometric and reactive, to enable the system to navigate the environment under different strategies. The basic behaviors modeled include safe wandering, following, aggregation, dispersion, and homing, which interact to generate flocking behavior, i.e., the swarm aggregates, reaches a stable formation, and moves in an organized fashion toward the target point. The framework concept follows the principle of constrained target tracking, which allows the problem to be solved similarly to how a small robot with limited computation would solve it. It is shown that the algorithm and the framework that implements it are robust to the defined constraints and manage to generate the flocking behavior while accomplishing the navigation task. These results provide key guidelines for the implementation of these algorithms on real platforms.

Keywords—Flocking; formation control; motion planning; multi-robot system; obstacle avoidance; swarm

I. INTRODUCTION

A field of great interest in robotics poses the solution of problems not with a single high-performance robot but with a group of robots, simpler in structure, that can interact to behave as a single system [1], [2]. These systems are known as multi-agent systems, and they seek to exploit the ability of biological systems such as flocks of birds [3], [4], schools of fish [5], [6], ants [7], [8], or aggregations of bacteria to solve complex tasks together [9], [10]. However, the control of these types of systems raises problems of high complexity given the characteristics of their dynamics. These are systems with a capacity for self-organization, which emerges from the basic behaviors of each of the agents [11]. The design of the basic behavioral rules that should generate the emergent behavior of the system is difficult. In addition, the robots, as a single system, must perform some task.

Flocking behavior presents interesting characteristics that make it of high interest for the design of artificial systems, particularly in localization and search-and-rescue problems. This type of behavior has been observed in birds, is similar to schooling in fish and swarming in insects, and is characterized by a joint movement of the group without central coordination [12], [13]. The first basic rules of this dynamic were established in 1987 as alignment, cohesion, and separation [14]. Subsequently, in 2003, a mathematical model of the 1987 Reynolds rules was proposed, taking advantage of geometrical strategies and the theory of Artificial Potential Fields (APF) [15], [16].
In these works, it was demonstrated how potentials in the environment are able not only to guide the navigation of the robots to the target point, but also to form a homogeneous flock along with the navigation task [17]. It has been observed in different applications that much of the success of this dynamics lies in the local communication scheme between agents, which determines the ability of the system to self-organize [18], [19]. The dynamics of a flock of birds corresponds to that of randomized search algorithms [20]. These algorithms rely on a large number of agents distributed in the search space, which simultaneously gather local information and communicate with their neighbors to jointly identify the best available solution. In the case of a flock of agents, this solution corresponds to the region of the environment whose characteristics are most similar to those defined in the search problem. These characteristics of the dynamics are what make it suitable for solving complex navigation problems, particularly in unknown and dynamic environments.

The complexity in the design of these systems lies in the fact that the design can produce a system that is unable to solve the task, either because the number of robots is too high or too low, because the local signal being tracked is too strong for the population size, or because the navigation environment is too complex [21]. These types of problems can drive the system to local minima or impede the performance of the task. The motion planning of each robot within the system is linked to the structure of the system and the local information it can detect from the environment [22], [23]. Part of the information from the environment is transmitted to each robot through the movement of the system, which makes it robust to continuous changes of dynamics in the environment, but also dependent on the characteristics of the environment and the system for the success of the task. In this sense, the proper design of both the system and the behavioral policies of the agents is fundamental, particularly if we are looking for a system made up of agents with modest computational resources [24].

This research focuses on the development of multi-agent navigation strategies in this type of environment, guided by the identification of regions of interest in the environment [25]. These strategies must be consistent with the task to be solved, so parameters such as the system size, the distance between agents, the range of the communication system, and the characteristics of the basic behaviors must be different in each case [26]. Some assumptions are also made to simplify the model, but without moving it functionally away from real prototypes. Among the initial assumptions, it is proposed to work with a single robotic platform (a uniform system) with perfect displacement capabilities, constant parameter values throughout the development of the task, and discrete behavior of the agents in the sense that each action is triggered by a certain stimulus [27]. The objective is to achieve preliminary results that allow evaluating the success of the strategy and its possible implementation in real robots. Possible tasks in real applications include tracking pollution sources (static, dynamic, and multifocal), intruder detection, or wildlife tracking.
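As a rough illustration of the potential-field navigation idea referenced above, the sketch below computes a single agent's steering velocity from an attractive term toward the goal and repulsive terms from nearby obstacle points, saturated to a maximum speed. The gains, influence radius, and function name are assumptions made for this example; they do not reproduce the exact formulation of [15], [16].

```python
import numpy as np

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                 d0=1.0, v_max=0.22):
    """Attractive + repulsive potential-field step for one agent.
    pos, goal: (2,) arrays; obstacles: (m, 2) array of sensed obstacle points.
    Gains k_att, k_rep, influence radius d0 and v_max are illustrative."""
    v = k_att * (goal - pos)                   # gradient of the attractive well
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                      # only nearby obstacles repel
            v += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    speed = np.linalg.norm(v)
    if speed > v_max:                          # saturate to the platform limit
        v = v / speed * v_max
    return v

v = apf_velocity(np.array([0.0, 0.0]), np.array([5.0, 3.0]),
                 np.array([[1.0, 0.5], [2.5, 2.0]]))
```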
One of the first developments in software tools to replicate the behavior of robot swarms was the one developed by Craig Reynolds to demonstrate the performance of his basic rules for flocking behavior in this type of multi-agent system [14]. A later evolution by the same author was the OpenSteer project, a framework for implementing autonomous agent motions and behaviors [28]. These tools were originally intended for video game development, yet robotics has benefited from their use in demonstrating behavioral algorithms. The advantage (and disadvantage) in the use of this type of tool is the need for prior knowledge about the dynamics of multi-agent systems: a simulator for video games does not require such information, while a simulator dedicated to robotics requires it but allows great versatility in the implementation of algorithms.

Perhaps the best-known robotic simulation tool is the Player Project [29]. It is open-source software specifically developed for robotics that allows a large number of controllers and sensors to be incorporated into a client/server network interface. The results of these simulations can be visualized in the two-dimensional module Stage or the three-dimensional viewer Gazebo. Unfortunately, the last update of the project was made at the end of 2010. Still, Gazebo evolved independently, and today it is a dedicated high-performance simulation tool. Another project developed for robotics is Robot Virtual Worlds, or RVW [30], which, although oriented more toward robotics education, can under specific conditions function as a simulation platform with its own programming language. Other platforms worth mentioning include the CoppeliaSim project [31] and Webots from Cyberbotics Ltd [32], active projects, programmable in different languages, with a commercial focus.

We present the problem formulation in Section II. Here we give special attention to the basic behaviors that generate a flocking structure and to the simplified representation assumed for the navigation problem. Section III presents the methodology followed for the construction of the framework, detailing not only the algorithm used but also the characteristics of its implementation. Section IV shows the results achieved in tests under controlled laboratory conditions, under which it is possible to perform experimental validation, and finally, the conclusions are presented in Section V.

II. PROBLEM STATEMENT

Flocking behaviors emerge in a multi-agent system as a consequence of a combination of basic or primitive behaviors executed independently by each of the agents. Among these primitive behaviors, five can be selected as the basis for the flocking structure [33]:

• Safe wandering: In the design of path planning solutions it is fundamental to guarantee the safe wandering of the robot through the environment; this includes reducing collisions with other agents as well as with the obstacles and limits of the environment [34].

• Following: Robots must be able to establish their motion strategy from the motion of nearby robots. In the case of flock-based navigation strategies, each agent must identify neighboring agents, calculate its distance to them, and define its forward direction and speed to reduce interference with the system's motion [4].

• Aggregation: While agents must follow the movement of their neighbors, flocking behavior also requires that agents can dynamically assemble during navigation while maintaining a safe distance between them [35].
• Dispersion: Like aggregation, dispersion (self-localization of each robot in the system) turns out to be an important quality in autonomous coordination schemes, and is fundamental to the structure of the system throughout task development [36]. • Homing: This behavior allows each agent to move to the given target as part of the system task using sensed information in the environment [37]. That is, in a flock, each agent wants to stay close to the other agents (which it can detect), to do its best not to collide with them, and to move simultaneously towards the desired location. These behaviors make it possible to define a robust and well-organized flock. The objective of this research is to demonstrate that these basic behaviors allow a flocking behavior to be generated for an artificial system from fixed rules. This demonstration is done by simulation in a framework developed in Python for a system composed of TurtleBot 3 Burger robots from ROBOTIS. The goal is to scale the primitive behaviors to a large population of these agents. Consequently, the problem is defined from the activation of n holonomic robots with known physical dimensions (not points) in an environment W unknown to the robots but partially observable from their sensors, and which is defined in a connected and compact two-dimensional plane W ⊂ R 2 . From this definition, it follows that all the constraints to which the TurtleBot 3 robotic platform can be subjected can be integrated into positional constraints in which the variables q i correspond to the coordinates of the system. Any typical environment W to be modeled by the framework contains in its interior a set of obstacles called O that consists of regions inaccessible to the robot within W , where each of these regions is characterized by a closed and connected boundary. Therefore, the set O is also considered connected, finite, and piecewise analytic. An additional characteristic of the obstacles in O is that they are pairwise disjoint, so they do not share common points. The boundaries of W , denoted by ∂W , constrain the movement of the robots within the environment. In addition, the boundaries of the obstacles are also part of ∂W . The free space through which the robots can navigate is denoted by E and is defined as W − O. Each of these robots (Fig. 1), according to its mechanical design, can be represented in the two-dimensional environment by a disk with a radius of 0.105 m (with center at the LiDAR sensor position) and an obstacle detection range (field of view of the LiDAR sensor) of 360 degrees with a range of 3.5 m. Other parameters derived from its design include a maximum forward speed of 0.22 m/s, and an acceptable range of ±0.5 m to define that it has reached a certain point in the environment. These robots have no explicit communication among themselves, only the ability to locate themselves and define their relative position with respect to their neighbors (a basic type of local communication). The simulation framework assumes that the robots' sensing capability is perfect, that they are capable of perfect omnidirectional motion, and that they all follow the same navigation rules to define the path in W (Fig. 2). III. METHODS The framework was developed in Python 3.7.12, with support for Numpy 1.19.5, Scipy 1.4.1, and Matplotlib 3.2.2. The tool simulates the movement of robots in the environment at a scaled relative speed, and the result is compressed into a video file.
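As a concrete reference, the following parameter set collects the TurtleBot 3 Burger values quoted above in the form the framework's user-configurable globals might take; the dictionary names and structure are illustrative assumptions, not the authors' actual variable names.

```python
# Illustrative configuration for the simulated system (values taken from the text above;
# the names are hypothetical, not the authors' code).
ROBOT_PARAMS = {
    "radius_m": 0.105,        # two-dimensional disk footprint, centered at the LiDAR position
    "lidar_fov_deg": 360.0,   # obstacle-detection field of view
    "lidar_range_m": 3.5,     # obstacle-detection range
    "max_speed_mps": 0.22,    # maximum forward speed
    "goal_tolerance_m": 0.5,  # acceptable range to consider a target point reached
}

ENV_PARAMS = {
    "size_m": (10.0, 10.0),   # rectangular workspace W used in the Section IV experiments
    "n_agents": 20,           # population size; varied between 5 and 100 in the tests
}
```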
This is achieved with the Matplotlib animation library, available since Matplotlib version 1.1, which makes it possible to obtain a visual demonstration of the behavior from the features programmed in the navigation algorithm. The base class of the animation tool is matplotlib.animation.Animation, on which the animation functionality is built. The interfaces of this tool are TimedAnimation and FuncAnimation, the latter being the one used in our framework. More details are provided below. From Numpy we use the linear algebra function numpy.linalg.norm to calculate the norms of the n-dimensional vectors. From Scipy we use scipy.spatial.distance.pdist to calculate the distances between points in the n-dimensional space, and scipy.spatial.distance.squareform, which takes the previous result to form a square matrix of distances. The first part of the code defines the global variables of the framework, which correspond mostly to user-configurable parameters according to the conditions of the problem to be simulated (Fig. 3 and Fig. 4). These variables include the population size of the system (how many robots will form the multi-agent system), the size of each of the robots (a two-dimensional circular shape is assumed), the sensing range of the 360-degree distance sensors, the distance programmed in each robot to initiate the avoidance policy, the maximum speed, the distance from the target at which the robot is considered to have reached its destination, and the interval between simulation steps. The second section of the code defines the navigation strategy to be followed by the robot swarm, i.e. how it will move in W to find the target area. In this part, we have facilitated the incorporation of several common algorithms, from the explicit definition of the route using navigation coordinates to the incorporation of geometric strategies such as the Potential Field algorithm, Dijkstra, and A*, and even reactive strategies based on local sensing. Also in this section, the location of the target region is defined. The third part of the initial configuration corresponds to the definition of the obstacles O, which are drawn on the environment using the matplotlib.patches library. With this information, we proceed to generate the initial position and velocity matrices of the system agents. In both cases, the values are generated randomly within the defined ranges. The initial state of the system is defined by scaling the environment and the robots for visualization (the robots are represented as red circles of proportional size to the environment). The figure is created with matplotlib.pyplot.figure, and all robots, obstacles, information labels, and a grid are added to facilitate position analysis. This configuration corresponds to the initial state of the system. To perform the simulation, the first thing to do is to initialize the robot speed arrays. The speed of each robot can be kept constant at a percentage of the maximum value, or set randomly in the same range. This is handled internally with a multiplier on the maximum speed between 0 and 1. This matrix is updated according to the motion policies applied to each robot. Animations in Matplotlib consist of three elements: an init() function that returns the background of each animation frame, an update() function that returns the figures that should appear on each background frame, and the code in charge of creating the animation object.
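A minimal sketch of this structure, assuming illustrative parameter values and a placeholder motion policy; it shows the FuncAnimation init()/update() pattern and the pdist/squareform distance matrix mentioned above, not the authors' actual code.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from scipy.spatial.distance import pdist, squareform

N, DT, MAX_SPEED = 20, 0.05, 0.22            # population, step interval (s), max speed (m/s)
pos = np.random.uniform(0.5, 9.5, (N, 2))    # random initial positions in a 10 m x 10 m world
vel = np.random.uniform(-1.0, 1.0, (N, 2))
vel *= MAX_SPEED / np.linalg.norm(vel, axis=1, keepdims=True)

fig, ax = plt.subplots()
ax.set_xlim(0, 10); ax.set_ylim(0, 10); ax.grid(True)
dots, = ax.plot(pos[:, 0], pos[:, 1], "o", color="red")

def motion_policy(p, v, dist):
    # Placeholder: keep current velocities; the real framework combines the basic behaviors here.
    return v

def init():
    return dots,

def update(frame):
    global pos, vel
    dist = squareform(pdist(pos))            # pairwise distance matrix between agents
    vel = motion_policy(pos, vel, dist)
    pos = pos + vel * DT
    dots.set_data(pos[:, 0], pos[:, 1])
    return dots,

anim = FuncAnimation(fig, update, init_func=init, frames=400, interval=50, blit=True)
anim.save("flock.mp4")                       # export to video (requires an ffmpeg writer)
```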
In our case, each of the basic behaviors was implemented separately in functions that compute the velocity vector for each of the robots (each behavior function acts on the entire system, from agent zero to n). The function for aggregation checks the readings from the distance sensors of each robot and adjusts the velocity vectors so that the robot moves towards its nearest neighbors. Distances to nearby robots are used to define the relative location of each robot in the environment, as well as its movement strategy [4]. A function is also implemented to establish attractors in the environment from the local readings and the navigation strategy used. These attractors allow the velocity vector of each robot to be defined as if it were following a specific route. To avoid collisions, a function is defined that verifies the fulfillment of minimum distances between robots, obstacles, and environment limits, forcing the movement in a random direction when it detects a possible collision condition. These basic behaviors are combined to create more complex flocking behaviors. For example, the aggregation function and the attractor function combine to produce the following behavior, and combined with the collision function they form a safe wandering behavior. Thus, by combining homing, aggregation, and avoidance, the desired flocking behavior is achieved. IV. RESULTS AND DISCUSSION We have performed several tests with the framework for different conditions of the environment, the system, and the navigation strategy. The results shown below were obtained for the TurtleBot 3 Burger robot, the platform on which we analyzed the flocking behavior. The characteristics of the environment (rectangular 10 m × 10 m, with three fixed obstacles) and the navigation strategy (reactive, based on intensity landmarks in the environment) were also kept constant. The varied parameters were population size and initial position of the system, with the intention not only to observe the self-organization in flocks but also to determine the performance of the system for a given navigation task as a function of the population size. The system initialization parameters for these tests were as follows: • Number of agents: between 5 and 100 • Agent size: circular with 0.105 m radius. The navigation strategy forces the flock of robots to navigate the environment in a clockwise direction towards the lower right region of the environment. We evaluated the time it takes for the system to navigate to this region for different population sizes, from a flock of five agents to a system with 100 agents. Fig. 5 shows captures of four such simulations, each with a different population size (20, 30, 40, and 50 agents). The starting point is randomly generated in E using a computer time-dependent variable seed. In the simulations, it is observed how the system self-organizes according to the basic behaviors, and after reaching equilibrium, it starts to move along the route without altering the formation, except in the event of encountering an obstacle, in which case the agent avoids it and returns to the formation. A small animation of 20 s with these four cases can be seen in the following link: https://youtu.be/R09mFqAb -c The strength of the tool is observed when evaluating the performance of this type of system in the development of tasks, which is much more complex in implementations on real prototypes. To evaluate this, the above configuration was followed to assess the impact of population size on the total time required for task development.
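To make the behavior combination of Section III concrete, the sketch below weights aggregation, homing, and avoidance terms into a single velocity command for one robot. The weights, safety distance, and function structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def combine_behaviors(i, pos, vel, dist, target,
                      weights=(1.0, 1.0, 2.0), sense_range=3.5,
                      safe_dist=0.3, max_speed=0.22):
    """Weighted sum of aggregation, homing, and avoidance terms for robot i (illustrative)."""
    w_aggr, w_home, w_avoid = weights
    neighbors = np.where((dist[i] > 0) & (dist[i] < sense_range))[0]

    # Aggregation: steer towards the centroid of the detected neighbors.
    aggr = pos[neighbors].mean(axis=0) - pos[i] if len(neighbors) else np.zeros(2)

    # Homing / attractor: steer towards the target region given by the navigation strategy.
    home = target - pos[i]

    # Avoidance: if any neighbor is closer than the safety distance, escape in a random direction.
    avoid = np.zeros(2)
    if len(neighbors) and dist[i, neighbors].min() < safe_dist:
        angle = np.random.uniform(0.0, 2.0 * np.pi)
        avoid = np.array([np.cos(angle), np.sin(angle)])

    v = w_aggr * aggr + w_home * home + w_avoid * avoid
    norm = np.linalg.norm(v)
    return v / norm * max_speed if norm > 0 else vel[i]   # saturate at the maximum speed
```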
Since the navigation strategy relies on local readings, which may vary depending on the system agent detecting the landmarks and on the initial position of the system, statistical analysis with multiple simulations for multiple population sizes is needed to analyze the impact of population size. Following the conditions of the previous simulations, the exercise was repeated for different population sizes: 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 agents. For each population size, 100 simulations were performed (1100 simulations in total) and the times in seconds that each case required were recorded. In the simulations, a 100% success rate was obtained (in all cases the system reached the target region), but in some cases, the time required was an outlier (excessively long), which was also observed in real tests. The results are shown in Fig. 6, which presents a basic statistical analysis with median values by population size, quartiles, and excluded outliers. The results show that beyond the fact that the system can perform the task, there is a relationship between the time required and the population size, for the particular working conditions (type of robot, size of the environment, and configuration of O) [25]. This relationship can be represented by a mathematical function, which could be useful for system sizing. This same exercise can be repeated with another set of obstacles, a different environment size, different robot speeds, or even some dynamic conditions making the obstacles change their position every so often. The framework allows the behavior of this kind of system to be analyzed and generalized in a fast, economical, and reliable way. These results can then be contrasted and scaled against laboratory tests with some real robots. V. CONCLUSION In this paper, we propose a new simulation framework to replicate the flocking behavior of a swarm of robots, as a strategy for the efficient and safe evaluation of the performance of this type of system. The control scheme is designed based on the basic behaviors identified in the literature as fundamental for a flocking system: safe wandering, following, aggregation, dispersion, and homing. Each of these basic behaviors is replicated from each agent's sensor data, and weighted in conjunction with the navigation strategy to form the velocity vector. We identify the weighted value of each basic behavior while allowing it to be adjusted according to the simulation needs. The framework also allows modeling different types of environments and robots, but the tests and calibrations were performed with the TurtleBot 3 Burger platform from ROBOTIS. With this platform, the performance was tuned for a pair of agents, which was then scaled to allow evaluating tens and hundreds of agents. In this sense, our framework allows us to adjust specific parameters such as robot size, sensing capacity, and maximum speed. The simulation produces an animation of the system using Python's Matplotlib library, which is exported to video. In this animation, the obstacles are placed in the background, while the agents are dynamically updated in the foreground. The code allows the implementation of a wide range of navigation strategies, both geometric strategies based on the global characteristics of the environment and reactive strategies based on local readings.
The tool was verified for a simple navigation task, evaluating both the self-organizing capability of the system and the impact of system features on task performance. Future research on this tool includes the incorporation of other robotic platforms used by the research group.
2022-03-03T16:29:55.329Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "a7947b8198b57636f9855a7eac4cc50a7cea7204", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume13No2/Paper_81-A_Software_Framework_for_Self_Organized_Flocking_System.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e4c7000e7f0ef5da51250f3d5952c54fffe8b005", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
221478056
pes2o/s2orc
v3-fos-license
CYP4F13 is the Major Enzyme for Conversion of alpha-Eleostearic Acid into cis-9, trans-11-Conjugated Linoleic Acid in Mouse Hepatic Microsomes. Our previous studies have shown that α-eleostearic acid (α-ESA; cis-9, trans-11, trans-13 (c9,t11,t13)-conjugated linolenic acid (CLnA)) is converted into c9,t11-conjugated linoleic acid (CLA) in rats. Furthermore, we have demonstrated that the conversion of α-ESA into CLA is a nicotinamide adenine dinucleotide phosphate (NADPH)-dependent enzymatic reaction, which occurs mostly in the rat liver. However, the precise metabolic pathway and enzyme involved have not been identified yet. Therefore, in this study we aimed to determine the role of cytochrome P450 (CYP) in the conversion of α-ESA into c9,t11-CLA using an in vitro reconstitution system containing mouse hepatic microsomes, NADPH, and α-ESA. The CYP4 inhibitors, 17-ODYA and HET0016, showed the highest level of inhibition of CLA formation. Furthermore, the redox partner cytochrome P450 reductase (CPR) inhibitor, 2-chloroethyl ethyl sulfide (CEES), also demonstrated a high level of inhibition. Thus, these results indicate that the NADPH-dependent CPR/CYP4 system is responsible for CLA formation. In a correlation analysis between the specific activity of CLA formation and Cyp4 family gene expression in tissues, Cyp4a14 and Cyp4f13 demonstrated the best correlations. However, the CYP4F substrate prostaglandin A1 (PGA1) exhibited the strongest inhibitory effect on CLA formation, while the CYP4A and CYP4B1 substrate lauric acid had no inhibitory effect. Therefore, we conclude that the CYP4F13 enzyme is the major enzyme involved in CLA formation. This pathway is a novel pathway for endogenous CLA synthesis, and this study provides insight into the potential application of CLnA in functional foods. However, it has been reported that mice fed the highly purified t10,c12-CLA isomer had adverse side effects such as insulin resistance, robust hyperinsulinemia, and massive liver steatosis 8−10. Furthermore, insulin resistance has also been detected in obese men treated with purified t10,c12-CLA 11, which raises concerns about the safety of dietary supplements containing the t10,c12-CLA isomer. On the other hand, c9,t11-CLA, which was found to improve the increased insulin resistance caused by t10,c12-CLA 9,12, is considered safer due to fewer reported side effects. For these reasons, CLA supplements do not provide the same health effects as CLA from natural foods. In contrast to the generally low content of CLA in nature, the content of conjugated linolenic acid (CLnA; C18:3 with three conjugated double bonds) in seed oil from certain plants can comprise up to 80% of the total lipid content. For example, α-eleostearic acid (α-ESA; c9,t11,t13-CLnA) comprises up to 60% (wt/wt) of the total lipid content of bitter gourd seed oil and 76% of the total lipid content of tung seed oil, whereas punicic acid (PA; c9,t11,c13-CLnA) comprises up to 74.5% of the total lipid content of pomegranate seed oil, and jacaric acid (JA; c8,t10,c12-CLnA) is found at a concentration of 15.9% in jacaranda seed oil. We are particularly interested in these CLnA-rich seed oils, because CLnA is the only conjugated fatty acid that can be prepared from natural sources in bulk 13. Our previous studies have also shown that CLnA has a stronger anti-tumor effect than CLA in vitro and in vivo. Moreover, α-ESA has been reported to have a strong antiangiogenic effect, a newly confirmed physiological effect of α-ESA 14−17.
We first reported that α-ESA was converted into c9,t11-CLA in 1% α-ESA-fed rats. The structure of c9,t11-CLA was determined using gas chromatography-electron impact/mass spectrometry (GC-EI-MS) and 13C-NMR. Furthermore, we have demonstrated that this conversion is a Δ13 saturation, an NADPH-dependent enzymatic reaction occurring mostly in the rat liver 18,19. Moreover, we observed similar conversions of CLnA into CLA in PA- and JA-fed rats, where PA was converted into c9,t11-CLA and JA was converted into c8,t10-CLA 13,20. These conversions were also confirmed in mice and humans in other reports 21−23. In ruminants, CLA is formed as an intermediate during the ruminal bio-hydrogenation of linoleic acid to stearic acid (18:0) by B. fibrisolvens and other rumen bacteria 24. Another pathway involves Δ9-desaturase-dependent endogenous CLA synthesis from vaccenic acid (t11-18:1), another intermediate in ruminal biohydrogenation 5,25. Although the major source of CLA in humans comes from dietary intake, endogenous synthesis of CLA has also been reported in humans and other non-ruminants 26−28. Accordingly, the conversion of CLnA into CLA could be a novel pathway for endogenous CLA synthesis. As CLnA can be prepared more easily than CLA, once the mechanism of this conversion is elucidated, it is expected that CLnA, especially α-ESA, will be a new source for CLA synthesis or supplant CLA as a dietary supplement. Our previous studies indicated that the NADPH-dependent enzyme involved in this conversion should be classified as part of drug metabolism, but not as part of the β-oxidation enzyme group in the fatty acid metabolic pathway 18. However, the precise metabolic pathway and enzyme involved have not been identified yet. The superfamily of cytochrome P450 (CYP) enzymes is considered the major enzyme family responsible for the phase I metabolism (reduction, oxidation, or hydrolysis reactions) of numerous endogenous and exogenous compounds, such as drugs and other xenobiotics 29. The in vivo reactions catalyzed by CYP enzymes require the cofactor NADPH as the electron source and the redox partner cytochrome P450 reductase (CPR), which functions in the electron transfer from NADPH to CYP in microsomes 30. Based on these considerations, we sought to elucidate whether CYPs play a role in the conversion of α-ESA into CLA. For this purpose, we aimed to determine the subcellular distribution of enzymatic activity in mouse liver by using an in vitro reconstitution system of enzymatic activity containing mouse hepatic microsomes, NADPH, and α-ESA to test the effect of various inhibitors and CYP-substrates on c9,t11-CLA formation. Moreover, we aimed to determine the enzymatic activities and Cyp4 family gene expression levels in various tissues for correlational analysis. Animals All animal experiments were conducted in accordance with the Regulations for Animal Experiments and Related Activities at Tohoku University (2018AgA-015) 31. Institute of Cancer Research (ICR) mice, 13 weeks old, were obtained from CLEA Japan Inc. and were allowed to acclimate to the facility for 1 week with a standard chow diet (CE-2, CLEA Japan) before the initiation of the experiments. At 14 weeks of age, the mice were sacrificed by decapitation after an overnight fast of 12 h and the following tissues were collected: liver, kidney, small intestine, pancreas, brain, heart, lung, spleen, epididymal white adipose tissue (eWAT), and brown adipose tissue (BAT). All tissues were stored at −80 °C until use.
Enzymatic activity assay To prepare the enzyme substrate α-ESA (free fatty acid form), tung oil was hydrolyzed by potassium hydroxide (KOH) as previously reported, with a slight modification 17. After bubbling with nitrogen gas for 15 s, 30 mg of tung oil in a glass test tube was saponified with 0.25 mL of 0.3 M KOH in 2.5 mL methanol at 40 °C for 90 min. After cooling to room temperature, 2.5 mL of H2O and 5 mL of hexane were added. The reaction mixture was vigorously vortexed for 2 min and centrifuged at 500 × g for 5 min to separate the organic layer from the aqueous layer. The top organic layer containing non-saponifiable material was removed, while the bottom aqueous layer was mixed with 1.5 mL of 6 M HCl and 5 mL of hexane to extract the fatty acids. The mixture was vigorously vortexed for 2 min and centrifuged at 500 × g for 5 min. The top organic layer was then concentrated by solvent evaporation under vacuum, dissolved in dimethyl sulfoxide (DMSO), and stored at −80 °C until use. For detecting enzymatic activity, each of the frozen biological samples (a total of 10 tissue samples) was homogenized in 9 volumes (w/v) of chilled 0.01 M Tris-acetate sucrose buffer, pH 7.4, containing 0.01 M Tris-acetate, 0.25 M sucrose, 1 mM dithiothreitol (DTT), 1 mM phenylmethylsulfonyl fluoride (PMSF), and 1 mM ethylenediaminetetraacetic acid (EDTA) 18, by using a bead-type homogenizer (Micro Smash MS-100, TOMY Seiko, Tokyo, Japan) at 3600 rpm for 30 sec, 3 times. To unify the enzymatic reaction time, the experiments were performed in a crushed-ice bath. A total of 450 μL of homogenate (10% w/v) was mixed with 10 μL of α-ESA substrate (5 mM in DMSO) and 40 μL of coenzyme NADPH (0.1 M in 0.9% NaCl), followed by incubation at 37 °C for 30 min. The reaction was stopped by placing the sample in a crushed-ice bath, and a known amount of the internal standard heptadecanoic acid (C17:0) was added in order to quantify the fatty acids. The lipids in the reaction mixture were extracted with chloroform-methanol-water (1:1:0.9 by volume) according to the Bligh & Dyer method. The free fatty acids in the lipid extract were then converted into methyl ester derivatives by the trimethylsilyldiazomethane (TMSCHN2) method 32 for gas chromatography (GC) analysis to determine the CLA concentration. The protein concentration was determined by the Pierce BCA protein assay kit (Thermo Scientific, Houston, TX, USA). The specific activity of the conversion of α-ESA into CLA was obtained after normalizing to the total protein using the following formula: specific activity (nmol/min/g protein) = CLA amount (nmol) / reaction time (min) / protein (g). Gas chromatography analysis of fatty acids The fatty acid methyl esters (FAMEs) were analyzed by GC (GC-4000 Plus, GL Science Inc., Tokyo, Japan) equipped with a Supelcowax-10 fused silica capillary column (60 m × 0.32 mm × 0.25 μm film thickness, Supelco, Bellefonte, PA) and a flame ionization detector 20. Helium was used as a carrier gas at a constant pressure of 400 kPa. The temperatures of the injector and detector were 200 and 250 °C 13, respectively, and the oven temperature program was as follows: an initial temperature of 50 °C was ramped to 220 °C at a rate of 20 °C/min and held for 30 min, then ramped to 250 °C at a rate of 20 °C/min and held for 20 min. Each peak was annotated by comparing the retention times with CLA FAME standards and GLC reference standards of FAMEs (Nu-Chek-Prep, Elysian, MN, USA).
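For reference, a minimal sketch of the quantification arithmetic implied above: CLA is quantified against the C17:0 internal standard from GC peak areas, and the specific activity follows the stated formula. The simple area-ratio quantification (no response-factor correction) and the example numbers are assumptions for illustration only.

```python
def cla_amount_nmol(area_cla, area_c17, n_c17_nmol):
    """Quantify CLA from GC peak areas against the C17:0 internal standard (area-ratio assumption)."""
    return area_cla / area_c17 * n_c17_nmol

def specific_activity(cla_nmol, time_min, protein_g):
    """Specific activity (nmol/min/g protein) = CLA amount (nmol) / time (min) / protein (g)."""
    return cla_nmol / time_min / protein_g

# Hypothetical example: 12 nmol CLA formed in 30 min by 4.5 mg (0.0045 g) of microsomal protein
print(specific_activity(12.0, 30.0, 0.0045))   # ~88.9 nmol/min/g protein
```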
Characterization of α-ESA saturase In order to further understand the characteristics of the conversion of α-ESA into CLA, the liver homogenate, NADPH, and α-ESA in the reaction mixture were individually replaced by 0.01 M Tris-acetate sucrose buffer, 0.9% NaCl solution, and DMSO, respectively. After incubation at 37 °C for 30 min, the specific activity of α-ESA conversion into CLA was measured, as explained above. To further confirm whether other coenzymes could initiate the reaction, 40 μL of 0.1 M coenzyme (NADPH, NADP+, NADH, or NAD+, all dissolved in 0.9% NaCl solution) was added individually in the presence of the homogenate and α-ESA. Subsequently, the reaction mixture was processed as described above to measure the specific activity of α-ESA conversion into CLA. Experiments with heated liver homogenate (incubation conducted at 60 °C for 30 min) were also performed to further assess the role of the homogenate in the conversion of α-ESA into CLA. To elucidate the subcellular distribution of α-ESA saturase, the liver homogenate was centrifuged at 600 × g at 4 °C for 10 min to collect the pellet containing intact nuclei and debris. The nuclear pellet was re-suspended in 1 volume of 0.01 M Tris-acetate sucrose buffer and the postnuclear supernatant was again centrifuged at 8000 × g at 4 °C for 20 min using an Optima L-100 XP Ultracentrifuge with a type 70.1 Ti fixed-angle rotor (Beckman Coulter Ltd., Fullerton, CA) to sediment mitochondria. The mitochondrial pellet was re-suspended in 1 volume of the 0.01 M Tris-acetate sucrose buffer and the post-mitochondrial supernatant was further ultracentrifuged at 105000 × g at 4 °C for 60 min to obtain microsomes 33. The microsomal pellet was also re-suspended in 1 volume of the same buffer and the cytosolic supernatant was retained. A total of 450 μL of each subcellular fraction, namely nuclear, mitochondrial, microsomal, and cytosolic, was subjected to an enzymatic activity assay and the amounts of CLA were determined by GC analysis. Inhibitory effect of inhibitors and CYP-substrates To evaluate the inhibitory effect of inhibitors (CYP-selective inhibitors, COX inhibitors, and a CPR inhibitor) on CLA formation, liver microsomes (10 mg protein/mL) were preincubated with inhibitors or DMSO solvent, which was used as the control, at 37 °C for 5 min. Then, the pre-incubated mixtures were immediately subjected to the CLA formation assay as explained above. The IC50 (the concentration of inhibitor required to cause a 50% inhibition of the original enzyme activity) was determined graphically from the plot of the logarithm of inhibitor concentration versus the percentage of CLA remaining (% of control) after inhibition using GraphPad Prism 7 (GraphPad Software, San Diego, CA, USA). To determine the inhibitory effect of CYP-substrates on CLA formation, liver microsomes (10 mg protein/mL) were pre-mixed simultaneously with α-ESA and CYP-substrates (α-ESA:CYP-substrate = 1:1 or 1:4 mole ratio) before the addition of NADPH, which initiated the reaction. Following this, the mixtures were incubated at 37 °C for 30 min as described above for the enzymatic activity assay and the amounts of CLA were determined by GC analysis. Cyp4 family messenger RNA expression analysis For the purification of high-quality RNA, the RNeasy Mini Kit (Qiagen, Valencia, CA, USA) was used for liver, kidney, small intestine, pancreas, and spleen samples, while the RNeasy Lipid Tissue Mini Kit (Qiagen, Valencia, CA, USA) was used for brain, eWAT, and BAT samples.
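The IC50 values reported here were obtained graphically with GraphPad Prism; as an illustration of the same idea, the sketch below fits a four-parameter logistic curve to hypothetical dose-response data. The model choice and the numbers are assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic: % CLA remaining vs log10(inhibitor concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_c - log_ic50) * hill))

# Hypothetical inhibitor concentrations (mM) and % CLA remaining relative to the DMSO control
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0])
remaining = np.array([95.0, 80.0, 55.0, 25.0, 10.0])

popt, _ = curve_fit(four_pl, np.log10(conc), remaining, p0=[0.0, 100.0, -1.0, 1.0])
print("IC50 ~", 10.0 ** popt[2], "mM")
```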
The RNeasy Fibrous Tissue Mini Kit (Qiagen, Valencia, CA, USA) was used for total RNA isolation from cardiac tissue according to the protocol given by the manufacturer. The concentration and purity of the isolated RNA were determined using the Nanodrop 1000 spectrophotometer (Thermo Scientific, Wilmington, MA, USA). Subsequently, reverse transcription of RNA to complementary DNA (cDNA) was performed with the PrimeScript® RT Master Mix (Perfect Real Time) Kit (Takara Bio Inc., Shiga, Japan) 34. Briefly, an aliquot of 1000 ng of RNA, 4 μL of 5× PrimeScript RT Master Mix (Perfect Real Time), and RNase-free distilled water up to 20 μL were mixed and incubated at 37 °C for 10 min, and then at 85 °C for 5 sec. Finally, 480 μL of RNase-free dH2O was added to dilute the cDNA and the samples were stored at −20 °C for subsequent analysis. The cDNA was used for real-time quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) to analyze the expression levels of Cyp4 family genes. The qRT-PCR reaction was prepared in a final volume of 20 μL containing 10 μL of 2× TB Green® Premix Ex Taq™ (Tli RNaseH Plus) (Takara Bio Inc., Shiga, Japan), 1 μL forward primer (10 μM), 1 μL reverse primer (10 μM), and 10 μL diluted cDNA. The gene-specific primers, purchased from Sigma-Aldrich (Merck KGaA), are shown in Table 1. PCR amplification was performed with a CFX Connect™ Real-Time PCR Detection System (Bio-Rad, California, USA) and each biological sample was assayed in two technical replicates. The reactions were subjected to an initial 30 sec denaturation at 95 °C. To verify the specificity of the amplification reaction, a melting curve analysis was performed in the range of 60 to 95 °C, in 0.5 °C per 5 sec increments, after thermo-cycling for each reaction 35. The threshold cycle (Ct) value, representing the PCR cycle at which an increase in reporter fluorescence signal significantly above the background fluorescence can first be detected, was also determined. The expression levels of the Cyp4 family genes (10 genes in total) in each biological sample were normalized to tissue weight, and shown as fold changes relative to the corresponding Cyp4 transcripts in the liver. Table 1. List of primer sequences used for real-time PCR. Statistical analysis Data are presented as mean ± standard deviation (SD) and were analyzed with one-way analysis of variance (ANOVA), followed by the Tukey-Kramer test for comparisons among more than two groups. A P value < 0.05 was considered statistically significant. For correlation studies, the specific activity of CLA formation was normalized to the corresponding tissue weight. The correlations between the tissue specific activity and the relative expression level of Cyp4 family genes were described using Spearman's rank correlation coefficient (rs). The strength of correlation was ranked as follows: for absolute values of rs, 0.01-0.19 was regarded as negligible, 0.20-0.29 as weak, 0.30-0.39 as moderate, 0.40-0.69 as strong, and ≥ 0.70 as very strong 36. Characterization of α-ESA saturase in vitro To characterize the conversion of α-ESA into CLA, liver homogenate, α-ESA, and NADPH in the in vitro reconstitution system were replaced by their corresponding solvents. However, CLA formation was not detected in the absence of any one of the ingredients (Fig. S1). Only the liver homogenate could convert α-ESA into CLA in the presence of NADPH, indicating that the converting process was an enzyme-mediated metabolic process.
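A minimal sketch of the correlation step just described: Spearman's rank correlation between tissue-specific activity and relative Cyp4 expression, classified with the thresholds quoted above (ref. 36). The example arrays are placeholders, not the study's measurements.

```python
from scipy.stats import spearmanr

def strength(rs):
    """Classify |rs| with the thresholds cited in the text (ref. 36)."""
    a = abs(rs)
    if a >= 0.70: return "very strong"
    if a >= 0.40: return "strong"
    if a >= 0.30: return "moderate"
    if a >= 0.20: return "weak"
    return "negligible"

# Placeholder per-tissue values: specific activity (nmol/min/g tissue) and relative Cyp4f13 expression
activity   = [5.2, 2.1, 0.9, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
expression = [1.00, 0.45, 0.20, 0.05, 0.02, 0.01, 0.01, 0.30, 0.02, 0.01]

rs, p = spearmanr(activity, expression)
print(f"rs = {rs:.3f} ({strength(rs)}), p = {p:.4f}")
```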
To further confirm whether other coenzymes could initiate the reaction, either NADP+, NADH, or NAD+ was added to the in vitro reconstitution system instead of NADPH. However, in the presence of NADP+, NADH, or NAD+, the CLA level was below the detectable limit, due to a high preference for NADPH (Fig. S2). Furthermore, the heated liver homogenate incubated with α-ESA and NADPH also showed no activity (Fig. S2), demonstrating that the metabolism of α-ESA into CLA occurs through an NADPH-dependent enzymatic reaction. In the α-ESA saturase subcellular localization experiment in the liver homogenate, the enzymatic activity was highest in the microsomal fraction, followed by the mitochondrial fraction, lower in the nuclear fraction, and absent in the cytosolic fraction (Fig. 1). Therefore, it was confirmed that α-ESA saturase required NADPH as the source of electrons and was abundant in microsomes, which contain the major CYP drug-metabolizing enzymes. For these reasons, hepatic microsomes were used in the next experiments. The effect of inhibitors on CLA formation To explore whether the CYP enzymes were involved in CLA formation, various CYP-specific inhibitors were selected to validate their inhibitory effect on CLA formation in hepatic microsomes. With the exception of fluconazole (CYP2C19 inhibitor) and ketoconazole (CYP3A4 inhibitor), all the other CYP-specific inhibitors, namely fluvoxamine (CYP1A2 inhibitor), ticlopidine (CYP2B6 inhibitor), montelukast (CYP2C8 inhibitor), sulfaphenazole (CYP2C9 inhibitor), quinidine (CYP2D6 inhibitor), chlormethiazole (CYP2E1 inhibitor), 17-ODYA (CYP4A/B1/F inhibitor), and HET0016 (CYP4A/B1/F inhibitor), inhibited CLA formation at different concentrations (Table 2; abbreviations: CEES, 2-chloroethyl ethyl sulfide; DMSO, dimethyl sulfoxide; α-ESA, α-eleostearic acid; CLA, conjugated linoleic acid; NADPH, nicotinamide adenine dinucleotide phosphate). Furthermore, the activity of leukotriene B4 12-hydroxydehydrogenase/15-ketoprostaglandin Delta 13-reductase (LTB4 12-HD/PGR), which was thought to be involved in the conversion of α-ESA into CLA, could be inhibited by cyclooxygenase (COX) inhibitors, such as indomethacin and niflumic acid. However, the results demonstrated that indomethacin and niflumic acid had only a slight inhibitory effect on CLA formation at a high concentration (Fig. 2), and the IC50 value of 17-ODYA was about two-fold that of HET0016 (0.16 ± 0.02 mM versus 0.085 ± 0.099 mM), but still significantly lower than that of the other CYP1-3 inhibitors. These results thus indicate that the NADPH-dependent CPR/CYP4 electron transport system contributes to the conversion of α-ESA into CLA.
Furthermore, Cyp4a subfamily isoforms were almost exclusively found in the liver and kidney. The highest expression of Cyp4a10 and Cyp4a12a/b was found in the kidney, and the highest Cyp4a14 mRNA expression was detected in the liver, while Cyp4b1 was expressed almost entirely in the kidney. Cyp4f13 was expressed ubiquitously in various tissues, with the highest expression in the liver followed by the kidney, BAT, and small intestine. Cyp4f14 and 4f15 were primarily expressed in the liver, and fairly low amounts were observed in the small intestine, brain, and kidney. Cyp4f16, 4f17, and 4f18 were all expressed ubiquitously across tissues, like Cyp4f13. Cyp4f16 mRNA was high in both the kidney and small intestine, while Cyp4f17 mRNA was highest in the liver and kidney and slightly lower in BAT, eWAT, the spleen, and brain. Cyp4f18 mRNA was most highly expressed in the spleen. (Fig. 3: (A) The specific activity of α-ESA conversion into CLA in various tissues; the specific activity was normalized to tissue weight, specific activity (nmol/min/g tissue) = CLA amount (nmol) / reaction time (min) / tissue weight (g). (B) The relative expression level of Cyp4 family genes in each tissue; the gene expression level was normalized to tissue weight and shown as fold changes relative to the corresponding Cyp4 transcripts in the liver. Fourteen-week-old male mice were used for this experiment. Data are presented as mean ± SD, n = 6. CLA, conjugated linoleic acid; α-ESA, α-eleostearic acid; eWAT, epididymal white adipose tissue; BAT, brown adipose tissue.) This extensive analysis of Cyp4 mRNA expression levels provides valuable information about the specific CYP4 enzymes that may be involved in CLA formation in various tissues. Correlation analysis The observed disparity between the tissue-divergent expression patterns of Cyp4 genes and the alterations in the specific activities of CLA formation in tissues pointed to the necessity of a correlation study, which was used to confirm the specific CYP4 enzymes in the conversion of α-ESA into CLA. The results are displayed as the specific activity of CLA formation in 10 tissues from 6 male ICR mice versus the relative expression levels of each of the Cyp4 family genes (Fig. 4). There were no significant correlations between the specific activities and the expression levels of the Cyp4a10, 4a12a/b, 4b1, or 4f18 genes. The best correlations were found for Cyp4a14 and 4f13 (rs values of 0.8611 and 0.8610, respectively) with a very high statistical significance (p < 0.0001). Cyp4f14 and 4f15 had very strong positive correlations with a very high statistical significance. A moderate and a strong statistically significant positive correlation were detected for Cyp4f16 and Cyp4f17, respectively. Therefore, the specific activity of CLA formation showed the most similar tissue-distribution pattern to the gene expression patterns of Cyp4a14 and Cyp4f13. These results confirmed that CYP4 enzymes are involved in CLA formation, with CYP4F13 and CYP4A14 being the most relevant enzymes. The effect of CYP-substrates on CLA formation To further investigate the role of the most relevant enzymes, CYP4F13 and CYP4A14, in the conversion of α-ESA into CLA, the liver microsomes were incubated simultaneously with CYP-substrates, α-ESA, and NADPH to test whether the CYP-substrates inhibit CLA formation. The results show that the yield of CLA formation varied in relation to the mole ratios of α-ESA and CYP-substrates (Figs. S3 and S4).
The CLA formation was significantly inhibited by prostaglandin A1 (PGA1) even at a low concentration (Table 3 and Fig. S3), while lauric acid showed no inhibition of CLA formation based on the comparison between CYP-substrate groups (Table 3; notes: #, all the substrates were dissolved in DMSO, which is also taken as a control; a,b,c, values in a column without a common superscript letter are significantly different (p < 0.05); data are presented as mean ± SD, n = 3). The other CYP-substrates were shown to have a significant inhibitory effect only at a high concentration. PGA1, the substrate of CYP4F, appeared to be the most potent substrate, while lauric acid, the substrate of CYP4A and CYP4B1, was the least potent substrate. Taken together, these results suggested that the α-ESA saturase that could convert α-ESA into CLA belongs to the CYP4F subfamily of enzymes, rather than the CYP4A or CYP4B subfamily of enzymes. Furthermore, the results of the correlation analysis described above confirmed that CYP4F13 is the most suitable candidate enzyme among the CYP4F subfamily involved in the conversion of α-ESA into CLA. Discussion In this study, we demonstrated that the conversion of α-ESA into CLA occurs through an NADPH-dependent enzymatic reaction. Furthermore, the highest specific activity of CLA formation was detected in the liver, followed by the kidney, small intestine, and pancreas (Figs. S1-S2 and Fig. 3A). These results are consistent with an earlier study conducted with rats by our laboratory 18. Moreover, we determined the subcellular distribution of α-ESA saturase in the liver. Our results demonstrate that α-ESA saturase was abundant in microsomes, but absent in the cytosol. Liver microsomes contain the major drug-metabolizing enzymes cytochrome P450 (CYP) and UDP-glucuronosyltransferase (UGT), along with other enzymes that contribute to drug metabolism 37, which supports our hypothesis that α-ESA saturase should be classified as an enzyme for drug metabolism. To test this hypothesis, CYP-specific inhibitors were selected to validate the effect of inhibitors on CLA formation in hepatic microsomes. The results showed that the inhibitors of CYP 1 38, 2 39,40, and 3 41 family enzymes had minimal or modest inhibitory activities. However, CYP1-3 contain major xenobiotic-metabolizing enzymes responsible for the metabolism of the majority of drugs and other xenobiotics 42,43, while CYP4 enzymes typically metabolize fatty acids 29,44. According to this observation, CYP1-3 are unlikely to play a role in the metabolism of α-ESA, which is a type of conjugated triene fatty acid, and the slight inhibitory effect of CYP1-3 inhibitors could be due to a low specificity 43,45. In contrast, the CYP4 inhibitors, 17-ODYA and HET0016 44,46, showed the highest level of inhibition (Fig. 2), which also indicated that CYP4 enzymes are involved in CLA formation. In addition, the CPR inhibitor CEES 47 also showed a significant inhibition of CLA formation. All these findings suggested that the CPR/CYP4 electron transport system is involved in the conversion of α-ESA into CLA. We also noticed that the conversion of α-ESA into CLA is similar to that in the metabolic pathway of eicosanoids, where leukotriene B4 (LTB4), prostaglandin E2 (PGE2), and lipoxin A4 (LXA4) are the dominant eicosanoids. In the LTB4 metabolic pathway, LTB4 is oxidized by LTB4 12-HD/PGR to 12-oxo-LTB4, and then the double bonds at C10-C11 and C14-C15 are reduced by an unknown reductase(s).
Additionally, PGE 2 and LXA 4 are oxidized to 15-oxo-prostaglandin E 2 15-oxo-PGE 2 and 15-oxo-lipoxinA 4 15-oxo-LXA 4 , respectively; LTB 4 12-HD/ PGR subsequently reduces 15-oxo-PGE 2 to 13,14-dihydro-15-oxo-PGE 2 and 15-oxo-LXA 4 to 13,14-dihydro-15-oxo-LXA 4 in the presence of NADPH 48 . In addition, Clish et al. 49,50 have reported that LTB 4 12-HD/PGR is a member of the zinc-independent medium chain dehydrogenase/reductase family, which exhibits high reductase activity toward double bonds in several xenobiotics. Therefore, we speculated that the conversion of α-ESA into CLA proceeds through the LTB 4 metabolic pathway, in which the double bond of α-ESA is saturated by the unknown reductase s , or the PGE 2 and LXA 4 pathways, in which the double bond is directly reduced by LTB 4 12-HD/PGR. However, our study showed that the LTB 4 12-HD/PGR inhibitors, indomethacin and niflumic acid 49 , caused only a slight inhibitory effect on CLA formation, thereby indicating that LTB 4 12-HD/PGR does not contribute to CLA formation. Moreover, Itoh et al. 51 have reported that LTB 4 12-HD/PGR from male Wistar rat liver was predominantly localized in the cytosolic fraction, and was absent in the microsomal fraction. In contrast, the α-ESA saturase activity was highest in the microsomal fraction and absent in the cytosolic fraction in this study Fig. 1 . This discrepancy also suggests that LTB 4 12-HD/PGR is unlikely to play a role in the conversion of α-ESA into CLA. In our previous studies, we have shown that α-ESA and PA were converted into c9,t11-CLA, JA was converted into c8,t10-CLA in rats, and the conversion ratio of α-ESA was higher than that of PA and JA. These results indicate that the double bond distal to the carboxyl group in the carbon chain of the conjugated triene acid is selectively saturated in this reaction and the variety in the conversion ratios may be attributed to the substrate-specificity of the saturase. Furthermore, these findings suggested that CYP-substrates could be utilized to determine which specific CYP4 enzyme catalyzed α-ESA metabolism. Lauric acid, a specific substrate of CYP4A enzymes 44 , also preferentially catalyzed by the rabbit CYP4B1 enzyme 52 but not the CYP4F enzymes, had no inhibitory effect on CLA formation. Moreover, there were no statistically significant differences in the inhibitory effects between palmitic acid C16:0 , C18 carbon fatty acids non-conjugated fatty acids with the same number of carbon atoms as α-ESA but with a variety in the number of double bond , and arachidonic acid C20:4 . This finding suggests that the number of carbon atoms and the non-conjugated double bond have no effect on the substrate-specific effects of α-ESA saturase. In addition, we found no inhibitory effect from the other CYP4F substrates at low concentrations except for PGA 1 , which is one of the classical substrates of CYP4F enzymes, since other CYP4F-substrates could also be catalyzed by other CYP family enzymes besides CYP4F. Taken together, these findings led us to select CYP4F isoforms as α-ESA saturase, consistent with previous reports showing that CYP4As metabolize intermediatechain fatty acids fatty acids with C10 to 16 carbon chain 29 , while CYP4Fs catalyze long-chain fatty acids C16 to 26 53 . 
However, we could not determine the specific CYP4F enzyme involved due to the lack of a selective marker substrate for the activity of individual CYP4F enzymes in the study of the effect of CYP-substrates on CLA formation, although PGA1 could be used as a nonselective marker substrate for CYP4F enzymes. Nonetheless, the correlation analysis showed that Cyp4a14 and 4f13 had the best correlation with a very high statistical significance. After a comprehensive evaluation of these findings, we concluded that CYP4F13 was the primary enzyme involved in CLA formation. Conversely, CYP4B1 is predominantly expressed in extrahepatic tissues and has been reported to preferentially metabolize short-chain fatty acids (approximately C7 to C10) 52, suggesting that CYP4B1 does not contribute to CLA formation. This is also verified by the results of the correlation study. CYP enzymes almost always act as monooxygenases, or mixed-function oxidases, by inserting one atom of oxygen into the substrate. The stoichiometry of the oxidation reaction can be written as: R-H + NADPH + H+ + O2 → R-OH + H2O + NADP+ 54. The CYP4 family plays a major role in the metabolism of fatty acids, in most cases through oxidation of fatty acids, including epoxidation and hydroxylation. Unlike the well-established oxidation reaction, the reduction reaction catalyzed by CYP has not been characterized in detail. The stoichiometry is written as: R-X + NADPH + H+ → R-H + X-H + NADP+ 55. This reaction is mostly seen when the substrates contain quinones, azo-, halogenated, nitro-, N-hydroxy-, and hydroperoxide functional groups. Amunom et al. 55,56 have reported that α,β-unsaturated aldehydes (9-anthracene aldehyde and 4-hydroxy-trans-2-nonenal) are reduced/hydrogenated to their corresponding carboxylic acid by several human and murine CYP enzymes. They also identified that this CYP-dependent reduction occurred in both the presence and absence of molecular oxygen. Furthermore, replacement of the normal ambient air with carbon monoxide did not affect the reaction but significantly inhibited CYP-dependent oxidation. We also found that there was no significant difference in the specific activity of CLA formation in tissues determined under ambient air conditions and under nitrogen flow (weak anaerobic conditions) (data not shown). The CLA formation reaction would thus be more similar to CYP-dependent reduction than to CYP-dependent oxidation in terms of the molecular oxygen requirement. Although, to our knowledge, there have been no previously documented reports that unsaturated fatty acids can be reduced/hydrogenated by CYP, the precedent of aldehyde group reduction suggests that there may be other functional groups, such as conjugated double bonds in fatty acids, which can also be reduced by CYP. Mice have 102 CYP enzymes, and many of the recently identified CYP enzymes are still considered orphans with no known functions 57. In particular, the physiological and metabolic functions of the CYP4F subfamily have not been elucidated; only the catalytic activities of CYP4F14 and 4F18 are known, while other enzymes remain to be characterized 58. From our results we inferred that CYP4F13 is the major enzyme responsible for the conversion of α-ESA into CLA, indicating that there may be a novel reduction reaction pathway for fatty acid metabolism in the liver, besides epoxidation and hydroxylation by CYP. This reduction reaction may be a unique function of CYP4Fs, which remains to be fully understood.
Thus, it will be of great interest to further examine the possible metabolic and functional effects of CYP4F in vivo at the molecular and physiological levels to extend our knowledge of CYP4F functions. As part of future research efforts, we intend to isolate and purify the CYP4F13 enzyme from mouse liver to provide direct evidence of the α-ESA to CLA conversion reaction. Conclusion In summary, to our knowledge, the present study would be the first to characterize CYP4F13 as the major enzyme responsible for the conversion of α-ESA into CLA. Whether it plays a role in the conversion of other CLnA, such as PA and JA, into CLA remains to be explored.
2020-09-03T09:05:07.312Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "c08a40771476092167675d843abd11a03aa9fabb", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/jos/69/9/69_ess20080/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e5139c6f3a04ad768cdf8a88b7c0b3ffeac65a49", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
250588852
pes2o/s2orc
v3-fos-license
Poromechanical Modeling and Numerical Simulation of Hydraulic Fracture Propagation Hydraulic fracturing (HF) is an important technique for enhancing the permeability of petroleum and gas reservoirs. To understand the coupling response mechanism of fluid pressure and in situ stress during the expansion of hydraulic fractures—based on the theory of the fluid flow of seepage porous media and damage mechanics—a poromechanical model of hydraulic fracture propagation is proposed and the finite element method (FEM) numerical weak coupling calculation method of hydraulic fracturing is realized. First, the effect of the coupling stress field is described by introducing the coefficient β, which describes the pore volume change produced per unit of internal fluid pressure, and the coupling calculation method of pore pressure-effective stress-element damage-pore pressure expansion coefficient is formulated. Second, based on the concept of damage localization, a calculation method for the hydraulic fracture opening equation is proposed, and then the element damage-hydraulic fracture opening-permeability tensor-pore pressure field calculation cycle is established. The model indicates four stages of fracture propagation: I, fracture nucleation; II, kinetic propagation; III, steady propagation; and IV, propagation termination. Finally, as an example, a numerical simulation of three-dimensional hydraulic fracturing is performed. In comparison to previous research, the morphology of the fracture zone and the fluid pressure contour of the horizontal section are approximately ellipses, which verifies the feasibility of the weak coupling calculation method; the fracture parameters, including the length, width, and fluid pressure, verify its accuracy. INTRODUCTION Hydraulic fracturing (HF) is an important technique to enhance the permeability of petroleum and gas reservoirs. The mechanisms of fracture propagation are explained well by analytical solutions (2D, 1−3 P3D, 4,5 and PL3D 6,7), which mainly deal with the lubricant flow, elastic displacement of fracture walls, and incomplete coupling between the fluid front and the fracture tip. However, the analytical solutions can only output the temporal and spatial distributions of hydraulic fracture parameters such as fracture opening, length, and fluid pressure on a predefined propagation path. Even some finite element methods (FEMs), 8−11 based on cohesive zone models, need to predefine the propagation path along a line of nodes. These models essentially place the parallel plate crack model in an infinite elastic body. Although they have good planar applications, the complexity of the grid technology is a disadvantage for large-scale engineering applications. Moreover, they ignore the temporal and spatial distributions of stress, damage, and pore fluid pressure around the injection hole, which are significant for monitoring hydraulic fracture zone growth and assessing permeability enhancement. 12,13 In addition, some microscopic models are also used for the numerical simulation of hydraulic fracturing, such as discrete element models (DEMs), 14,15 discrete fracture network models (DFN), 16,17 and lattice models (Lattices). 18,19 These models mainly reflect the microscopic mechanism of crack formation, focusing on the complexity and microseismic properties of cracks. These methods involve a certain arbitrariness in determining the microscopic primitive mechanisms and attribute parameters.
It will lead to fluctuations in the macroattribute evaluation, which is not conducive to largescale engineering applications and evaluation. The poromechanical model can satisfy both the needs of assessment of hydraulic fracture propagation. It is based on the coupling analysis between the porous flow and stress and damage utilizing the FEM, which comprises direct coupling and load transfer methods. 20 The former is also referred to as strongly coupled, where the final solutions of the unknown multiple physical field variables are recovered by solving the simultaneous equations. However, many engineering problems do not satisfy the conditions of strong coupling, especially for some dynamical evolution problems such as damage-induced fracture, where it is difficult to ensure solution convergence. The load transfer method approaches a solution of the unknown field variables by successively solving the multiphysical field equations, where one field variable is used as an input for the solution of another, which is repeated through a sequence of couplings until a tolerance for an equilibrium solution is reached. This load transfer, sequential, or leap-frog method represents only weak coupling and since fluid-driven fractures are always evolving; therefore, this method is particularly appropriate for dealing with the nonlinearities in these problems. However, there are some key points in the poromechanical model to be dealt with, such as how to define the fracture opening using the continuum variables, how to deal with the strain energy loss resulting from hydraulic fracturing, how to control the direction of fracture propagation, and how to apply the fluid load to simulate the continuum injection. Previously, some hydraulic fracturing models have mitigated these issues. 21−23 In the present paper, based on the theory of the fluid flow of seepage porous media and damage mechanics, an anisotropic tensor format is established for the hydromechanical properties of porous media; a poromechanical model of hydraulic fracture propagation is proposed, and the FEM numerical weak coupling calculation method of hydraulic fracturing is realized. This comprises several components: (1) the fracture opening is calculated based on damage localization, employing the thickness of localization; (2) the strain energy loss resulting from fracturing is compensated by the fluid pressure invasion through poroelastic coupling; (3) the direction of fracture propagation is controlled by the tensors of the hydromechanical properties induced by hydraulic fracture opening; and (4) continuous injection is achieved with the fracture growth through a loading scheme of stepwise increases in solution duration. As an example, the model is used to assess hydraulic fracture propagation in a three-layer reservoir; the morphology of the fracture zone and parameters such as length, width, and fluid pressure are validated with the analytical solutions. The model exhibits four stages of fracture propagation: fracture nucleation, kinetic propagation, steady propagation, and propagation termination, which represents the full coupling between the fracture tip and fluid front. CONTROLLING EQUATIONS We define the relationships that enable the simulation of the effects of fluid pressure on the propagation of a fluid-driven fracture. This involves both the transport of fluid and the mechanism of fracture expansion driven by that fluid. Flow in Porous Media. 
Under the action of pressure gradient, water flow penetrates into the porous media of the coal reservoir, which is a dynamic process of hydraulic expansion. The porosity−elastic coupling conceptual model is shown in Figure 1. In the initial stage of dynamic adjustment, the q value of the velocity of fluid flowing into the pore is greater than that flowing out, and the redundant water accumulates in the pores to form pore fluid pressure P resulting in the effective stress σ′ increasing. Second, with the increase of P, the inflow pressure gradient gradually decreases, while the outflow pressure gradient gradually increases. In the final stage, the pressure gradient at the inflow end and outflow end disappears, the q of the inflow is equal to the outflow, and the seepage of porous media reaches a steady-state flow. During reservoir formation, the porous flow satisfies Darcy's law: 24 where ∇ is the differential operator vector and K is the permeability. Using the law of mass conservation, the pore pressure is determined as where ξ is the mass of fluid, ϖ is the fluid generation rate of a unit solid volume, and t is time. According to the coupling between the compressibility of the solid and fluid, the differential change in the fluid mass, dζ, can be represented as where ρ f is the mass density of the fluid, dm is the differential increment of fluid mass, V b is the bulk volume, and ϕ is the porosity. Equation 3 shows that the volume increment of a fluid consists of three parts, where ϕC cc dP and ϕC f dP are the pore volume increments caused by the fluid pressure increment, dP, resulting from pore elasticity and fluid compressibility; ϕC pc dP c is the compressive volume increment of the pore volume caused by an increment in the confining stress dP c . The coefficient C pp represents the internal expansibility in volume resulting from increments in the fluid pressure, C pc is the internal contractility in volume resulting from increments in the confining stress, and C f represents the fluid compressibility resulting from increments in the fluid pressure: 25 where V p is the pore volume and P c = σ ii /3 is the average stress, whose sign convention is defined as positive for compression. Further, we can define the storage coefficient of fluids by adding the first and third expressions in eq 4, as C = C pp + C f . Rewriting eq 3 and substituting this into eq 2 yield This states that in any element of the porous media, increments in the fluid volume comprise three parts: the net increment that flows in minus that flowing out, source generation, and drainage resulting from external stress increments, which correspond to the three terms in the right-hand side of this equation. Stress Rebalancing. The pore fluid pressure will enlarge the longitudinal strains, such that the total strain is the superposition of the confining stress-induced strain and pore pressure-induced longitudinal strains, as follows: where ε is the total strain vector, E is the stiffness matrix, σ is the (confining) stress vector, ΔP = P − P 0 is an increment in the pore fluid pressure, P 0 is the reference pore fluid pressure, and β is the linear expansion coefficient vector resulting from internal forces of fluid pressure increments. 
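To make the balance expressed by eq 5 concrete, the sketch below integrates a simplified one-dimensional form of it, C ∂P/∂t = ∂/∂x((k/μ) ∂P/∂x) + source, with an explicit finite-difference update. This is a minimal illustration only: the constant mobility, the storage-coefficient value, and the boundary treatment are assumptions made for the toy example and are not taken from the model above.

```python
import numpy as np

def diffuse_pressure(P, mobility, C, dx, dt, n_steps, source=0.0):
    """Explicit 1D update of a simplified eq 5:
    C * dP/dt = d/dx( (k/mu) * dP/dx ) + source  (drainage term omitted).
    Endpoints are held fixed (Dirichlet boundaries)."""
    P = P.copy()
    for _ in range(n_steps):
        lap = (P[2:] - 2.0 * P[1:-1] + P[:-2]) / dx**2   # discrete second derivative
        P[1:-1] += dt / C * (mobility * lap + source)
    return P

# toy injection: 10 MPa held at the left boundary of a 10 m column
P0 = np.zeros(101)
P0[0] = 10.0e6
P = diffuse_pressure(P0, mobility=1e-12, C=1e-9, dx=0.1, dt=0.5, n_steps=2000)
```

The chosen time step keeps the explicit scheme stable (dt·mobility/(C·dx²) well below 0.5); an implicit solver would be preferable for field-scale transient seepage analyses.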
In the initial state, in which C bp is the bulk expansion coefficient, defined as 25 By employing the concept of thermal elasticity, β can also be calculated as where K d is the bulk modulus of the solid skeleton and α is the Biot coefficient; therefore the stresses corresponding to the total strains in eq 6 are the effective stress σ′, which can be written as Since the pore fluid pressure is not uniformly distributed, the effective stress will lead to stress redistribution. The tensor form of the stress differential equation can be written as where f i is the body force per unit volume. FRACTURE OPENING AND ANISOTROPY The key features to use FEM to represent hydraulic fracture propagation include the following: establishing the fracture opening in a continuum element and establishing a series of second-order Cartesian tensors regarding the damage, poroelasticity coefficients, and permeability. Fracture Opening. Progressive fracturing in geomaterials is a multiscale phenomenon that can be divided into three main stages: the evolution of distributed microdamage, localization and subsequent macrocrack nucleation, and macrocrack propagation. 26−28 For FEM, the entire fracture process can be represented by a damage variable D as follows: where ε t0 is the threshold strain, representing the initiation of crack nucleation; ε tu is the final strain when the fracture has transected the porous element; κ is a combined parameter, calculated as κ = ε t0 /(ε tu − ε t0 ); and ε I is the tensile strain controlling fracture opening. This tensile strain is in the same direction as the first effective principal stress σ I ′ (Figure 2a,b), while the effective stress follows the cohesive law ( Figure 2c). The magnitude of fracture opening can be determined from the strain and damage by employing the thickness of the damage localization band, denoted as δ, and is typically 1−2 times the size of the cleat spacing. The element size is denoted as L. It can be divided into two zones in the tensile direction: a concentrated damage zone of dimension δ and an undamaged zone of dimension L − δ. Both zones are subject to the same stress, σ′; therefore, where w represents fracture opening, w t represents the total elongation of the element, and E 0 represents the initial elastic modulus. From this equation, we have the following relation: The total strain can be expressed as Substituting eq 15 into eq 14 gives the hydraulic fracture opening: According to eq 6, the relationship between strain and stress can be written as By substitution of eq 17 into eq 16, the constitutive relationship between fracture opening and fluid pressure is obtained as where σ 1 is the minimum confining stress normal to the fracture surfaces, taken as negative. Fracture-Induced Anisotropy. The presence of a hydraulic fracture will lead to anisotropy in the hydromechanical properties, including the damage, coefficients of poroelasticity, and permeability. These may be represented using a series of second-order Cartesian tensors, which are defined in the global coordinates. 29 (20) and the D ij σ are given by where D n represents the tensile damage in the direction of σ 1 , which is derived from eq 14. Therefore, the damage tensor in global coordinates is written as Anisotropy of Poroelasticity Coefficients. A hydraulic fracture also leads to anisotropy in the coefficients C pp and C bp . 
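The tensile damage variable of eq 12 can be evaluated per element from the largest tensile strain. The snippet below is a sketch that uses a simple linear ramp between the two threshold strains as a stand-in for the softening branch; the exact interpolation (and the combined parameter κ) should be taken from eq 12 itself, and the numerical thresholds in the usage line are illustrative only.

```python
def tensile_damage(eps_I, eps_t0, eps_tu):
    """Damage D driven by the largest tensile strain eps_I (direction of sigma_I').
    D = 0 below the nucleation threshold eps_t0 and D = 1 once the element is
    transected at eps_tu; a linear ramp is used in between as an approximation."""
    if eps_I < eps_t0:
        return 0.0
    if eps_I >= eps_tu:
        return 1.0
    return (eps_I - eps_t0) / (eps_tu - eps_t0)

# example evaluation with hypothetical threshold strains
D_n = tensile_damage(2.5e-4, eps_t0=1.0e-4, eps_tu=1.0e-3)
```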
Therefore, a set of damage correlation coefficients are introduced to realize anisotropy, as follows: where ψ and Σ are the orthotropic coefficient vectors, with the components taking the following forms: where D i represents damage in the global directions of x, y, and z. Anisotropy of Permeability. The permeability tensor resulting from a series of joint sets or fractures is well understood. 31,32 However, these only consider fluid flow along the fracture, neglecting the normal flow, which represents the leak-off. We established a permeability tensor considering both flows, as follows (25) where K and K σ are the matrices of the permeability coefficients in global coordinates and principal stress coordinates, respectively, and K σ is written as where N 1 , N 2 , and N 3 are the three projections of the normal vector N of fracture surfaces on the three principal stress vectors; and k n and k l are the normal and tangent permeabilities, respectively. The normal permeability k n can be obtained by multiplying the in situ permeability k by a modification ξ as follows: The in situ permeability decreases exponentially as the effective stress increases 25 and is calculated as follows: where k 0 is the intrinsic permeability tested in the lab; σ m ′ is the effective average stress, noted as compression positive; σ 0 is the reference stress, evaluated as the mean, maximum, or minimum of the magnitude vector of σ m ′ ; and ζ is a constant that ranges between 1.0 and 1.6. The permeability along the fracture k l follows a cubic law 33,34 as follows: In this section, all of the components necessary for the simulation of hydraulic fracture propagation are established. In the next section, these factors are formulated into weakly coupled FEM equations, which work through the coupling analysis scheme. where K s is the global stiffness matrix, a is the column matrix of the unknown nodal displacements, P s is the column matrix of solid load, and P f is the column matrix of fluid pressure, calculated as where E is the elasticity stiffness matrix of the porous rock, B is the element strain matrix, and ε f is the column matrix of incremental strain generated by the incremental pore pressure ΔP. In the fluid solution domain, the FEM format of eq 6 can be written as where P is the column matrix of the unknown pore pressure, P= dP/dt; K f is the permeability matrix; C is the matrix of storage coefficients; and Q is the column matrix of the flow rate. The latter two can be calculated as follows: and where C pp and C f are the matrices of poroelasticity assembled using the coefficients defined in eq 5 and Q q , Q g , and Q pc are the column matrices of the flow rate, generation rate, and confining stress change-induced fluid content change, respectively. All of the global matrices are assembled as where N is the shape function matrix. The weak coupling format of eqs 32 and 34 is written as Analysis Process. As shown in the analysis flow chart in Figure 3, the coupling analysis can be divided into several calculation steps: (1) Establish the in situ stress and permeability fields. The in situ stress distribution determines the permeability distribution around the injection hole, which�in turn�determines the initial location of fracture propagation. (2) Porous flow analysis, where transient analysis is carried out with the fluid flux as the surface load, permeability as the input, and pore pressure as the output. 
(3) Stress adjustment analysis, where the pore fluid pressure is input as the body force, and the strains and effective stresses are attained through static solutions. (4) Damage judgment: if no new damaged elements are generated, the injection flow rate or injection time must be increased, and the abovementioned processes must be repeated; else, the fracture opening and anisotropy of hydromechanical properties are calculated, and the input parameters are renewed. (5) The above steps can be repeated to realize a numerical simulation of the complex hydraulic fracture propagation. Fluid Loading Scheme. To identify fracture growth during continuous injection, a loading scheme that gradually increases the duration of the solution is employed. This requires the assumption that the fracture is completely closed when the hydraulic fluid is drained, such that the parameters of the hydromechanical properties can be elastically handled. As shown in Figure 4a, C V represents the global elasticity rigidness of the surrounding rock, which decreases with the fracture propagation, while the increasing fluid loading is carried out by the increasing injection volume V. Based on this, the loading scheme is shown in Figure 4b, where in each transient seepage field calculation, the volume of the injected fluid QT (n) is applied in full form in the rock fracturing circle. The transient calculation time length of each step is expressed in eq 41, such that the gradual expansion of the transient seepage field can be achieved by continuously increasing the number of cyclic steps. Further, to simulate an actual injection process, a limit flow rate Q m and a character time up to this pressure T 0 are set up as follows: where h represents the coal bed thickness and d is the hole diameter. In this paper, Q m = 10 m 3 /min. FEM Model. The example model ( Figure 5) is taken from a coal bed methane formation, which is buried at a depth of 750 m, has horizontal dimensions of 200 m × 200 m, has top and bottom bed thicknesses of 5 m, a coal bed thickness of h = 10 m, and an injection hole diameter of d = 20 mm. The minor horizontal principal stress is σ h = 7.8 MPa, the major horizontal principal stress is σ H = λσ h = 15.6 MPa, and the stress ratio is λ = 2.0. The vertical stress is calculated as σ v = ρgH, where ρ is the mass density of overlaying rocks, H is the buried depth, and g is the gravitational acceleration. All boundary displacements are set to zero since it is the requirement of implanting the initial stresses; on the other hand, it is more reasonable to reflect the real in situ deformation state. Figure 5a shows the formation compositions and positions of cross sections; Figure 5b,c shows the middle horizontal and vertical sections with the injection hole crossed, respectively. The direction of the horizontal major principal stress σ H is expressed using angle θ, which rotates anticlockwise from the negative Z coordinate. Properties of Porous Formation and Hydraulic Fluid. The formation parameters include the permeability, coefficients of poroelasticity, and damage model parameters. These are usually assumed to satisfy the Weibull distribution, with the probability density function as follows: where Ω represents the element property parameters, Ω 0 is the reference modulus, and m is the shape factor, which represents the homogeneity of the parameter distribution. The higher the m value, the better the homogeneity; in this paper, m = 10. 
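The heterogeneous element properties described by eq 42 can be generated directly from a Weibull distribution. The following sketch assigns, for illustration, elastic moduli to elements with NumPy; only the shape factor m = 10 is taken from the paper, while the reference value and element count are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def weibull_property_field(omega_0, m, n_elements):
    """Element property samples Omega ~ Weibull(shape=m) scaled by the
    reference modulus omega_0; a larger m gives a more homogeneous field."""
    return omega_0 * rng.weibull(m, size=n_elements)

# e.g. per-element elastic moduli for 10,000 elements with shape factor m = 10
E_elements = weibull_property_field(omega_0=3.0e9, m=10, n_elements=10_000)
```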
The reference moduli are listed in Table 1, which reflects their average values. The hydraulic fluid properties are listed in Table 2. (1) Hydraulic fractures always propagate in the direction of the major principal stress σ H , with the fracture surface normal to the minimum principal stress σ h (Figure 6). This conforms to conventional knowledge, proving that the tensors established for the hydromechanical properties are correct and can effectively control the direction of fracture propagation. 8,11,21,22 (2) Figure 6 shows that the horizontal cross section of the fracture zone and fluid pressure contours can be approximated as ellipses. This is consistent with the results found by Liu. 35 (3) The vertical section also approximates an ellipse, which is slightly cut through the top and bottom beds in the vicinity of the injection hole ( Figure 7c). This conforms to the results found by Peirce 36 for three-layered formations. Temporal Variation of Hydraulic where G is the coal bed shear constant; S is the principal stress normal to fracture surfaces, S = σ h = 7.8 MPa; η represents the effects of leak-off, η = 0.32 × 10 −15 ; and C l is the leak-off coefficient involved with the permeability, porosity, in situ stresses, fracture toughness, and fluid viscosity, etc. It is valued 38 at C l = 4.98 × 10 −4 , and q 0 is the injection flow. The comparisons show that the numeric solutions conform well with the analytical solutions, while some slight differences indicate that the numerical solutions can exhibit more plentiful information about hydraulic fracture propagation. Based on the porosity−elastic coupling model (Figure 1), combined with the damage−fracture evolution characteristics of materials, the process of hydraulic fracturing can be divided into four stages (Figure 11), which are conceptualized in Figure 12. Stage-I: fracture nucleation, during which a macroembryo fracture takes form in the close vicinity of the injection hole. Since it is aggregated from distributive cracks, their gaps and bridging constitute fracture cohesive zones. In this stage, the peak fluid pressure is used to both overcome the traction of the cohesive zone and support the confining normal stress. Stage-II: kinetic propagation, during which the sudden breaking of cohesive traction causes the fluid pressure at the fracture mouth to drop significantly, and the fracture opening increases quickly to a peak value, along with the fracture length. Stage-III: steady propagation, during which the fluid pressure at the fracture mouth remains constant, while the fracture length increases quickly and the fracture opening slowly becomes constant. Stage-IV: propagation termination, where�as the fracture length increases�the injection flow rate cannot increase due to leak-off, which gives rise to a slow drop in the fluid pressure, decreasing the fracture opening and propagation termination. It is obvious that the analytical solutions ignore stage-I and stage-IV. where Ω, Π, ξ, γ, τ, and g m are the dimensionless forms of fracture opening, fluid pressure, position coordinate, fracture length, injection time, and viscosity scaling, respectively. The comparisons between the numerical solutions and the analytical solutions for fluid pressure and the half-width along the fracture length are shown in Figures 13 and 14. The contrast between the fluid pressure and half-width along the fracture length is shown in Figure 15. 
Spatial Variation of the Parameters of the Hydraulic Fracture. The comparison in Figure 13 shows that the fluid pressure of the numerical solution is distributed throughout the fracture and extends to the fracture tip. However, once the premises of the analytical solutions are recalled, this difference is acceptable. Owing to mathematical difficulties, the coupling between the fracture tip and fluid front in analytical solutions is generally assumed to be progressive, such that the fluid pressure distribution always lags behind the fracture tip, leading to the existence of a pressure void ahead of the fluid front. Therefore, we can regard the analytical solution as a case of incomplete coupling between the fracture tip and fluid front, which occurs in situations with large toughness, high viscosity, and no leak-off; in most cases, the fluid pressure distribution reaches the fracture tip and complete coupling occurs, which is what the numerical solution represents. The comparisons in Figures 14 and 15 further indicate that there is a cohesive zone ahead of the fracture tip. Based on these facts, two types of coupling modes at the fracture tip are proposed: the incomplete coupling model (Figure 16a) and the complete coupling model (Figure 16b), which correspond to the analytical models and the numerical solutions in the present paper, respectively. In the complete coupling model, D = 1 represents the completely fractured zone and D = 0 represents the elastically expanding zone of pore pressure; in between, 1 > D > 0 represents the cohesive fracture zone, where the fracture opening acts more like an equal-width "bag", indicating that the fluid pressure energy is mainly used to overcome in situ stress clamping and the viscous dissipation caused by fluid front invasion and leak-off. SUMMARY In this paper, a numerical simulation of three-dimensional hydraulic fracturing is performed. The advantages of poromechanical modeling are as follows: (1) It can reflect the temporal and spatial evolution and distribution of the stress field, damage field, and pore fluid pressure field around the injection hole during the process of hydraulic fracturing. (2) Compared with the analytical solution, the numerical solutions of the fracturing parameters, including the fracture length, opening, and fluid pressure, are more accurate. In particular, capturing the advance of the fluid pressure distribution at the crack tip and the cohesive fracture is an improvement over the theoretical model. (3) The established anisotropic tensor format for the hydromechanical properties of porous media can be used to simulate hydraulic fracturing under complex stress states and in inclined formations. However, some further studies should be conducted on the following aspects: (1) The thickness of the localized fracture band δ is a material parameter that should be specifically researched.
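As a compact recap of the weak-coupling analysis process (steps 1–5 and Figure 3), the sketch below shows only the control flow of the leap-frog scheme. The solver functions are passed in as placeholders — `solve_flow`, `solve_stress`, and `update_damage` stand for the transient seepage solve, the static stress re-balancing, and the damage/anisotropy update, respectively — and are hypothetical names, not implementations from the paper.

```python
import numpy as np

def leapfrog_coupling(P0, K0, D0, solve_flow, solve_stress, update_damage,
                      n_cycles=100, tol=1e-4):
    """Sequentially exchange field variables between the flow and stress solves
    (weak coupling): pore pressure -> effective stress -> damage/permeability,
    repeated until the pore-pressure field stops changing or n_cycles is reached."""
    P, K, D = P0, K0, D0
    for _ in range(n_cycles):
        P_new = solve_flow(K)                        # transient porous-flow solve
        strain, eff_stress = solve_stress(P_new)     # stress adjustment analysis
        D, K = update_damage(strain, eff_stress, D)  # damage and anisotropy update
        if np.max(np.abs(P_new - P)) < tol:          # stop when the field stabilises
            return P_new, K, D
        P = P_new
    return P, K, D
```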
2022-07-17T15:05:28.678Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "ebc8fa399b345ab715c43a79f1ecd28bcb46050d", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.2c00451", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "67ecb55c16c6db62911b34b17a966933a0be4555", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
250275397
pes2o/s2orc
v3-fos-license
Resilience to Nested Crises: The Effects of the Beirut Explosion on COVID-19 Safety Protocol Adherence During Humanitarian Assistance to Refugees To provide services safely to refugees during the COVID-19 pandemic, humanitarian non-governmental organizations (NGOs) have instituted public health safety protocols to mitigate the risk of spreading the SARS-CoV-2 virus. However, it can be difficult for people to adhere to protocols under the best of circumstances, and in situations of nested crises, in which one crisis contributes to a cascade of additional crises, adherence can further deteriorate. Such a nested crises situation occurred in Beirut, Lebanon, when a massive explosion in the city injured or killed thousands and destroyed essential infrastructure. Using data from a study on COVID-19 safety protocol adherence during refugee humanitarian assistance in Lebanon, Jordan, and Turkey, we conduct a cross-country comparison to determine whether the nested crises in Beirut led to a deterioration of protocol adherence–the “fragile rationalism” orientation–or whether adherence remained robust–the “collective resilience” orientation. We found greater evidence for collective resilience, and from those findings make public health recommendations for service provision occurring in disaster areas. INTRODUCTION To provide services safely to refugees during the COVID-19 pandemic, humanitarian nongovernmental organizations (NGOs) have instituted public health safety protocols to mitigate the risk of spreading the SARS-CoV-2 virus. However, those protocols are not always followed, with a number of mitigating conditions presenting barriers to protocol adherence. This is especially true in locations where political instability, conflict, and economic hardships are prevalent. Given that the vast majority of refugees have sought protection in low-and middle-income countries that are often vulnerable to these kinds of circumstances, humanitarian stakeholders need to consider how such mitigating conditions might shape COVID-19 safety protocol adherence. On the evening of August 4, 2020, a massive explosion in a warehouse along Beirut's port occurred that leveled a good portion of the city and caused major disruptions in a number of city services and destruction of physical infrastructure, including the destruction of three hospitals. Several days after the explosion and following protests over government mismanagement, the Lebanese government leadership stepped down. The explosion and political instability increased currency volatility of the Lebanese pound, exacerbating what was already a crisis situation for Lebanese citizens and the large number of refugees living in the country. This paper is part of a larger study on COVID-19 protocols during refugee assistance in Turkey, Lebanon, and Jordan, in which we examined the adherence to protocols designed to reduce the risk of SARS-CoV-2 infection (specifically social distancing, mask wearing, and hand hygiene) during the summer of 2020. Because the Beirut explosion happened at the approximate halfway point during the data collection, we were able to analyze protocol adherence before and after the explosion, comparing Beirut to other parts of the Middle East. In this way, we can ascertain the probable effects of the explosion on COVID-19 protocol adherence. This paper addresses the question, is the Beirut blast associated with a change in adherence to COVID-19 prevention protocols by refugees and service providers? 
Health Behaviors and Responses to Crises Previous work on nested or cascading crises, in which one crisis produces other crises, has demonstrated detrimental effects of health outcomes. For example, Shrecker (1) argued that the 2008 global recession led to increases in food prices and devaluing of local currencies, which contributed to an increase in food insecurity and chronic undernutrition, and in the long-term resulted in a number of deleterious health outcomes. Robinson et al. (2) argues that the impacts of the COVID-19 outbreak themselves constitute a condition of cascading crises of economic insecurity, mental health problems, addiction, and crises of governance, security, and discourse including misinformation. Quigley et al. (3) noted the need for preparations to be made to mitigate the impacts of natural disasters (floods, hurricanes, cyclones, earthquakes, etc.) that occurred on top of COVID-19. But less research has investigated how cascading crises impact health behaviors. Marck et al. (4) found that Australians diagnosed with multiple sclerosis experienced deterioration in a number of health behaviors during the summer bushfires in 2019-2020 and the COVID-19 pandemic, including reduced physical activity, increased alcohol consumption, and disrupted sleep patterns. Bell et al. (5) linked existing health survey data with federal disaster declarations at the county-level data and found an association among older adults between exposure to a disaster and reduced physical activity but no association with smoking. These studies were limited in that they used a crosssectional sample, and thus were not able to measure changes over time. Using longitudinal data, Ásgeirsdóttir et al. (6) found some decline in health behaviors but generally increases in positive health behaviors following the 2008 global recession. In assessing how people respond to crises such as a pandemic, Reicher and Bauld (7) describe two competing psychological orientations; fragile rationalism and collective resilience. Fragile rationalism is denoted by people's inability to accurately understand risk, and under duress individuals are less likely to follow sound public health practices. In a fragile rationalist frame, inaccurately perceiving threats is a key behavioral problem. Writing in the wake of the influenza epidemic in the early twentieth century, Soper (8) argued that pandemics were difficult to contain because of poor risk perception, the biological disinclination to self isolate, and the unconscious tendency of people to act in ways that endanger themselves and others. Van Bavel et al. (9), using Soper (8) as a reference point, provided a detailed literature review of the scholarship on practicing risk avoidance protocols in the ensuing 100 years. Their analysis identified "threat perception, social context, science communication, aligning individual and collective interests, leadership, and stress and coping as the areas of research focus" as areas of particular importance in understanding reaction to the COVID-19 pandemic (p. 467). Threat perception is driven by fear, and human fear of threats has evolved as a mechanism to ensure survival (10). Threat perception is associated with how people assess multiple perceived risks-they will act based on what they fear most. 
A large body of the literature argues that "risk perception is a subjective psychological construct that is influenced by cognitive, emotional, social, cultural, and individual variation both between individuals and between different countries" [(11), p. 995]. Slovic (12) argues that people perceive things over which they do not have control to be more risky. Refugees and staff serving refugees in Lebanon were already in a position of assessing multiple risks to survival before the arrival of the COVID-19, and that these multiple risks of food, water, and economic security had only been exacerbated over time (13,14). The alternative psychological orientation is collective resilience, in which a population exhibits collective resilience in the face of multiple crises, with positive health behaviors maintaining or improving over time despite crisis conditions. Elcheroth and Drury (15) review the literature on responses to pandemics and identify conditions under which populations may exhibit collective resilience. Included among those conditions were a perception of a common identity with the persons issuing public health protocols, the preservation of ordinary social roles and relationships, and the preservation of social ties during the pandemic. Williams and John (16) delineate between personal and collective resilience, describing the latter as how groups of people "expect solidarity and cohesion, and thereby coordinate and draw upon collective sources of practical and emotional support adaptively to deal with an emergency or disaster" (p. 294). Personal and collective resilience to disasters are both strongly influenced by institutional resilience and disaster preparedness (17,18). Institutional preparedness involves having in place the protocols and practices that facilitate continued delivery of services in the context of compounding disasters (17)(18)(19). Such protocols can improve response to disasters and continued service delivery, even in the context of overcrowding. However, when protocols for continued delivery of service were not in place, conditions of overcrowding could exacerbate chaotic breakdowns of adherence to health protocols (20). Together, this research indicates that maintaining important social institutions and relationships, fostering a sense of collective fate, and receiving public health guidance from people one identifies with can increase the likelihood that a population will effectively manage crisis conditions. In this paper, we address the competing orientations summarized above. Frame #1, "fragile rationalism" is that during cascading crises there will be a degraded adherence to public health safety protocols. The underlying assumption is that the cascading crises will lead to people losing trust in the authorities that are recommending or mandating safety protocols, and people will simply abandon those protocols in face of that degraded trust. Frame #2, "collective resilience, " is that in the face of cascading crises people will adopt collective resilience behaviors as a way of coping with the crises and act as a collective with a shared fate, leading to maintained or improved adherence to safety protocols. There is indeed evidence that the chaotic social context in Lebanon led to the Lebanese people losing trust in the central government prior to the pandemic, and prior to the explosion (21,22). 
Given this, the question is whether prior political mobilization, and the necessity of reliance on local social networks (21) led to anomic behavior (23) thus supporting the "fragile rationalism" frame, or "collective resilience" through greater collaboration and adherence to safety protocols [(24), p. 66]. We test these different orientations for understanding the effects of nested crises by comparing COVID-19 protocol adherence during humanitarian service provision in Beirut, a location experiencing nested crises during the summer of 2020, to similar service provision in Turkey and Jordan that was not directly impacted by the port explosion in Beirut and the severe economic instability in Lebanon broadly. Regional Context Lebanon, Turkey, and Jordan are among the top refugee hosting countries in the world, together hosting 7.9 million refugees 1 These three countries host large populations of Syrian and Palestinian refugees, as well as smaller populations of refugees from Iraq, Afghanistan, Iran, and very small populations from other countries. In Turkey, most refugees are Syrian and live in urban settings, outside of camps. In Lebanon and Jordan, Palestinians make up large proportions of the refugee populations, along with Syrians, and many live in long-settled camps that function much like urban settings. All three countries are facing economic stressors and political instability. And all three countries have dealt with substantial COVID-19 outbreaks, albeit at different times. Turkey experienced the earliest wave, starting in March 2020 and continued increases in identified cases over time. Infections in Lebanon started later, around the first week of July and increased sharply through the beginning of 2021. Fares et al. (14) analyzed data from the Lebanese Ministry of Public Health gathered in late July through mid-August and found that prior to the explosion the positivity rate for COVID-19 tests rose sharply from 2.7 per 100 to 6.4 per 100 afterwards. The hospitalization rate also rose from 139 patients prior to the explosion to 266 patients after the explosion, with an increase in patients admitted to the ICU ward from 36 before the explosion to 75 ICU patients afterwards. In Jordan, the daily numbers of new infections were stable and relatively low from March 2020 through most of the summer. Infections increased slowly starting in mid-August through early September. After the first week of September new infections increased sharply, and Jordan has since experienced two waves of infection. In all three countries, mitigation strategies were implemented that included stay-at-home orders, curfews, and other forms of limits on people's interactions with others in public. Personal Protection Equipment (PPE) has been available in all three countries, and commonly used by humanitarian NGOs while serving refugees. Due to both devaluation of the Lebanese pound and increased demand, the cost of PPE increased dramatically during the summer of 2020. This included face masks, making it more difficult for refugees in Lebanon to afford masks and leading to the inability of many NGO service providers to make masks available to refugee clients who arrived without them. The pandemic arrived on the heels of close to a decade of refugee flows not matched since the first part of the Twentieth Century (25). The humanitarian crisis was significantly dramatic in the Middle East region-caused largely by the chaos of the Syrian civil war and ongoing unrest in Iraq. 
More than 1 million refugees have fled the Syrian war into Lebanon since 2011, making it the country with the world's largest refugee population per capita (26). Given the reticence of the Lebanese authorities to provide designated camps for Syrian refugees, most live in Informal Tented Settlements (ITS) outside Beirut and in the rural areas. Those Syrian refugees who live in Beirut and other large cities live in rented houses. The explosion of stockpiled explosives at the Port of Beirut, Lebanon was preceded by years of dysfunctional, corrupt, chaotic governance that even before the pandemic of 2020 had become increasingly untenable (25). Political gridlock and corruption has led to multiple crises in sanitation management, specifically garbage disposal, shutoffs and disruptions in basic services such as water and electricity, as well as increasing economic inequality and a central bank crisis. These multiple crises were exacerbated by the influx of refugees from the Syrian civil war. The compounded crises have significantly impacted Lebanese quality of life and wellbeing, leading to sophisticated community organizing calling for an end to corruption and improved governance, including a human chain that stretched across Lebanon in 2019 (21,22). While all three countries have faced challenges from both the Syrian refugee crisis and the COVID-19 pandemic, Lebanon's political and economic difficulties have been more intense that what has been felt in Turkey or Jordan 2 . MATERIALS AND METHODS This study uses data collected 3 from humanitarian assistance organizations serving refugees in Lebanon (Amel Association and National Institution of Social Care and Vocational Training), Turkey (Safa for Development), and Jordan (Altkafal Charity Association), each with multiple service centers. The location of service centers in Turkey were in Konya (central Turkey) and Reyhanli (southeastern Turkey). The service centers in Jordan were three locations in the governorate of Irbid (northeastern Jordan). The service centers in Lebanon were dispersed throughout the country, with four locations in Beirut. The research team includes university researchers in the United States collaborating with NGO coordinators in Lebanon, Turkey, and Jordan; the NGO coordinators in Lebanon and Jordan also held university faculty positions. This team collaborated on the research design and data analysis, allowing for more complete interpretation of the data because a portion of our research team experienced the nested crises first-hand (27). Data were collected from interviews with NGO staff on how well-refugees and staff practiced social distancing (keeping 2 m distance between each other), wore face masks, and washed or sanitized hands and surfaces during services provided to refugees. Fifteen data collectors conducted interviews, asking a series of closed-ended and open-ended questions to staff either in person, over the phone, or in a few cases through video conferencing. The data collectors asked questions about how frequently safety protocols were followed based on a Likert-type scale; All of the time, Most of the time, Some of the time, or Very little of the time. The questions referred to the services that the staff provided either earlier that day or the previous day (depending upon what time the interview was conducted). 
Staff were asked to reflect on how frequently refugees maintained social distance and wore masks, how frequently staff followed these protocols around refugees and around other staff, and then how frequently the interviewee personally followed these protocols, all during the specific service referenced in the interview. Staff were also asked how often refugees washed their hands before, during, and after that service, how often refugees used hand sanitizer before, during, and after that service. Staff were also asked about those same hand hygiene protocols for other staff. In addition to the three primary safety protocol behaviors, we collected data on the location where services were provided, the day of the interview, the type of service, and the refugee populations served. Data collectors entered the interview responses into a Qualtrics database, so that data monitoring could occur in real time throughout the data collection period given the geographic expanse of the research team and study populations. Using interviews as the observational unit, we constructed comparisons that allowed us to identify similarities and differences in protocol adherence along axes of time, resources, and regions. The findings described in this paper are based on interviews conducted about services in Beirut (319 interviews), comparing those to interviews about services in Turkey (464 interviews) and Jordan (209 interviews). Among 319 interviews conducted in Beirut, 127 (40.8%) of them were the first interview with this certain staff, while 192 (59.2%) of them were at least the second interview conducted with the same staff. In Turkey, among 464 interviews, 423 (91.2%) of them were the first interviews with the staff while the remaining 41 (8.8%) were at least the second time when the same staff was interviewed. For Jordan, among 209, 127 (58.3%) of them were the interviews where the staff was interviewed for the first time, while 83 (41.7%) interviews were at least the second time with the same staff. Interviews were conducted between July 6 and September 15, 2020. Using the date of the interview, we constructed a variable indicating the number of weeks from the start of data collection. Because there were a small number of interviews during the first 2 weeks of data collection, we start week 1 on July 20 and include the next 8 weeks. Measuring time in this way rather than measuring time as a binary (before vs. after the explosion) allows us to observe trends that predated the explosion, so as to avoid attributing changes over time to the explosion that actually started well beforehand. For each protocol, we assigned a numeric value to the four categories of protocol adherence: All of the time = 4, Most of the time = 3, Some of the time = 2, and Very little of the time = 1. For each week, we calculated the average in responses to the frequency of protocol adherence, for a value ranging from 1 to 4, with a higher number indicating better adherence. We examined resource availability comparing Beirut to Turkey and Jordan, and protocol adherence regionwide and comparing Beirut to Turkey and Jordan. We excluded parts of Lebanon outside of Beirut, which may have experienced some ancillary effects of the explosion but would not have been as acutely affected. With a focused comparison of Beirut to Jordan and Turkey, we avoid de-emphasizing the effects of the explosion that may have also impacted other parts of Lebanon but to a lesser extent. 
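For reproducibility, the weekly adherence averages can be computed as in the sketch below, which uses pandas. The column names (`interview_date` and the per-protocol response column) are illustrative placeholders rather than the study's actual field names; only the 1–4 coding and the week-1 start date of July 20 come from the text above.

```python
import pandas as pd

# numeric coding of the Likert responses used in the paper (1 = worst, 4 = best)
LIKERT = {"Very little of the time": 1, "Some of the time": 2,
          "Most of the time": 3, "All of the time": 4}

def weekly_adherence(df, protocol_col, date_col="interview_date",
                     week1_start="2020-07-20", n_weeks=8):
    """Mean adherence score per week for one protocol question."""
    scores = df[protocol_col].map(LIKERT)
    days = (pd.to_datetime(df[date_col]) - pd.Timestamp(week1_start)).dt.days
    weeks = days // 7 + 1
    mask = weeks.between(1, n_weeks)
    return scores[mask].groupby(weeks[mask]).mean()
```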
Availability of Hand Hygiene Resources Soap, water, and hand sanitizer were broadly available during service provision across the region both before and after the explosion. Figure 1 describes the availability of soap and hand sanitizer during each week of data collection (given similar availability as soap, results for water are not shown but are available upon request). For nearly every week, soap was more commonly available in Beirut than in Turkey and Jordan. The explosion occurred during week 3, yet there was little change in soap availability between week 3 and 4 in Beirut, and during week 5 both were more commonly available in Beirut. Hand sanitizer was less commonly available in Beirut compared to Turkey and Jordan, and availability decreased during weeks 3 and 4. However, availability increased during week 5, and by the final week hand sanitizer was more commonly available in Beirut than in Turkey and Jordan. So while it appears that hand sanitizer availability in Beirut decreased immediately after the explosion, that effect was short-lived. Adherence to COVID-19 Safety Protocols We first analyze protocol adherence over time across the entire region. This provides an overall picture in patterns of adherence. The results are presented in Figures 2-5. There was some variability in how well different protocols were followed across the region, with a general pattern of staff more frequently following protocols compared to refugees, along with more consistent adherence across time for staff and slight improvement in adherence over time for refugees, particularly in mask wearing and using hand sanitizer before and after services. Social distancing remained generally consistent over time across the region. There was very little change from week to week in how well staff adhered to social distancing protocols, while refugee adherence fluctuated some but improved overall between the first and last week. Mask wearing improved markedly among refugees, going from an average of 2.60 during the first week to 3.70 during the last week. Staff mask wearing remained high throughout the period but did increase over time in staff wearing masks around other staff. Our measures of hand washing and sanitizing included how frequently refugees and staff followed these protocols before, during, and after services. Hand sanitizer use and hand washing showed similar patterns for refugees and staff, so we show only the results of the former. Refugees were more likely to use hand sanitizer before services, and least likely during services. Staff were more likely to use hand sanitizer after services, and least likely to use it during services. Refugees much more frequently used hand sanitizer than washed their hands, particularly before the start of services. The average for refugees using hand sanitizer before services started at 2.92 and increased to 3.29 by the final week (down slightly from 3.33 during week six). Staff were better at using hand sanitizer compared to refugees, but changed very little over time. Staff 's use of hand sanitizer before and after services was consistent throughout the data collection period. While there was some fluctuation in hand sanitizer use during services, adherence during the last week was nearly the same as the first week. Comparing Adherence in Beirut vs. Turkey and Jordan To analyze the effects of the explosion in Beirut, we compare adherence over time in Beirut compared to Turkey and Jordan. The explosion occurred at the beginning of week 3 (which included August 3-9). 
Thus, if the explosion affected protocol adherence, we would expect to see that effect starting in week 3 or week 4. Because of the large number of comparisons in these analyses, we limited our presented findings to a select number of figures (all analyses are available from the authors upon request). We present those analyses in Figures 6-10. We found four general patterns in adherence to protocols across the 8 weeks of data collection; (1) very little change occurring over time, (2) variable change over time in which adherence was worse in Beirut than in Turkey and Jordan, (3) consistent improvement over time in which adherence was better in Beirut than in Turkey and Jordan, and (4) a notable decrease in adherence in Beirut that corresponded to the time of the explosion, with adherence rebounded by the final week of data collection. We present figures that are emblematic of the second, third, and fourth patterns. Refugees and staff adherence to social distancing demonstrates a pattern of uneven adherence over time, and ending with adherence in Beirut being worse than in Turkey and Jordan. For example, with social distancing refugee adherence in Beirut fluctuated slightly, ranging from 2.60 in week 6 to 2.25 in week 8, ending with worse adherence compared to Turkey and Jordan. The decline started in week 5, not in week 3 when the explosion occurred. These findings are presented in Figure 6. Staff adherence to social distancing from refugees decreased after the first week in both Beirut and Turkey/Jordan. In Beirut, adherence continued to decline until week 6, and declined again in week 8. In Turkey and Jordan, there was a general increase in staff 's adherence to social distancing from refugees in week 4, ending with better adherence in the final week compared to Beirut. These results are presented in Figure 7. For mask wearing, adherence in Beirut was better overall compared to Turkey and Jordan, and increased over time. This was particularly true for staff wearing masks around refugees up until the final week when adherence in Beirut and Turkey/Jordan converged. The increase began prior to the explosion and does not seem related to the event. There was a decrease in adherence to all mask wearing protocols in the final week in Beirut, but no change around the time of the explosion in week 3. In Turkey and Jordan, mask wearing by refugees increased sharply from the first to last week, increased somewhat over time for staff wearing masks around other staff, and remained mostly flat for staff wearing masks around refugees. Even with the increases in mask wearing protocol adherence over time, Turkey and Jordan still had less consistent adherence compared to Beirut. These results are shown in Figure 8. The results for staff wearing masks around other staff were similar (results not shown). For hand hygiene, we examined changes over time in refugees' and staff 's washing hands and using hand sanitizer before and after services. Refugees' and staff 's adherence to washing hands prior to services was higher in Beirut and improved over time. The improvement occurred largely after the explosion, between week 5 and 6. A similar pattern emerged in hand washing after services. By contrast, in Turkey and Jordan refugees washing their hands before services decreased over time, and refugees washing their hands after services remained flat. For staff, washing hands before and after services changed very little over time in Turkey and Jordan. 
In Figure 9 we show the results for refugees' adherence to hand washing before services. In hand sanitizer use we saw the first clear evidence of a decrease in protocol adherence in Beirut that corresponded with the port explosion. There was little change over time in refugees' use of hand sanitizer before services, but in using sanitizer after services there was a marked dip in refugees' adherence in Beirut between weeks 3 and 4. A similar pattern was apparent for staff in hand sanitizer use both before and after services, although the dip was much smaller for staff. By contrast, in Turkey and Jordan hand sanitizer protocol adherence either changed little between weeks 3 and 4, or it improved. However, in all cases the protocol adherence in Beirut rebounded back to pre-explosion levels, although refugees' adherence to those levels remained below those in Turkey and Jordan. The most dramatic changes were seen in refugees' use of hand sanitizer before services, which we show in Figure 10. DISCUSSION While one might expect the massive explosion in Beirut to have a broad detrimental impact on refugee service provision in the city, we found little evidence that the challenges service providers undoubtedly faced had any discernible effect on the hand hygiene resources available nor detracted from the ability of refugees and staff to adhere to COVID-19 safety protocols. On the contrary, protocol adherence during services in Beirut was sometimes better than in our sampled service locations in Turkey and Jordan, and decreases in protocol adherence that occurred did not always directly correspond to the week when the explosion happened. The only instance in which we saw a decrease in protocol adherence in Beirut that corresponded to the explosion was in the use of hand sanitizer, and even in that case protocol adherence improved to near or better than pre-explosion levels by the end of our study period. Across the region there were general improvements in protocol adherence, particularly among refugees, suggesting that over time people in these service centers become better acclimated to following safety protocols. Our data show that even in the refugee communities in Beirut, there was a similar pattern of acclimation. Our data indicate that the explosion, despite clearly disrupting many aspects of economic and social life, did not lead to a major sustained disruption to improved protocol. Contrary to what would be predicted by the frame of fragile rationalism and much of the literature on health behaviors during nested crises, we did not see a decline in protocol adherence that corresponded to the timeline of the Beirut explosion. Instead, we see evidence of a collective resilience of Beirut's residents that led to steady improvement in adherence to some protocols even with the explosion. Thus, we found support for the collective resilience orientation that Drury et al. (24) described. When reflecting on the events, our project team members in Lebanon noted the resilience that people in Lebanon have had to develop as a result of multiple, nested crises. The COVID-19 pandemic is experienced by the Lebanese people as a disaster nested within political and economic crises. The explosion, while it led to devastating loss of life and destruction of infrastructure, was just one more crisis nested within the challenges wrought by pandemic, political instability, and economic volatility. 
Because of this, the explosion did not create the shock effect that it might have if it had occurred in a location that was not already facing multiple challenges. Our project team's own resilience supports this orientation. While our team in the United States expected data collection in Beirut to slow down following the explosion, our data collectors in Beirut continued to collect data and conducted more interviews in the 2 days following the explosion than the 2 days prior. This occurred even though one of our supervising NGO coordinators in Beirut was attending to damage incurred to her home because of the explosion. While there were new challenges presented by the explosion, our empirical findings and project team experience supports an interpretation that withstanding multiple nested crises builds resilience among people and institutions that protects against "shock" events like that of the Beirut explosion. While our findings are compelling, there are limitations to this study. These data come from interviews with NGO staff and thus represent their recollection and interpretation of protocol adherence. While this project did include direct observations of protocol adherence, that component was started later in 2020 and very few observations occurred before the explosion. The specific staff sampled each week depended on a number of factors including availability to give an interview and the kind of service they were leading (with the intent of sampling from a range of service types). Because staff were not randomly chosen for interviews, there is likely some unmeasured noise in who completed interviews on any given day and their perceptions of protocol adherence. However, we do not have any reason to suspect that unmeasured variables correlated with service locations, and so should not affect the relative adherence between the different sites. But we have no way of determining if the explosion itself might have affected how staff in Beirut recalled protocol adherence during their services. Future research on nested crises testing the collective resilience vs. fragile rationalism orientations should attempt to utilize objective measures of their outcomes, either using direct observations or video recordings, to control for effects a crisis has on how people remember and report a given outcome. There are several recommendations emerging from our research. First, while catastrophic disasters are often deeply disruptive, this analysis supports a policy of staying the course in terms of maintaining delivery of services and the basic health protocols around those services. Regardless of the disruption caused by the explosion, the NGO service providers still seemed to understand the importance of continuity in delivering medical, health, education and other refugee services-and there was no indication that those services were less well-utilized by the refugees themselves. The maintenance of COVID-19 prevention behaviors by both service providers and recipients indicates the interest in delivering and receiving services. We, therefore, recommend that donors continue funding agencies for continuity of deliverance of services to refugees. The NGOs participating in this study are embedded in communities and are personally known by service recipients, and our data indicated that the recipients and NGOs maintained relationships of trust despite the evident mistrust between the Lebanese government and the people in Lebanon. 
During times of crisis, this trust might be more important than peak functioning physical infrastructure. Second, we noted that our data collectors actually increased data collection after the disaster. Indeed, the data collection and the meetings with our international research team seemed to provide a needed maintenance of routine and purpose in the midst of the post-explosion chaos. This leads to the recommendation that research organizations involved in evaluation and study of humanitarian assistance follow the lead of those on the ground. It may well be the case, in the context of the cascading crises like in Lebanon, that local researchers benefit from continued work even in the wake of a catastrophic event. The continuation of routine and the reason for contact with those outside the local context may well be welcome. Third, in the context of multiple-cascading disasters, donors and humanitarian agencies should not assume that practices on the ground will change dramatically in the wake of a major disaster that makes international media headlines. For the Lebanese people, the explosion was disruptive and caused loss of life and humanitarian infrastructure, but it was only the most recent in a long experience of crises. The resilience of the Lebanese people is inherent in understanding the need to carry on with day-to-day tasks amidst apparent catastrophe. Lastly, we recommend that humanitarian actors normalize evidence-based health practices within service provision such that these practices become habit, especially during times of nested crises. Those norms can become protective habits against the disruptive effects of future shock events, such that even in the face of additional disasters, service providers and recipients will continue to follow safety protocols. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Institutional Review Board, Michigan State University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS SN, EK, and SG contributed to research design and implementation, data analysis and interpretation, and writing manuscript. RM and AG contributed to research design and implementation, interpretation of results, and writing manuscript. SM-P contributed to data analysis and writing manuscript. All authors contributed to the article and approved the submitted version. FUNDING Funding for this project was provided by Elrha Learning and Research for Humanitarian Assistance and Michigan State University (grant #51551).
2022-07-05T17:48:09.512Z
2022-07-05T00:00:00.000
{ "year": 2022, "sha1": "b02f87aa86e66564decb8b46911b33f2ec20a215", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "b02f87aa86e66564decb8b46911b33f2ec20a215", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
213921426
pes2o/s2orc
v3-fos-license
On the Number of Shortest Weighted Paths in a Triangular Grid
Counting the number of shortest paths in various graphs is an important and interesting combinatorial problem, especially in weighted graphs with various applications. We consider a specific infinite graph here, namely the honeycomb grid. Changing to its dual, the triangular grid, paths between triangle pixels (we abbreviate this term to trixels) are counted. The number of shortest weighted paths between any two trixels of the triangular grid is discussed. For each trixel, there are three different types of neighbor trixels, 1-, 2- and 3-neighbors, depending on the Euclidean distance of their midpoints. When considering weighted distances, the positive values α, β and γ are assigned to the 'steps' to the various neighbors. We give formulae for the number of shortest weighted paths between any two trixels in various cases determined by the respective weight values. The results are nicely connected to various numbers well known in combinatorics, e.g., to binomial coefficients and Fibonacci numbers.
Introduction
Although in Euclidean geometry the shortest path between two points is always unique, in graph theory this is not true, since, in general, paths are understood as consecutive alternating sequences of vertices and edges. Graph theory has various applications in computer and social networks; shortest paths and their numbers give significant information about a network, and they are also important for various applications, e.g., sending packages through the network. Specially structured (infinite) graphs are called grids/lattices, and they play important roles, e.g., in image processing, computer graphics and cellular automata. The geometry of these grids is called digital geometry [1]. In digital geometry, based on the structure of the grid, integer coordinates are used for addressing the vertices of the graph (the points of the given space). In many of these fields, instead of the original graph, its Voronoi dual is used; that is, instead of the vertices, pixels/voxels are used. The square and cubic grids are self-dual. The honeycomb grid is the dual of the triangular grid, that is, instead of having vertices of the hexagons in the honeycomb (or hexagonal) grid, we may use the triangle pixels of the triangular grid, keeping both the coordinate system and the neighborhood structure [2,3]. In this paper we also use the triangle pixels (also called trixels) as the elements of the grid. Pixels in any grid are generally adjacent if they share at least one point on their border. In the square grid, two integer coordinates are used for addressing the pixels, and, consequently, there are two types of neighbor relations between any two adjacent pixels in the grid: the cityblock and chessboard neighborhoods; they are also called 1-neighbors and 2-neighbors, respectively, indicating the number of coordinates whose value changes by ±1 when 'stepping' from a pixel to the indicated neighbor. By using only one type of neighborhood in each step of the path, the cityblock and chessboard paths, and consequently the first studied digital distances, the cityblock (also called the Manhattan or taxicab) distance and the chessboard distance, were obtained.
Preliminaries
Each pixel in the triangular grid (we call it a trixel from now on) is addressed uniquely by a triplet of coordinates, with axes in directions x, y and z to reflect the symmetry of the grid structure.
In this grid, the sum of the coordinate values reflects the orientation of the trixels; thus we differentiate two types of trixels: even trixels (zero-sum triplets, having orientation ∆) and odd trixels (triangle pixels having orientation ∇, addressed by triplets with 1-sum). Figure 1 shows the origin (the trixel with coordinates (0,0,0)), the axes of the coordinate system and also a part of the grid with the assigned coordinate values.
Each trixel in the triangular grid has three types of neighbors: there are three 1-neighbors, each of them sharing a side with the original trixel, there are six more 2-neighbors and, further, there are three more 3-neighbor trixels. All twelve neighbors share at least one point on the boundary with the original trixel (see Figure 2). One can also define the neighborhood relations formally, based on the coordinates of the trixels. The trixels p(p(1),p(2),p(3)) and q(q(1),q(2),q(3)) are in m-neighbor relation (m ∈ {1,2,3}) if:
• |p(k) − q(k)| ≤ 1 for every k ∈ {1, 2, 3} and
• ∑_{k∈{1,2,3}} |p(k) − q(k)| = m.
We note here that, when working with a given neighborhood and also with neighborhood sequences, in the second condition the sign ≤ is used and the neighborhood relation has the extensive property, that is, all m-neighbors are also (m − 1)-neighbors (for m > 1). In case of equality in the last condition, the trixels are usually referred to as strict m-neighbors in the literature. However, for chamfer distances, this strict neighborhood is more adequate, thus we use the definition as we have formally described above. Notice that 1- and 3-neighbors have a different orientation than the original trixel (i.e., if the original trixel is even, then these neighbors are odd and vice versa), while 2-neighbors have the same orientation as the original trixel [21]. Figure 2 shows these three neighbor types for an even trixel.
In the triangular grid, the trixels that have a fixed coordinate value form a lane, i.e., if two trixels share either their x, y or z coordinate, then they are in a common lane (e.g., y = −3 for the trixels of the top lane in Figure 1). If the two trixels are not in a common lane, then these two trixels can be connected by two lanes with angle 2π/3 between the two lanes.
Shortest weighted paths are used in order to define the weighted/chamfer distance in the triangular grid. According to the three types of neighbor relations, we assign three weights to the steps to adjacent trixels. Weight α is assigned to steps to 1-neighbor trixels. Weight β is used for stepping from a trixel to one of its 2-neighbors, while γ is the weight of a step from a trixel to one of its 3-neighbor trixels. All these three weights are positive real values. We will also use the concepts of α-step, β-step and γ-step, accordingly, to refer to a step with the given weight. In this context the chamfer distance is defined as the least total weight needed for connecting the two trixels by α-, β- and/or γ-steps. However, this value may belong not to a unique path, but to many paths. (Two paths are identical if they consist of the same sequence of trixels.)
This is very natural, and we also use the assumption in this paper that the weights reflect the Euclidean distance of the midpoints of the trixels in such a way that closer neighbors can be reached by a smaller weight; more precisely, 0 < α < β < γ holds. Without loss of generality we discuss and find the number of shortest paths between trixels P1(0,0,0) and P2(x,y,z) with the condition x,y > 0 and z < 0 (or x > 0 and y,z < 0). Based on translations and mirrorings mapping the grid to itself (see [26] for details), any respective position of any two trixels can be mapped to P1 and P2 with this condition.
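As an illustration of the neighborhood definition above, the following small Python sketch classifies the (strict) m-neighbor relation of two trixels and enumerates the twelve neighbors of a given trixel. The function names are ours and are only meant to spell out the stated conditions, not code from the paper.

```python
def neighbor_order(p, q):
    """Return m if trixels p and q are strict m-neighbors (m in {1, 2, 3}),
    0 if p == q, and None if they are not adjacent.

    Implements the two conditions above: every coordinate differs by at most 1,
    and the sum of the absolute coordinate differences equals m."""
    diffs = [abs(a - b) for a, b in zip(p, q)]
    if max(diffs) > 1:
        return None
    return sum(diffs)

def all_neighbors(t):
    """Enumerate the twelve neighbors (three 1-, six 2- and three 3-neighbors)
    of trixel t by scanning the surrounding coordinate box and keeping the
    triplets that are valid trixels (coordinate sum 0 or 1)."""
    x, y, z = t
    out = {1: [], 2: [], 3: []}
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                n = (x + dx, y + dy, z + dz)
                if n != t and sum(n) in (0, 1):
                    out[neighbor_order(t, n)].append(n)
    return out

if __name__ == "__main__":
    counts = {m: len(v) for m, v in all_neighbors((0, 0, 0)).items()}
    print(counts)   # expected: {1: 3, 2: 6, 3: 3}
```

Running it for the origin reproduces the 3 + 6 + 3 neighbor counts stated above.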
Number of Shortest Weighted Paths in Triangular Grid
Let p(x1,y1,z1) and q(x2,y2,z2) be two trixels in the triangular grid, and let w1, w2 and w3 be the absolute differences between the coordinates of p and q, that is, w1 = |x1 − x2|, w2 = |y1 − y2| and w3 = |z1 − z2|. Let S = w1 + w2 + w3, and let min(w1,w2,w3), mid(w1,w2,w3) and max(w1,w2,w3) be the minimum, middle (median) and maximum of w1, w2 and w3, respectively. The number of shortest weighted paths between p(x1,y1,z1) and q(x2,y2,z2) depends on the values of the weights α, β and γ. According to this fact, we analyze the various cases in the next subsections.
3.1. The Binomial Case: 2α < β and 3α < γ
Theorem 1. Let α, β and γ be the weights of steps to a 1-, a 2- and a 3-neighbor, respectively, such that 2α < β and 3α < γ.
Let p(x1,y1,z1) and q(x2,y2,z2) be two trixels of the triangular grid. Further, let w1 = |x1 − x2|, w2 = |y1 − y2| and w3 = |z1 − z2| be the absolute differences between the corresponding coordinates of the trixels. Then, the number of the shortest paths between p and q, denoted by f(w1,w2,w3), is computed as
f(w1,w2,w3) = C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)), (1)
where C(n, k) denotes the binomial coefficient 'n choose k'.
Proof. By the given conditions on the weights, every shortest path is built up only by 1-steps, since each 2-step can be substituted by two consecutive 1-steps and each 3-step can be substituted by three consecutive 1-steps with a smaller sum of weights. The proof goes by induction on the smallest coordinate difference min(w1,w2,w3) of the points. Let us consider the base case when min(w1,w2,w3) = 0, i.e., the two trixels are in the same lane: the number of steps, i.e., the number of α-steps in a shortest weighted path between the two trixels is mid(w1,w2,w3) + max(w1,w2,w3), since min(w1,w2,w3) = 0. There is only 1 shortest path between any two trixels on the same lane, through the trixels 'between' the two trixels in the same lane. Thus, there is only one path, and by (1) we also get f(w1,w2,w3) = C(mid(w1,w2,w3), 0) = 1. Now, let us consider the cases when the two trixels are not in the same lane: for simplicity we will take the two trixels to be (0,0,0) and (i,j,k), and let us assume that the sector of the triangular grid that we are interested in has values i,j > 0 and k < 0, or j,k < 0 and i > 0. As we have already mentioned, based on the transformations detailed in [26], by mirroring these sectors, one may obtain the whole triangular grid. Firstly, the case i,j > 0 and k < 0 is considered. As we have already mentioned, we prove Formula (1) by induction. The base case of the induction is min(w1,w2,w3) = 0, which is already proven: Formula (1) is satisfied, i.e., it gives 1 for these cases. Now, as the induction hypothesis, let us assume that Formula (1) holds for every trixel with |i| + |j| + |k| = i + j − k < M, with a positive integer M. Further, let us consider a trixel with coordinates |i| + |j| + |k| = i + j − k = M. We may also assume that min(w1,w2,w3) > 0. Since every trixel has either a 0-sum or a 1-sum triplet, this condition also means that in this region of the grid, one of i and j has the value min(w1,w2,w3) and the other has the value mid(w1,w2,w3). Then, let us analyze, first, the case when q is an odd trixel. In this case, all shortest paths from (0,0,0) to (i,j,k) must contain, as the last step, a step either from (i − 1,j,k) or from (i,j − 1,k) to the target trixel (i,j,k). However, both (i − 1,j,k) and (i,j − 1,k) are even trixels such that the sum of the absolute values of their coordinates is less than M. Thus, the numbers of the shortest paths to the trixels (i − 1,j,k) and (i,j − 1,k) are given by Formula (1) by our hypothesis, i.e., C(i − 1 + j, i − 1) and C(i + j − 1, i), respectively, not depending on which of i or j (or both) has the minimal value, since, e.g., C(i − 1 + j, i − 1) = C(i − 1 + j, j). Moreover, the number of shortest paths to (i,j,k) is, then, exactly the sum of those two values, that is, C(i − 1 + j, i − 1) + C(i + j − 1, i) = C(i + j, i), which was to be proven. Now, let us analyze the case when q is an even trixel. In this case all the shortest paths from (0,0,0) to (i,j,k) have the last step from the trixel (i,j,k + 1) = (i,j,−(|k| − 1)) to the trixel (i,j,k). Thus, the number of shortest paths to the even trixel q(i,j,k) is exactly the same as the number of shortest paths to the odd trixel q′(i,j,k + 1).
However, for q′ the sum of the absolute coordinate values is |i| + |j| + |k + 1| = i + j + |k| − 1 = M − 1. Therefore, based on the hypothesis, the number of shortest paths is C(i + j, i). Observing that q and q′ share the coordinates i and j, which are, in fact, min(w1,w2,w3) and mid(w1,w2,w3), the formula also holds for the trixel q. Secondly, let us consider the case i > 0 and j,k < 0. Here, max(w1,w2,w3) = w1 = |i| = i, since every trixel has either a 0-sum or a 1-sum triplet, i.e., i + j + k ∈ {0,1}. Again the induction is based on the cases where min(w1,w2,w3) = 0, for which it is already proven that Formula (1) is satisfied. Now, as the induction hypothesis, let us assume that Formula (1) holds for every trixel with |i| + |j| + |k| = i − j − k < M, with a positive integer M. Further, let us consider a trixel with coordinates |i| + |j| + |k| = i − j − k = M. The number of shortest paths from trixel (0,0,0) to an even trixel (i,j,k) equals the sum of the numbers of shortest paths to the trixels (i,j + 1,k) = (i, −(|j| − 1), −|k|) and (i,j,k + 1) = (i, −|j|, −(|k| − 1)), since each shortest path from (0,0,0) to (i,j,k) passes through exactly one of these two trixels, having the last step from there to (i,j,k). However, by the induction hypothesis, Formula (1) is correct for the trixels (i,j + 1,k) and (i,j,k + 1) since their absolute coordinate sum is M − 1. Therefore, for the trixel (i,j,k) we have C(|j| − 1 + |k|, |j| − 1) + C(|j| + |k| − 1, |j|) = C(|j| + |k|, |j|). For an odd trixel (i,j,k), with j,k < 0 and i > 0, the number of shortest weighted paths equals the number of shortest weighted paths for the even trixel (i − 1,j,k), since in each shortest path the last step is from the even trixel (i − 1,j,k) to the odd trixel (i,j,k). Here mid(w1,w2,w3) and min(w1,w2,w3) have the values |j| and |k| (in an order) both for the trixels (i − 1,j,k) and (i,j,k). Therefore, the number of shortest paths to both of them is given by C(|j| + |k|, |j|). The proof has been finished. As one may also observe in the next example, the binomial coefficients appear in the figure (hence the name of the subsection); in fact, the space is cut into six parts, and in each part one can observe Pascal's triangle. We also note here that in [24], based on a different approach, in fact, very similar results were presented (as the case of path counting for 1-neighborhood). Example 1. Figure 3 illustrates the number of weighted shortest paths from trixel (0,0,0) to all other trixels in case 2α < β and 3α < γ.
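To make Theorem 1 concrete, here is a small Python cross-check (helper names are ours): it counts the shortest α-step paths by breadth-first search, using that an α-step from an even trixel increases one coordinate by 1 and from an odd trixel decreases one coordinate by 1, and compares the result with the closed form C(mid + min, min).

```python
from collections import deque
from math import comb

def alpha_neighbors(t):
    """1-neighbors of trixel t: an even trixel (coordinate sum 0) has one
    coordinate increased by 1, an odd trixel (sum 1) has one decreased by 1."""
    x, y, z = t
    d = 1 if x + y + z == 0 else -1
    return [(x + d, y, z), (x, y + d, z), (x, y, z + d)]

def count_shortest_alpha_paths(p, q):
    """Breadth-first counting of shortest alpha-step paths from p to q."""
    dist, count = {p: 0}, {p: 1}
    queue = deque([p])
    while queue:
        t = queue.popleft()
        if t == q:                       # all predecessors of q were already processed
            return count[q]
        for n in alpha_neighbors(t):
            if n not in dist:
                dist[n], count[n] = dist[t] + 1, count[t]
                queue.append(n)
            elif dist[n] == dist[t] + 1:
                count[n] += count[t]
    return 0

def formula_1(p, q):
    """Closed form of Theorem 1: C(mid + min, min) of the coordinate differences."""
    w = sorted(abs(a - b) for a, b in zip(p, q))
    return comb(w[0] + w[1], w[0])

if __name__ == "__main__":
    p, q = (0, 0, 0), (3, 2, -4)         # q chosen with x, y > 0 and z < 0
    print(count_shortest_alpha_paths(p, q), formula_1(p, q))   # both should print 10
```

For (0,0,0) and (3,2,-4) the coordinate differences are 2, 3 and 4, so the formula gives C(5, 2) = 10.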
3.2. Case of Double-Steps: Case of 2α > β and 3α < γ
In this case the shortest path between p(x1,y1,z1) and q(x2,y2,z2) contains a number of β-steps (plus one α-step in case p and q have different parities). The number of β-steps is equal to S/2, hence the subsection head. Furthermore, the number of shortest weighted paths, f(w1,w2,w3), between p(x1,y1,z1) and q(x2,y2,z2) is computed based on two sub-cases which are given by the following subsections.
3.2.1. Sub-Case (2α > β and 3α < γ) and S Is an Even Number
Theorem 2. Let α, β and γ be the weights for movements to 1-, 2- and 3-neighbor trixels in the triangular grid, respectively, such that 2α > β and 3α < γ. Let p(x1,y1,z1) and q(x2,y2,z2) be two trixels of the triangular grid, and w1 = |x1 − x2|, w2 = |y1 − y2| and w3 = |z1 − z2| be the absolute differences between the corresponding coordinates of the trixels such that S = w1 + w2 + w3 is an even number. Then, the number of the shortest paths between p and q, denoted by f(w1,w2,w3), is computed as
f(w1,w2,w3) = C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)). (2)
Proof. By the given values of the weights, it is clear that the number of β-steps equals S/2 in any of the shortest paths and no other steps are used (any 3-step can be broken into three consecutive 1-steps, and any 2 consecutive 1-steps can be joined into a β-step such that the total weight of the path decreases if the path originally has other types of steps). First, we deal with the case when the two trixels are on the same lane, i.e., they share one of the coordinate values. In this case, clearly, there is exactly one shortest path between them, built up by 2-steps on the given common lane; and since min(w1,w2,w3) = 0, Formula (2) also provides this result. Now, without loss of generality, assume that p is the origin and q does not share any lane with p. By the symmetry of the grid, there are various, but equivalent cases. Let us consider, first, the case that q(i,j,k) has coordinates with the properties i,j > 0 and k < 0. The base of the induction consists of the value 1 for the cases when min(w1,w2,w3) = 0.
We use induction on the sum of the coordinate differences, that is, in this case, i + j − k. By the induction hypothesis let us assume that Equation (2) holds for each even trixel (i,j,k) with i + j − k < M for any given positive integer M. Then, let us consider an even trixel (i,j,k) with i + j − k = M and count the number of the shortest paths from the origin to (i,j,k). It is clear that since only 2-steps are used, each shortest path goes through only even trixels. On the other hand, to reach (i,j,k) in a shortest path the last step could be from exactly one of the two trixels (i − 1,j,k + 1) = (i − 1,j,−(|k| − 1)) and (i,j − 1,k + 1) = (i,j − 1,−(|k| − 1)). However, for these two trixels, the condition that their absolute coordinate sum is less than M holds (it is, actually, M − 2 for any of these two trixels). Therefore, by the hypothesis, the numbers of shortest paths to them can be computed by Formula (2), that is, actually, C(i − 1 + j, i − 1) and C(i + j − 1, j − 1), since the first two coordinates correspond to the minimum and to the middle coordinate differences (in one order or the other). Then the number of shortest paths to the trixel (i,j,k) is exactly the sum of the previous two values, i.e., C(i − 1 + j, i − 1) + C(i + j − 1, j − 1) = C(i + j, i), which is the value we wanted to prove. Now we show the proof in the case i > 0 and j,k < 0. We use induction again on the value M = |i| + |j| + |k| = i − j − k. The base of the induction gives 1 for the cases when min(w1,w2,w3) = 0. Now, by the induction hypothesis, let us assume that Equation (2) holds for each even trixel (i,j,k) with i − j − k < M for any given positive integer M. Let us consider the shortest paths from (0,0,0) to q(i,j,k) with M = |i| + |j| + |k|. In this case all shortest paths from (0,0,0) to q have the last step either from (i − 1,j + 1,k) = (|i| − 1,−(|j| − 1),k) or from (i − 1,j,k + 1) = (|i| − 1,j,−(|k| − 1)) to q. By the addition rule, we need the sum of the numbers of shortest paths to those two trixels to get the number of shortest paths between (0,0,0) and q. Observing that in this sixth of the grid |j| = −j and |k| = −k play the role of min(w1,w2,w3) and mid(w1,w2,w3), the sum is C(|j| − 1 + |k|, |j| − 1) + C(|j| + |k| − 1, |j|) = C(|j| + |k|, |j|). That is exactly the value of Formula (2) for the number of shortest paths between (0,0,0) and q, thus the proof has been finished. Now, let us give a comment on the previous case, since the result is exactly the same as in the first case, when only 1-steps were used. Remark 1. Every two consecutive 1-steps can be joined into a 2-step and any 2-step can be broken into two consecutive 1-steps; in fact, there is a bijection between the set of shortest paths used in Theorem 1 and the set of shortest paths used in Theorem 2 between the same trixels (since in Theorem 2 only same-parity trixels are considered here). In the used sixth of the grid one of the directions of any two consecutive 1-steps is a necessary direction step in a shortest path, while there could be two choices in the other (if the actual trixel is not in the same lane as the target trixel). Thus, by describing every second step of a shortest path with only 1-steps (case of Theorem 1), one can still uniquely define the whole path, and in fact, this description gives a shortest path between the same two trixels if only 2-steps are used (i.e., in the case of Theorem 2).
3.2.2. Sub-Case (2α > β and 3α < γ) and S Is an Odd Number
Theorem 3. Let α, β and γ be the weights of the 1-, 2- and 3-steps, respectively, with the conditions 2α > β and 3α < γ.
Let p(x1,y1,z1) and q(x2,y2,z2) be two trixels of the triangular grid, and let w1 = |x1 − x2|, w2 = |y1 − y2| and w3 = |z1 − z2| be the absolute differences between the corresponding coordinates of the trixels, such that S = w1 + w2 + w3 is odd. Then, the number of shortest paths between p and q, denoted by f(w1,w2,w3), is computed as
f(w1,w2,w3) = ((S + 1)/2) · C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)). (3)
Proof. In this case the shortest path is composed of (S − 1)/2 β-steps and one α-step. Thus, the total number of steps in any shortest path is (S + 1)/2. By Remark 1, we know that each shortest path in this case corresponds to a shortest path with only 1-steps (in the sense that only those trixels are used during the path of the actual case which are included in that shortest path with only 1-steps). However, the mapping is not a bijection in this case. There could be many actual shortest paths that correspond to the same shortest path with only 1-steps: in fact, any one of the (S + 1)/2 steps can be the 1-step, and then all others are 2-steps. This gives us the possibility to use the multiplication rule to count the number of shortest paths: first we can fix a shortest path with only 1-steps in C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)) many ways, and then we can choose in (S + 1)/2 different ways the place of the 1-step in the path. Thus, Formula (3) has been proven.
Example 2. Figure 4 shows the number of shortest weighted paths from (0,0,0) to the displayed trixels of the grid in case 2α > β and 3α < γ.
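The two sub-cases of this section can be combined into one short function; the Python sketch below (function name ours) simply evaluates Formula (2) for even S and Formula (3) for odd S.

```python
from math import comb

def count_double_step_paths(p, q):
    """Number of shortest weighted paths when 2*alpha > beta and 3*alpha < gamma
    (Theorems 2 and 3): beta-steps are preferred, and one alpha-step is added
    exactly when p and q have different parities (S odd)."""
    w = sorted(abs(a - b) for a, b in zip(p, q))
    w_min, w_mid, _ = w
    s = sum(w)
    base = comb(w_min + w_mid, w_min)                     # Formula (2): S even
    return base if s % 2 == 0 else (s + 1) // 2 * base    # Formula (3): S odd

if __name__ == "__main__":
    print(count_double_step_paths((0, 0, 0), (2, 1, -3)))  # S = 6 even: C(3, 1) = 3
    print(count_double_step_paths((0, 0, 0), (3, 2, -4)))  # S = 9 odd: 5 * C(5, 2) = 50
```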
3.3. Case of Triple-Steps: Case of 2α < β < γ < 3α
In this case 3-steps have the smallest relative weight; this is where the name of the subsection comes from. Moreover, two consecutive 1-steps give a smaller sum of weights than a 2-step; thus, in this case 2-steps are not used in any shortest path.
Theorem 4. Let α, β and γ be the weights of 1-, 2- and 3-steps, respectively, such that the weights satisfy the conditions 2α < β < γ < 3α. Let p(x1,y1,z1) and q(x2,y2,z2) be trixels of the triangular grid; further, let w1 = |x1 − x2|, w2 = |y1 − y2| and w3 = |z1 − z2| be the absolute differences between the corresponding coordinates of the trixels. Then, the number of the shortest paths between p and q, denoted by f(w1,w2,w3), is computed as
f(w1,w2,w3) = C(mid(w1,w2,w3), min(w1,w2,w3)). (4)
Proof. The proof consists of various cases. We start with the case when the two trixels are in the same lane. Then the number of γ-steps (and also the number of β-steps) is zero. The number of shortest paths becomes one, having exactly one path through 1-neighbors between the two trixels in their common lane. Since min(w1,w2,w3) = 0, Formula (4) also leads to this result: C(mid(w1,w2,w3), 0) = 1. Let us consider the cases when the two trixels are not lying on a common lane. Because of the symmetry of the grid, we need to differentiate two further cases, i.e., we do the proof for two of the sixths of the grid. The sixths of the triangular grid that we are interested in are i,j > 0 and k < 0 (2 positive coordinates and 1 negative), and i > 0 and j,k < 0 (2 negative and 1 positive coordinate). By mirroring these sixths one gets the whole triangular grid. Further, without loss of generality, we assume that p(0,0,0) is the origin and q(i,j,k) has the above properties. Let us consider the possible cases one by one. Case 1. The two trixels p(0,0,0) and q(i,j,k) are not in the same lane and i,j > 0 and k < 0. Further, let us assume first that q is an even trixel and let us see how a shortest path is built up from (0,0,0) to q. The shortest path contains the possible maximum number of "γα-combo" steps (any of those is a γ-step followed by an α-step, such that both of the first 2 coordinates are increased by 1 and the third coordinate is decreased by 2 during such a "combo" step). Thus, the number of these "combo" steps equals min(w1,w2,w3). Notice that both the order and the direction of these steps are fixed by the coordinate values of q. In case i = j, one can reach q in this way; otherwise, "double" α-steps are also needed (those are two consecutive α-steps increasing one of the first two coordinates, the one that had the value mid(w1,w2,w3), and also decreasing the value of the third coordinate). The direction and the order of these α-steps are also fixed in a "double" step. Now, any of the shortest paths is built up altogether by mid(w1,w2,w3) many "γα-combo" and "double" α-steps. The order of these combined steps, however, can be arbitrary, and min(w1,w2,w3) of them are "γα-combo" steps. This leads to the formula that we wanted to prove, C(mid(w1,w2,w3), min(w1,w2,w3)). Observe that in each of the shortest paths in this case the last step, in fact, is an α-step, which decreases the third coordinate. This leads us to the solution of the next case. The next case includes the same sixth of the grid, but q is an odd trixel (i.e., i + j + k = 1). Instead of this odd trixel, let us consider the even trixel q′(i,j,k − 1). The number of shortest paths from (0,0,0) to q is the same as the number of shortest paths to q′; in fact, there is a bijection between these sets of paths, such that to any path to q the last α-step from q to q′ is concatenated. However, in this sixth of the grid i and j are playing the role of min(w1,w2,w3) and mid(w1,w2,w3) (in some order), thus Formula (4) also holds for this case. Case 2.
Let us consider the other sixth of the grid, thus let p(0,0,0) and q(i,j,k) be given such that i > 0 and j,k < 0 (two negative coordinates and one positive coordinate). First, let q be even. A shortest path contains the possible maximum number of "αγ-combo" steps (their number is min(w1,w2,w3); each of them increases the first coordinate by 2 and decreases each of the other two by 1) and "double" α-steps (two consecutive α-steps, their directions also fixed by q). Altogether the path contains mid(w1,w2,w3) many of those combined steps, from which min(w1,w2,w3) are "αγ-combo" steps. Since the order of these combined steps is arbitrary, the number of the shortest paths is C(mid(w1,w2,w3), min(w1,w2,w3)), which was to be proven. Finally, let us consider the case in the same sixth of the grid when q(i,j,k) is odd (i + j + k = 1). In this case every shortest path from (0,0,0) has as its last step an α-step from the even trixel q′(i − 1,j,k) to q. Therefore, the number of shortest paths from the origin to q coincides with the number of shortest paths to q′. However, in this case j and k have the values −min(w1,w2,w3) and −mid(w1,w2,w3) (in some order) both for q and q′, and therefore Formula (4) gives also the number of the shortest paths to q. The theorem is proven.
Example 3. In Figure 5, one can observe the number of shortest weighted paths from the trixel (0,0,0) to some other trixels in case 2α < β < γ < 3α.
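Theorem 4 admits an equally short evaluation; the sketch below (function name ours) just evaluates Formula (4).

```python
from math import comb

def count_triple_step_paths(p, q):
    """Number of shortest weighted paths when 2*alpha < beta < gamma < 3*alpha
    (Theorem 4): each shortest path is a sequence of mid "combined" steps
    (combo steps and double alpha-steps), of which min are combo steps, and
    only their order varies."""
    w = sorted(abs(a - b) for a, b in zip(p, q))
    w_min, w_mid, _ = w
    return comb(w_mid, w_min)                    # Formula (4)

if __name__ == "__main__":
    print(count_triple_step_paths((0, 0, 0), (3, 2, -4)))   # C(3, 2) = 3
```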
3.4. Two-Dimensional Extension of Fibonacci Numbers: Case of 2α = β and 3α < γ
In this case every shortest path is built up by 1-steps and 2-steps; they are equally preferred, since their relative weight for changing a coordinate value is the same. Since 3-steps have a larger respective weight, they are not used in any shortest path. Moreover, every two consecutive 1-steps can be changed to a 2-step and vice versa without changing the sum of the weights in the path.
Theorem 5. Let α, β and γ be the weights of 1-, 2- and 3-steps, respectively, such that the weights satisfy the conditions 2α = β and 3α < γ. Let p(x1,y1,z1) and q(x2,y2,z2) be two trixels of the triangular grid; further, let w1 = |x1 − x2|, w2 = |y1 − y2| and w3 = |z1 − z2| be the absolute differences between the corresponding coordinates of the trixels, and S be the sum of these absolute differences. Further, let F_S denote the S-th Fibonacci number, i.e., the S-th element of the Fibonacci sequence defined by F_0 = F_1 = 1, F_{i+2} = F_i + F_{i+1} (for every nonnegative integer i). Then, the number of the shortest paths between p and q, denoted by f(w1,w2,w3), is computed as
f(w1,w2,w3) = F_S · C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)). (5)
Proof. In this case a shortest weighted path may contain only α-steps, or only β-steps, or both of them. Each β-step can be substituted by two α-steps and vice versa. The number of shortest weighted paths with only α-steps equals C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)), as we have seen in Formula (1). In each of these paths there are exactly S α-steps. In each path, we can always substitute two consecutive α-steps by a β-step, such that the original path with only 1-steps is clearly identifiable. The obtained β-step can be broken down into two consecutive α-steps in a unique way. Therefore, we may apply the multiplication rule, first by counting the number of base paths with only 1-steps, and then by counting the number of paths when various numbers of 2-steps are used in various places. That is, actually, the number of the possible orders of 1's and 2's such that their sum is S = w1 + w2 + w3. Let, then, i be the number of β-steps (0 ≤ i ≤ S/2); each time we increase the number of β-steps by 1, we decrease the number of α-steps by 2. Thus, there are i 2-steps, and the path contains S − i steps in total. Therefore, we need to sum up the values C(S − i, i) to get the number of possible ways. Actually, ∑_i C(S − i, i) = F_S, that is, the S-th Fibonacci number. One can see it as follows. When p = q, or they are 1-neighbors, there is exactly 1 shortest path, without any steps (an empty sequence) or with a 1-step (one number 1), respectively. Also F_0 and F_1 have the value 1, as the initial values of the sequence. Now, as an induction hypothesis, let us assume that the number of possible orders of 1's and 2's such that their sum is S is exactly F_S when S < M for any fixed M (where M is at least 2). Now there are exactly two ways to have such a sequence of 1's and 2's such that their sum is M: either the last element is a 1 or a 2 (a 1-step or a 2-step, respectively, considering paths). However, by the assumption, the number of those sequences (paths) with sum M that have a 1 as their last element is exactly F_{M−1}, while the number of those that have a 2 as their last element is exactly F_{M−2}. Using the addition rule, we get that F_M = F_{M−1} + F_{M−2}, which is exactly the recursive formula for the Fibonacci sequence, thus F_M is exactly the M-th element of this sequence. Summarizing it, we have the formula that we wanted to prove: f(w1,w2,w3) = F_S · C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)).
Example 4. Figure 6 shows the number of shortest paths from trixel (0,0,0) to the displayed trixels in the case 2α = β and 3α < γ.
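Assuming the reading of Theorem 5 given above, the case 2α = β can be evaluated as follows (names ours); the assert cross-checks the identity ∑_i C(S − i, i) = F_S used in the proof.

```python
from math import comb

def fibonacci(n):
    """Fibonacci numbers with the paper's indexing: F_0 = F_1 = 1."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def count_equal_alpha_beta_paths(p, q):
    """Number of shortest weighted paths when 2*alpha = beta and 3*alpha < gamma
    (Theorem 5): each alpha-only base path can have pairs of consecutive
    alpha-steps merged into beta-steps, contributing a factor F_S."""
    w = sorted(abs(a - b) for a, b in zip(p, q))
    w_min, w_mid, _ = w
    s = sum(w)
    # Identity from the proof: the number of {1,2}-sequences with sum S is F_S.
    assert fibonacci(s) == sum(comb(s - i, i) for i in range(s // 2 + 1))
    return fibonacci(s) * comb(w_min + w_mid, w_min)        # Formula (5)

if __name__ == "__main__":
    print(count_equal_alpha_beta_paths((0, 0, 0), (0, 2, -2)))  # common lane: F_4 = 5
    print(count_equal_alpha_beta_paths((0, 0, 0), (1, 2, -3)))  # F_6 * C(3, 1) = 39
```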
Observe that if the two trixels are in a common lane (i.e., min(w1,w2,w3) = 0), the number of the shortest paths between them is, in fact, the S-th (i.e., (w1 + w2 + w3)-th) element of the Fibonacci sequence, starting with F_0 = F_1 = 1. (Actually, as we have shown, see also, e.g., [25], the number of {1,2}-sequences having sum S is F_S, that is, the S-th element of the Fibonacci sequence.) Based on that, we can see the results as a kind of two-dimensional extension of the Fibonacci sequence.
3.5. Generalising the Sequence of Odd Numbers: Case of 2α < β and 3α = γ
In this case the shortest weighted path between p(x1,y1,z1) and q(x2,y2,z2) is composed of α-steps and γ-steps, and we never use β-steps; the number of γ-steps is between 0 and min(w1,w2,w3) in a shortest weighted path. Note that a γ-step can always be substituted by three consecutive α-steps, but the converse does not hold. The number of shortest weighted paths between p(x1,y1,z1) and q(x2,y2,z2), f(w1,w2,w3), can be computed according to four sub-cases which are given in the next theorem.
Theorem 6. Let α, β and γ be the weights of 1-, 2- and 3-steps, respectively, such that 2α < β and 3α = γ hold. Let p(0,0,0) and q(x,y,z) be two trixels of the triangular grid; further, let the absolute coordinate differences w1 = |x|, w2 = |y| and w3 = |z| and their sum S = w1 + w2 + w3 be given. Then, there are the following cases for counting the number of the shortest paths, denoted by f(w1,w2,w3), between p and q. If the trixels are in a common lane, that is, if min(w1,w2,w3) = 0, then there is exactly 1 shortest path. If the trixels are not in a common lane then, if S is even, then
f(w1,w2,w3) = ∑_{i=0}^{min(w1,w2,w3)} C(S/2 − i, min(w1,w2,w3)) · C(min(w1,w2,w3), i). (6)
If S is odd and q has 2 positive and a negative coordinate, then
f(w1,w2,w3) = ∑_{i=0}^{min(w1,w2,w3)} C((S + 1)/2 − i, min(w1,w2,w3)) · C(min(w1,w2,w3), i).
If S is odd and q has 2 negative and 1 positive coordinate, then
f(w1,w2,w3) = ∑_{i=0}^{min(w1,w2,w3)} C((S − 1)/2 − i, min(w1,w2,w3)) · C(min(w1,w2,w3), i).
Proof.
If the trixels are in the same lane, clearly, the shortest path is built up by 1-steps including each trixel between them, and there is only 1 such path. Now, let us consider the shortest weighted paths from trixel p(0,0,0) to q(x,y,z) where none of the coordinates of q is zero, i.e., the two trixels are not in the same lane. A shortest path may contain α-steps only. It may contain "γα-combo" steps if q has 1 negative and 2 positive coordinates, or it may contain "αγ-combo" steps if q has 2 negative coordinates and 1 positive coordinate. A shortest path may also contain many α-steps and also many combo steps (based on the case, as they were described above). Now, let us consider the remaining three cases one after the other. When q is an even trixel, i.e., S is an even number, every shortest path is built up by combo steps and "double" α-steps (two consecutive α-steps). The type of the combo steps, i.e., either "αγ-combo" or "γα-combo", depends on the number of negative and positive values among the coordinates x, y and z (as we have already described). We can partition the set of shortest paths into equivalence classes based on the number of combo steps used in them. Consequently, we will compute the number of shortest paths in each such block and sum up those values. On the one hand, we may have only α-steps in a shortest path, which means that 0 combo steps are used. On the other hand, the maximal number of combo steps in a shortest path (since they change all the three coordinate values) is min(w1,w2,w3) = min(|x|, |y|, |z|). When only α-steps are used, the number of such shortest paths is C(mid(w1,w2,w3) + min(w1,w2,w3), min(w1,w2,w3)) (from Theorem 1). The number of shortest paths with the maximal number of combo steps is C(mid(w1,w2,w3), min(w1,w2,w3)) (from Theorem 4). In one combo step the sum of the coordinate changes in absolute value is 4 (3 + 1 in a γα- and 1 + 3 in an αγ-combo), while a double α-step changes 2 of the coordinates with sum 2 in absolute value; that implies that a combo step can be changed to two double α-steps (although the reverse may not hold). Let i be the number of the combo steps in a shortest path, where 0 ≤ i ≤ min(w1,w2,w3); then the number of combo and double steps in such a shortest path is S/2 − i. There are, actually, S/2 = mid(w1,w2,w3) + min(w1,w2,w3) double α-steps if combo steps are not used: min(w1,w2,w3) of them in one direction, while mid(w1,w2,w3) of them in the other direction (with 60 degrees between the two directions). Moreover, each combo step decreases the number of double steps by 2, i.e., by one double α-step in each of the two directions, and therefore decreases the total number of combined steps by 1. In any path the used combo and double α-steps can be put in any order. Thus, the combination of the i combo steps, min(w1,w2,w3) − i double α-steps in one direction and mid(w1,w2,w3) − i double α-steps in the other direction, altogether S/2 − i steps, gives the number of shortest paths in this block. This number can be written as C(S/2 − i, min(w1,w2,w3)) · C(min(w1,w2,w3), i). By summing up these values for the possible values of i, one gets exactly Formula (6). The proof of this case is finished. In the case when S = |x| + |y| + |z| is odd and q(x,y,z) has 2 positive and one negative coordinate, by the symmetry of the triangular grid, we consider only x,y > 0 and z < 0. In this sixth of the grid, in the shortest paths "γα-combo" and "double" α-steps can be used (to any even trixel).
In what follows, for any even trixel q the shortest path finishes with an α-step in the direction opposite to axis z, i.e., by decreasing the third coordinate and not changing the other two. Therefore, the number of shortest paths from the trixel (0,0,0) to the odd trixel q(x,y,z) is exactly the same as the number of shortest paths from (0,0,0) to the even trixel q′(x,y,z − 1). However, the number of shortest paths to q′ is already computed in the previous case. Knowing that in this sixth of the grid one of x and y plays the role of mid(w1,w2,w3) and the other plays the role of min(w1,w2,w3), and that for q′ the sum of the coordinate differences is one more than it is for q (it is S + 1 for q′), the number of shortest paths is proven to be ∑_{i=0}^{min(w1,w2,w3)} C((S + 1)/2 − i, min(w1,w2,w3)) · C(min(w1,w2,w3), i). Finally, considering the last case, because of symmetry, we use x > 0 and y,z < 0. In this sixth of the grid, all shortest paths to an even trixel q are built up by "αγ-combo" and "double" α-steps. To reach an odd trixel q(x,y,z), every shortest path from (0,0,0) has as its last step an α-step from the trixel q′(x − 1,y,z) to q(x,y,z). From this fact, it follows that the number of shortest paths from (0,0,0) to q is the same as to q′. However, the latter one is already proven and it is computed by Formula (6). In this sixth of the grid −y and −z play the role of min(w1,w2,w3) and mid(w1,w2,w3) in an order. Further, the sum of the absolute coordinate values is S for q, and it is S − 1 for q′. Therefore, one needs to modify Formula (6) accordingly and gets ∑_{i=0}^{min(w1,w2,w3)} C((S − 1)/2 − i, min(w1,w2,w3)) · C(min(w1,w2,w3), i), which was to be proven. Thus, each case of the theorem is proven.
Remark 2. Observe that if min(w1,w2,w3) = 0, then f(w1,w2,w3) = 1. We also know that the number of shortest paths to any odd trixel is the same as the number of shortest paths to an even neighbor trixel. To make a comment on the title of the subsection, let us consider the special case when, e.g., two even trixels are located such that min(w1,w2,w3) = 1. Then max(w1,w2,w3) = mid(w1,w2,w3) + 1. Let max(w1,w2,w3) be denoted by n. Then, S = 2n. The variable i of Formula (6) could be either 0 or 1 (the number of combo steps containing an α- and a γ-step). Thus, we have the sum of two values, namely, C(n, 1) · C(1, 0) and C(n − 1, 1) · C(1, 1). This gives n + n − 1 = 2n − 1, which is exactly the n-th positive odd number. Hence the name of the case.
Example 5. In Figure 7 the number of weighted shortest paths from the origin (0,0,0) to some trixels is presented in the case 2α < β and 3α = γ. The sequence of odd numbers can be seen as embedded next to the lanes containing the origin and having only 1's.
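Under the reconstruction of Theorem 6 given above (the exact printed form of the formula is not available in this extraction), the corresponding computation can be sketched as follows, with p fixed at the origin as in the theorem and the function name ours.

```python
from math import comb

def count_alpha_gamma_paths(q):
    """Number of shortest weighted paths from the origin (0,0,0) to trixel q
    when 2*alpha < beta and 3*alpha = gamma (Theorem 6), following the case
    split on the parity of S and on the signs of q's coordinates."""
    w = sorted(abs(c) for c in q)
    w_min = w[0]
    s = sum(w)
    if w_min == 0:                     # common lane with the origin: unique path
        return 1
    if s % 2 == 1:                     # odd S: reduce to the adjacent even trixel
        s = s + 1 if sum(1 for c in q if c > 0) == 2 else s - 1
    return sum(comb(s // 2 - i, w_min) * comb(w_min, i) for i in range(w_min + 1))

if __name__ == "__main__":
    # Remark 2: min difference 1 and max difference n gives the n-th odd number.
    print(count_alpha_gamma_paths((1, 3, -4)))   # n = 4, so 2n - 1 = 7
```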
Conclusions
This paper discusses five of the most popular cases for the number of shortest weighted paths between any two trixels in the triangular grid. The number of these paths depends on the weights α, β and γ of the movements from a trixel to its various types of neighbors. In Section 3.1 (2α < β and 3α < γ), α is preferred; it has the smallest relative weight for changing a coordinate value in a path, and thus no shortest path contains any β- or γ-steps. In this case, clearly, binomial coefficients based on coordinate differences of the trixels provide the number of shortest paths. In Section 3.2 (2α > β and 3α < γ), β-steps are preferred; β has the smallest relative weight for changing a coordinate value in a path, and thus no shortest path contains any γ-steps (and at most one α-step is used, since we may need to change the parity when we are looking for a shortest path between an odd and an even trixel). The results are given either by binomial coefficients or by their multiples, depending on the parities of the trixels. In contrast to this, in Section 3.3 (when the weights satisfy 2α < β < γ < 3α) β-steps are not used at all; in fact, γ-steps are preferred (even by adding also many α-steps because of the parities of the trixels). The numbers of shortest paths are again described by binomial coefficients.
In the case considered in Section 3.4 (2α = β and 3α < γ), α-steps and β-steps are equally preferred, and no γ-step can occur in a shortest path. We have seen that the results, in this case, can be seen as a two-dimensional extension of the Fibonacci sequence. In our last studied case, with weights satisfying 2α < β and 3α = γ, α-steps and γ-steps are equally preferred, and no β-step can occur. While in some cases the computation clearly results in well-known binomials, the structure of the grid gives some more interesting cases. We believe that the cases presented here are among the most basic and usual ones: we have studied the cases when exactly one or two types of steps are not preferred, and therefore they do not appear in any of the shortest paths. However, there are also some other interesting cases that can be discussed later on, e.g., when 2α = β and 3α = γ, when all the three types of steps are equally preferred. Another possible future task is to consider an "inhomogeneous" distribution of the weights, which involves counting the number of shortest weighted paths of one case concatenated by the shortest weighted paths of another case. A somehow connected result was discussed in [22]: "digital disks" were defined and used to approximate the Euclidean disks, where the set of gridpoints having less (or equal) distance than a given radius from a given gridpoint defined the digital disk.
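As a closing note, the closed formulae of Section 3 can be cross-checked numerically for concrete weight triplets by a Dijkstra-style count of minimum-total-weight paths. The sketch below (names ours) restricts the search to a bounding box around the two trixels, which we assume is harmless for a sufficiently generous margin since the weights are positive; integer weights are used so that ties in total weight are exact.

```python
import heapq

def weighted_neighbors(t, alpha, beta, gamma):
    """All twelve neighbors of trixel t together with the weight of the step."""
    x, y, z = t
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                n = (x + dx, y + dy, z + dz)
                if n != t and sum(n) in (0, 1):
                    m = abs(dx) + abs(dy) + abs(dz)
                    yield n, (alpha, beta, gamma)[m - 1]

def count_min_weight_paths(p, q, alpha, beta, gamma, margin=3):
    """Dijkstra-style counting of minimum-total-weight paths from p to q,
    restricted to a bounding box around p and q."""
    lo = [min(a, b) - margin for a, b in zip(p, q)]
    hi = [max(a, b) + margin for a, b in zip(p, q)]
    dist, count = {p: 0}, {p: 1}
    heap = [(0, p)]
    while heap:
        d, t = heapq.heappop(heap)
        if d > dist[t]:
            continue                           # stale heap entry
        for n, w in weighted_neighbors(t, alpha, beta, gamma):
            if any(c < l or c > h for c, l, h in zip(n, lo, hi)):
                continue                       # outside the search box
            nd = d + w
            if n not in dist or nd < dist[n]:
                dist[n], count[n] = nd, count[t]
                heapq.heappush(heap, (nd, n))
            elif nd == dist[n]:
                count[n] += count[t]
    return count.get(q, 0)

if __name__ == "__main__":
    q = (3, 2, -4)                                   # coordinate differences 2, 3, 4; S = 9
    print(count_min_weight_paths((0, 0, 0), q, 10, 25, 35))  # Section 3.1: C(5, 2) = 10
    print(count_min_weight_paths((0, 0, 0), q, 10, 19, 35))  # Section 3.2: 5 * C(5, 2) = 50
    print(count_min_weight_paths((0, 0, 0), q, 10, 22, 29))  # Section 3.3: C(3, 2) = 3
```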
Psychosocial Correlates of the Experience of Caregiving Among Caregivers of Patients With Schizophrenia Background: Family caregivers provide essential support to their loved ones with schizophrenia, with profound outcomes for themselves. The concept of caregiver burden fails to capture the entire caregiving experience, which also incorporates positive aspects of caring, and many potentially significant variables are associated with the caregiving experience. Aim: To examine the correlates of the experience of caregiving in caregivers of patients with schizophrenia. The specific objectives were to examine the socio-demographic variables of the patients and caregivers, clinical variables of the patient, caregivers' knowledge of illness, caregivers' perspectives of family functioning, caregiver coping, their social support, psychological distress, quality of life, and their spirituality, religiosity and personal beliefs, and the associations of these variables with the caregivers' experience of caregiving. Methods: This cross-sectional observational study was conducted between August 2018 and January 2021 at the All India Institute of Medical Sciences, New Delhi, India. One hundred and fifty-eight dyads of patients with schizophrenia and their family caregivers were recruited using purposive sampling. The Experience of Caregiving Inventory was used to evaluate the caregiving experience. The caregivers were also assessed on socio-demographics, knowledge of illness, family functioning, coping, social support, general mental health, quality of life, and spiritual, religious, and personal beliefs. Patient socio-demographics and clinical variables were also assessed. Results: A negative experience of caregiving was reported by caregivers of patients who had higher positive or negative symptoms of schizophrenia. Impaired Communication, Roles, Affective Responsiveness, Affective Involvement, and General Functioning aspects of family functioning were associated with a negative experience of caregiving. Denial/blame and seeking social support as coping were also associated with a negative experience of caregiving. A negative experience of caregiving was significantly positively correlated with greater psychological distress and poorer quality of life. Greater inner peace was associated with a less negative experience of caregiving. Spiritual strength was associated with a more positive experience of caregiving. Knowledge of mental illness and caregiver social support were not significantly associated with the experience of caregiving. Conclusion: Experience of caregiving is a relevant construct, the understanding of which can help inform caregiver-directed interventions in the future. Specifically, family-based interventions, which include ameliorating patient symptomatology, improving the family environment, strengthening caregivers' coping strategies, attending to caregiver distress, and encouraging spirituality among caregivers, may lead to a less negative and more positive experience of caregiving, and a better quality of life for caregivers.
Introduction Schizophrenia is a chronic, severe, and disabling psychotic illness characterized by disturbances in thought processes, perceptions, emotional responsivity, and social interactions [1]. Schizophrenia affects about 24 million persons, or one in 300 persons (0.32%), around the world [2] and is one of the top 15 causes of disability worldwide [3]. About 90% of untreated schizophrenia patients reside in low-to-middle-income countries [4]. The humanistic impact of schizophrenia is massive, affecting the quality of life, causing stigmatization and violence, and leading to low rates of employment, marriage, and fertility, and high rates of substance abuse [5]. A caregiver's role is crucial in this illness. The origins of caregiving behavior can be traced back to anthropology, where the survival of persons with significant impairments is evidence of compassion and moral decency demonstrated in prehistoric societies [6]. Hermanns and Mastel-Smith (2012) identified the essential elements of caring, which include the caregiver's commitment to the improvement of the care recipient [7]. The emotions identified by them were love, affection, empathy and compassion, satisfaction, and fulfilment in the caregiving role. The most important element was the emotional connection between the caregiver and the care recipient, which characterized the caregiving situation. Caregiving in schizophrenia can be quite stressful. The common impacts on caregivers include depression, anxiety, and physical issues. Negative experiences of caregiving comprise physical, emotional, and economic impacts, and stigma in caregivers [8]. A very important factor in how caregivers respond is their cognitive appraisal regarding the illness, how they interpret and make sense of the disorder, and how they assess their coping capabilities [9]. Caregiving appraisal is a neutral concept comprising subjective cognitive and affective appraisals of the likely stressors and the efficacy of an individual's coping strategies [10]. Recently, some positive and beneficial aspects of the caregiving role have come to the forefront, like caregivers acquiring greater sensitivity and strength [11]. The issue with the concept of caregiver 'burden' is that it cannot highlight the potentially rewarding aspects of caregiving, and that it does not come within a psychological or social theoretical framework that pertains to determinants, mediating factors, and outcomes [12]. The clinical relevance of caregiving in mental illness underscores that the concept of burden attributes all negative outcomes in the caregiver's life to the patient's illness and does not take other factors into account [12]. Also, it has been shown that the experience of caregiving predicts caregivers' psychological well-being more accurately than burden does [9].
A study on the experience of caregiving is required because caregiving is now seen to be a complex healthcare activity. Starting from an informal family duty, it is currently a major part of health care [13]. There are very few large-sample studies on caregiving in India. Most studies have conducted research taking samples of about 50 to 100 participants. Also, most Indian studies have concentrated on delineating caregiver burden. Caregiving appraisal (as opposed to burden) has been taken up and studied to a limited extent in India [14-17]. Even in the West, the experience of caregiving has been studied infrequently, especially in the area of schizophrenia per se. Regarding Indian studies, these have assessed the association of the experience of caregiving with variables such as patient symptomatology [14-17], coping [14,15,17], social support [14,17], and caregiver distress [14,16,17], and one study considered familism and family cohesion [17]. Out of these studies, one considered only two main variables [15], one compared caregiving in schizophrenia and BPAD [16], and one studied the correlates of caregiver distress [17]. One old study has discussed some variables as correlates of the experience of caregiving [14], but many pertinent variables have not been included in it. Hence, very few studies have been carried out on the knowledge of schizophrenia, positive aspects of caregiving, and spiritual, religious, and personal beliefs in caregivers, and further research is needed in these areas. Thus, we wanted to extend the previous work by including a different combination of variables not studied earlier (including knowledge of illness, positive caregiving experiences, spirituality, religiosity, and personal beliefs) and to consider these aspects, which have not been emphasized in other studies, in the context of quality of life, psychological distress, and social support of the caregivers. Burden captures only a part of the entire caregiving experience, and a negative part at that, whereas the latter is a much broader construct. Hence, the aim of the present research was to examine the correlates of the experience of caregiving among the caregivers of patients with schizophrenia. Specifically, we examined caregivers' knowledge of the illness, their perspective of family functioning, coping, social support, psychological distress, quality of life, spirituality, religiosity, and personal beliefs in relation to their experience of caregiving. It was expected that both a negative and positive experience of caregiving would be reported by caregivers, and that it would be associated with the socio-demographics of the patients and caregivers, clinical variables of the patient, caregivers' knowledge of illness, their perspective of family functioning, their coping, social support, psychological distress, quality of life, and their spiritual, religious and personal beliefs.
Setting and participants This cross-sectional observational study was carried out in the Department of Psychiatry at the All India Institute of Medical Sciences (AIIMS), New Delhi. It provides treatment to referred and non-referred patients from different parts of India. Both inpatient and outpatient services are available. The treatment is provided by a team of trained psychiatrists, psychologists, nurses, and other health professionals. The treatment is considerably subsidized. The clientele comprises patients with different psychiatric disorders.
Inclusion and exclusion criteria Patients of either gender were included if they had a diagnosis of schizophrenia as per International Statistical Classification of Diseases and Related Health Problems (ICD)-10 criteria, with an illness duration of more than one year, and were aged between 18 and 55 years. Caregivers of either gender were included if they were living with the patient for more than one year and were primarily responsible for caring for the patient and supervising their treatment (including medication and hospital visits), could read Hindi (the local language) and comprehend instructions for the assessment, and provided informed consent. Patients were excluded if they had comorbid and disabling chronic physical or psychiatric disorders, had substance dependence (except tobacco), or had organic brain syndromes. Caregivers were excluded if they had a diagnosed disabling physical or psychiatric disorder, or if they did not have regular contact with the patient and were not living with the patient. Participants with a family member with a diagnosed disabling chronic physical illness or chronic disabling psychiatric disorder staying in the same dwelling unit were also excluded.
Procedure The study was commenced after receiving ethical clearance from the Institute Ethics Committee, All India Institute of Medical Sciences (AIIMS), New Delhi (Approval Number: IECPG-204/10.05.2018). Existing or new patients with schizophrenia, as diagnosed by a consultant psychiatrist and attending psychiatry outpatient services, were screened. Those accompanied by a caregiver and fulfilling the eligibility criteria were taken up for the study. They were explained the nature and purpose of the study, and informed consent was sought from both the patient and his/her primary caregiver. Patients and caregivers who gave their informed consent were then interviewed and administered the scales. The assessments were completed in one sitting with breaks. Patients were assessed for their demographic (e.g., age, gender, residence) and clinical details (e.g., total duration of illness, total duration of treatment). They were assessed for positive and negative symptoms of schizophrenia using the Scale for the Assessment of Positive Symptoms and the Scale for the Assessment of Negative Symptoms, respectively. Similarly, the caregivers were assessed for their demographic details. Caregiving experiences were assessed using the Experience of Caregiving Inventory. Caregivers were also assessed for knowledge of illness using the Knowledge of Mental Illness scale, family functioning using the Family Assessment Device (FAD), coping using the Coping Checklist, social support using the Social Support Questionnaire-Hindi, psychological distress using the General Health Questionnaire-12, quality of life using the World Health Organization's Quality of Life Questionnaire (WHOQOL-Bref), and spirituality, religiosity and personal beliefs using the WHOQOL-SRPB (spiritual, religious and personal beliefs). Figure 1 shows the sample recruitment process. In total, 183 caregivers were eligible and gave consent; however, the assessment was aborted for 25 caregivers who could not comprehend the assessment questionnaires. Therefore, 158 caregivers for whom baseline assessments were completed were included in the study.
Measures The Experience of Caregiving Inventory [12] was used to assess the caregiving experience as appraised by caregivers. In this scale, caregiving is conceptualized in a stress-appraisal coping framework. It is a 66-item self-report instrument measuring the experience of caring for a relative with serious mental illness. It consists of 10 independent dimensions in relatives' appraisal of caregiving: eight negative and two positive. The negative subscales include difficult behaviors, negative symptoms, stigma, problems with services, effects on the family, loss, dependency, and need for backup. The positive subscales include positive personal experiences and good aspects of the relationship with the patient. The items are rated on a 5-point Likert scale. The negative and positive domains can be summed to give two measures: total negative score and total positive score. The subscale Cronbach alphas range from 0.74 for dependency to 0.91 for difficult behaviors. This scale was translated into Hindi by three experts and checked for equivalence by three back translations. The Knowledge of Mental Illness (KMI) Scale [18] assesses knowledge regarding the diagnosis, symptoms, causes, medication, and treatment of mental illness. It consists of five items rated on a 2-point scale. Higher scores represent better knowledge of the illness. The Family Assessment Device (FAD) [19] was used to assess family functioning. It is a 60-item self-report questionnaire that has its basis in the McMaster model of family functioning. A family member rates how well an item describes their family, and each item is scored on a 4-point scale. Higher scores mean poorer family functioning. It has the following seven dimensions: problem-solving, communication, roles, affective responsiveness, affective involvement, behavioral control, and general family functioning. General functioning assesses overall family functioning. The Cronbach's alphas in the various domains range from 0.72 to 0.92. This scale was translated and back-translated for use. A Coping Checklist [20] was used to assess coping. This 70-item questionnaire covers a wide range of cognitive, behavioral, and emotional responses that are utilized to cope with stress. It has seven subscales, of which one is problem-focused (Problem-Solving), five are emotion-focused (Distraction-positive, Distraction-negative, Acceptance/Redefinition, Religion/Faith, Denial/Blame), and Social Support, which has both problem- and emotion-focused elements. The internal consistency of this scale has been seen as adequate (full scale: alpha = 0.86). The Social Support Questionnaire-Hindi [21], which is a Hindi adaptation of Pollack and Harris' (1983) Social Support Questionnaire, was used to assess social support. It consists of 18 items on a 4-point Likert scale. A higher score indicates greater social support. The test-retest reliability of the modified version of the SSQ is high (correlation coefficient = 0.91, p<0.01). The General Health Questionnaire-12 (Hindi Version) [22] was used to assess psychological distress. It is a widely used screening instrument in both clinical and non-clinical settings. A cutoff score of <2 indicates that the caregiver is free from any psychiatric illness.
The WHOQOL-Bref (Hindi Version) [23] was used to assess the quality of life. It is a multidimensional generic instrument that gives four domain scores: Physical Health, Psychological Health, Social Relationships, and Environment. It has 26 items, scored from 1 to 5. The psychometric properties of this scale are comparable to the full version; it has good discriminant validity, concurrent validity, internal consistency, and test-retest reliability. The WHOQOL-SRPB (Hindi Version) [24] was used to assess spirituality, religiosity, and personal beliefs. The Hindi version of the scale consists of 32 items divided into 8 facets, namely spiritual connection, meaning and purpose in life, experiences of awe and wonder, wholeness and integration, spiritual strength, inner peace, hope and optimism, and faith. Each facet has 4 questions rated on a 5-point Likert scale. The Cronbach alpha ranges from 0.77 to 0.95, and for the complete instrument it is excellent (0.93). The patients were assessed using the Scale for the Assessment of Positive Symptoms (SAPS), a well-recognized 34-item scale designed to assess the positive symptoms of schizophrenia patients [25], and the Scale for the Assessment of Negative Symptoms (SANS), another commonly used 25-item instrument to assess the negative symptoms of schizophrenia [26].
Statistical analysis The data were analyzed with the licensed statistical package SPSS 25.0 (IBM SPSS Statistics for Windows, Version 25). The data were checked for normality using the Kolmogorov-Smirnov test and were summarized using descriptive statistics. Mean and standard deviation and median and inter-quartile range were used for continuous variables, and frequency and percentage were used for the categorical variables. The Spearman rho correlation was used for estimating the relationship between the caregiving experience and the other continuous psychosocial variables. Other parametric and non-parametric tests were used as applicable. A two-sided p<0.05 was considered statistically significant, and for multiple exploratory correlations, correction for multiple comparisons was done using the Bonferroni method. Correlations and comparisons (the latter for discontinuous variables) were carried out with the Total Negative Score and Total Positive Score of the caregiving experience, respectively.
Results A total of 158 dyads of patients and caregivers were recruited. Among the caregivers, 27.2% (n=43) were siblings, 26.6% (n=42) were fathers, 25.9% (n=41) were mothers, 13.9% (n=22) were spouses, 5.1% (n=8) were children, and 1.3% (n=2) were others (sister-in-law). The mean duration of the caregiving role was 7.7 years (± 5.7 years). Table 1 shows the socio-demographic data of the patients and caregivers. The caregivers were more likely to be of a higher mean age, married, educated above higher secondary, and employed as compared to the patients. The dyads primarily came from an urban background, were of Hindu religion, and belonged to nuclear families.
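The Bonferroni-corrected threshold used in the analysis is simply the nominal alpha divided by the number of exploratory comparisons (0.05/48 ≈ 0.00104). The study itself used SPSS; purely as an illustration of the same Spearman-plus-Bonferroni procedure, a minimal Python sketch with simulated data and made-up variable names (not the study data) might look like this:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_caregivers = 158          # sample size reported in the study
n_comparisons = 48          # 34 psychosocial + 14 socio-demographic comparisons
alpha = 0.05
bonferroni_threshold = alpha / n_comparisons   # ~0.00104

# Simulated stand-ins for the outcome and a few correlates (illustrative only).
eci_total_negative = rng.normal(size=n_caregivers)
correlates = {
    "GHQ-12 distress": eci_total_negative * 0.5 + rng.normal(size=n_caregivers),
    "FAD general functioning": eci_total_negative * 0.3 + rng.normal(size=n_caregivers),
    "Social support": rng.normal(size=n_caregivers),
}

for name, values in correlates.items():
    rho, p = spearmanr(eci_total_negative, values)
    flag = "*" if p < bonferroni_threshold else ""
    print(f"{name}: rho = {rho:.2f}, p = {p:.4g} {flag}")
```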
Table 2 shows the scores on caregiver psychosocial variables such as knowledge about mental illness, family functioning, caregiver coping, social support, psychological distress, quality of life, and spiritual, religious, and personal beliefs. Patient clinical variables are also shown. In the caregiving experience, the dependency domain had the highest mean score, followed by negative symptoms and difficult behavior. The good aspects of the relationship outweighed the positive personal experiences while caregiving. In family functioning, the scores on roles and behavioral control were elevated, demonstrating family distress in these areas. Among the coping strategies employed by caregivers, problem-solving and acceptance/redefinition coping scored the highest. The knowledge of illness was low, the social support was moderate, and psychological distress was present in most caregivers, as 63.3% scored above the cut-off of >=2 on the GHQ-12, indicating probable psychological morbidity. The quality of life was highest in physical health and lowest in psychological health. In spiritual, religious, and personal beliefs, hope and optimism scored the highest. The lower part of Table 2 presents the clinical variables of the patients. The mean duration of illness was 9.5 (± 6.3) years, and the mean duration of treatment was 7.9 (± 5.8) years. Of the patients, 34.8% (n=55) were hospitalized at least once. All the patients received pharmacotherapy, 13.9% (n=22) had received psychotherapy, and 7.5% (n=12) had received electroconvulsive therapy (ECT). Table 3 presents the associations of various socio-demographic variables of the patient and caregiver with the negative and positive experiences of caregiving. Table 4 presents the relationship of the experience of caregiving (both positive and negative) with caregiver and patient-related variables. Since there were 34 comparisons with these, along with 14 comparisons with socio-demographics (shown in Table 1, plus the caregiver's relationship to the patient), we applied the Bonferroni correction for the p-value at 0.05/48 (~0.00104). We found that none of the patients' and caregivers' socio-demographics had a significant association with either the negative or positive experience of caregiving. The duration of the caregiving role also had no significant correlation with the experience of caregiving. In family functioning, the total negative score of the experience of caregiving was significantly positively correlated with communication, roles, affective responsiveness, affective involvement, and general functioning. Denial/blame and seeking social support as coping strategies were significantly positively correlated with a negative experience of caregiving. A negative experience of caregiving was associated with psychological distress as measured by the GHQ-12 in the caregivers. A higher negative experience of caregiving was also associated with poorer quality of life in all domains. Greater inner peace in spiritual, religious, and personal beliefs was associated with a lower negative experience of caregiving. Higher spiritual strength was associated with a more positive experience of caregiving. Higher SAPS scores and higher SANS scores were associated with a higher negative experience of caregiving. Knowledge of mental illness and caregiver social support were not significantly associated with either a positive or negative experience of caregiving.
Discussion This study aimed to examine the correlates of the experience of caregiving among caregivers of patients with schizophrenia. In India, most patients with schizophrenia reside with their families [27]. The experience of caregiving can be exhausting but, at the same time, enriching for the caregivers, and Indian society can utilize this commitment of the caregivers [27]. Thus, the caregiving experiences of the caregivers are of value. The present scores on negative experiences of caregiving were somewhat higher than those reported by Aggarwal et al., 2009 [14] and Martens and Addington, 2001 [9], but lower than those reported by Doval et al., 2018 [15] and Grover et al., 2012 [16]. The scores on negative experiences were quite close to the baseline scores of Jorge et al., 2019 [28] among caregivers of patients with first-episode psychosis. The positive experiences of caregiving were higher than those reported by most studies [9,14-16] but lower than those reported by Jorge et al., 2019 [28] in caregivers of patients with first-episode psychosis. In the domain scores, scores were highest on dependency, followed by negative symptoms and difficult behaviors in our study. Similar results were seen in [14-16]. These findings may reflect the symptomatology and the fact that the patients in this study were mostly young, unemployed, and single. The good aspects of the patient-caregiver relationship exceeded the positive personal experiences in our study, and this is also supported by other research [14,15]. Nonetheless, the present study adds to the literature on the extent of positive and negative experiences of caring for patients with schizophrenia. The differences across the studies may be attributable to differences in samples, duration of the caregiving role, familial circumstances, and cultural factors. The relevant factors in our setting pertained to having a sample of outpatients with chronic but relatively stable illness of mild to moderate severity, with caregivers having a greater level of education and being economically better off compared to other areas in the country. Also, the sample was from a tertiary care hospital with accessible care, where caregivers hope for the improvement of their patient, as well as from a collectivist culture that places value on family obligations for caring. Moreover, the duration of illness and caregiving were not limited, which may lead to adjusting to the illness over its course, and caregivers without chronic disabling conditions were included, providing a sample closer to real life. The study reveals several correlates of the experiences of caregiving. After the Bonferroni correction, we did not find significant correlations between the socio-demographics of the patient and the caregivers. This is similar to Grover et al. [16]. Our results are at variance with Aggarwal et al. [14], where patient and caregiver education were associated with a positive experience of caregiving, and with Doval et al.
[15], where caregiver education, family income, and urban residence were associated with the same. We found that negative experiences of caregiving were correlated with the severity of symptoms of schizophrenia in the patients. Previous literature also suggests negative aspects of caregiving to be associated with the severity of psychopathology in patients with schizophrenia [14,15]. Greater manifestations of symptoms of schizophrenia are likely to have a greater effect on the family members and caregivers. One could assume that manifestations like hallucinations, delusions, apathy, and avolition result in a substantial stressful impact on the caregivers. The present study found negative experiences of caregiving to be significantly associated with several aspects of impaired family functioning, like communication, roles, affective responsiveness, affective involvement, and general functioning. We cannot infer the directionality of the relationship based on this association, though we could comment that certain aspects of the family environment are related to the experience of caregiving. The plausible explanations could be that caregiving is stressful, which affects family functioning; that altered and distressed family functioning results in a negative caregiving appraisal; or that there are common underlying factors (like personality) that alter the appraisal of caregiving and family functioning. We did find denial/blame and seeking social support as coping to be significantly associated with a negative experience of caregiving. Aggarwal et al. [14] and Doval et al. [15] also report these to be important coping methods, although the former reported an association of seeking social support with a positive experience of caregiving, which our study did not replicate after the Bonferroni correction. Our findings are in agreement with most previous research [9,14,16,17,28] that a negative experience of caregiving is associated with caregiver distress and psychological morbidity. In our study, a large number (63.3%) of the caregivers were seen to have probable psychological morbidity. This figure is much higher than those found in other Indian studies (24% in Aggarwal et al. [14], 41.4% in Grover et al. [16], and 48% in Hegde et al. [17]). The probable explanations for this could be that, first, caregivers in the capital city of India are stressed due to many other factors as well, and second, as this is a study from a tertiary care hospital, many difficult-to-treat cases with their stressed caregivers are more likely to come here. Expectedly, a greater negative appraisal of the caregiving experience was also associated with a poorer quality of life. Previous literature also suggests schizophrenia to be associated with a poorer quality of life among caregivers [29]. The greatest negative impact was on the psychological quality of life of the caregivers. The analysis delves into the association of the different aspects of caregiving with the spiritual, religious, and personal beliefs of the caregivers. A negative experience of caregiving was associated with lower inner peace among the caregivers, while a higher spiritual strength was associated with a positive caregiving experience. Possibly, those who have inner peace are less likely to be affected negatively by the circumstances encompassing the caregiving role. A corollary is that individuals experiencing higher spiritual strength may appraise the caregiving role as an important epoch in their lives and therefore appraise it positively.
The findings of the paper have several implications. Caregivers are very important stakeholders for the wellbeing of their patients and, therefore, should be treated as equal partners in care. Their voices should be heard, and their concerns given due attention. They appraise caregiving in both a negative and a positive light, and this should be acknowledged. First, ameliorating patient symptomatology through a regular and effective medication regime with proper adherence should be ensured to reduce the negative experience of caregiving. Second, family-based interventions should be considered for caregivers, as they have a positive impact on both patient and caregiver outcomes. Improving the family environment through interventions that emphasize proper communication, demarcation of each family member's role, responding appropriately to each member's needs and interests, and improving general family functioning will go a long way in reducing the negative experience of caregiving in caregivers. Third, caregiver coping can be strengthened by promoting adaptive coping strategies and reducing maladaptive emotion-focused strategies such as denial and blaming others. There should also be a constructive use of social support, and caregivers should be given access to support groups to build their social networks. Fourth, 'caring for the caregivers' is a very relevant cause, as caregiver stress can negatively impact their ability to care. Thus, caregivers' psychological issues should be addressed adequately by professionals to reduce their psychological distress. Lastly, encouraging spirituality among caregivers may lead to an improvement in their inner peace and bolster their spiritual strength; specifically, the latter may increase the positive experience of caregiving. Implementing suggestions in all these areas may lead to a less negative experience of caregiving and, ultimately, a better quality of life for the caregivers. Although not a part of the study findings, research on the experience of caregiving, especially the positive aspects of caregiving, ought to be conducted to a larger extent. Also, proper psychoeducation may increase the knowledge of illness, result in better management of the illness, and improve the caregiving experience in the future. In terms of caregiver psychoeducation programs, a recently published RCT of family psychoeducation demonstrated that, for the patients, it significantly decreased the risk of relapse, and for caregivers, it decreased burden and depression and increased knowledge of illness [30]. Such programs could be routinely carried out in other settings, including ours.
The strength of the study lies in using a fairly adequate sample and assessing the experience of caregiving, which is a currently preferred construct, and its relationship with several relevant variables, some of which are less studied, using some culture- and region-specific tools. The limitations of the study are that only correlational analysis could be done and no validation process was carried out for the scales translated into the Hindi language. A control group to analyze the differences in caregiving was not included. Only one family member (the main caregiver) was assessed using the Family Assessment Device. The patients' compliance with medication was not assessed and could have been another variable to consider for the associations. Furthermore, the sample was purposive and limited to one location. Generalizations should hence be drawn with caution. Further research can be conducted using a control group in order to examine how the experience of caregiving differs in caregivers of other medical conditions and caregivers of other psychiatric disorders. A qualitative appraisal of caregivers can be done in order to get rich data on their lived experiences of living with and caring for a patient with schizophrenia. A longer follow-up can be conducted to accurately measure the changes in the caregiving experience as well as its correlates over time, to move a step closer towards establishing causality. The assessment of family functioning could be carried out on all family members to get an idea of how the family functions as a unit, as the perspectives of different family members could differ.
Conclusions To conclude, the study elaborates on the positive and negative aspects of caregiving among caregivers of patients with schizophrenia. Several other correlates, such as family functioning, quality of life, coping, and caregiver distress, were also studied as part of the research. The primary findings of this study were that a greater level of positive and negative symptomatology in patients, impaired family functioning in various domains, and a greater use of maladaptive coping strategies were associated with a more negative experience of caregiving, which in turn was associated with greater psychological distress in caregivers and a poorer quality of life in all domains. It was also associated with decreased inner peace among caregivers, whereas greater spiritual strength of caregivers was associated with a more positive experience of caregiving. The knowledge of mental illness and the social support available to caregivers were not significantly associated with the caregiving experience. Comprehensively assessing the experience of caregiving in caregivers is the first step and the foundation for the future development of caregiver- and family-based interventions, as well as for designing policy recommendations for assisting caregivers in their role and responsibility.
TABLE 2: Scores on psychosocial variables of the caregiver and clinical variables of the patients. Data shown as mean and SD (standard deviation). SAPS: Scale for Assessment of Positive Symptoms; SANS: Scale for Assessment of Negative Symptoms.
TABLE 3: Association of patients' and caregivers' socio-demographics with the experience of caregiving. Shown as rho (Spearman correlations) or mean and standard deviation. Parametric and non-parametric tests utilized as applicable. Significance set at p<0.00104 (after Bonferroni correction).
TABLE 4: Correlations of the experience of caregiving with other psychosocial variables of the caregiver. Data shown as rho (Spearman correlations). *Significant at p<0.00104 (after Bonferroni correction). SAPS: Scale for Assessment of Positive Symptoms; SANS: Scale for Assessment of Negative Symptoms.